context safety score
A score of 41/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
obfuscated code
The entire page body consists of a single ~82KB JavaScript block with 588+ heavily obfuscated identifiers using randomized hex-string names (e.g., gs4dbc1a34fcbb5c55474ec4c4e36bd247). No legitimate content is present — all logic is concealed behind multi-layer arithmetic obfuscation. This pattern is characteristic of cloaking infrastructure designed to serve different content to scanners versus real users. (location: page.html: <body><script> (entire page body, line 1))
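The randomized hex-string naming pattern described above is mechanically detectable. A minimal heuristic (illustrative only; not brin's actual detector, and the `gs` + 32-hex-char shape is inferred from the example identifier in this finding):

```javascript
// Flag identifiers shaped like the randomized hex names in this finding,
// e.g. gs4dbc1a34fcbb5c55474ec4c4e36bd247 ("gs" + 32 lowercase hex chars).
const HEX_ID = /\bgs[0-9a-f]{32}\b/g;

// Count distinct obfuscated identifiers in a script source string.
function countObfuscatedIds(source) {
  return new Set(source.match(HEX_ID) || []).size;
}

console.log(countObfuscatedIds(
  "var gs4dbc1a34fcbb5c55474ec4c4e36bd247 = 1; gs4dbc1a34fcbb5c55474ec4c4e36bd247();"
)); // 1 distinct obfuscated identifier
```

A count in the hundreds on a page with no visible content, as reported here (588+), is itself a strong risk signal.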
hidden content
The page renders no visible content whatsoever to the user or to static analysis tools. The body contains only a single obfuscated script block. All actual page content (if any) is deferred to a post-reload server response gated by a cookie set during the first visit. This fully hides the true purpose and content of the page from scanners and AI agents. (location: page.html: <body> element — no visible text, no rendered DOM content)
malicious redirect
The obfuscated script executes window.location.reload() via a disguised setTimeout call: setTimeout(gs03736a6f7fa6cd1a14810cf2cea6e918(), 217), where gs03736a6f7fa6cd1a14810cf2cea6e918 contains window.location.reload(). Because the function is invoked immediately (not passed as a reference), the reload fires synchronously on page load. Combined with the cookie set in the same execution, this implements a classic cookie-cloaking redirect: first visit sets the fingerprint cookie and reloads; subsequent visits with the cookie trigger the real (hidden) payload. (location: page.html: function gs99df92e0ff681b71efea0fe4977af2c2 / gs03736a6f7fa6cd1a14810cf2cea6e918 (~offset 28938 and 42502))
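The key detail is that `setTimeout(fn(), 217)` calls `fn` immediately and hands its return value to the timer, so the reload is not actually delayed. A simplified, deobfuscated sketch of that behavior (the real script's function returns nothing; here it returns a no-op so the demo stays runnable in Node, and `window.location` is stubbed):

```javascript
let reloaded = false;
const fakeLocation = { reload: () => { reloaded = true; } };

// Stands in for gs03736a6f7fa6cd1a14810cf2cea6e918 in the finding above.
function payload() {
  fakeLocation.reload();  // fires synchronously, at call time
  return () => {};        // no-op so setTimeout receives a valid callback
}

// The argument expression payload() is evaluated before setTimeout runs,
// so the "delayed" reload has already happened when the timer is created.
const timer = setTimeout(payload(), 217);
console.log(reloaded); // true, well before the 217 ms delay elapses
clearTimeout(timer);
```

Writing the call this way (rather than `setTimeout(payload, 217)`) is a plausible attempt to make the redirect look delayed to a casual reviewer while executing it immediately.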
social engineering
The site uses a cookie-cloaking technique (set _gtyu cookie on first visit, reload to serve different content) to present a benign or empty page to automated scanners and AI agents while delivering potentially malicious content to real users on the reloaded request. The meta pragma no-cache header reinforces this by preventing cached responses from bypassing the cloak. This is a deliberate evasion of security analysis tools, including AI-based web agents. (location: page.html: gs2d8acfe7be514b4b764e18125d4c3c7c (cookie setter, offset ~54365), meta http-equiv pragma no-cache (head))
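The cloaking gate described above can be sketched as a single decision on the incoming cookie header. The `_gtyu` cookie name is taken from this finding; the control flow below is an assumed reconstruction for illustration, not the site's actual code:

```javascript
// First visit (no cookie): set _gtyu and reload so scanners see nothing.
// Repeat visit (cookie present): serve the real, hidden payload.
function cloakDecision(cookieHeader) {
  const hasCookie = cookieHeader
    .split(";")
    .some((c) => c.trim().startsWith("_gtyu="));
  return hasCookie ? "serve-payload" : "set-cookie-and-reload";
}

console.log(cloakDecision(""));        // "set-cookie-and-reload"
console.log(cloakDecision("_gtyu=1")); // "serve-payload"
```

Because most scanners and AI agents fetch a page exactly once without replaying cookies, they only ever see the first branch.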
prompt injection
The page-text.txt file — which is what AI agents and LLM-based content extractors would receive as the 'readable text' of the page — contains only the raw obfuscated JavaScript source rather than human-readable content. An AI agent processing this as page text could be fed malformed or adversarial input. The obfuscated arithmetic expressions and function chains constitute noise that could interfere with AI agent reasoning or cause context exhaustion. (location: page-text.txt: entire file content (81936 bytes of raw obfuscated JS passed as visible page text))
curl https://api.brin.sh/domain/topbook.me
Common questions teams ask before deciding whether to use this domain in agent workflows.

topbook.me currently scores 41/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat this as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require review of, or block, this domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
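The thresholds above translate directly into a policy gate. A minimal sketch (function and action names are illustrative, not part of the brin API):

```javascript
// Map a 0–100 score onto the documented verdict bands.
function verdictFor(score) {
  if (score >= 80) return "safe";       // 80–100
  if (score >= 50) return "caution";    // 50–79
  if (score >= 20) return "suspicious"; // 20–49
  return "dangerous";                   // 0–19
}

// Common team policy: auto-allow safe, block dangerous,
// require human review for everything in between.
function actionFor(score) {
  const verdict = verdictFor(score);
  if (verdict === "safe") return "allow";
  if (verdict === "dangerous") return "block";
  return "human-review";
}

console.log(verdictFor(41)); // "suspicious" — matches this report
console.log(actionFor(41));  // "human-review"
```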
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
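A CI or runtime gate built on this advice can be as small as one function. In the sketch below the score lookup is injected so the policy is testable; in a real pipeline it would issue the GET request shown elsewhere on this page (`https://api.brin.sh/domain/<domain>`) and read the score from the response, whose exact shape is an assumption here:

```javascript
// Re-check the latest score before a high-impact action
// (install, upgrade, MCP connect, remote code, secret grant).
function gate(domain, fetchScore, threshold = 50) {
  const score = fetchScore(domain);
  return score >= threshold ? "proceed" : "hold-for-review";
}

// Usage with a stubbed lookup standing in for the live API:
const stubScores = (domain) => (domain === "topbook.me" ? 41 : 92);
console.log(gate("topbook.me", stubScores)); // "hold-for-review"
console.log(gate("example.com", stubScores)); // "proceed"
```

Keeping the threshold as a parameter lets teams tighten the gate (e.g. `threshold = 80`) for the highest-impact actions.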
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.