context safety score
A score of 38/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
hidden instruction
high hidden content ratio detected in DOM
encoded payload
suspicious base64-like blobs detected in page content
hidden content
A hidden anchor tag with display:none contains an obfuscated href path (/dAgFoJqcVGJ3/553e44ac/QEmI6t4j_nlC/KmfIQaELbOvR/ZBt_rZL41dvW) and random-looking link text (3X5yp2Z_NOBN), consistent with cloaked link injection or SEO spam. The link is not visible to users but remains in the DOM for crawlers and automated agents. (location: page.html:10)
obfuscated code
A large inline JavaScript block contains a heavily obfuscated, base64-like encoded string payload assigned to $_ts.cd. The script loads and executes dynamic code at runtime (calls $_ts.lcd() and _$ft()), making static analysis of its behavior impractical. This pattern is commonly used to evade detection and deliver malicious payloads conditionally. (location: page.html:5)
prompt injection
The page body is effectively empty of legitimate content — the only visible text extracted is the obfuscated JS call '_$ft()' and the hidden link text '3X5yp2Z_NOBN'. An AI agent visiting this page expecting content would receive no meaningful input, while the hidden and obfuscated elements could be designed to manipulate agent behavior or data pipelines parsing the DOM. (location: page-text.txt:4-5)
curl https://api.brin.sh/domain/189.cn
Common questions teams ask before deciding whether to use this domain in agent workflows.
189.cn currently scores 38/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require human review or block the domain.
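As a sketch of how that decision signal might be consumed in code, here is a minimal Python example built on the curl call above. The score and verdict field names are assumptions for illustration, not the documented response schema.

import requests

# Fetch the latest assessment for a domain. The field names read below
# (score, verdict) are assumed; check the API docs for the real schema.
resp = requests.get("https://api.brin.sh/domain/189.cn", timeout=10)
resp.raise_for_status()
assessment = resp.json()
print(assessment.get("verdict"), assessment.get("score"))  # e.g. suspicious 38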
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
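A minimal sketch of those bands as a policy gate, using the thresholds above; the action names (allow, review, block) are placeholders for whatever your pipeline actually does.

def policy_for(score: int) -> str:
    # Bands from above: 80-100 safe, 50-79 caution, 20-49 suspicious, 0-19 dangerous.
    if score >= 80:
        return "allow"   # auto-allow safe
    if score >= 20:
        return "review"  # caution and suspicious go to human review
    return "block"       # dangerous is blocked outright

print(policy_for(38))  # -> review (this domain's current score)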
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
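If the response exposes the four sub-scores, inspecting them could look roughly like the following; the dimensions field and the example numbers are purely illustrative, not the real schema.

# Illustrative sub-score values only; real field names and numbers will differ.
assessment = {"dimensions": {"identity": 70, "behavior": 45, "content": 12, "graph": 55}}
for name in ("identity", "behavior", "content", "graph"):
    print(f"{name}: {assessment['dimensions'].get(name)}")
# A low content sub-score next to a normal identity sub-score points at malicious
# instructions in the page rather than an untrusted source.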
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
A safe verdict is not a formal guarantee; it means no significant risk signals were detected in this scan. Assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
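A runtime gate could be sketched like this, re-querying the API immediately before a high-impact action. The endpoint path mirrors the curl example above, and the score field name is again an assumption for illustration.

import sys
import requests

def gate(domain: str, minimum_score: int = 50) -> None:
    # Re-check the latest scan right before a high-impact step.
    resp = requests.get(f"https://api.brin.sh/domain/{domain}", timeout=10)
    resp.raise_for_status()
    score = resp.json().get("score", 0)  # assumed field name
    if score < minimum_score:
        sys.exit(f"{domain} scored {score}/100 - sending this step to review")

gate("189.cn")  # call before installs, upgrades, connecting MCP servers, or granting secrets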
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.