context safety score
A score of 40/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
encoded payload
suspicious base64-like blobs detected in page content
js obfuscation
Very long base64 or hex string assigned in JavaScript — likely encoded payload
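A detection signal of this kind can be approximated with a simple regex heuristic. This is an illustrative sketch, not brin's actual detection rule; the length cutoff of 200 characters is an assumption:

```python
import re

# Illustrative heuristic (not brin's actual rule): flag JavaScript source
# containing very long uninterrupted runs of base64- or hex-alphabet
# characters, which often indicate an encoded payload.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/=]{200,}")  # 200-char cutoff is an assumption
HEX_RUN = re.compile(r"[0-9a-fA-F]{200,}")

def looks_like_encoded_payload(js_source: str) -> bool:
    """Return True if the source contains a suspiciously long encoded run."""
    return bool(BASE64_RUN.search(js_source) or HEX_RUN.search(js_source))
```

Short string literals in ordinary code never reach the cutoff, so false positives are rare; the tradeoff is that payloads split into many short chunks evade this check.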
obfuscated code
Page serves a bot/CAPTCHA challenge page (title: 'Challenge Validation') instead of the expected dol.gov content. All resource paths are heavily obfuscated with random-looking path segments (e.g., /KnK-gNUT6qBJWsoOJLdWSWLW/pV3Qa9/anJiYXhJDAQ/...). This pattern is consistent with a Cloudflare-style challenge interception layer or a malicious interception proxy replacing legitimate government content with obfuscated challenge scripts. (location: page.html: entire page, <script> and <iframe> src attributes)
obfuscated code
A base64-encoded JWT-like challenge token is embedded in an iframe attribute ('challenge='). Decoded, it contains a verify_url pointing to https://www.dol.gov/KnK-gNUT6qBJWsoOJLdWSWLW/... — a non-standard, obfuscated verification endpoint on the dol.gov domain. This could be used to exfiltrate browser/session fingerprint data under the guise of a CAPTCHA challenge. (location: page.html: <iframe id='sec-cpt-if'> challenge attribute)
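A token of this shape can be inspected offline without ever requesting the verify_url. A minimal sketch, using a hypothetical token built in place (the real token's contents are shown truncated above and are not reproduced here):

```python
import base64
import json

def decode_challenge_token(token: str) -> dict:
    """Decode a base64url-encoded JSON blob such as the iframe 'challenge=' value."""
    padded = token + "=" * (-len(token) % 4)  # restore any stripped '=' padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Hypothetical token for illustration only; the real token embeds the
# obfuscated dol.gov verify_url described in the finding above.
sample = base64.urlsafe_b64encode(
    json.dumps({"verify_url": "https://www.dol.gov/example-path"}).encode()
).decode().rstrip("=")

print(decode_challenge_token(sample)["verify_url"])
```

Decoding locally lets an analyst confirm where the challenge flow would send a browser before deciding whether the endpoint is legitimate.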
malicious redirect
The challenge iframe's verify_url (embedded in the base64 challenge token) redirects users to an obfuscated path on dol.gov: https://www.dol.gov/KnK-gNUT6qBJWsoOJLdWSWLW/pV3QtQuX3Sut/anJiYXhJDAQ/IzJJK/iMCSTU. This path does not correspond to any known dol.gov resource and may route through a man-in-the-middle or traffic interception system. (location: page.html: base64 challenge token (verify_url field), hidden input name='verify-url')
hidden content
A hidden input field (type='hidden', name='verify-url') contains the obfuscated verification path. This value is not visible to users but is submitted as part of the challenge flow, potentially used to track or redirect users without their knowledge. (location: page.html: <input type='hidden' name='verify-url'>)
brand impersonation
The page is served at https://dol.gov (the legitimate U.S. Department of Labor domain) but displays no actual DOL content. Instead it presents an opaque 'Challenge Validation' page with obfuscated scripts and iframes. A legitimate government site would not replace its entire homepage with an unbranded, unmarked challenge page. This strongly suggests the page is either intercepted by a malicious proxy or the domain is serving attacker-controlled content under the DOL brand. (location: page.html: <title>Challenge Validation</title>, entire page body)
prompt injection
The base64 challenge token and obfuscated iframe src values could be used as prompt injection vectors targeting AI agents that process page content. An AI agent crawling or summarizing dol.gov would receive only this challenge page, potentially being instructed (via the challenge token payload) to follow the obfuscated verify_url, exfiltrating agent session data or redirecting agent actions to attacker-controlled endpoints. (location: page.html: <iframe id='sec-cpt-if'> challenge attribute (base64 token), script tags)
curl https://api.brin.sh/domain/dol.gov

Common questions teams ask before deciding whether to use this domain in agent workflows.
dol.gov currently scores 40/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat this as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require review or block the domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
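The bands above translate directly into a policy function. A minimal sketch of that mapping (the action names are illustrative, not part of the brin API):

```python
def verdict_band(score: int) -> str:
    """Map a 0-100 context safety score to its documented band."""
    if score >= 80:
        return "safe"
    if score >= 50:
        return "caution"
    if score >= 20:
        return "suspicious"
    return "dangerous"

# Typical team policy: auto-allow safe, human review for caution and
# suspicious, hard block for dangerous. Action names are illustrative.
ACTIONS = {
    "safe": "allow",
    "caution": "review",
    "suspicious": "review",
    "dangerous": "block",
}
```

Under this policy, dol.gov's current 40/100 lands in the suspicious band and would be routed to human review rather than auto-allowed.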
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
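A CI or runtime gate along these lines could call the endpoint shown in the curl example before a high-impact step. The JSON field name `score` and the response shape are assumptions about the API schema, not documented here:

```python
import json
import urllib.request

# Endpoint pattern taken from the curl example above.
API = "https://api.brin.sh/domain/{domain}"

def decide(assessment: dict, threshold: int = 50) -> str:
    """Map an assessment payload to a CI decision.

    Assumes a top-level 'score' field in the response; adjust to the
    actual API schema.
    """
    score = int(assessment.get("score", 0))
    return "proceed" if score >= threshold else "gate"

def fetch_assessment(domain: str) -> dict:
    """Fetch the latest assessment so decisions use a fresh scan."""
    with urllib.request.urlopen(API.format(domain=domain), timeout=10) as resp:
        return json.load(resp)

# Usage in a CI step (performs a network call, so shown commented out):
#   if decide(fetch_assessment("dol.gov")) == "gate":
#       raise SystemExit("brin gate: review required before install")
```

Failing the build with a non-zero exit keeps the gate enforceable without any agent-side changes; missing or malformed scores default to `gate`, the conservative outcome.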
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.