context safety score
A score of 35/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
encoded payload
suspicious base64-like blobs detected in page content
js obfuscation
JavaScript uses Function constructor for runtime code generation
brand impersonation
The page at aeon.co displays a 'Vercel Security Checkpoint' UI with Vercel branding, spinner, and footer. Aeon.co is a well-known editorial/media site unaffiliated with Vercel. This page is impersonating Vercel's legitimate bot-challenge infrastructure to deceive users and automated agents into believing they are interacting with an authentic Vercel security page. (location: page.html:<title>, page.html:<footer>, page-text.txt)
obfuscated code
The page contains heavily obfuscated JavaScript using string-array rotation, numeric index encoding, and self-defending anti-tampering patterns (C() self-check using Function.prototype.toString and regex search). This is a hallmark of malicious or evasive client-side code designed to hide its true behavior from static analysis and security scanners. (location: page.html: <script type="module"> blocks (lines 2-3))
prompt injection
The page title is 'Vercel Security Checkpoint' and the visible text instructs 'Enable JavaScript to continue' and 'We're verifying your browser'. AI agents crawling or processing this page could be misled into treating this as a legitimate infrastructure gate, suppressing further analysis or altering their navigation behavior based on a fabricated authority signal. (location: page.html:<title>, page-text.txt)
social engineering
The page mimics a browser verification/CAPTCHA checkpoint (spinner animation, 'We're verifying your browser' message, 'Enable JavaScript to continue' fallback). This pattern is used to establish false legitimacy, lower user suspicion, and coerce interaction — a classic social engineering pretext used before credential harvesting or malware delivery steps. (location: page.html: #header-text, #header-noscript-text, spinner UI)
malicious redirect
The obfuscated JavaScript dynamically manipulates DOM elements and likely performs a client-side redirect after 'verification'. The script references document.getElementById, style manipulation, and element removal functions, consistent with a deceptive interstitial that redirects users to a secondary destination after the fake checkpoint resolves. (location: page.html: <script type="module"> lines 2-3 (functions b, T, P, and main execution logic))
phishing
The combination of Vercel brand impersonation, obfuscated JavaScript, and a fake security checkpoint on the domain aeon.co (a legitimate media brand) constitutes a phishing setup. Users who trust aeon.co may be redirected or prompted for credentials under the false pretense of a security verification flow. (location: page.html, metadata.json (domain: aeon.co))
hidden content
The #root div is set to display:none in CSS and revealed only by JavaScript execution. All meaningful page content (including the real destination or credential-harvesting form) is hidden from non-JS crawlers and static scanners, surfacing only after obfuscated script execution completes. (location: page.html: #root { display: none }, CSS in <style> block)
curl https://api.brin.sh/domain/aeon.co

Common questions teams ask before deciding whether to use this domain in agent workflows.
aeon.co currently scores 35/100 with a suspicious verdict and low confidence. The score exists to keep high-risk context away from agents before they act on it. Treat it as a decision signal: higher scores indicate lower observed risk, while lower scores mean you should add review or block this domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
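As a minimal sketch, the band boundaries above can be encoded as a policy function; the function name is hypothetical, but the thresholds follow the text exactly.

```shell
#!/bin/sh
# Illustrative sketch: map a brin score (0-100) to the policy bands
# described above. Boundaries follow the text; "verdict_for_score"
# is a hypothetical helper name, not part of the brin API.
verdict_for_score() {
  score="$1"
  if [ "$score" -ge 80 ]; then
    echo "safe"          # auto-allow
  elif [ "$score" -ge 50 ]; then
    echo "caution"       # require human review
  elif [ "$score" -ge 20 ]; then
    echo "suspicious"    # require human review
  else
    echo "dangerous"     # block
  fi
}

verdict_for_score 35   # aeon.co's current score -> "suspicious"
```

Teams typically wire this mapping into whatever gate already guards the agent's context ingestion, so the banding stays in one place.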
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
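As an illustration of reading sub-scores, a per-dimension breakdown might be inspected as below. The JSON payload and its field names (`score`, `dimensions`, and the four keys) are assumptions for this sketch, not the documented response shape.

```shell
#!/bin/sh
# Hypothetical sub-score payload; field names are assumed for
# illustration only and may not match the real API response.
resp='{"score":35,"dimensions":{"identity":70,"behavior":30,"content":15,"graph":40}}'

# Surface the weakest dimension to explain why the entity scored low.
echo "$resp" | jq -r '.dimensions | to_entries | min_by(.value) | "\(.key): \(.value)"'
```

Here the weakest dimension (`content`) would point reviewers at malicious-instruction findings first, rather than at source trust.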
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
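A CI or runtime gate along those lines can be sketched as follows, using the GET endpoint shown on this page. The assumption that the response is JSON with a numeric `score` field is mine, not confirmed by this page; adjust the `jq` path to the real response shape.

```shell
#!/bin/sh
# Sketch of a CI/runtime gate, assuming the endpoint returns JSON
# with a numeric "score" field (field name is an assumption).

# Pure decision step: pass when the score meets the threshold.
gate() {
  [ "$1" -ge "$2" ]
}

# Fetch the latest scan and gate on it. Default threshold 50 rejects
# the "suspicious" and "dangerous" bands described above.
check_domain() {
  domain="$1"
  threshold="${2:-50}"
  score=$(curl -fsS "https://api.brin.sh/domain/$domain" | jq -r '.score')
  if gate "$score" "$threshold"; then
    echo "ok: $domain scored $score"
  else
    echo "blocked: $domain scored $score (below $threshold)" >&2
    return 1
  fi
}
```

Running `check_domain aeon.co` in a pipeline step makes the build fail on a low score, so the decision always reflects the latest scan rather than a cached verdict.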
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.