context safety score
A score of 43/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
encoded payload
suspicious base64-like blobs detected in page content
js obfuscation
JavaScript uses Function constructor for runtime code generation
brand impersonation
The page at kalshi.com presents itself as a 'Vercel Security Checkpoint' with Vercel branding, spinner UI, and footer. kalshi.com is a regulated prediction markets platform and has no affiliation with Vercel. This is either a misconfigured deployment intercepting traffic or a deliberate impersonation of Vercel's bot-protection page to add false legitimacy. (location: page.html <title> and footer: 'Vercel Security Checkpoint')
obfuscated code
The page contains heavily obfuscated JavaScript using numeric string-array lookups, self-invoking shuffle loops, and computed property access (e.g., parseInt(c(167))/1 patterns and large encoded string arrays). This obfuscation technique is commonly used to hide malicious logic such as fingerprinting, credential harvesting, or redirect payloads from static analysis. (location: page.html <script type='module'> block, line 2)
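The lookup pattern described above can be illustrated with a toy reconstruction: strings are stored encoded in an array and fetched through an offset accessor, so the literal values never appear in the source that a static scanner reads. The array contents and offset here are invented placeholders, not values taken from the actual page.

```javascript
// Toy sketch of the numeric string-array lookup pattern (illustrative values).
// Real obfuscators also shuffle the array at startup, which is omitted here.
const _a = ["bmF2aWdhdGU=", "bG9jYXRpb24=", "aHJlZg=="]; // base64-hidden names

// Offset accessor: c(167)-style calls resolve to decoded strings at runtime.
// Buffer stands in for the browser's atob() in this Node sketch.
const c = (i) => Buffer.from(_a[i - 100], "base64").toString("utf8");

// At runtime the script can rebuild property names like "location" without
// those literals ever appearing in the source:
const prop = c(101); // decodes to "location"
```

Because the decoded strings only exist at runtime, static analysis sees nothing but opaque array indices, which is why the report flags the true redirect target as concealed.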
social engineering
The page displays 'We're verifying your browser' and 'Enable JavaScript to continue' — a classic browser-verification lure used in social engineering attacks (e.g., ClickFix, fake CAPTCHA campaigns) to manipulate users or automated agents into enabling JavaScript execution or interacting with deceptive UI elements. (location: page.html #header-text and #header-noscript-text elements)
malicious redirect
The heavily obfuscated script dynamically manipulates DOM elements and likely controls post-'verification' navigation. The true redirect destination is concealed inside the obfuscated code. A link to 'https://vercel.link/security-checkpoint' is present but the actual post-challenge redirect target for kalshi.com visitors is not transparent and may differ. (location: page.html obfuscated <script> block and #fix-text href='https://vercel.link/security-checkpoint')
prompt injection
The page-text.txt contains raw HTML markup mixed into the visible text output, including data-astro-cid attributes and structural tags. If an AI agent is scraping this page and feeding its text content into an LLM pipeline, the embedded markup and potential hidden instructions within dynamically rendered content could manipulate agent reasoning or actions. (location: page-text.txt full content — raw HTML injected into text layer)
curl https://api.brin.sh/domain/kalshi.com

Common questions teams ask before deciding whether to use this domain in agent workflows.
kalshi.com currently scores 43/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require review or block the domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
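The thresholds above can be sketched as a small policy helper. The band names and actions follow the text; the function names themselves are illustrative and not part of any brin SDK.

```javascript
// Map a 0-100 context safety score to the policy bands described above.
function verdict(score) {
  if (score >= 80) return "safe";       // auto-allow
  if (score >= 50) return "caution";    // require human review
  if (score >= 20) return "suspicious"; // require human review
  return "dangerous";                   // block
}

// Translate a band into the action teams commonly apply.
function policyAction(score) {
  const band = verdict(score);
  if (band === "safe") return "allow";
  if (band === "dangerous") return "block";
  return "review"; // caution and suspicious both gate on review
}
```

With this domain's current score of 43, verdict(43) lands in the suspicious band and policyAction(43) returns "review".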
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
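A CI or runtime gate along these lines might fetch the latest assessment and turn it into an exit code. This is a hedged sketch: the response field name ("score") is assumed from the page, not confirmed against the API schema, and the sample body below stands in for a live curl call so the snippet is self-contained.

```javascript
const REVIEW_THRESHOLD = 50; // below this, require human review
const BLOCK_THRESHOLD = 20;  // below this, fail the gate outright

// Convert an API response body into a CI-style exit code:
// 0 = allow, 1 = needs review, 2 = block.
function gate(body) {
  const { score } = JSON.parse(body); // field name is an assumption
  if (score < BLOCK_THRESHOLD) return 2;
  if (score < REVIEW_THRESHOLD) return 1;
  return 0;
}

// In CI this body would come from:
//   curl https://api.brin.sh/domain/kalshi.com
const sample = '{"domain":"kalshi.com","score":43,"verdict":"suspicious"}';
// gate(sample) returns 1, i.e. route this domain to human review before use.
```

Running the gate on every build means decisions track the latest scan rather than a score cached from weeks ago.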
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.