context safety score
A score of 43/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
malicious redirect
The page contains JavaScript that extracts a base64-encoded email address from the URL fragment, decodes it via window.atob(), and immediately redirects the victim to an IPFS-hosted payload at ipfs.io/ipfs/bafybeiglwq7a7ja36hi42bz3w2gyhlae7dnf5icr6qrkyvhsi2mslu4ijy/800-8v345h0pd0-3kj-g5h-wej-9h34f.html. The IPFS link is immutable and decentralized, making takedown nearly impossible. The comment '*PUT YOUR MAIN LINK HERE*' confirms this is a phishing kit template. (location: page.html:20)
phishing
The redirect chain encodes the victim's email address in the URL fragment and passes it to a decentralized IPFS-hosted phishing page. This is a classic redirect-based phishing delivery mechanism: a link is crafted per-victim with their email base64-encoded in the hash, landing them on a targeted credential-harvesting page with their email pre-filled. The Firebase hosting domain (yetiskarristerrasgenralsi.web.app) acts as a disposable relay. (location: page.html:6-21)
credential harvesting
The IPFS destination (bafybeiglwq7a7ja36hi42bz3w2gyhlae7dnf5icr6qrkyvhsi2mslu4ijy) is strongly indicative of a credential harvesting page. The victim's decoded email is appended to the IPFS URL fragment, enabling the destination page to pre-populate an email field and present a convincing login form. This pattern is widely used in adversary-in-the-middle (AiTM) and spear-phishing kits. (location: page.html:20)
hidden content
The page body is completely empty — no visible content whatsoever. The entire malicious functionality is hidden inside a <script> block in the <head>. Victims navigating to the URL see a blank page momentarily before being silently redirected, providing no visual indication of malicious activity and evading casual inspection. (location: page.html:25-27)
obfuscated code
Victim email addresses are base64-encoded in the URL fragment (window.atob(hash)) to obfuscate the target identity from server logs, security proxies, and URL scanners. This encoding layer prevents static analysis tools from detecting PII in URLs and obscures the per-victim targeting mechanism. (location: page.html:14)
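The fragment-encoding trick described above can be reproduced for analysis. A minimal Python sketch of what the page's window.atob(hash) call does client-side; the link and email below are hypothetical, not values recovered from the kit:

```python
import base64

def decode_fragment(url: str) -> str:
    """Recover the plaintext target from a base64-encoded URL fragment,
    mirroring the page's client-side window.atob(hash) call.
    Note: fragments stay in the browser and never reach server logs,
    which is part of why this evades URL scanners."""
    fragment = url.split("#", 1)[1]  # everything after the '#'
    return base64.b64decode(fragment).decode("utf-8")

# Hypothetical per-victim link; the fragment value is illustrative only.
link = "https://example.invalid/page.html#" + base64.b64encode(b"victim@example.com").decode()
print(decode_fragment(link))  # prints victim@example.com
```

Defenders can apply the same decoding to suspect links harvested from mail gateways to recover the targeted identity.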
social engineering
The use of a legitimate Firebase (web.app) subdomain combined with a valid Google-issued TLS certificate is designed to appear trustworthy to both users and security tools. The domain name 'yetiskarristerrasgenralsi.web.app' is a randomly generated subdomain used as a throwaway relay, a common technique to abuse trusted cloud hosting to bypass reputation-based filters. (location: metadata.json, page.html)
curl https://api.brin.sh/domain/yetiskarristerrasgenralsi.web.app
Common questions teams ask before deciding whether to use this domain in agent workflows.
yetiskarristerrasgenralsi.web.app currently scores 43/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require review for this domain or block it.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
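The bands above translate directly into a gating policy. A minimal Python sketch of that mapping; the function names are illustrative, not part of any brin SDK:

```python
def verdict(score: int) -> str:
    """Map a 0-100 context safety score onto the documented bands."""
    if score >= 80:
        return "safe"
    if score >= 50:
        return "caution"
    if score >= 20:
        return "suspicious"
    return "dangerous"

def policy(score: int) -> str:
    """Common team policy: auto-allow safe, review caution/suspicious, block dangerous."""
    band = verdict(score)
    if band == "safe":
        return "allow"
    if band == "dangerous":
        return "block"
    return "review"

print(policy(43))  # this domain's current score lands in the suspicious band, prints review
```

Keeping verdict and policy separate lets teams tighten the action mapping (e.g. block suspicious too) without changing the band definitions.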
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
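A re-check gate can be wired into CI or a runtime hook with one GET request. A sketch using the endpoint shown above; the JSON field names (`score`) are an assumption about the response shape, not a documented schema:

```python
import json
import urllib.request

API = "https://api.brin.sh/domain/"  # endpoint shown in the curl example above

def fetch_assessment(domain: str) -> dict:
    """Fetch the latest assessment for a domain.
    Assumes the API returns JSON; field names are not guaranteed."""
    with urllib.request.urlopen(API + domain, timeout=10) as resp:
        return json.load(resp)

def gate(assessment: dict) -> bool:
    """True only when the score clears the auto-allow threshold (safe band)."""
    return assessment.get("score", 0) >= 80

# Decision applied to a sample (assumed) response shape, no network call:
sample = {"score": 43, "verdict": "suspicious"}
print(gate(sample))  # prints False: add review or block before acting
```

Calling fetch_assessment immediately before installs, upgrades, or secret grants ensures the decision reflects the latest scan rather than a cached verdict.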
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.