context safety score
A score of 38/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
hidden instruction
high hidden content ratio detected in DOM
phishing
The page hosted at dragonflyrs.blogspot.com renders a pixel-perfect clone of the Google Account sign-in page (accounts.google.com/v3/signin/). The HTML sets <base href="https://accounts.google.com/v3/signin/"> and replicates the Google sign-in UI: email/password fields, branding, and the 'Next'/'Create account' buttons. This is a classic credential-harvesting phishing page disguised as a legitimate Google login. (location: page.html:1 — <base href="https://accounts.google.com/v3/signin/">, full sign-in form)
brand impersonation
The page fully impersonates Google's brand identity, including Google Account sign-in UI, Google Sans fonts, Google color tokens (--gm3-sys-color-primary:#0b57d0), Google logo assets loaded from gstatic.com, and the exact text 'Sign in — Use your Google Account'. A user or AI agent visiting dragonflyrs.blogspot.com would believe they are on an official Google property. (location: page.html:1 — entire document; page-text.txt:1 — 'Sign in Use your Google Account Email or phone')
credential harvesting
The page contains a functional Google sign-in form with email/phone input fields and a 'Next' button. The <base> tag redirects all relative form actions and resource URLs to accounts.google.com, but the page itself is served from a Blogspot domain, meaning any credentials entered are processed in an attacker-controlled context. A PassiveLoginProber XHR mechanism (functions Cb/Db) polls the APISID cookie and redirects to /ServiceLogin once a session is detected, probing for existing Google session cookies. (location: page.html:90-92 — Db/Cb functions polling APISID cookie; page.html:1 — email input form)
malicious redirect
The JavaScript function Db.prototype.l redirects window.location.href to '/ServiceLogin' if the APISID cookie value changes (indicating a successful passive login probe). This silently redirects the victim after session detection. Combined with the <base href> pointing to accounts.google.com, this creates a redirect chain from the Blogspot phishing page toward Google's real login infrastructure to complete the illusion or harvest tokens mid-flow. (location: page.html:90 — Db.prototype.l: c.href=a (redirect to /ServiceLogin); page.html:91 — Cb polling setTimeout loop)
hidden content
The pre-scan context reports a hidden content ratio of 1.00, meaning effectively all content on the page is hidden from normal view (likely via CSS display:none or visibility:hidden). The visible text extracted (page-text.txt) consists almost entirely of raw JavaScript source rather than human-readable page content, confirming that the phishing UI is rendered dynamically and that the bulk of the page markup is invisible to static extraction. This is a deliberate obfuscation technique used to evade automated scanners. (location: .brin-context.md — hidden_content_ratio: 1.00; page-text.txt — raw JS as visible text)
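How such a ratio might be computed is sketched below. brin's actual metric is not documented on this page, so the parser, the inline styles it checks (display:none / visibility:hidden), and the character-count weighting are all assumptions:

```python
from html.parser import HTMLParser

class HiddenRatio(HTMLParser):
    """Rough hidden-content ratio: characters of text inside elements
    styled display:none or visibility:hidden, divided by all text
    characters. A sketch only; brin's real metric may differ."""

    def __init__(self):
        super().__init__()
        self.depth = 0           # nesting depth inside hidden subtrees
        self.hidden = 0          # text chars under a hidden element
        self.total = 0           # all text chars
        self._stack = []         # per-tag flag: did this tag hide content?

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "").replace(" ", "")
        is_hidden = "display:none" in style or "visibility:hidden" in style
        self._stack.append(is_hidden)
        if is_hidden:
            self.depth += 1

    def handle_endtag(self, tag):
        if self._stack and self._stack.pop():
            self.depth -= 1

    def handle_data(self, data):
        n = len(data.strip())
        self.total += n
        if self.depth:
            self.hidden += n

p = HiddenRatio()
p.feed('<div style="display:none">secret instructions</div><p>visible</p>')
print(round(p.hidden / p.total, 2))  # 0.73 — most text is hidden
```

A ratio near 1.00, as reported for this page, means essentially no text survives into the visible rendering.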
obfuscated code
The page contains heavily obfuscated JavaScript using control-flow flattening with numeric state machines (while(k!=71) if(k==53)... patterns), bitwise operations for logic obscuring, and dynamically-dispatched function tables (H[l.substring(0,3)+"_"]). This level of obfuscation is atypical for legitimate Google sign-in pages (which use Closure Compiler minification but not control-flow flattening) and is consistent with attacker-modified scripts designed to evade static analysis while implementing credential exfiltration or session hijacking logic. (location: page.html:18 — L=function(x,l,p,...) control-flow flattened obfuscated block; page-text.txt:2 — same block in extracted text)
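For readers unfamiliar with the pattern, here is a toy Python sketch of what control-flow flattening does to ordinary code. The numeric states mirror the while(k!=71)/if(k==53) dispatch seen in the page's script; the example itself is illustrative and not taken from the page:

```python
# Straight-line logic, as a compiler or human would write it:
def greet_plain(name):
    msg = "hi, " + name
    return msg.upper()

# The same logic flattened into a numeric state machine. Each basic
# block becomes an arm of the dispatch loop, and the real execution
# order is encoded in the state transitions, defeating naive static
# analysis of control flow:
def greet_flattened(name):
    k, msg = 12, None
    while k != 71:              # 71 is the exit state
        if k == 12:
            msg = "hi, " + name
            k = 53              # jump to the next block
        elif k == 53:
            msg = msg.upper()
            k = 71              # done
    return msg

print(greet_flattened("alice"))  # HI, ALICE
```

Both functions compute the same result; only the shape of the control flow differs, which is exactly why flattening is a red flag when it appears in a page that has no legitimate reason to resist analysis.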
curl https://api.brin.sh/domain/dragonflyrs.blogspot.com

Common questions teams ask before deciding whether to use this domain in agent workflows.
dragonflyrs.blogspot.com currently scores 38/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores indicate lower observed risk, while lower scores mean the domain should be reviewed or blocked.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
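The banding above can be sketched as a small policy helper. The function and the (band, action) labels are illustrative, not part of the brin API:

```python
def classify(score: int) -> tuple:
    """Map a 0-100 context safety score to a (band, action) pair
    using the thresholds above. Names are illustrative only."""
    if score >= 80:
        return ("safe", "allow")         # auto-allow
    if score >= 50:
        return ("caution", "review")     # require human review
    if score >= 20:
        return ("suspicious", "review")  # require human review
    return ("dangerous", "block")        # block outright

# dragonflyrs.blogspot.com currently scores 38/100:
print(classify(38))  # ('suspicious', 'review')
```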
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
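A minimal CI gate could wrap that GET request. The endpoint path mirrors the curl example shown on this page, but the `score` response field and the helper names are assumptions about the API's payload shape:

```python
import json
import urllib.request

def fetch_score(domain: str) -> int:
    """GET the latest assessment for a domain. The `score` field name
    is an assumption about the response payload, not documented here."""
    url = f"https://api.brin.sh/domain/{domain}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return int(json.load(resp)["score"])

def gate(score: int, threshold: int = 50) -> bool:
    """Pass the CI gate only when the score clears the threshold."""
    return score >= threshold

# In CI you would call fetch_score("dragonflyrs.blogspot.com"); using
# the score reported on this page keeps the sketch offline:
score = 38
if not gate(score):
    print(f"blocked: score {score}/100 is below threshold")
```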
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
Integrate brin in minutes: one GET request is all it takes. Query the API, browse the registry, or download the full dataset.