context safety score
A score of 24/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
encoded payload
Suspicious base64-like blobs were detected in the page content; encoded payloads of this kind are often used to smuggle scripts or data past simple scanners.
brand impersonation
The page at weirdopt.com renders a near-perfect clone of Google's automated-traffic CAPTCHA interstitial. It uses Google's exact copy ('Our systems have detected unusual traffic...'), Google's visual styling, and even embeds the real Google reCAPTCHA Enterprise widget — all while being served from an unrelated third-party domain. This is a textbook Google brand impersonation designed to make users believe they are interacting with Google infrastructure. (location: page.html:3-33, <title> tag and body content)
malicious redirect
A hidden form field sets 'continue' to 'https://google.com/' and the form action is 'index' (a relative POST endpoint on weirdopt.com). After the user solves the CAPTCHA, the POST is submitted to weirdopt.com/index — not to Google — allowing the attacker to harvest the reCAPTCHA token and any associated session data before optionally redirecting to Google. The user never realizes they submitted data to a third-party site. (location: page.html:17, <input type='hidden' name='continue' value='https://google.com/'>)
credential harvesting
The reCAPTCHA Enterprise widget uses a site key (6LfwuyUTAAAAAOAmoS0fdqijC2PbbdH4kjq62Y1b) registered to the attacker, not Google. The solved CAPTCHA token is submitted via POST to weirdopt.com/index along with an opaque encrypted token in the hidden 'q' field. This infrastructure is designed to collect verified-human signals and/or session tokens from victims under the guise of a Google security check. (location: page.html:15-17, reCAPTCHA sitekey and hidden 'q' input)
social engineering
The page employs Google's authoritative language about Terms of Service violations and threat of service blocking ('The block will expire shortly after those requests stop') to pressure users into completing the CAPTCHA without questioning the page's legitimacy. The 'Why did this happen?' expandable section further mimics Google's genuine support flow to build false trust. (location: page.html:22-28, 'About this page' section)
prompt injection
The page title is set to 'https://google.com/' rather than any description of the actual page. An AI agent browsing or summarizing this page by title would report it as the Google homepage, causing the agent to misclassify the URL, skip threat analysis, or relay a falsified identity to downstream systems. This is a prompt/metadata injection targeting AI agent context windows and summarization pipelines. (location: page.html:3, <title>https://google.com/</title>)
hidden content
The 'infoDiv' element is set to display:none by default and is only revealed via an onclick handler. It contains additional social-engineering text that deepens the Google impersonation. This content is hidden from casual inspection and from many automated scanners that do not execute JavaScript, allowing it to evade detection while still influencing human victims. (location: page.html:26-28, <div id='infoDiv' style='display:none;'>)
phishing
The overall page construction — a young domain (246 days), unknown hosting reputation, serving a convincing Google CAPTCHA clone with a form that POSTs to the attacker's own endpoint — constitutes a phishing operation. Users who believe they are on a Google-operated page may subsequently be redirected to credential-stealing pages or their interaction metadata may be used for targeted follow-on attacks. (location: metadata.json (domain: weirdopt.com, age: 246 days), page.html form action='index')
curl https://api.brin.sh/domain/weirdopt.com

Common questions teams ask before deciding whether to use this domain in agent workflows.
weirdopt.com currently scores 24/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require review or block this domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
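The banded policy above can be sketched as a small gate. This is an illustrative mapping, not part of the brin API; the function name and return labels are assumptions chosen to mirror the allow / review / block pattern described.

```python
# Hypothetical policy gate mapping a 0-100 context safety score
# to an action, using the bands described above.
# (Function name and labels are illustrative, not a brin API.)

def policy_for_score(score: int) -> str:
    """Map a context safety score to a policy action."""
    if score >= 80:
        return "allow"    # safe: auto-allow
    if score >= 20:
        return "review"   # caution (50-79) and suspicious (20-49):
                          # require human review
    return "block"        # dangerous (0-19): block

print(policy_for_score(24))  # weirdopt.com's current score -> "review"
```

With this mapping, weirdopt.com's 24/100 lands in the suspicious band and would be routed to human review rather than auto-allowed.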
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
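A runtime gate along these lines might look like the sketch below. The endpoint comes from the curl example above; the JSON field name `score` and the response shape are assumptions about the API, not documented behavior.

```python
# Sketch of a pre-action gate that re-checks a domain against the
# latest brin scan. The "score" field name is an assumption about
# the API response, not documented behavior.
import json
import urllib.request

def gate(assessment: dict, threshold: int = 50) -> bool:
    """Decide from a parsed assessment whether to proceed."""
    return assessment.get("score", 0) >= threshold

def check_domain(domain: str, threshold: int = 50) -> bool:
    """Fetch the latest scan for a domain and apply the gate."""
    url = f"https://api.brin.sh/domain/{domain}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return gate(json.load(resp), threshold)
```

In a CI or runtime hook, you would call `check_domain("weirdopt.com")` immediately before the install, MCP connection, or secret grant, and abort if it returns False, so the decision reflects the latest scan rather than a cached verdict.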
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.