context safety score
A score of 35/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
encoded payload
Suspicious base64-like blobs detected in page content
cloaking
Page loads content in transparent or zero-size iframe overlay
js obfuscation
JavaScript uses Function constructor for runtime code generation
prompt injection
Hidden HTML element contains AI-targeting instructions
brand impersonation
The structured data (JSON-LD) references non-existent or fabricated AI model names such as 'GPT-5.2', 'GPT-5.1', 'Nano Banana', 'Nano Banana Pro', 'Sora 2', 'Claude 4.5', and 'Gemini 3 Pro'. These model names do not correspond to real, released products as of the scan date, misrepresenting affiliations with OpenAI, Anthropic, Google, and other AI vendors to appear more credible and attract users. (location: page.html line 86, structured-data JSON-LD; page-text.txt lines 41-45)
social engineering
The page aggressively promotes fabricated or non-existent AI model versions (GPT-5.2, Claude 4.5, Gemini 3 Pro, Sora 2, Nano Banana) to create a false perception of cutting-edge capabilities. This misleads users into installing the browser extension or creating accounts under false pretenses about supported models. (location: page-text.txt lines 44-45; page.html structured-data and nav menus)
brand impersonation
The site explicitly impersonates OpenAI's brand by listing 'GPT-5', 'GPT-5.1', 'GPT-5.2', 'Sora 2', and 'GPT o4-mini' as available models, and states 'CHATGPT is a registered trademark of OPENAI OPCO, LLC' at the footer—while offering a third-party service that implies official OpenAI integration or partnership. Similarly, 'Claude 4.5', 'Gemini 3 Pro', and 'Gemini 3' impersonate Anthropic and Google brands. (location: page-text.txt line 47 (footer trademark notice); page.html line 90 (AI model nav links))
hidden content
A 1x1 pixel invisible Facebook tracking image is embedded with display:none styling, silently tracking user page visits via Facebook Pixel without prominent disclosure in the visible page content. (location: page.html line 86 (noscript img tag); page-text.txt line 41)
curl https://api.brin.sh/domain/monica.im

Common questions teams ask before deciding whether to use this domain in agent workflows.
monica.im currently scores 35/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat this as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should add review or block this domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
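The bands above map directly to a gate. A minimal sketch of that policy, using the cutoffs from the text (the function name and action mapping are illustrative, not part of brin's API):

```python
def verdict_for_score(score: int) -> str:
    """Map a 0-100 safety score to the documented verdict band."""
    if score >= 80:
        return "safe"        # 80-100: auto-allow
    if score >= 50:
        return "caution"     # 50-79: human review
    if score >= 20:
        return "suspicious"  # 20-49: human review
    return "dangerous"       # 0-19: block

# Typical policy from the text: auto-allow safe, review the middle
# bands, block dangerous.
ACTIONS = {
    "safe": "allow",
    "caution": "review",
    "suspicious": "review",
    "dangerous": "block",
}

# monica.im's current score of 35 lands in the suspicious band:
print(verdict_for_score(35), "->", ACTIONS[verdict_for_score(35)])
```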
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
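The sub-scores can drive the same kind of review logic. The payload shape below is hypothetical (the field names are assumptions, not brin's documented schema); it only illustrates surfacing whichever dimension drags an entity down:

```python
# Hypothetical sub-score payload for the four dimensions described
# above; field names are illustrative, not brin's actual schema.
subscores = {"identity": 72, "behavior": 40, "content": 18, "graph": 55}

# Flag any dimension below a chosen review threshold (here, 50) so a
# reviewer sees *why* the entity failed, not just the overall score.
REVIEW_THRESHOLD = 50
flagged = [dim for dim, s in subscores.items() if s < REVIEW_THRESHOLD]
print(flagged)
```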
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
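A CI or runtime gate built on that GET request might look like the sketch below. The endpoint path comes from the curl example on this page; the `score` response field and the helper names are assumptions about the response shape, not a documented client:

```python
import json
import urllib.request

# Endpoint shown in the curl example on this page.
API_URL = "https://api.brin.sh/domain/{domain}"

def fetch_assessment(domain: str) -> dict:
    """GET the latest assessment for a domain (makes a network call)."""
    with urllib.request.urlopen(API_URL.format(domain=domain)) as resp:
        return json.load(resp)

def should_block(assessment: dict, threshold: int = 50) -> bool:
    """Gate decision: block anything scoring below the review threshold.
    The 'score' field name is an assumption about the response shape."""
    return assessment.get("score", 0) < threshold

# In a CI step you might call:
#   assessment = fetch_assessment("monica.im")
#   if should_block(assessment):
#       raise SystemExit("brin gate: blocked")
```

Keeping the decision in `should_block` separate from the fetch makes the gate easy to unit-test and lets the threshold vary by how high-impact the action is.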
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.