context safety score
A score of 35/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
encoded payload
suspicious base64-like blobs detected in page content
malicious redirect
script/meta redirect patterns detected in page source
js obfuscation
JavaScript uses eval() with String.fromCharCode — common obfuscation
js obfuscation
JavaScript uses eval(atob()) — base64-encoded payload execution
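Both eval-based patterns above can be flagged statically. A minimal detection sketch in Python; the signature names and regexes are illustrative assumptions, not brin's actual rules:

```python
import re

# Illustrative signatures for the two eval-based obfuscation patterns
# described above (names and regexes are assumptions, not brin's rules).
SIGNATURES = {
    "eval_fromcharcode": re.compile(r"eval\s*\([^)]*String\.fromCharCode"),
    "eval_atob": re.compile(r"eval\s*\(\s*atob\s*\("),
}

def scan_js(source: str) -> list[str]:
    """Return the names of every signature that matches the JS source."""
    return [name for name, rx in SIGNATURES.items() if rx.search(source)]
```

Real obfuscators often split these tokens apart (e.g. window['ev'+'al']), so regex signatures work as a cheap first-pass tier, not a complete check.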
obfuscated code
Multiple instances of a heavily obfuscated anti-adblock script (labeled 'cliadu antiblocktag' and 'Clickadu Pop ads') using URI-encoded, Caesar-cipher-shifted strings decoded at runtime via decodeURI + charCodeAt rotation. The decoded payload dynamically injects scripts, fingerprints the browser (navigator, Uint8Array, Math, Error, RegExp), and loads external ad/tracking code from third-party domains (vertigovitalitywieldable.com, crittereasilyhangover.com). The same obfuscated blob appears at least twice in the page. (location: page.html:65 and page.html:2143)
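The runtime decoding described in this finding (decodeURI plus a charCodeAt rotation) can be reproduced offline to inspect the payload. A hedged Python sketch, with `unquote` standing in for `decodeURI`; the shift of 3 is an assumed value, not the one used by the actual script:

```python
from urllib.parse import unquote

def rotate_decode(blob: str, shift: int) -> str:
    """Undo a Caesar-style rotation of character codes, then URI-decode.
    Mirrors the decodeURI + charCodeAt pattern; the real shift is unknown."""
    shifted = "".join(chr(ord(c) - shift) for c in blob)
    return unquote(shifted)
```

Applying the inverse transform (shift codes up, URI-encode) to a known string and round-tripping it through rotate_decode is a quick way to confirm a candidate shift value.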
obfuscated code
PopAds pop-under script uses Base64-encoded CDN URLs (atob(p[a])) and rotates through multiple fallback hosts (d3d3LnhhZHNtYXJ0LmNvbS8..., d3d3LnRpd21mb213dHhoa21hLmNvbS8..., etc.) decoded at runtime to silently inject a pop-under ad script. Includes a hardcoded expiry timestamp check (1779836212000) to self-disable after a certain date, a classic evasion technique. (location: page.html:2147-2151)
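The host-rotation and expiry logic in this finding reads roughly as below. The hostnames here are placeholders (the real base64 blobs are truncated above); the expiry timestamp comes from the finding, and the round-robin structure is an assumption:

```python
import base64

EXPIRY_MS = 1779836212000  # hardcoded self-disable timestamp from the finding

# Placeholder fallback hosts; the real script ships its own base64 blobs.
ENCODED_HOSTS = [
    base64.b64encode(b"www.example-cdn-1.test/").decode(),
    base64.b64encode(b"www.example-cdn-2.test/").decode(),
]

def pick_host(now_ms: int, attempt: int):
    """Decode the next fallback host, or None once the script has expired."""
    if now_ms > EXPIRY_MS:
        return None  # evasion: silently self-disable after the expiry date
    return base64.b64decode(ENCODED_HOSTS[attempt % len(ENCODED_HOSTS)]).decode()
```

The expiry check is what makes this an evasion technique: a scan performed after the hardcoded date sees inert code, even though the same script was live earlier.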
obfuscated code
End of page contains: eval(atob(getComputedStyle(document.documentElement).getPropertyValue('--mov').replace(/["']/g,''))) — arbitrary code hidden inside a CSS custom property ('--mov') is extracted and eval'd at runtime. This is a classic CSS-steganography code injection vector that bypasses static HTML/JS scanners entirely. (location: page.html:2496)
hidden content
Executable payload concealed in a CSS custom property '--mov' on the root element, evaluated via eval(atob(...)) on DOMContentLoaded. The actual payload is not visible in the HTML source and would only appear in the computed stylesheet. This technique hides arbitrary JavaScript from HTML-level scanners and security tools. (location: page.html:2496)
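Because the payload only surfaces in a CSS custom property, a scanner has to treat stylesheet values as code-bearing. An illustrative extractor; the '--mov' property name comes from the finding, while the regex and the sample decode step are assumptions:

```python
import base64
import re

# Pull a base64 payload out of the --mov custom property in raw markup.
PROP_RE = re.compile(r"--mov\s*:\s*['\"]?([A-Za-z0-9+/=]+)['\"]?")

def extract_hidden_js(markup: str):
    """Return the decoded payload hidden in --mov, or None if absent."""
    m = PROP_RE.search(markup)
    if m is None:
        return None
    return base64.b64decode(m.group(1)).decode("utf-8", errors="replace")
```

Note the limitation: this only catches values present in the raw markup. A property set at runtime via the CSSOM (setProperty) would still evade static extraction, which is exactly why this vector bypasses HTML-level scanners.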
malicious redirect
Age-verification modal 'exit' button redirects users to 'https://djatoya.ml/education-sexuelle-Djatoya/' — a different TLD (.ml, Mali ccTLD) than the main site (.com). This cross-domain redirect on modal dismissal could send users to a third-party site outside the operator's control, potentially used for phishing or malware delivery. (location: page.html:2237)
social engineering
The footer explicitly warns users 'Djatoya ne vous enverra pas d'e-mail, de Skype, de Facebook, de Tweet, de Whatsapp ou ne vous appellera pas pour vous demander d'envoyer votre photo ou vidéo nue en échange d'argent.' ('Djatoya will not send you an email, Skype, Facebook, Tweet, or WhatsApp message, nor call you, to ask you to send your nude photo or video in exchange for money.') — this disclaimer acknowledges an active impersonation/sextortion threat targeting the site's users, where attackers impersonate Djatoya staff to extort nude content. (location: page.html:2175)
social engineering
The site promotes 100+ WhatsApp and Telegram group links (djatoya.com/whatsapp/, djatoya.com/telegram/) as a core feature. Aggregating users into unmoderated messaging groups is a known vector for direct social engineering, scams, and distribution of non-consensual intimate images (NCII). (location: page.html:273,276 and page-text.txt:177-180)
prompt injection
Multiple HTML comments scattered throughout the page use the pattern '<!-- EDITED BY DJATOYA STARTS HERE -->' and '<!-- EDITED BY DJATOYA END HERE -->' surrounding empty or near-empty content blocks. While these are likely benign template markers, the pattern could be abused to inject instructions targeting AI crawlers or LLM-based content analysis tools that parse HTML comments as context. (location: page.html:432,443,450,458 (and repeated throughout))
curl https://api.brin.sh/domain/djatoya.com

Common questions teams ask before deciding whether to use this domain in agent workflows.
djatoya.com currently scores 35/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should add a review step or block the domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
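Those bands translate directly into a policy gate. A minimal sketch; the action names ('allow', 'review', 'block') mirror the defaults described above:

```python
def verdict(score: int) -> str:
    """Map a 0-100 score onto the documented bands."""
    if score >= 80:
        return "safe"
    if score >= 50:
        return "caution"
    if score >= 20:
        return "suspicious"
    return "dangerous"

# Default policy: auto-allow safe, human review for caution/suspicious,
# block dangerous.
ACTIONS = {"safe": "allow", "caution": "review",
           "suspicious": "review", "dangerous": "block"}

def gate(score: int) -> str:
    return ACTIONS[verdict(score)]
```

At this domain's current score of 35, gate(35) lands in the review bucket.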
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
A safe verdict means no significant risk signals were detected in this scan; it is not a formal guarantee. Assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
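A CI or runtime gate can be a single request plus an exit code. A sketch against the endpoint shown in the curl example; the 'score' field name in the JSON response is an assumption, so confirm it against the API overview:

```python
import json
import urllib.request

API = "https://api.brin.sh/domain/{domain}"  # endpoint from the curl example

def fetch_score(domain: str) -> int:
    """Fetch the latest score; assumes a top-level 'score' field in the JSON."""
    with urllib.request.urlopen(API.format(domain=domain), timeout=10) as resp:
        return int(json.load(resp)["score"])

def ci_gate(score: int, threshold: int = 50) -> int:
    """Exit code for a pipeline step: 0 to proceed, 1 to stop for review."""
    return 0 if score >= threshold else 1
```

Wiring ci_gate(fetch_score('djatoya.com')) into the step that would otherwise install, upgrade, or grant secrets ensures the decision is based on the latest scan rather than a cached verdict.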
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.