context safety score
A score of 35/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
encoded payload
suspicious base64-like blobs detected in page content
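A detector for this class of signal might work roughly as follows — a minimal sketch, not brin's actual implementation: scan for long runs of base64-alphabet characters and confirm they decode cleanly.

```python
import base64
import re

# Illustrative heuristic (assumed, not brin's detector): long runs of
# base64 alphabet characters that also decode cleanly are a common
# signature of encoded payloads embedded in page content.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def find_base64_blobs(text: str) -> list[str]:
    blobs = []
    for match in BASE64_RUN.finditer(text):
        candidate = match.group(0)
        # Valid base64 has a length that is a multiple of 4.
        if len(candidate) % 4 == 0:
            try:
                base64.b64decode(candidate, validate=True)
                blobs.append(candidate)
            except ValueError:
                pass
    return blobs
```

The length threshold (40 characters here) is a tunable guess; too low and ordinary tokens trigger false positives.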
phishing
1 deceptive link whose visible host does not match the destination host
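The check behind this finding can be sketched as comparing the host a link displays with the host its href actually resolves to — an assumed reconstruction of the logic, not brin's code:

```python
from urllib.parse import urlparse

# Hypothetical check: a link is deceptive when its visible text looks
# like a host that differs from the host the href points to.
def is_deceptive_link(visible_text: str, href: str) -> bool:
    visible = visible_text.strip().lower()
    # Treat the visible text as either a full URL or a bare hostname.
    shown_host = urlparse(visible if "//" in visible else "//" + visible).hostname
    real_host = urlparse(href).hostname
    return bool(shown_host and real_host and shown_host != real_host)
```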
obfuscated code
Large inline script uses URI-encoded obfuscated string with character-rotation cipher and split index array to conceal its payload. The decoded content constructs URLs and logic at runtime, preventing static analysis of its true behavior. (location: page.html:288)
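A generic decoder for the cipher family described above looks like the following. This is an illustration of the technique, not the page's actual payload: the rotation amount, the printable-ASCII range, and the encoding order are all assumptions.

```python
from urllib.parse import unquote

# Sketch of the deobfuscation family described in the finding (shift
# value and character range are assumptions): the payload is URI-encoded,
# then each character is rotated by a fixed offset.
def rot_decode(encoded: str, shift: int) -> str:
    uri_decoded = unquote(encoded)
    out = []
    for ch in uri_decoded:
        code = ord(ch)
        # Rotate printable ASCII (codes 33..126) back by `shift`.
        if 33 <= code <= 126:
            code = 33 + (code - 33 - shift) % 94
        out.append(chr(code))
    return "".join(out)
```

Because the URL-building logic only exists after this runtime decode, a static scanner sees nothing but an opaque string — which is exactly why the finding flags it.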
malicious redirect
External script loaded from //badlandlispyippee.com/on.js — a low-reputation, suspicious domain name consistent with ad-fraud or drive-by redirect networks. Loaded with data-cfasync='false' to bypass Cloudflare's async safety filter, with onerror/onload callbacks feeding into a local error handler (hsfgq), suggesting it is part of the obfuscated redirect/ad-injection framework. (location: page.html:289)
hidden content
Thumbnail images use a 1x1 transparent GIF placeholder (data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7) as the src attribute while the real image URLs are stored in data-original and data-webp attributes. This lazy-load pattern hides the actual content source from non-JavaScript crawlers and some security scanners. (location: page.html:320 (and repeated across all video thumbnails))
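A scanner that follows only src URLs misses this pattern entirely. A sketch of how it might be flagged — the data-original/data-webp attribute names come from this finding, but the regex-based approach is an assumption for illustration:

```python
import re

# Sketch: flag <img> tags whose src is an inline data: URI while the
# real source hides in a data-* attribute, evading src-only scanners.
IMG_TAG = re.compile(r"<img\b[^>]*>", re.IGNORECASE)

def hidden_image_sources(html: str) -> list[str]:
    hidden = []
    for tag in IMG_TAG.findall(html):
        src = re.search(r'src="([^"]*)"', tag)
        real = re.search(r'data-(?:original|webp)="([^"]*)"', tag)
        if src and real and src.group(1).startswith("data:"):
            hidden.append(real.group(1))
    return hidden
```

A production scanner would use a real HTML parser rather than regexes, but the signal is the same: a placeholder src paired with a deferred real URL.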
social engineering
Site hosts a 'DeepFakesPorn' partner link (deepfakesporn.com) and a 'FakeKpop' partner link (fakekpop.com) in the navigation Sites dropdown, explicitly promoting deepfake/non-consensual synthetic intimate imagery platforms. This constitutes social engineering by normalizing and directing users to non-consensual content. (location: page.html:264-265)
hidden content
Cloudflare challenge-platform script is injected via a hidden 1x1 invisible iframe (position:absolute, visibility:hidden) appended to the document body at runtime. While common for Cloudflare Bot Management, the iframe-based injection pattern can also be used to load hidden third-party content or tracking pixels. (location: page.html:1718)
hidden content
Ad provider scripts from a.magsrv.com and a.pemsrv.com are embedded in the footer alongside inline AdProvider push calls. These providers belong to aggressive adult ad networks that may serve popunders, malvertising, or forced redirects to users. (location: page.html:1521-1527, 1540-1542)
obfuscated code
A second ad/tracker script is loaded from //cdn.tsyndicate.com with spot IDs and session parameters. TSyndicate is a known adult traffic monetization network that injects popunder redirects and interstitial ads, often obfuscating destination URLs. (location: page.html:284, 291)
curl https://api.brin.sh/domain/kissjav.com
Common questions teams ask before deciding whether to use this domain in agent workflows.
kissjav.com currently scores 35/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat this as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should add review or block this domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
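The band-to-action mapping above can be sketched directly; the verdict names come from the text, while the action names (allow/review/block) mirror the suggested team workflow rather than any brin API field:

```python
def verdict(score: int) -> str:
    """Map a 0-100 context safety score to the documented verdict bands."""
    if score >= 80:
        return "safe"
    if score >= 50:
        return "caution"
    if score >= 20:
        return "suspicious"
    return "dangerous"

# Suggested workflow from the text: auto-allow safe, require human
# review for caution/suspicious, block dangerous.
ACTIONS = {
    "safe": "allow",
    "caution": "review",
    "suspicious": "review",
    "dangerous": "block",
}
```

Under this policy, this domain's score of 35 lands in the suspicious band and routes to human review.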
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
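Reading sub-scores programmatically might look like the following. The response shape here is a hypothetical illustration — the field names are assumptions, so consult the API overview for the real schema:

```python
# Hypothetical assessment shape (field names assumed for illustration):
# sub-scores explain which dimension dragged the overall score down.
assessment = {
    "score": 35,
    "dimensions": {"identity": 40, "behavior": 25, "content": 30, "graph": 45},
}

def weakest_dimension(assessment: dict) -> str:
    """Return the lowest-scoring dimension, i.e. the main reason for failure."""
    dims = assessment["dimensions"]
    return min(dims, key=dims.get)
```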
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
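A minimal CI/runtime gate along these lines might look like this. The GET endpoint matches the curl example shown earlier; the "score" response field is an assumption, so verify it against the API overview:

```python
import json
import urllib.request

# Fetch the latest score for a domain from the documented GET endpoint.
# (The "score" field name in the JSON response is an assumption.)
def fetch_score(domain: str) -> int:
    url = f"https://api.brin.sh/domain/{domain}"
    with urllib.request.urlopen(url) as resp:
        return int(json.load(resp)["score"])

def gate(score: int, threshold: int = 80) -> bool:
    """Return True when the pipeline may proceed without human review."""
    return score >= threshold
```

Splitting fetch from decision keeps the threshold testable and lets teams tighten it (e.g. to 90) for high-impact actions like granting secrets.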
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.