context safety score
A score of 40/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
encoded payload
Suspicious base64-like blobs were detected in the page content.
obfuscated code
Multiple instances of heavily obfuscated JavaScript use URI-encoded strings, character-code shifting (charCodeAt/fromCharCode with modular arithmetic), and array slicing to conceal payload assembly and dynamic script loading. The same obfuscated block appears at least twice in page.html (lines 1050 and 1124) and again in page-text.txt (lines 843 and 917). The technique decodes a large encoded string at runtime, splits it into segments, then constructs URLs and executable code that never appear in the plain source. (location: page.html:1050, page.html:1124, page-text.txt:843)
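A minimal, benign sketch of the character-code-shift idea this finding describes. The shift value, segment size, and decoded string here are illustrative assumptions; the actual payload's parameters are not recoverable from static analysis.

```python
def decode(blob: str, shift: int = 3) -> str:
    # Reverse a simple character-code shift: each code point was
    # shifted up by `shift` (mod 256) when the payload was encoded.
    return "".join(chr((ord(c) - shift) % 256) for c in blob)

# Simulate the encoding step for a harmless example URL.
original = "https://example.com/x.js"
encoded = "".join(chr((ord(c) + 3) % 256) for c in original)

# The obfuscated script decodes at runtime, splits the result into
# segments, then reassembles a URL that never appears in plain source.
segments = [decode(encoded)[i:i + 8] for i in range(0, len(encoded), 8)]
url = "".join(segments)
```

Because the URL only exists after runtime reassembly, simple string-matching scanners never see it, which is why this pattern is flagged on structure rather than content.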
malicious redirect
Obfuscated script dynamically loads an external script from //detoxifylagoonsnugness.com/bn.js — a randomly-named, non-reputable domain with no relation to the site. This pattern is consistent with malvertising/redirect chains used to push unwanted destinations or drive-by downloads onto visitors. (location: page.html:1051)
malicious redirect
Obfuscated script dynamically loads an external script from //renamereptiliantrance.com/on.js — another randomly-named, non-reputable domain with no relation to the site. Same malvertising/redirect chain pattern as above. (location: page.html:1125)
malicious redirect
Script dynamically injects a script from //presentdust.com with a long obfuscated path (/cnDA9.6ybR2W5SlyS/W-Ql9ZNvT/M/wEMfzrcx2INNSl0-1BMVzgA/z/NkzcYf2N), loaded with referrerPolicy='no-referrer-when-downgrade'. This is a known pattern for ad-traffic monetization networks that can redirect users or serve malicious ads. The domain name and obfuscated path are characteristic of traffic distribution systems (TDS). (location: page.html:1129-1138, page-text.txt:922-931)
hidden content
A 1x1 pixel invisible iframe is dynamically injected into the document body (position:absolute, top:0, left:0, visibility:hidden, height=1, width=1) via inline JavaScript. This hidden iframe is used to execute Cloudflare challenge-platform scripts, but the same technique is also a well-known vector for hidden iframe injection attacks. The iframe loads /cdn-cgi/challenge-platform/scripts/jsd/main.js with an embedded script that sets window.__CF$cv$params containing base64-encoded data. (location: page.html:1160)
hidden content
The page contains multiple data-cl-spot div elements (IDs: 2093352, 2091090, 2091088, 2091094) which are empty placeholder divs that are populated by external ad network scripts at runtime. The actual content loaded into these spots is not visible in the HTML and cannot be audited statically. (location: page.html:256, page.html:610, page.html:612, page.html:630, page.html:841)
social engineering
The site presents a login modal with username/password fields, a password reset form, and 'Sign up'/'Login' links. Registration is explicitly disabled ('Registration is disabled.' message), yet the login form remains active. This could be used to harvest credentials from users who believe they are logging into a legitimate account. The form posts to https://sexbebin.com/ with action=wpst_login_member. (location: page.html:884-903)
curl https://api.brin.sh/domain/sexbebin.com

Common questions teams ask before deciding whether to use this domain in agent workflows.
sexbebin.com currently scores 40/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require human review or block the domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
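The banded policy above can be sketched as a small gating function. The band boundaries come straight from the documented thresholds; the action names ("allow", "review", "block") are illustrative labels, not part of the API.

```python
def policy(score: int) -> str:
    # Map a brin context-safety score (0-100) to an action using the
    # documented bands: 80-100 safe, 50-79 caution, 20-49 suspicious,
    # 0-19 dangerous.
    if score >= 80:
        return "allow"   # safe: auto-allow
    if score >= 50:
        return "review"  # caution: require human review
    if score >= 20:
        return "review"  # suspicious: require human review
    return "block"       # dangerous: block outright
```

Under this policy, the 40/100 score for this domain falls in the suspicious band and would route to human review rather than being auto-allowed.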
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
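A CI or runtime gate along these lines can enforce the re-check. The endpoint path mirrors the curl example above; the JSON field name `score` is an assumption about the response schema, so verify it against the API overview before relying on it.

```python
import json
import urllib.request


def gate(assessment: dict, threshold: int = 50) -> bool:
    # Decide from the latest scan result. The `score` field name is an
    # assumed schema detail; a missing score fails closed (blocked).
    return assessment.get("score", 0) >= threshold


def fetch_assessment(domain: str) -> dict:
    # One GET request against the endpoint shown in the curl example.
    url = f"https://api.brin.sh/domain/{domain}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

Calling `gate(fetch_assessment("example.com"))` immediately before an install, upgrade, or secret grant ensures the decision reflects the latest scan rather than a cached verdict.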
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes: one GET request is all it takes. query the api, browse the registry, or download the full dataset.