context safety score
A score of 39/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
malicious redirect
script/meta redirect patterns detected in page source
malicious redirect
A third-party tracking script is injected in the very first line of the <head> via an obfuscated self-executing function: it dynamically creates a <script> element, assigns zone ID '10154333', and loads 'https://llvpn.com/tag.min.js'. The domain 'llvpn.com' is suspicious (evocative of VPN/proxy services) and the injection pattern — using array filter/pop to append to the document before any other content — is a known technique used by ad-fraud and malvertising networks to evade detection. (location: page.html:5)
obfuscated code
The tag-loading script at page.html line 5 uses a deliberately obfuscated self-invoking pattern: `[document.documentElement, document.body].filter(Boolean).pop().appendChild(document.createElement('script'))` passed as an argument to an IIFE to avoid straightforward static analysis. This structural obfuscation is used to hide the true intent (loading an external script from llvpn.com) from scanners and reviewers. (location: page.html:5)
malicious redirect
A prominent call-to-action banner button uses `window.location.href='https://videystream.asia'` to redirect users to an entirely different domain ('videystream.asia'). The banner presents this as a 'new website' upgrade, a classic social-engineering lure to drive traffic off the current domain to an unverified third-party site. (location: page.html:638-640)
social engineering
The page presents itself as a legitimate video-sharing platform ('Videy.Design - Platform Free') with professional UI, pagination, view counts, and filter controls to build trust. The content is an adult/explicit video repository (titles in Indonesian referencing sexual acts: 'Sepong in om', 'Asli hot mam', 'T4nt3 bahenol', 'Ad3k gemas minta jth', etc.) that uses leetspeak substitutions (3→e, 4→a) to obfuscate explicit keywords from automated filters while remaining human-readable — a deliberate evasion technique. (location: page.html:664-1531)
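The leetspeak substitutions described above (3→e, 4→a) defeat naive keyword filters while staying readable to humans. A minimal sketch of the normalization a filter could apply before matching; the substitution map here is an illustrative assumption, not something taken from the scanned page:

```python
# Map common digit-for-letter substitutions back to letters so a keyword
# filter sees the plain word. The map below is illustrative, not exhaustive.
LEET_MAP = str.maketrans({"3": "e", "4": "a", "0": "o", "1": "i", "5": "s", "7": "t"})

def normalize_leet(text: str) -> str:
    """Lowercase the text and undo digit substitutions before filtering."""
    return text.lower().translate(LEET_MAP)

print(normalize_leet("T4nt3 bahenol"))  # -> "tante bahenol"
```

Running titles through a pass like this before keyword matching is a common countermeasure to exactly this evasion technique.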
hidden content
HTML elements (the floatingMessage div, popupForm div, and their associated <style> and <script> blocks) are placed *outside* the closing </body> and </html> tags (lines 1630–1820). This is an injection pattern where content is appended after the formal document close — often used to hide secondary payloads from parsers that stop at </html>, while browsers still render and execute it. (location: page.html:1630-1820)
credential harvesting
A popup form collecting user name, email address, and content URL is rendered outside the </html> close tag and submitted via POST to 'https://formspree.io/f/manrjaak' — a third-party form endpoint not controlled by the site owner. Users who submit 'reports' surrender their name and email to an external service. The form is framed as a content-reporting tool to appear legitimate, but the data destination is a third-party endpoint with no disclosed privacy policy for that collection. (location: page.html:1636-1642)
social engineering
The page references '18 U.S.C 2257' compliance and includes links to Terms, DMCA Policy, and Privacy Policy pages, creating a false veneer of legal legitimacy for what appears to be an unlicensed adult content distribution site. These compliance signals are commonly used by illicit adult sites to appear trustworthy and deter takedown efforts. (location: page.html:1803-1808)
curl https://api.brin.sh/domain/videy.design

Common questions teams ask before deciding whether to use this domain in agent workflows.
videy.design currently scores 39/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat this as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should add review or block this domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
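The band edges above translate directly into a policy function. A sketch, with band boundaries taken from the text; the action names (auto-allow, human-review, block) are assumptions about how a team might wire this into a workflow:

```python
# Map a 0-100 context safety score to a verdict band and a suggested action.
# Band edges follow the documented thresholds: 80-100 safe, 50-79 caution,
# 20-49 suspicious, 0-19 dangerous.
def verdict(score: int) -> tuple[str, str]:
    if score >= 80:
        return "safe", "auto-allow"
    if score >= 50:
        return "caution", "human-review"
    if score >= 20:
        return "suspicious", "human-review"
    return "dangerous", "block"

print(verdict(39))  # -> ('suspicious', 'human-review')
```

The 39/100 score reported for this domain lands in the suspicious band, which under this policy would route the context to human review rather than auto-allow.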
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
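A runtime gate built on the latest scan can be as small as the sketch below. The response shape ({"score": ..., "verdict": ...}) is an assumption about the API payload, not documented here; adapt the field names to the real response. In CI you would fetch the JSON with a GET to https://api.brin.sh/domain/<domain> and pass the parsed body in:

```python
# Hedged sketch of a pre-action gate: fail closed unless the latest
# assessment clears the threshold. The payload field names are assumptions.
def gate(assessment: dict, min_score: int = 50) -> bool:
    """Return True if the high-impact action may proceed."""
    score = assessment.get("score", 0)  # missing score -> fail closed
    return score >= min_score

# Example with the values reported for this domain:
print(gate({"score": 39, "verdict": "suspicious"}))  # -> False
```

Failing closed on a missing or malformed score keeps an outage of the scoring service from silently allowing a risky install, upgrade, or secret grant.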
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.