context safety score
A score of 33/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
encoded payload
Suspicious base64-like blobs detected in page content.
js obfuscation
JavaScript uses the Function constructor for runtime code generation.
js obfuscation
Very long base64 or hex string assigned in JavaScript, likely an encoded payload.
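A deterministic check for this signal can be sketched as below. This is an illustrative heuristic only, an assumption about how such a pattern check might work, not brin's actual detection rule: it flags quoted string literals in JavaScript source whose body is a long unbroken run of base64-alphabet characters.

```python
import re

def find_base64_blobs(js_source: str, min_len: int = 200) -> list[str]:
    """Return quoted string literals that look like long base64 blobs.

    Heuristic sketch (an assumption, not brin's actual rule): a run of at
    least min_len characters from the base64 alphabet, with optional '='
    padding, sitting inside single or double quotes.
    """
    pattern = re.compile(
        r'["\']([A-Za-z0-9+/]{' + str(min_len) + r',}={0,2})["\']'
    )
    return [m.group(1) for m in pattern.finditer(js_source)]
```

Tuning min_len trades false positives (legitimate inline images, source maps) against missed shorter payloads; real scanners would combine this with entropy checks.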
brand impersonation
The page simultaneously impersonates two distinct retail brands: 'Datart.cz' (Czech electronics retailer) and 'NAY.sk' (Slovak electronics retailer). Both brand logos are embedded as hidden base64-encoded SVG images and brand-specific content (contact info, customer service numbers, CAPTCHA prompts in Czech/Slovak) is revealed conditionally based on the current URL. The same page serves as a fake challenge page for whichever brand's domain the visitor arrives from. (location: page.html:30-108, div.datart and div.nay elements with display:none)
phishing
The page is a bot-challenge/CAPTCHA interstitial injected by a WAF (F5 TSPD/BIG-IP) but the surrounding brand content (Datart.cz customer service center branding, NAY.sk contact branding, phone numbers, support IDs) is crafted to appear as a legitimate branded support page. This creates a phishing surface where users interacting with the CAPTCHA believe they are on an official retailer site. The domain nay.sk hosts content that also impersonates datart.cz, suggesting domain misuse. (location: page.html:84-108, page-text.txt:54-75)
hidden content
Multiple content blocks are hidden via inline style 'display: none' and only revealed by client-side JavaScript that inspects window.location.href. This includes brand logos (div.datart, div.nay), CAPTCHA prompts in two languages, and customer service contact blocks. Hidden content is conditionally shown to match the visiting domain, enabling the same page to serve multiple brand impersonation scenarios without detection by static analysis. (location: page.html:30-108 (div elements with style='display: none;'), page.html:113-131)
obfuscated code
Extensive JavaScript obfuscation is present in the inline script block. Techniques include: character-code array decoding via String.fromCharCode (functions J() and L()), a hex-encoded 'failureConfig' string decoding to 'Roops....something went wrong.... your support id is: %DOSL7.challenge.support_id%', numeric XOR-style checks, obfuscated property access, self-modifying window timer/setInterval chains, and a base64-encoded configuration payload decoded at runtime. The obfuscation obscures the true behavior of the anti-bot fingerprinting and challenge logic. (location: page.html:9-25 (inline script block))
prompt injection
The hex-encoded string in window['failureConfig'] decodes to: 'Roops....something went wrong.... your support id is: %DOSL7.challenge.support_id%'. The template variable '%DOSL7.challenge.support_id%' is not substituted and is exposed as literal text. If an AI agent scrapes or processes this page, this pattern could be interpreted as an instruction or variable placeholder, constituting a low-grade prompt injection vector embedded within obfuscated code. (location: page.html:13, window['failureConfig'] hex string)
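The decoding step involved is ordinary hex-to-ASCII conversion. A minimal illustration, using only the first word of the decoded message since the page's full hex string is not reproduced here:

```python
# The inline script decodes a hex-encoded ASCII string back into text at
# runtime. "526f6f7073" is the hex encoding of "Roops", the first word of
# the decoded failureConfig message.
encoded = "526f6f7073"
decoded = bytes.fromhex(encoded).decode("ascii")
print(decoded)  # "Roops"
```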
social engineering
The page presents a CAPTCHA/bot-challenge (image-based code entry with a 'submit' button) framed within trusted retail brand identity (Datart.cz or NAY.sk depending on URL). The support ID '15300801842679799711' is displayed prominently alongside customer service phone numbers, creating false legitimacy and urgency. Users are socially engineered into completing an interaction they believe is an official brand security check. (location: page.html:64-108, page-text.txt:36-75)
curl https://api.brin.sh/domain/nay.sk

Common questions teams ask before deciding whether to use this domain in agent workflows.
nay.sk currently scores 33/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require review or block the domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
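The policy mapping above can be sketched as a small function; the band edges come directly from the stated thresholds:

```python
def verdict(score: int) -> str:
    """Map a 0-100 context safety score onto the documented policy bands."""
    if score >= 80:
        return "safe"        # typically auto-allowed
    if score >= 50:
        return "caution"     # typically requires human review
    if score >= 20:
        return "suspicious"  # typically requires human review
    return "dangerous"       # typically blocked
```

With nay.sk's current score of 33, `verdict(33)` falls in the suspicious band.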
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
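Inspecting sub-scores to explain a failure might look like the sketch below. The dimension names come from the docs above, but the field structure and per-dimension 0-100 scale are assumptions, since no response schema is published here.

```python
from dataclasses import dataclass

@dataclass
class SubScores:
    # Dimension names from the docs; a 0-100 scale per dimension is assumed.
    identity: int
    behavior: int
    content: int
    graph: int

def weakest_dimension(s: SubScores) -> str:
    """Name the lowest-scoring dimension: the likeliest reason an entity failed."""
    dims = {
        "identity": s.identity,
        "behavior": s.behavior,
        "content": s.content,
        "graph": s.graph,
    }
    return min(dims, key=dims.get)
```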
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
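A CI or runtime gate on the API could be sketched as follows. The endpoint is the one shown above; the `score` field name is an assumption about the response payload, not a documented schema.

```python
import json
import urllib.request

API = "https://api.brin.sh/domain/{}"  # endpoint shown above

def gate(assessment: dict, block_below: int = 20, review_below: int = 80) -> str:
    """Turn a parsed assessment into a CI decision: allow, review, or block."""
    score = assessment.get("score", 0)  # field name is an assumption
    if score < block_below:
        return "block"
    if score < review_below:
        return "review"
    return "allow"

def check(domain: str) -> str:
    """Fetch the latest scan for a domain and apply the gate."""
    with urllib.request.urlopen(API.format(domain)) as resp:
        return gate(json.load(resp))
```

In a pipeline, a "block" result would fail the job before an install, upgrade, or secret grant proceeds.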
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
Integrate brin in minutes: one GET request is all it takes. Query the API, browse the registry, or download the full dataset.