context safety score
A score of 45/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
encoded payload
Suspicious base64-like blobs were detected in the page content.
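This class of signal can be approximated with a simple heuristic. The sketch below is an assumption about the approach, not brin's actual detector: it flags long runs of base64-alphabet characters, which rarely occur in ordinary prose.

```python
import re

def find_base64_blobs(text: str, min_len: int = 120):
    """Return runs of base64-alphabet characters at least min_len long.

    Heuristic only: long base64 runs can be legitimate (e.g. inline
    images), so matches are risk signals, not proof of an encoded payload.
    """
    pattern = re.compile(rf"[A-Za-z0-9+/]{{{min_len},}}={{0,2}}")
    return [m.group(0) for m in pattern.finditer(text)]
```

A scanner would run this over the decoded page text and weigh any matches alongside the other signals on this page.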
cloaking
The page loads content in a transparent or zero-size iframe overlay.
malicious redirect
The scanned URL is justpremium.com, but the page fully renders as GumGum (gumgum.com): all canonical URLs, schema.org data, Open Graph (og:) tags, and the Webflow site ID point to gumgum.com. The domain justpremium.com silently serves the GumGum homepage without any disclosure, indicating an undisclosed redirect or domain masking. Visitors and AI agents browsing justpremium.com receive GumGum's full site with no indication that they are on a different domain. (location: metadata.json domain=justpremium.com vs page.html title/og:title/schema pointing to gumgum.com)
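A mismatch like this can be checked mechanically. The sketch below is a hypothetical check, not brin's implementation: it compares the scanned domain against the hosts named by the page's canonical and og:url tags.

```python
import re
from urllib.parse import urlparse

def masked_hosts(scanned_domain: str, page_html: str):
    """Return canonical/og:url hosts that differ from the scanned domain."""
    urls = re.findall(
        r"<(?:link[^>]+rel=[\"']canonical[\"'][^>]+href"
        r"|meta[^>]+property=[\"']og:url[\"'][^>]+content)=[\"']([^\"']+)",
        page_html, flags=re.IGNORECASE)
    # Normalize hosts (drop a leading www.) before comparing.
    hosts = {urlparse(u).netloc.lower().removeprefix("www.") for u in urls}
    return sorted(h for h in hosts if h and h != scanned_domain)
```

On this page, masked_hosts("justpremium.com", page_html) would surface gumgum.com, while the same call run against gumgum.com itself would return an empty list.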
brand impersonation
justpremium.com is serving content that is entirely branded as GumGum, including GumGum logos, product names, address, employee data, and legal terms. JustPremium was acquired by GumGum, but no disclosure or redirect notice is shown to users — the site fully impersonates or co-opts the GumGum brand without user-visible attribution of the domain relationship, which could mislead users and AI agents into believing they are interacting with gumgum.com. (location: page.html <title>, og:title, schema.org Organization name, footer branding)
hidden content
The Intellimize A/B testing framework (customer ID 117517753) injects an 'anti-flicker' CSS class that sets visibility:hidden and opacity:0 on all elements for up to 4000ms. While common in CRO tools, this technique hides the entire page from rendering during the experiment evaluation window, which can conceal content variants from security scanners and automated agents that do not wait for JS execution. (location: page.html line 247: <style>.anti-flicker, .anti-flicker * {visibility: hidden !important; opacity: 0 !important;}</style> and Intellimize script)
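Anti-flicker rules like the one quoted can be flagged statically. The following is an assumed heuristic, not brin's code: it extracts CSS selectors whose rules hide content outright.

```python
import re

HIDING_RULE = re.compile(
    r"([^{}]+)\{[^{}]*(?:visibility\s*:\s*hidden|opacity\s*:\s*0)[^{}]*\}",
    re.IGNORECASE)

def hiding_selectors(css: str):
    """Return CSS selectors whose rules hide content via visibility/opacity.

    Broad selectors in the result (e.g. '.anti-flicker *') suggest
    whole-page hiding, the hallmark of anti-flicker snippets.
    """
    return [m.group(1).strip() for m in HIDING_RULE.finditer(css)]
```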
hidden content
Scripts are loaded from two CodeSandbox (csb.app) subdomains: https://es6cmx.csb.app/counter.js, https://fgj2bc.csb.app/wf-forms.js, and https://fgj2bc.csb.app/wf-block-domains.js. CodeSandbox is a user-controlled ephemeral hosting platform where anyone can deploy arbitrary JavaScript. Loading production scripts from csb.app is anomalous and high-risk: these URLs could be modified or taken over to inject malicious code. The scripts are not served from a verified CDN and carry no Subresource Integrity (SRI) hashes. (location: page.html line 646: https://es6cmx.csb.app/counter.js; line 714: https://fgj2bc.csb.app/wf-forms.js; line 717: https://fgj2bc.csb.app/wf-block-domains.js)
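SRI would mitigate the takeover risk: the integrity attribute pins a script tag to a specific hash, so a browser refuses to execute the script if the fetched bytes change. A minimal sketch of generating the value, in the standard sha384 form the SRI spec recommends:

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute an SRI integrity value: sha384-<base64 of the digest>."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")
```

The value goes into the tag, e.g. `<script src="..." integrity="sha384-..." crossorigin="anonymous">` (the attribute values here are illustrative).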
obfuscated code
The Intellimize tracking script is loaded dynamically by injecting a script element via JavaScript rather than through a standard <script src> tag. This pattern evades static analysis tools that only inspect markup, complicates CSP auditing, and is a common technique for loading third-party code in a less detectable manner. (location: page.html lines 247-248: var wfClientScript=document.createElement('script'); wfClientScript.src='https://cdn.intellimize.co/snippet/117517753.js')
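The quoted snippet is easy to flag statically. A rough detector for the createElement('script') pattern (an illustrative heuristic, not brin's):

```python
import re

DYNAMIC_SCRIPT = re.compile(
    r"document\.createElement\(\s*['\"]script['\"]\s*\)", re.IGNORECASE)

def has_dynamic_script_injection(source: str) -> bool:
    """True if the source dynamically creates a script element.

    Static matching misses minified or obfuscated variants; a complete
    check needs JS execution or a DOM-aware analyzer.
    """
    return bool(DYNAMIC_SCRIPT.search(source))
```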
hidden content
A hidden div with class 'show-page-url-4' contains the literal string 'insertpageurl', which appears to be an unfilled template placeholder. Similarly, hidden form fields for gclid, utm_source, utm_medium, utm_campaign, and utm_term collect tracking parameters. While this is standard ad-tech practice, the unfilled placeholder text 'insertpageurl' is anomalous and may indicate incomplete or tampered page templating. (location: page.html lines 566-572 and page-text.txt lines 203-205: insertpageurl)
curl https://api.brin.sh/domain/justpremium.com
Common questions teams ask before deciding whether to use this domain in agent workflows.
justpremium.com currently scores 45/100 with a suspicious verdict and medium confidence. The score exists to protect agents from high-risk context before they act on it. Treat it as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require human review or block the domain.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
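Those bands translate directly into a policy gate. A minimal sketch of the mapping described above (function names are illustrative):

```python
def verdict(score: int) -> str:
    """Map a 0-100 score to the documented verdict bands."""
    if score >= 80:
        return "safe"
    if score >= 50:
        return "caution"
    if score >= 20:
        return "suspicious"
    return "dangerous"

def action(score: int) -> str:
    """Apply the common team policy: auto-allow safe, require review
    for the middle bands, block dangerous."""
    return {"safe": "allow", "caution": "review",
            "suspicious": "review", "dangerous": "block"}[verdict(score)]
```

justpremium.com's current 45/100 lands in the suspicious band, so action(45) returns "review".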
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
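A CI gate can be a few lines. The sketch below uses the GET endpoint shown earlier on this page; the response field name ('score') and shape are assumptions, so verify them against the API docs before relying on this.

```python
import json
from urllib.request import urlopen

THRESHOLD = 80  # auto-allow only the "safe" band; tune per policy

def gate(payload: dict, threshold: int = THRESHOLD) -> bool:
    """Return True when the scan payload clears the threshold.

    Assumes a top-level numeric 'score' field; missing data fails closed.
    """
    return payload.get("score", 0) >= threshold

def check_domain(domain: str, threshold: int = THRESHOLD) -> bool:
    """Fetch the latest scan (network call) and gate on it."""
    with urlopen(f"https://api.brin.sh/domain/{domain}") as resp:
        return gate(json.load(resp), threshold)
```

In CI, exit nonzero when check_domain(...) returns False so the pipeline blocks before installs, upgrades, or secret grants.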
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.