context safety score
A score of 31/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
credential exposure
Found 14 secret pattern matches in repository files
supply chain
Found 8 install-script patterns in documentation (likely install instructions, not executable)
supply chain
Found 8 remote script patterns in documentation (likely install instructions, not executable)
supply chain
Found 5 unexpected binary files in source repository
doc injection
AGENTS.md falsely claims authorship by 'Vercel Engineering' (lines 3-4) and states it is for agents/LLMs working 'at Vercel' (lines 7-11). The repository is owned by supercent-io, an unverified organization with 13 stars, not by Vercel. This false attribution gives the agent configuration file unearned authority when consumed by AI agents, who would treat the instructions as coming from Vercel's official engineering team. The technical content itself is legitimate React best practices with no malicious instructions. (location: agent-configs/.agent-skills__react-best-practices__AGENTS.md:3-11)
typosquat
Skill named 'looker-studio-bigquery' references two Google products (Looker Studio, BigQuery) but is published by unverified org 'supercent-io' from a generic 'skills-template' repo. Not affiliated with Google. 13 stars, no license, no registry listing, empty SKILL.md — no evidence of legitimate functionality. Name appears designed to capture installs from users seeking official Google integrations. (location: metadata.json (skill_name, full_name, org_verified))
scope violation
skill_description field contains 'width=device-width, initial-scale=1' — an HTML viewport meta tag, not a skill description. This suggests the metadata was scraped from a webpage rather than authored legitimately, or the field is being used to inject HTML/content into contexts that render it. Combined with an empty SKILL.md, the skill provides no documented functionality whatsoever. (location: metadata.json (skill_description))
supply chain
Install count of 7.69M is wildly inconsistent with 13 stars, 2 forks, 3 contributors, no registry listing, no license, and an empty SKILL.md. This pattern is consistent with inflated install metrics used to manufacture trust signals, or a metadata integrity issue. Either way, agents or users relying on install count as a trust signal would be misled. (location: metadata.json (install_count vs stars/forks))
curl https://api.brin.sh/skill/supercent-io%2Fskills-template%2Flooker-studio-bigquery

Common questions teams ask before deciding whether to use this skill in agent workflows.
supercent-io/skills-template/looker-studio-bigquery currently scores 31/100 with a suspicious verdict and low confidence. The goal is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require human review or block the skill outright.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
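A minimal shell sketch of that policy mapping (the function name and action strings are illustrative, not part of the brin API):

```sh
# Map a brin score to a policy action using the bands above.
policy_for_score() {
  score="$1"
  if [ "$score" -ge 80 ]; then
    echo "allow"    # safe: 80-100 -> auto-allow
  elif [ "$score" -ge 20 ]; then
    echo "review"   # caution (50-79) and suspicious (20-49) -> human review
  else
    echo "block"    # dangerous: 0-19 -> block
  fi
}

policy_for_score 31   # prints "review": 31 falls in the suspicious band
```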
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
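As a hedged sketch of such a gate, the snippet below reuses the GET endpoint shown earlier and assumes the response JSON carries the overall score in a top-level score field; verify the actual schema against the API overview before relying on it.

```sh
# CI-gate sketch: fail the step when the brin score falls below a threshold.
# Assumes a top-level "score" field in the JSON response (an assumption;
# check the brin API docs for the real field name).
THRESHOLD=50
SCORE=$(curl -fsS "https://api.brin.sh/skill/supercent-io%2Fskills-template%2Flooker-studio-bigquery" | jq -r '.score')

# Fail closed if the score cannot be read at all.
if [ -z "$SCORE" ] || [ "$SCORE" = "null" ]; then
  echo "could not read score from response; failing closed." >&2
  exit 1
fi

if [ "$SCORE" -lt "$THRESHOLD" ]; then
  echo "brin score $SCORE is below threshold $THRESHOLD; blocking this step." >&2
  exit 1
fi
echo "brin score $SCORE passed the gate."
```

Failing closed when the score cannot be read keeps the gate conservative, in line with the block-by-default posture suggested for dangerous scores.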
Learn more in the threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
Integrate brin in minutes: one GET request is all it takes. Query the API, browse the registry, or download the full dataset.