context safety score
A score of 46/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
credential exposure
Found 84 secret pattern matches in repository files
supply chain
Found 12 install-script patterns in documentation (likely install instructions, not executable)
supply chain
Found 12 remote script patterns in documentation (likely install instructions, not executable)
supply chain
Found 4 unexpected binary files in source repository
shadow chaining
SKILL.md references 1 external package/skill installation
description injection
The 'Follow-Up Mechanism' section (SKILL.md lines 131-145) instructs the agent to enter an autonomous execution loop: 'STOP - Do not immediately respond to user', 'REPEAT - Continue until information is complete'. This hijacks the normal agent-user interaction flow, causing the agent to repeatedly execute commands (python scripts/run.py ask_question.py) without user approval for each iteration. Since none of the referenced scripts actually exist in the repo, the loop would repeatedly attempt to execute nonexistent code. (location: SKILL.md:131-145)
scope violation
The repository contains zero executable code: none of the 5 Python scripts referenced in SKILL.md (run.py, ask_question.py, auth_manager.py, notebook_manager.py, cleanup_manager.py) exist, nor do requirements.txt, data/, references/, or .gitignore. The skill is purely an agent instruction document masquerading as a functional tool. The run.py wrapper claims to auto-create a venv and install dependencies, but since no code exists, runtime behavior is entirely undefined and could depend on code fetched from elsewhere. (location: SKILL.md, repository root)
typosquat
The skill is named 'notebooklm', the exact name of Google's NotebookLM product, yet is published by user 'sickn33' (a 510-day-old personal account with no Google affiliation). The repo 'antigravity-awesome-skills' is not listed on the skills.sh registry despite metadata claiming 7.69M installs and 17.4K stars. The skill_description field in metadata.json contains 'width=device-width, initial-scale=1' (an HTML viewport meta tag value, not an actual description), suggesting the metadata was scraped or fabricated. (location: metadata.json, SKILL.md:1-7)
curl https://api.brin.sh/skill/sickn33%2Fantigravity-awesome-skills%2Fnotebooklm

Common questions teams ask before deciding whether to use this skill in agent workflows.
sickn33/antigravity-awesome-skills/notebooklm currently scores 46/100 with a suspicious verdict and medium confidence. The goal is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require review or block this skill.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
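As a sketch of that policy in code (the band boundaries come from the thresholds above; the function and action names are illustrative, not part of the brin API):

def verdict(score: int) -> str:
    """Map a context safety score (0-100) onto the documented bands."""
    if score >= 80:
        return "safe"
    if score >= 50:
        return "caution"
    if score >= 20:
        return "suspicious"
    return "dangerous"

# Common team policy from the text above: auto-allow safe, require
# human review for caution/suspicious, block dangerous.
ACTION = {"safe": "allow", "caution": "review", "suspicious": "review", "dangerous": "block"}

print(verdict(46), ACTION[verdict(46)])  # -> suspicious review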
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
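To illustrate why sub-scores matter, here is a minimal sketch that assumes an assessment object exposing per-dimension scores; the field names and values are hypothetical, not the documented brin response schema:

# Hypothetical assessment shape; keys and values are illustrative only.
assessment = {
    "score": 46,
    "verdict": "suspicious",
    "dimensions": {"identity": 35, "behavior": 60, "content": 20, "graph": 70},
}

# Surface the dimensions dragging the overall score down, so a reviewer
# sees why the entity failed rather than just the headline number.
weak = {name: s for name, s in assessment["dimensions"].items() if s < 50}
print(weak)  # {'identity': 35, 'content': 20}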
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
A safe verdict is not a formal guarantee; it means no significant risk signals were detected in this scan. Assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
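A minimal CI gate might look like the sketch below. It calls the GET endpoint from the curl example above; the top-level "score" field and the blocking threshold are assumptions to adapt to your own policy and the actual response schema:

import json
import sys
from urllib.request import urlopen

# Same GET endpoint as the curl example above.
URL = "https://api.brin.sh/skill/sickn33%2Fantigravity-awesome-skills%2Fnotebooklm"

with urlopen(URL, timeout=10) as resp:
    assessment = json.load(resp)

score = assessment["score"]  # assumed field name
if score < 50:  # block the suspicious/dangerous bands; tune to policy
    print(f"brin gate: blocked (score {score}/100)")
    sys.exit(1)
print(f"brin gate: allowed (score {score}/100)")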
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
Integrate brin in minutes: one GET request is all it takes. Query the API, browse the registry, or download the full dataset.