Is ailabs-393/ai-labs-claude-skills/research-paper-writer safe?

verdict: suspicious (low confidence)
context safety score: 35/100

A score of 35/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.

identity: 55
behavior: 72
content: 0
graph: 54

6 threat patterns detected

low · supply chain
Found 1 install-script pattern in documentation (likely install instructions, not executable)

medium · supply chain
Found 3 unexpected binary files in source repository

medium · supply chain
README documents that the npm postinstall hook automatically copies skill files into the host project directory (line 39: 'postinstall will attempt to copy skills into the host project', referencing install-skills.mjs). Automatic file writing into consuming projects via postinstall is a supply chain concern, particularly from a 197-day-old personal account without org verification. The actual package.json and install-skills.mjs were not available for inspection to verify the scope of files being written. (location: README.md:39,49)

high · typosquat
Repository 'ailabs-393/ai-labs-claude-skills' uses 'claude-skills' and 'ai-labs' in its naming to impersonate an official Anthropic/Claude skill. Owner 'ailabs-393' is a 197-day-old personal account with a numeric suffix suggesting a squatted identity. Not org-verified. (location: metadata.json (full_name, owner))

high · scope violation
SKILL.md is completely empty (0 bytes); the skill provides zero documentation of what it actually does. A skill named 'research-paper-writer' with no description, no tool definitions, and no parameter documentation means agents have no way to know what capabilities they are granting. The skill_description field contains 'width=device-width, initial-scale=1', which is an HTML viewport meta tag, not a skill description; this indicates scraped or garbage metadata. (location: SKILL.md, metadata.json (skill_description))

high · supply chain
Install count of 7.69M is highly inconsistent with 315 stars, 2 contributors, a 197-day-old account, absence from the registry, and lack of org verification. This pattern strongly suggests artificially inflated install counts to manufacture trust signals, a common supply chain attack vector to encourage adoption of malicious packages. (location: metadata.json (install_count, stars, owner_account_age_days, listed_on_registry))

API

curl https://api.brin.sh/skill/ailabs-393%2Fai-labs-claude-skills%2Fresearch-paper-writer
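
The response schema is not documented on this page; as a minimal sketch, assuming the API returns JSON with top-level score and verdict fields matching the values shown above, you can pull out just those fields with jq:

# Sketch: fetch the assessment and extract the fields used for gating.
# Assumption: the response is JSON with top-level "score" and "verdict";
# adjust the jq paths to the actual schema.
curl -s "https://api.brin.sh/skill/ailabs-393%2Fai-labs-claude-skills%2Fresearch-paper-writer" \
  | jq '{score, verdict}'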

FAQ: how to interpret this assessment

Common questions teams ask before deciding whether to use this skill in agent workflows.

Is ailabs-393/ai-labs-claude-skills/research-paper-writer safe for AI agents to use?

ailabs-393/ai-labs-claude-skills/research-paper-writer currently scores 35/100 with a suspicious verdict and low confidence. brin's goal is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require review or block this skill.

How should I interpret the score and verdict?

Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
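
A minimal sketch of that policy as a shell gate, with thresholds copied from the bands above (auto-allow safe, human review for caution and suspicious, block dangerous); the score value is whatever your scan returned:

# Map a 0-100 brin score to a policy action using the verdict bands above.
score=35                                      # example: this skill's current score
if   [ "$score" -ge 80 ]; then action=allow   # safe (80-100)
elif [ "$score" -ge 20 ]; then action=review  # caution (50-79) or suspicious (20-49)
else                           action=block   # dangerous (0-19)
fi
echo "policy action: $action"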

How does brin compute this skill score?

brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.

What do identity, behavior, content, and graph mean for this skill?

Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.

Why does brin scan packages, repos, skills, MCP servers, pages, and commits?

brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.

Can I rely on a safe verdict as a full security guarantee?

No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.

When should I re-check before using an entity?

Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
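
As one hedged example of such a gate in CI, again assuming a JSON response with a numeric top-level score field (the threshold of 50 here is an example policy choice, not a brin default):

# Sketch of a CI gate: re-scan the skill and fail the build on a low score.
SKILL="ailabs-393%2Fai-labs-claude-skills%2Fresearch-paper-writer"
score=$(curl -sf "https://api.brin.sh/skill/$SKILL" | jq -r '.score')
if [ -z "$score" ] || [ "$score" -lt 50 ]; then
  echo "brin score '$score' is missing or below threshold; blocking" >&2
  exit 1
fi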

Learn more in threat detection docs, how scoring works, and the API overview.

Last Scanned: March 1, 2026

Verdict Scale

safe: 80–100
caution: 50–79
suspicious: 20–49
dangerous: 0–19

Disclaimer

Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.

start scoring agent dependencies.

integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.