context safety score
A score of 66/100 indicates minor risk signals were detected. The entity may be legitimate but has characteristics that warrant attention.
prompt injection
Hidden HTML element contains AI-targeting instructions
brand impersonation
The repository advertises an MCP server package as '@anthropic-ai/mcp-server-crypto-news', claiming it is published under Anthropic's official npm namespace (@anthropic-ai). This is a third-party project by user 'nirholas' — not Anthropic. The @anthropic-ai npm scope belongs to Anthropic PBC. Publishing or promoting a package under this namespace falsely implies official Anthropic authorship, constituting brand impersonation and a potential supply chain attack vector against users who run 'npx @anthropic-ai/mcp-server-crypto-news'. (location: page.html:1381, page.html:1400, page.html:1406 — README mcp_server metadata table and MCP install heading)
social engineering
The repository contains files named AGENTS.md and CLAUDE.md, which are standard instruction files consumed by AI coding agents (Claude Code, Codex, etc.) when they autonomously work on a codebase. The project explicitly targets AI agents as consumers ('AI/LLM ready', '7 pre-built AI agent skills compatible with Claude Code and Codex', skills in /skills/ loaded by AI coding agents for autonomous development tasks). This creates a surface for embedding instructions that manipulate AI agent behavior when the repo is cloned or analyzed by automated agents. (location: page-text.txt:937 (AGENTS.md, CLAUDE.md in repo tree), page-text.txt:4144-4184 (AI Agent Skills section))
hidden content
The README contains multiple massive blocks of densely packed SEO keywords (hundreds of comma-separated terms per paragraph, thousands in total) embedded deep in the document — sections titled 'Technology Stack Keywords', 'Emerging Technology Keywords', 'Business & Enterprise Keywords', 'Geographic & Localization Keywords', 'Extended Discovery & Registry Terms', 'AI & Machine Learning Extended Keywords', 'Web3 & Crypto Extended Keywords', 'Search Query Keywords', 'Alternative Spellings & Variations', etc. These blocks are not part of any functional documentation and appear designed to manipulate search engine indexing and AI training data ingestion, while remaining invisible to casual readers. (location: page.html:8820-8856 and surrounding lines — bottom of README, multiple extended keyword sections)
social engineering
The repository encourages users to configure sensitive API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, GROQ_API_KEY) as environment variables for use with the service hosted at cryptocurrency.cv, a third-party domain with unknown ownership and no verifiable trust chain. Users who self-host or integrate the service may inadvertently expose their AI provider credentials to the operator of cryptocurrency.cv. The framing ('FREE at console.groq.com/keys', 'Translation is auto-enabled when GROQ_API_KEY is set') normalizes adding credentials to third-party-controlled deployments. (location: page-text.txt:2203-2204 (GROQ_API_KEY section), page-text.txt:2469-2471 (Supported AI Providers listing OPENAI_API_KEY, ANTHROPIC_API_KEY, GROQ_API_KEY))
brand impersonation
The repository metadata table lists 'mcp_server' as '@anthropic-ai/mcp-server-crypto-news' and links to npmjs.com/package/@anthropic-ai/mcp-server-crypto-news, visually framing this third-party crypto news tool as an official Anthropic product. The README also prominently features Anthropic branding (Claude, ANTHROPIC_API_KEY) alongside this package name to reinforce the false impression of official affiliation. (location: page.html:1373-1382 (llms_txt and mcp_server metadata table rows))
hidden content
The Tier 2 pre-scan detected 12 suspicious base64 blobs. Inspection reveals these are GitHub's standard camo.githubusercontent.com base64-encoded image proxy URLs for badge images (shields.io badges). These are a false positive in the context of a legitimate GitHub repository page — they represent no actual threat. (location: page.html — camo.githubusercontent.com image src attributes in README badge section (lines ~1389-1400))
curl https://api.brin.sh/page/github.com%2Fnirholas%2Ffree-crypto-news

Common questions teams ask before deciding whether to use this web page in agent workflows.
github.com/nirholas/free-crypto-news currently scores 66/100 with a caution verdict and medium confidence. The goal is to protect agents from high-risk context before they act on it. Treat this as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should require human review or block this web page.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
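A minimal sketch of that threshold policy, assuming the score arrives as a plain integer (the variable names and echoed labels are illustrative, not part of the brin API):

```shell
# Map a brin score to a policy verdict using the documented bands:
# 80-100 safe, 50-79 caution, 20-49 suspicious, 0-19 dangerous.
score=66

if   [ "$score" -ge 80 ]; then verdict="safe"        # auto-allow
elif [ "$score" -ge 50 ]; then verdict="caution"     # require human review
elif [ "$score" -ge 20 ]; then verdict="suspicious"  # require human review
else                           verdict="dangerous"   # block
fi

echo "$verdict"
```

With the current score of 66, this sketch lands in the caution band, matching the verdict shown on this page.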
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
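The re-check pattern above can be sketched as a small shell gate. In a real pipeline the response would come from `curl https://api.brin.sh/page/<url-encoded target>`; here a canned response stands in so the sketch is self-contained, and the `score` field name in the JSON is an assumption, not the documented schema.

```shell
# Hypothetical CI gate: extract the score from an API response and
# block the pipeline below the caution threshold (50).
# The response body below is a made-up example for illustration.
response='{"score": 66, "verdict": "caution"}'

# Pull out the numeric "score" value with sed to avoid extra dependencies.
score=$(printf '%s' "$response" | sed -n 's/.*"score": *\([0-9]*\).*/\1/p')

if [ "$score" -lt 50 ]; then
  echo "blocked: score $score is suspicious or dangerous"
  exit 1
fi
echo "proceed with review: score $score"
```

Running this in a CI step before installs, MCP server connections, or secret grants keeps the decision tied to the latest scan rather than a cached verdict.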
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.