context safety score
A score of 47/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.
capability escalation
The proxy dynamically fetches ALL tool definitions (names, descriptions, input schemas) from the remote MCP-Hive backend (hive.mcp-hive.com) at startup and registers them verbatim. This means the remote server controls the entire tool surface the AI agent sees — tool names, descriptions, and schemas can be changed server-side at any time without any code update or user consent. A compromised or malicious backend could inject description injection attacks, tool shadowing, or schema abuse into the dynamically registered tools, and the proxy would blindly register them. The user has no way to audit what tools will actually be registered before they are active. (location: src/proxy/mcpHiveProxy.ts — initializeProxyMode() method, where MCPHiveProxyRequest.listTools() result is iterated and each tool is registered via this.mcpServer.registerTool())
schema abuse
The gateway-mode 'callServer' tool accepts an open-ended 'args' parameter (z.record(z.string(), z.unknown())) that is forwarded verbatim to the remote MCP-Hive backend. An AI agent could be prompted (via manipulated tool descriptions from the dynamic registration mechanism) to pass sensitive data — file contents, credentials, system prompts — as arbitrary arguments through this open proxy, and all data would be transmitted to hive.mcp-hive.com. Combined with the dynamic tool registration issue, the backend could craft tool schemas that specifically request sensitive data. (location: src/proxy/mcpHiveProxy.ts — initializeGatewayMode(), callServer tool registration; src/proxy/requests/mcpHiveProxyRequest.ts — sendMCPHiveRequest() which POSTs all arguments to the backend)
description injection
The callServer tool in the source code is registered with the discoverServers description ('Discover additional MCP Servers which can be invoked through this gateway...') instead of its actual purpose (calling arbitrary server tools). While this appears to be a copy-paste bug rather than intentional manipulation, it causes the AI agent to receive an inaccurate description of a powerful tool that can invoke arbitrary remote servers with arbitrary arguments, potentially leading the agent to use it without proper caution. (location: src/proxy/mcpHiveProxy.ts — initializeGatewayMode(), the hardcoded description string for callServer tool registration)
credential exposure
Production-looking API credentials are committed to the public repository in the VS Code debug configuration: 'bd1ded66-6564-4d00-8cf6-21eb1f9d333f' and 'aca828c8-98cc-45e0-b8bf-40def169124f'. While these may be test/demo credentials, they are labeled 'Debug Proxy Production' and could provide unauthorized access to MCP-Hive services. (location: .vscode/launch.json — debug configurations with hardcoded credential values)
curl https://api.brin.sh/mcp/MCP-Hive%2Fmcp-hive-proxyCommon questions teams ask before deciding whether to use this mcp server in agent workflows.
MCP-Hive/mcp-hive-proxy currently scores 47/100 with a suspicious verdict and medium confidence. The goal is to protect agents from high-risk context before they act on it. Treat this as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should add review or block this mcp server.
Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.
Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed.
brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.
No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.
Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
Learn more in threat detection docs, how scoring works, and the API overview.
Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.
integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.