llm-guard (npm)
Is llm-guard safe to use?
Based on the latest brin safety scan of llm-guard v0.1.8, no known CVE vulnerabilities, detected threat patterns, or suspicious capabilities were identified. Trust score: 65/100.
Install (safety-checked)
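The page's one-click "safety-checked" install presumably wraps a brin command that is not shown here, so this is a minimal sketch using a standard npm install pinned to the scanned version:

  # Install the version covered by this scan (0.1.8)
  npm install llm-guard@0.1.8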
llm-guard Passed Security Checks
No security concerns detected: 0 known CVE vulnerabilities, 0 detected threat patterns, 0 suspicious capabilities.
No security concerns were detected in the latest brin assessment. This is an automated, point-in-time evaluation and may contain errors; findings are risk indicators, not confirmed threats, and security posture may change over time. Maintainers can dispute findings via the brin review process.
llm-guard Capabilities & Permissions
What llm-guard can access when installed. Review these capabilities before using with AI agents like Cursor, Claude Code, or Codex.
Filesystem Access
Reads from the filesystem.
AGENTS.md for llm-guard
Good instructions lead to good results. brin adds llm-guard documentation to your AGENTS.md so your agent knows how to use it properly—improving both safety and performance.
Set this up with brin init. Vercel's research: 100% accuracy with AGENTS.md vs 53% without.
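A minimal sketch of that setup step, assuming brin is already installed and the command is run from the project root (the page names only the command, not its options):

  # Add llm-guard usage notes to this project's AGENTS.md
  brin init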
llm-guard Documentation & Source Code
For the full llm-guard README, API documentation, and source code, visit the official package registry.
Version: 0.1.8
License: MIT
Trust Score: 65/100
Capabilities: Reads files