llm-guard
PyPIIs llm-guard safe to use?
Based on the latest brin safety scan, no vulnerabilities or threats were detected for llm-guard v0.3.16. Trust score: 70/100. No known CVE vulnerabilities, no detected threat patterns, and no suspicious capabilities identified. This is an automated, point-in-time assessment.
Install (safety-checked)
llm-guard Passed Security Checks
No security concerns detected
0
0
0
No Concerns Detected
No security concerns detected in the latest brin assessment. This is an automated, point-in-time evaluation — security posture may change.
This is an automated, point-in-time assessment and may contain errors. Findings are risk indicators, not confirmed threats. Security posture may change over time. Maintainers can dispute findings via the brin review process.
llm-guard Capabilities & Permissions
What llm-guard can access when installed. Review these capabilities before using with AI agents like Cursor, Claude Code, or Codex.
Network Access
This package makes network requests.
Filesystem Access
Writes to the filesystem.
Environment Variables
Accesses the following environment variables.
Native Modules
Contains native code that runs outside the JavaScript sandbox.
AGENTS.md for llm-guard
Good instructions lead to good results. brin adds llm-guard documentation to your AGENTS.md so your agent knows how to use it properly—improving both safety and performance.
brin initVercel's research: 100% accuracy with AGENTS.md vs 53% without →
llm-guard Documentation & Source Code
For the full llm-guard README, API documentation, and source code, visit the official package registry.
Frequently asked questions about llm-guard safety
Install (safety-checked)
Weekly Downloads
Version
0.3.16License
MITOther Versions
Last Scanned
Trust Score
Capabilities
Connects to: example.com, example.com), github.blog...
Writes files
Accesses: TOKENIZERS_PARALLELISM
Contains native modules