flash-linear-attention (PyPI)
Is flash-linear-attention safe to use?
Based on the latest brin safety scan of flash-linear-attention v0.4.1, no known CVE vulnerabilities, threat patterns, or suspicious capabilities were identified. Trust score: 65/100. This is an automated, point-in-time assessment.
Install (safety-checked)
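After installing from PyPI (for example with pip install flash-linear-attention), a quick way to confirm you are running the release covered by this scan is to compare the installed version against 0.4.1. The sketch below is illustrative and not part of brin's tooling; the expected-version value is just an example.

```python
# Illustrative check: does the locally installed release match the version
# covered by the scan above (0.4.1)? Not part of brin's tooling.
from importlib.metadata import PackageNotFoundError, version

EXPECTED = "0.4.1"  # release referenced in this report

try:
    installed = version("flash-linear-attention")
except PackageNotFoundError:
    raise SystemExit("flash-linear-attention is not installed")

if installed == EXPECTED:
    print(f"flash-linear-attention {installed} matches the scanned release")
else:
    print(f"Installed {installed}; this report covers {EXPECTED}")
```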
flash-linear-attention Passed Security Checks
No security concerns were detected in the latest brin assessment: 0 known CVE vulnerabilities, 0 detected threat patterns, and 0 suspicious capabilities.

This is an automated, point-in-time assessment and may contain errors. Findings are risk indicators, not confirmed threats, and security posture may change over time. Maintainers can dispute findings via the brin review process.
flash-linear-attention Capabilities & Permissions
What flash-linear-attention can access when installed. Review these capabilities before using it with AI agents like Cursor, Claude Code, or Codex.
Environment Variables
Accesses the following environment variable: FLA_CONV_BACKEND.
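To show how this capability is typically exercised, here is a minimal Python sketch that sets FLA_CONV_BACKEND before importing the package. The example value "triton" and the top-level import name fla are assumptions based on the project's conventions, not findings of the scan; consult the official documentation for the supported values.

```python
# Minimal sketch: configure the convolution backend through the environment
# variable surfaced in the capability scan. The value "triton" and the
# import name "fla" are assumptions -- check the package docs for the
# authoritative options.
import os

# Packages usually read such variables at import time, so set it first.
os.environ.setdefault("FLA_CONV_BACKEND", "triton")

import fla  # requires flash-linear-attention and its PyTorch/Triton dependencies

print("FLA_CONV_BACKEND =", os.environ["FLA_CONV_BACKEND"])
```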
AGENTS.md for flash-linear-attention
Good instructions lead to good results. brin adds flash-linear-attention documentation to your AGENTS.md so your agent knows how to use it properly—improving both safety and performance.
Set this up with brin init. Vercel's research: 100% accuracy with AGENTS.md vs. 53% without.
flash-linear-attention Documentation & Source Code
For the full flash-linear-attention README, API documentation, and source code, visit the official package registry.
flash-linear-attention Package Summary
Version: 0.4.1
Trust Score: 65/100
Capabilities: Accesses environment variable FLA_CONV_BACKEND