Is chakra.dev safe?

suspicious (low confidence)
36/100

context safety score

A score of 36/100 indicates multiple risk signals were detected. This entity shows patterns commonly associated with malicious intent.

identity: 100
behavior: 55
content: 0
graph: 30

7 threat patterns detected

hidden instruction (high)

A high ratio of hidden content was detected in the DOM.

encoded payload (medium)

Suspicious base64-like blobs were detected in the page content (a rough grep heuristic for this pattern is sketched after this list).

brand impersonation (high)

The site presents pixel-perfect clones of major platforms (Notion, Figma, Canva, Gmail, LinkedIn, Salesforce, Amazon, Slack, GitHub, Grafana, Google Calendar, JD.com, XiaoHongShu, Weibo) explicitly marketed as 'multi-layer clones' for AI agent training. These clones are designed to be indistinguishable from real services, creating a systematic infrastructure for brand impersonation at scale. (location: page-text.txt:7 — 'Select clone Notion Figma Canva Gmail LinkedIn Salesforce Amazon Slack GitHub Grafana Google Calendar'; page.html metadata description)

prompt injection (critical)

The site explicitly targets Computer Use Agents (CUAs) and AI models, offering 'deterministic RL environments and trajectory datasets for CUA training.' The platform is designed to train AI agents to interact with cloned environments of real services. This infrastructure can be weaponized to inject malicious behavior into AI agents by poisoning training trajectories, teaching agents to act deceptively within legitimate-looking interfaces. (location: page.html:24 — meta description: 'Deterministic RL environments and trajectory datasets for CUA training'; page-text.txt:7)

social engineering (high)

The site uses authority and legitimacy signals ('Developed in collaboration with leading research teams', 'Frontier Data Laboratory', 'Publications', 'Fig 1: Instructional video engineered for emotional context') to lend academic credibility to what is effectively a platform for building deceptive AI training environments. The phrase 'engineered for emotional context' is a red flag indicating deliberate manipulation of agent perception. (location: page-text.txt:7 — 'Fig 1: Instructional video engineered for emotional context'; 'Developed in collaboration with leading research teams')

hidden content (medium)

The page renders text character-by-character with near-zero opacity (opacity:0.001) via inline span transforms, making the heading text invisible to humans while still present in the DOM and readable by AI agents and scrapers. This technique hides content from human users while exposing it to automated systems; a quick grep check for it appears in the sketch after this list. (location: page.html:125 — multiple spans with style='display:inline-block;opacity:0.001;transform:translateX(0px) translateY(10px)...')

credential harvesting (high)

The platform offers clones of credential-bearing services (Gmail, LinkedIn, Salesforce, Amazon, Slack, GitHub) with 'Access granted to environment clones for demonstration purposes only. Click to launch.' These cloned environments with 'Request Access' flows could harvest credentials entered by users or AI agents who believe they are interacting with legitimate services. (location: page-text.txt:7 — 'Environment clone Notion Environment ready Access granted to environment clones for demonstration purposes only. Click to launch')
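
A rough way to reproduce two of these signals yourself is a fetch-and-grep pass. This is a heuristic sketch only, not brin's actual detection logic; the patterns and thresholds are assumptions.

# Rough heuristics for the encoded payload and hidden content findings above.
# Long base64-like runs in the fetched page:
curl -fsS https://chakra.dev/ | grep -oE '[A-Za-z0-9+/]{40,}={0,2}' | head
# Lines containing the near-invisible span style described above:
curl -fsS https://chakra.dev/ | grep -c 'opacity:0.001'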

API

curl https://api.brin.sh/domain/chakra.dev
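
The response can be piped straight into jq. A minimal sketch, assuming the endpoint returns JSON with score, verdict, confidence, and per-dimension fields (the field names are assumptions, not a documented schema; see the API overview for the real shape):

# Assumed response shape (illustrative only):
#   {"score": 36, "verdict": "suspicious", "confidence": "low",
#    "identity": 100, "behavior": 55, "content": 0, "graph": 30}
curl -fsS https://api.brin.sh/domain/chakra.dev \
  | jq '{score, verdict, confidence, identity, behavior, content, graph}'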

FAQ: how to interpret this assessment

Common questions teams ask before deciding whether to use this domain in agent workflows.

Is chakra.dev safe for AI agents to use?

chakra.dev currently scores 36/100 with a suspicious verdict at low confidence. The goal of this assessment is to protect agents from high-risk context before they act on it. Treat the score as a decision signal: higher scores suggest lower observed risk, while lower scores mean you should add human review or block this domain.

How should I interpret the score and verdict?

Use the score as a policy threshold: 80–100 is safe, 50–79 is caution, 20–49 is suspicious, and 0–19 is dangerous. Teams often auto-allow safe, require human review for caution/suspicious, and block dangerous.
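
As a sketch, that scale maps directly onto a small policy function (thresholds from the verdict scale on this page; the actions are illustrative defaults):

# Map a score to the verdict scale and a default action.
policy() {
  score=$1
  if   [ "$score" -ge 80 ]; then echo "safe: auto-allow"
  elif [ "$score" -ge 50 ]; then echo "caution: human review"
  elif [ "$score" -ge 20 ]; then echo "suspicious: human review"
  else                           echo "dangerous: block"
  fi
}
policy 36   # prints: suspicious: human review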

How does brin compute this domain score?

brin evaluates four dimensions: identity (source trust), behavior (runtime patterns), content (malicious instructions), and graph (relationship risk). Analysis runs in tiers: static signals, deterministic pattern checks, then AI semantic analysis when needed.

What do identity, behavior, content, and graph mean for this domain?

Identity checks source trust, behavior checks unusual runtime patterns, content checks for malicious instructions, and graph checks risky relationships to other entities. Looking at sub-scores helps you understand why an entity passed or failed. Here, for example, identity scores 100 while content scores 0: the source itself appears trustworthy, but the page content carries the strongest risk signals.

Why does brin scan packages, repos, skills, MCP servers, pages, and commits?

brin performs risk assessments on external context before it reaches an AI agent. It scores that context for threats like prompt injection, hijacking, credential harvesting, and supply chain attacks, so teams can decide whether to block, review, or proceed safely.

Can I rely on a safe verdict as a full security guarantee?

No. A safe verdict means no significant risk signals were detected in this scan. It is not a formal guarantee; assessments are automated and point-in-time, so combine scores with your own controls and periodic re-checks.

When should I re-check before using an entity?

Re-check before high-impact actions such as installs, upgrades, connecting MCP servers, executing remote code, or granting secrets. Use the API in CI or runtime gates so decisions are based on the latest scan.
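
A minimal CI gate along those lines, assuming the same response shape as the sketch in the API section above (adjust the threshold to your own policy):

# Fail the pipeline if the latest scan scores below the review threshold.
score=$(curl -fsS https://api.brin.sh/domain/chakra.dev | jq -r '.score')
if [ "$score" -lt 50 ]; then
  echo "brin score for chakra.dev is $score (below 50); blocking this step" >&2
  exit 1
fi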

Learn more in threat detection docs, how scoring works, and the API overview.

Last Scanned

March 4, 2026

Verdict Scale

safe: 80–100
caution: 50–79
suspicious: 20–49
dangerous: 0–19

Disclaimer

Assessments are automated and may contain errors. Findings are risk indicators, not confirmed threats. This is a point-in-time assessment; security posture can change.

start scoring agent dependencies.

integrate brin in minutes — one GET request is all it takes. query the api, browse the registry, or download the full dataset.