Description: Malicious versions of the popular Nx monorepo tool and its plugins were reportedly published to npm after attackers compromised the project's CI workflow. The malware's postinstall script allegedly harvested credentials and exfiltrated data, weaponizing local AI coding agents such as Claude Code, Gemini CLI, and Amazon Q. By invoking unsafe flags, it reportedly coerced these tools into scanning developer machines for sensitive files, marking one of the first known AI-assisted supply chain attacks.
Entities
Alleged: Anthropic, Google, and Amazon developed an AI system deployed by malicious actors who compromised Nx's CI/CD pipeline and published tainted npm packages, which harmed Nx users and organizations that installed the compromised packages.
Alleged implicated AI systems: Nx (monorepo tool and plugins), npm registry, Claude Code CLI, Google Gemini CLI, Amazon Q CLI, and GitHub
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
2.2. AI system security vulnerabilities and attacks
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Privacy & Security
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Reports Timeline
On August 26–27, 2025 (UTC), eight malicious Nx and Nx Powerpack releases were pushed to npm across two version lines and remained live for roughly 5 hours and 20 minutes before removal. The attack also affected the Nx Console VS Code extension.
LAS VEGAS --- While many business sectors are still weighing the pluses and minuses of generative AI, criminal hackers are jumping in with both feet.
They have figured out how to turn the artificial intelligence programs proliferating on mo…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Similar Incidents
Did our AI mess up? Flag the unrelated incidents


