Description: FBI Deputy Assistant Director Cynthia Kaiser stated that adversarial actors, particularly those affiliated with China and organized cybercriminal groups, are increasingly integrating AI tools across the cyberattack lifecycle. Reported use cases include AI-generated spear phishing, business identity fabrication, internal network mapping, and deepfake-enabled fraud. These tools are allegedly already assisting intrusions targeting U.S. infrastructure.
Editor Notes: Reconstructing the reported timeline of events for this incident: (1) In late 2023, the Chinese state-backed actor Volt Typhoon reportedly infected outdated routers to build a botnet and gained stealthy access to U.S. critical infrastructure networks, including energy systems. (2) In 2024, Salt Typhoon compromised at least nine U.S. telecommunications and government networks and reportedly exploited over 1,000 Cisco devices. (3) In January 2025, Salt Typhoon continued its campaign with additional exploitation attempts targeting internet-facing infrastructure. (4) By early 2025, FBI analysts observed increasing integration of AI tools by Chinese actors and cybercriminals across the attack lifecycle, including for spear phishing, deepfakes, and internal network mapping. (5) On April 29, 2025, FBI Deputy Assistant Director Cynthia Kaiser publicly characterized these developments at RSA Conference, warning that AI was enabling adversaries, especially those linked to the Chinese government, to run more efficient and scalable operations. Although Volt Typhoon and Salt Typhoon were referenced in the same interview that discussed adversarial AI use, Kaiser did not appear to explicitly link these specific campaigns to the use of AI tools; their inclusion in this record reflects contextual relevance, not a confirmed technical association.
Entities
Alleged: Unknown generative AI developers developed an AI system deployed by the Government of China and the Chinese Communist Party, which harmed United States critical infrastructure, Private companies, Government agencies, Employees targeted by phishing, and Employees targeted by deepfake impersonations.
Alleged implicated AI systems: Unknown deepfake technology, Unknown automated phishing tools, Content moderation systems, and Unknown large language models.
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
The biggest threat to US critical infrastructure, according to FBI Deputy Assistant Director Cynthia Kaiser, can be summed up in one word: "China."
In an interview with The Register during RSA Conference, she said Chinese government-backed …
The federal government's ability to counter cyberespionage, destructive hacks and organized criminal scams is declining under the Trump administration just as artificial intelligence supercharges those threats, according to interviews with …
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Similar Incidents
Selected by our editors
Game AI System Produces Imbalanced Game · 11 reports
Deepfake Obama Introduction of Deepfakes · 29 reports
