Content moderation systems
Incidents in which these systems were implicated
Incident 9555
Global Cybercrime Network Storm-2139 Allegedly Exploits AI to Generate Deepfake Content
2024-12-19
A global cybercrime network, Storm-2139, allegedly exploited stolen credentials and developed custom tools to bypass AI safety guardrails. The network reportedly generated harmful deepfake content, including nonconsensual intimate images of celebrities, and is reported to have disabled content moderation, hijacked legitimate AI access, and resold illicit services. Microsoft disrupted the operation and filed a lawsuit in December 2024, later identifying key members of the network in February 2025.
Incident 10551
FBI Reports AI Use by Threat Actors in Broader Cyber Context Including Infrastructure Intrusions
2025-04-29
FBI Deputy Assistant Director Cynthia Kaiser stated that adversarial actors, particularly those affiliated with China and organized cybercriminal groups, are increasingly integrating AI tools across the cyberattack lifecycle. Documented use cases reportedly include AI-generated spear phishing, business identity fabrication, internal network mapping, and deepfake-enabled fraud, and the tools are allegedly already assisting intrusions targeting U.S. infrastructure.