AI Incident Database

Incident 955: Global Cybercrime Network Storm-2139 Allegedly Exploits AI to Generate Deepfake Content

Description: A global cybercrime network, Storm-2139, allegedly exploited stolen credentials and built custom tools to bypass AI safety guardrails. The network reportedly generated harmful deepfake content, including nonconsensual intimate images of celebrities; its tools are reported to have disabled content moderation and hijacked AI access, and members allegedly resold that illicit access as a service. Microsoft disrupted the operation and filed a lawsuit in December 2024, then identified key members of the network in February 2025.
Editor Notes: The date for this incident, 12/19/2024, is the date Microsoft filed its lawsuit, which can be read here: https://www.noticeofpleadings.net/fizzdog/files/COMPLAINT_AND_SUMMONS/2024.12.19_DE_001_%5BMicrosoft%5D_Complaint.pdf. However, the lawsuit explains that "Defendants conspired to operate the Azure Abuse Enterprise through a pattern of racketeering activity in furtherance of the common purpose of the Enterprise sometime prior to July 2024." It also details allegations of wire fraud (18 U.S.C. § 1343), stating that prior to July 2024, the defendants stole authentication information from Microsoft customers to fraudulently access the Azure OpenAI Service and deplete account balances (page 36). For a more specific timeline, the lawsuit explains that "[f]rom July 26, 2024, to at least September 17, 2024, Defendants transmitted and/or caused to be transmitted by means of wire communication in interstate and foreign commerce writings, signals, and pictures for the purpose of executing their scheme to defraud" (page 37).
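
The reports below describe this pattern as "LLMjacking": stolen API keys and credentials fanned out through proxy networks to drive a generative AI service. As a minimal, hypothetical sketch of the provider-side view of that pattern, the Python snippet below flags keys whose traffic shows an abuse-like signature (high request volume spread across many distinct origins). All class names, thresholds, and heuristics here are illustrative assumptions; this is not Microsoft's actual detection logic.

# Hypothetical sketch: flagging anomalous API-key usage of the kind alleged
# in this incident (stolen keys driven through proxy networks). Thresholds
# and names are illustrative assumptions, not any provider's real system.
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class KeyActivity:
    request_count: int = 0
    origin_ips: set = field(default_factory=set)


class AbuseMonitor:
    """Tracks per-key usage within a monitoring window and flags keys whose
    traffic resembles credential abuse: high volume fanned out across many
    distinct origins, a common proxy-network signature."""

    def __init__(self, max_requests: int = 10_000, max_origins: int = 50):
        self.max_requests = max_requests  # illustrative volume threshold
        self.max_origins = max_origins    # illustrative fan-out threshold
        self.activity: dict[str, KeyActivity] = defaultdict(KeyActivity)

    def record(self, api_key: str, origin_ip: str) -> None:
        entry = self.activity[api_key]
        entry.request_count += 1
        entry.origin_ips.add(origin_ip)

    def flagged_keys(self) -> list[str]:
        # A key trips the monitor if either threshold is exceeded.
        return [
            key
            for key, entry in self.activity.items()
            if entry.request_count > self.max_requests
            or len(entry.origin_ips) > self.max_origins
        ]


# Usage (hypothetical data):
monitor = AbuseMonitor(max_origins=2)
for ip in ("203.0.113.7", "198.51.100.4", "192.0.2.9"):
    monitor.record("key-abc", ip)
print(monitor.flagged_keys())  # ['key-abc'] — fan-out threshold exceeded

Real deployments would add time windows, geolocation checks, and per-customer baselines; the point of the sketch is only the shape of the signal described in the reports.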


Entities

Alleged: Unidentified Storm-2139 actor from Illinois, Unidentified Storm-2139 actor from Florida, Storm-2139, Ricky Yuen (cg-dot), Phát Phùng Tấn (Asakuri), Arian Yadegarnia (Fiz), Alan Krysiak (Drago), Proxy and credential abuse networks, Microsoft Azure OpenAI Service, Generative AI platforms, Content moderation systems, Azure Abuse Enterprise, API authentication mechanisms and AI safety guardrails developed and deployed an AI system, which harmed Victims of deepfake abuse, OpenAI, Microsoft, celebrities, Azure OpenAI customers, and AI service providers.
Alleged implicated AI systems: Proxy and credential abuse networks, Microsoft Azure OpenAI Service, Generative AI platforms, Content moderation systems, Azure Abuse Enterprise, API authentication mechanisms and AI safety guardrails

Incident Stats

Incident ID: 955
Report Count: 5
Incident Date: 2024-12-19
Editors: Daniel Atherton

Incident Reports


Disrupting a global cybercrime network abusing generative AI
blogs.microsoft.com · 2025

In an amended complaint to recent civil litigation, Microsoft is naming the primary developers of malicious tools designed to bypass the guardrails of generative AI services, including Microsoft's Azure OpenAI Service. We are pursuing this …

Microsoft IDs developers behind alleged generative AI hacking-for-hire scheme
cyberscoop.com · 2025

Microsoft has identified individuals from Iran, China, Vietnam and the United Kingdom as primary players in an alleged international scheme to hijack and sell Microsoft accounts that could bypass safety guidelines for generative AI tools.

I…

Microsoft Disrupts Storm-2139 for LLMjacking and Azure AI Exploitation
hackread.com · 2025

Microsoft exposes Storm-2139, a cybercrime network exploiting Azure AI via LLMjacking. Learn how stolen API keys enabled harmful content generation and Microsoft's legal action

Microsoft has taken legal action against a cybercriminal networ…

Microsoft expands legal action against AI abuse network Storm-2139
techmonitor.ai · 2025

Microsoft has expanded its legal action against a cybercrime group accused of developing tools that bypass security measures in generative AI (GenAI) services. In an updated complaint, the company has publicly named several individuals alle…

Microsoft Names Developers Behind AI Jailbreaking Tools in Legal Crackdown on Storm-2139
enterprisesecuritytech.com · 2025

In a landmark move against AI exploitation, Microsoft has unmasked key developers behind Storm-2139, a global cybercrime network accused of manipulating generative AI models for illicit activities. The company's legal action targets four na…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Defamation via AutoComplete
Apr 2011 · 28 reports

Alleged Issues with Proctorio's Remote-Testing AI Prompted Suspension by University
Jan 2020 · 6 reports

COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction
May 2016 · 22 reports
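
The list above is ranked by textual similarity. The database's actual method is described in its research paper; as a generic, hypothetical illustration of the idea, the sketch below ranks short incident descriptions by bag-of-words cosine similarity. The texts and all incident IDs other than 955 are invented for the example.

# Generic illustration of ranking texts by cosine similarity over simple
# bag-of-words vectors. This is NOT the AI Incident Database's actual model;
# the incident texts and IDs below (other than 955) are invented.
import math
from collections import Counter


def vectorize(text: str) -> Counter:
    # Simple term-frequency bag of words.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


incidents = {
    955: "cybercrime network exploited stolen credentials to bypass ai guardrails",
    101: "autocomplete suggestions alleged to defame individuals",
    240: "recidivism prediction algorithm performs poorly on crime data",
}

query = vectorize(incidents[955])
ranked = sorted(
    ((cosine(query, vectorize(text)), iid)
     for iid, text in incidents.items() if iid != 955),
    reverse=True,
)
print(ranked)  # most textually similar incident IDs first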
