Description: A global cybercrime network, Storm-2139, allegedly exploited stolen credentials and developed custom tools to bypass AI safety guardrails. The group reportedly generated harmful deepfake content, including nonconsensual intimate images of celebrities, and its software allegedly disabled content moderation, hijacked AI access, and enabled the resale of illicit services. Microsoft disrupted the operation and filed a lawsuit in December 2024, later identifying key members of the network in February 2025.
Editor Notes: The date for this incident, 12/19/2024, is the date Microsoft filed its lawsuit, which can be read here: https://www.noticeofpleadings.net/fizzdog/files/COMPLAINT_AND_SUMMONS/2024.12.19_DE_001_%5BMicrosoft%5D_Complaint.pdf. However, the lawsuit explains, "Defendants conspired to operate the Azure Abuse Enterprise through a pattern of racketeering activity in furtherance of the common purpose of the Enterprise sometime prior to July 2024." Additionally, it details allegations of wire fraud (18 U.S.C. § 1343), stating that prior to July 2024, the defendants stole authentication information from Microsoft customers to fraudulently access the Azure OpenAI Service and deplete account balances (page 36). For information on a specific timeline, Microsoft's lawsuit also explains that "[f]rom July 26, 2024, to at least September 17, 2024, Defendants transmitted and/or caused to be transmitted by means of wire communication in interstate and foreign commerce writings, signals, and pictures for the purpose of executing their scheme to defraud" (page 37).
Entities
Alleged: Unidentified Storm-2139 actor from Illinois, Unidentified Storm-2139 actor from Florida, Storm-2139, Ricky Yuen (cg-dot), Phát Phùng Tấn (Asakuri), Arian Yadegarnia (Fiz), Alan Krysiak (Drago), Proxy and credential abuse networks, Microsoft Azure OpenAI Service, Generative AI platforms, Content moderation systems, Azure Abuse Enterprise, API authentication mechanisms, and AI safety guardrails developed and deployed an AI system, which harmed Victims of deepfake abuse, OpenAI, Microsoft, celebrities, Azure OpenAI customers, and AI service providers.
Alleged implicated AI systems: Proxy and credential abuse networks, Microsoft Azure OpenAI Service, Generative AI platforms, Content moderation systems, Azure Abuse Enterprise, API authentication mechanisms, and AI safety guardrails
Incident Stats
ID
955
Report Count
1
Incident Date
2024-12-19
Editors
Daniel Atherton
Incident Reports
Report Timeline
In an amended complaint to recent civil litigation, Microsoft is naming the primary developers of malicious tools designed to bypass the guardrails of generative AI services, including Microsoft's Azure OpenAI Service. We are pursuing this …
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the incident database. Learn more from the research paper.
Similar Incidents