Description: A global cybercrime network, Storm-2139, allegedly exploited stolen credentials and developed custom tools to bypass AI safety guardrails. They reportedly generated harmful deepfake content, including nonconsensual intimate images of celebrities, and their software is reported to have disabled content moderation, hijacked AI access, and resold illicit services. Microsoft disrupted the operation and filed a lawsuit in December 2024, later identifying key members of the network in February 2025.
Editor Notes: The date for this incident, 12/19/2024, is the date Microsoft filed its lawsuit, which can be read here: https://www.noticeofpleadings.net/fizzdog/files/COMPLAINT_AND_SUMMONS/2024.12.19_DE_001_%5BMicrosoft%5D_Complaint.pdf. However, the lawsuit explains, "Defendants conspired to operate the Azure Abuse Enterprise through a pattern of racketeering activity in furtherance of the common purpose of the Enterprise sometime prior to July 2024." Additionally, it details allegations of wire fraud (18 U.S.C. § 1343), stating that prior to July 2024, the defendants stole authentication information from Microsoft customers to fraudulently access the Azure OpenAI Service and deplete account balances (page 36). For information on a specific timeline, Microsoft's lawsuit also explains that "[f]rom July 26, 2024, to at least September 17, 2024, Defendants transmitted and/or caused to be transmitted by means of wire communication in interstate and foreign commerce writings, signals, and pictures for the purpose of executing their scheme to defraud" (page 37).
Entities
Alleged: an AI system developed and deployed by Unidentified Storm-2139 actor from Illinois, Unidentified Storm-2139 actor from Florida, Storm-2139, Ricky Yuen (cg-dot), Phát Phùng Tấn (Asakuri), Arian Yadegarnia (Fiz), Alan Krysiak (Drago), Proxy and credential abuse networks, Microsoft Azure OpenAI Service, Generative AI platforms, Content moderation systems, Azure Abuse Enterprise, API authentication mechanisms, and AI safety guardrails harmed Victims of deepfake abuse, OpenAI, Microsoft, celebrities, Azure OpenAI customers, and AI service providers.
AI systems allegedly implicated: Proxy and credential abuse networks, Microsoft Azure OpenAI Service, Generative AI platforms, Content moderation systems, Azure Abuse Enterprise, API authentication mechanisms, and AI safety guardrails
Incident Stats
ID
955
Report Count
1
Incident Date
2024-12-19
Editors
Daniel Atherton
Incident Reports
Reports Timeline
In an amended complaint to recent civil litigation, Microsoft is naming the primary developers of malicious tools designed to bypass the guardrails of generative AI services, including Microsoft's Azure OpenAI Service. We are pursuing this …
Variants
Una "Variante" es un incidente que comparte los mismos factores causales, produce daños similares e involucra los mismos sistemas inteligentes que un incidente de IA conocido. En lugar de indexar las variantes como incidentes completamente separados, enumeramos las variaciones de los incidentes bajo el primer incidente similar enviado a la base de datos. A diferencia de otros tipos de envío a la base de datos de incidentes, no se requiere que las variantes tengan informes como evidencia externa a la base de datos de incidentes. Obtenga más información del trabajo de investigación.
Similar Incidents