Description: A global cybercrime network, Storm-2139, allegedly exploited stolen credentials and developed custom tools to bypass AI safety guardrails. They reportedly generated harmful deepfake content, including nonconsensual intimate images of celebrities, and their software is reported to have disabled content moderation, hijacked AI access, and resold illicit services. Microsoft disrupted the operation and filed a lawsuit in December 2024, later identifying key members of the network in February 2025.
Editor Notes: The date for this incident, 12/19/2024, is the date Microsoft filed its lawsuit, which can be read here: https://www.noticeofpleadings.net/fizzdog/files/COMPLAINT_AND_SUMMONS/2024.12.19_DE_001_%5BMicrosoft%5D_Complaint.pdf. However, the lawsuit explains, "Defendants conspired to operate the Azure Abuse Enterprise through a pattern of racketeering activity in furtherance of the common purpose of the Enterprise sometime prior to July 2024." Additionally, it details allegations of wire fraud (18 U.S.C. § 1343), stating that prior to July 2024, the defendants stole authentication information from Microsoft customers to fraudulently access the Azure OpenAI Service and deplete account balances (page 36). For information on a specific timeline, Microsoft's lawsuit also explains that "[f]rom July 26, 2024, to at least September 17, 2024, Defendants transmitted and/or caused to be transmitted by means of wire communication in interstate and foreign commerce writings, signals, and pictures for the purpose of executing their scheme to defraud" (page 37).
Alleged: Unidentified Storm-2139 actor from Illinois, Unidentified Storm-2139 actor from Florida, Storm-2139, Ricky Yuen (cg-dot), Phát Phùng Tấn (Asakuri), Arian Yadegarnia (Fiz), Alan Krysiak (Drago), Proxy and credential abuse networks, Microsoft Azure OpenAI Service, Generative AI platforms, Content moderation systems, Azure Abuse Enterprise, API authentication mechanisms and AI safety guardrails developed and deployed an AI system, which harmed Victims of deepfake abuse, OpenAI, Microsoft, celebrities, Azure OpenAI customers and AI service providers.
Incident Stats
Incident ID
955
Report Count
1
Incident Date
2024-12-19
Editors
Daniel Atherton
Incident Reports
Report Timeline
In an amended complaint to recent civil litigation, Microsoft is naming the primary developers of malicious tools designed to bypass the guardrails of generative AI services, including Microsoft's Azure OpenAI Service. We are pursuing this …
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. See this research paper for more details.
Similar Incidents