Summary: Federal prosecutors allege that Michael Gann, a 55-year-old Long Island resident, used AI tools to identify the chemicals and assembly instructions needed to build improvised explosive devices (IEDs). Gann allegedly manufactured seven bombs, transported them to Manhattan, and stored five of them on a rooftop along with shotgun shells. According to authorities, he tested and discarded some of the devices in public locations prior to his arrest on June 5, 2025. Gann has been indicted on federal charges; no injuries were reported.
Editor Notes: Reconstructing the reported timeline of events: (1) in late March 2025, Michael Gann allegedly posted online suggesting the use of explosives in New York City; (2) in the following weeks, he reportedly used AI tools to research chemical components and built improvised explosive devices; (3) he allegedly tested and discarded some devices in public locations and stored others on a Manhattan rooftop; (4) on June 5, 2025, he was reportedly stopped by law enforcement while carrying explosives; (5) on July 23, 2025, he was indicted on federal charges.
Alleged: An unknown generative AI developer developed an AI system deployed by Michael Gann, which harmed the general public of New York City.
Alleged implicated AI system: Unknown generative AI tool
Incident Status
Risk Subdomain
The taxonomy further divides these domains into 23 subdomains, creating an accessible and understandable classification of the hazards and harms associated with AI.
4.2. Cyberattacks, weapon development or use, and mass harm
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional