Summary: During Hurricane Helene (September 24–29, 2024), AI-generated images spread across social media, misleading the public and hindering disaster response efforts. Fake images, such as animals stranded on rooftops and politicians caught in floodwaters, caused confusion and disrupted emergency operations. The circulation of these images compounded existing challenges, including power outages and communication failures, making it harder to deliver accurate information to people in need of assistance.
Editor Notes: Motives for the misinformation seemed partisan in nature, while some appeared opportunistic and fraudulent.
Alleged: Unknown deepfake technology developers developed an AI system deployed by unknown deepfake creators and partisan social media influencers, which harmed Hurricane Helene victims, the general public, emergency responders, and communities impacted by Hurricane Helene.
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.
3.1. False or misleading information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Misinformation
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional