Summary: During Hurricane Helene (September 24–29, 2024), images reportedly generated by AI spread across social media, allegedly misleading the public and interfering with disaster response efforts. Fake images, such as animals stranded on rooftops and politicians wading through floodwater, are reported to have caused confusion and disrupted emergency response. The spread of these images is said to have compounded existing challenges such as power outages and communication failures, making it harder to deliver accurate information to people in need of assistance.
Editor Notes: Reported motives for the misinformation appeared partisan in nature, while others appeared opportunistic and fraudulent.
Alleged: Unknown deepfake technology developers and Unknown AI image generator technology developers developed an AI system deployed by Unknown deepfake creators, Partisan social media influencers, Unknown actors, and Unknown malicious actors, which harmed Hurricane Helene victims, General public, Emergency responders, and Communities impacted by Hurricane Helene.
Alleged implicated AI systems: Unknown deepfake technology and Unknown AI image generator
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
3.1. False or misleading information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Misinformation
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional