Summary: An AI image-expansion tool used by conference organizers unintentionally altered Elizabeth Laraki's profile photo in a marketing promotion. When expanding the image vertically, the AI generated the lower portion of the picture, producing an unnatural result in which her blouse appeared unbuttoned and her underwear visible. The conference organizers promptly apologized and removed the altered content.
Alleged: Unknown developer developed an AI system deployed by Unknown conference employee, which harmed Elizabeth Laraki.
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.
1.1. Unfair discrimination and misrepresentation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Discrimination and Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional