Summary: At Collège Béliveau in Winnipeg, female students in grades 7 through 12 were targeted by the creation of deepfake nude images, which were circulated on the internet. The specific number and identities of the victims and perpetrators have not been made public, and because of the gap between existing laws and the nature of the incident, no charges were ultimately laid.
Editor Notes: Reconstructing the timeline of events: Parents were notified of the incident on December 11, 2023. On February 14, 2024, police said no criminal charges would be laid in connection with the incident.
Alleged: Unknown deepfake technology developers developed an AI system deployed by Unknown deepfake creators, which harmed Collège Béliveau students.
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.3. Fraud, scams, and targeted manipulation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional