Summary: A study by the American Sunlight Project reportedly found that 1 in 6 Congresswomen have been targeted by AI-generated nonconsensual intimate imagery (NCII) shared on deepfake websites. The study identified 35,000 mentions of explicit content involving 26 members of Congress, 25 of whom were women. According to the report, women were 70 times more likely than men to be victimized.
Editor Notes: The American Sunlight Project study can be accessed here: https://static1.squarespace.com/static/6612cbdfd9a9ce56ef931004/t/67586997eaec5c6ae3bb5e24/1733847451191/ASP+DFP+Report.pdf.
Alleged: Unknown deepfake technology developers developed an AI system deployed by Unknown deepfake creators, which harmed Congresswomen.
Alleged implicated AI systems: Unknown deepfake tools
Incident Status
Risk Subdomain
The taxonomy further divides these domains into 23 subdomains, creating an accessible and understandable classification of the hazards and harms associated with AI.
4.3. Fraud, scams, and targeted manipulation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Report Timeline

More than 20 members of Congress have been victims of sexually explicit deepfakes, the majority of them women, according to a new study that highlights the stark gender disparity in this technology and the growing risks it poses to women's participation in politics and other forms of civic engagement.
On Wednesday, the [American Sunlight Project (ASP)](https://www.americansunlight.org/), a think tank that researches disinformation and advocates for policies that promote democracy, released findings about content recently discovered on deepfake websites depicting 2…