AI Incident Database

Incident 874: 1 in 6 Congresswomen Have Reportedly Been Targeted by AI-Generated Nonconsensual Intimate Imagery

Description: A study by the American Sunlight Project reportedly found that 1 in 6 Congresswomen have been targeted by AI-generated nonconsensual intimate imagery (NCII) shared on deepfake websites. The study reports finding 35,000 mentions of explicit content involving 26 members of Congress, 25 of whom are women. According to the report, women were 70 times more likely than men to be victimized.
Editor Notes: The American Sunlight Project study can be accessed here: https://static1.squarespace.com/static/6612cbdfd9a9ce56ef931004/t/67586997eaec5c6ae3bb5e24/1733847451191/ASP+DFP+Report.pdf.


Entities

Alleged: Unknown deepfake technology developers developed an AI system deployed by Unknown deepfake creators, which harmed Congresswomen.
Alleged implicated AI system: Unknown deepfake tools

Incident Stats

Incident ID: 874
Report Count: 7
Incident Date: 2024-12-11
Editors: Daniel Atherton
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

4.3. Fraud, scams, and targeted manipulation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

4. Malicious Actors & Misuse

Entity

Which, if any, entity is presented as the main cause of the risk.

Human

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Intentional

Incident Reports

1 in 6 Congresswomen Targeted by AI-Generated Sexually Explicit Deepfakes – The Markup
themarkup.org · 2024

More than two dozen members of Congress have been the victims of sexually explicit deepfakes — and an overwhelming majority of those impacted are women, according to a new study that spotlights the stark gender disparity in this technology …

Deepfake Pornography Targeting Members of Congress
americansunlight.org · 2024

New Report Reveals Tens of Thousands of Pieces of Deepfake Sexual Abuse

As reported in The 19th, the American Sunlight Project today released a new report exposing the alarming prevalence of AI-generated Non-Consensual Intimate Imagery (NCI…

AI enters Congress: Sexually explicit deepfakes target women lawmakers
19thnews.org · 2024

More than two dozen members of Congress have been the victims of sexually explicit deepfakes — and an overwhelming majority of those impacted are women, according to a new study that spotlights the stark gender disparity in this technology …

Dozens of lawmakers victims of sexually explicit deepfakes: Report
thehill.com · 2024

More than two dozen lawmakers have been the victims of deepfake pornography, with female lawmakers significantly more likely to be targeted, according to a new report released Wednesday. 

The report from the American Sunlight Project, an ad…

1 in 6 Congresswomen Targeted by AI-Generated Sexually Explicit Deepfakes
gizmodo.com · 2024

More than two dozen members of Congress have been the victims of sexually explicit deepfakes — and an overwhelming majority of those impacted are women, according to a new study that spotlights the stark gender disparity in this technology …

Women in congress are 70 times more likely to be victims of AI-generated deepfakes than male counterparts
mashable.com · 2024

The field of politics has become even more complicated for women leaders, with the rise of generative AI posing an even greater risk for female politicians than men, according to a report released this week.

The analysis, published by disin…

Women lawmakers 70 times more likely to be victims of deepfake porn, study finds
usatoday.com · 2024

Women in Congress are more likely to be victims of deepfake pornography than their male counterparts, according to a national organization that combats disinformation online.

Deepfakes, also known as AI-generated non-consensual intimate ima…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.