AI Incident Database

Report 4388

Associated Incidents

Incident 874 · 7 Reports
1 in 6 Congresswomen Have Reportedly Been Targeted by AI-Generated Nonconsensual Intimate Imagery

Deepfake Pornography Targeting Members of Congress
americansunlight.org · 2024

New Report Reveals Tens of Thousands of Pieces of Deepfake Sexual Abuse

As reported in The 19th, the American Sunlight Project today released a new report exposing the alarming prevalence of AI-generated Non-Consensual Intimate Imagery (NCII) — colloquially known as deepfake pornography — targeting Members of Congress. The comprehensive study identifies over 35,000 instances of such content depicting 26 Members of Congress, with women disproportionately affected. The report underscores the urgent need for legislative action to address the rise of malicious AI technology.

“This report reveals a stark and disturbing reality. Female lawmakers are being targeted by AI-generated deepfake pornography at an alarming rate, with nearly one in six women in Congress affected compared to only one in 386 men,” said Nina Jankowicz, CEO of the American Sunlight Project. “This isn't just a tech problem; it’s a direct assault on women in leadership and democracy itself.”

The study reveals that 35,239 mentions of Members were discovered across eleven prominent websites dedicated to deepfake sexual abuse. Women Members of Congress were 70 times more likely to be depicted in deepfake NCII content than their male counterparts. The analysis found no significant relationship between the likelihood of being targeted and factors such as political party affiliation or state representation, highlighting that the attacks are indiscriminate except for their focus on gender.

The report also underscores the psychological, reputational, and professional toll on victims. NCII can undermine trust in its targets, erode personal safety, and jeopardize careers, with ripple effects for public confidence in democratic institutions and equal representation. The misuse of AI-driven technology to target women leaders amplifies the broader issue of gendered disinformation and harassment, creating barriers for women in public life.

Shortly after ASP notified Members of Congress that they had been found to be targeted, the imagery depicting many of them was entirely or almost entirely removed from the sites. This highlights a stark disparity of privilege: women who lack the resources afforded to Members of Congress would be unlikely to achieve such a rapid response from deepfake pornography sites if they initiated takedown requests themselves.

The study comes at a critical moment, as Congress considers a suite of bills to regulate the creation and dissemination of deepfake pornography. The American Sunlight Project urges lawmakers to move quickly to pass these measures before the conclusion of the 118th Congress.

ASP CEO Nina Jankowicz added, “The vast majority of targets of deepfake sexual abuse are private citizens, and even minors, who frequently lack the resources to rectify the harm done to them. I myself have been targeted with this vile content. As both a survivor and a researcher, I strongly feel that all women like me deserve to be protected by their government and have a path to justice for the sexual abuse they have endured. It is long past time for Congress to act.”

