AI Incident Database

Incident 754: British Female Politicians Victimized by Deepfake Pornography

Description: British female politicians, including Angela Rayner, Gillian Keegan, Penny Mordaunt, Priti Patel, Stella Creasy, and Dehenna Davison, have been targeted by nonconsensual AI-generated deepfake pornography. The images, some of which had been online for years, have caused significant distress and prompted police involvement.


Entities

Alleged: Unknown deepfake creators developed and deployed an AI system, which harmed Stella Creasy, Priti Patel, Penny Mordaunt, Gillian Keegan, Dehenna Davison, and Angela Rayner.

Incident Stats

Incident ID: 754
Report Count: 2
Incident Date: 2024-07-01
Editors: Daniel Atherton
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

4.3. Fraud, scams, and targeted manipulation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Malicious Actors & Misuse

Entity

Which, if any, entity is presented as the main cause of the risk.

Human

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Intentional

Incident Reports

British female politicians targeted by fake pornography
theguardian.com · 2024

British female politicians have become the victims of fake pornography, with some of their faces used in nude images created using artificial intelligence.

Political candidates targeted on one prominent fake pornography website include: the…

We must fight the deepfake future
newstatesman.com · 2025

Penny Mordaunt broadsworded her way into Britain's collective imagination when she became the unexpected breakout star of King Charles III's May 2023 coronation. We had lost one stoic queen; here was another. Mordaunt bore the Sword of Stat…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

Facial Recognition Trial Performed Poorly at Notting Hill Carnival
Aug 2017 · 4 reports

Microsoft’s Algorithm Allegedly Selected Photo of the Wrong Mixed-Race Person Featured in a News Story
Jun 2020 · 12 reports

Opaque Fraud Detection Algorithm by the UK’s Department of Work and Pensions Allegedly Discriminated against People with Disabilities
Oct 2019 · 6 reports

