
Report 4527

Associated Incidents

Incident 903 · 17 Reports
Northern Ireland MLA Cara Hunter Allegedly Targeted by Deepfake Pornography Ahead of May 2022 Assembly Election

UK moves to criminalize non-consensual deepfake porn
the-decoder.com · 2025

This report does not focus entirely on the details of Incident 903, but makes reference to them.

According to Reuters, the British Ministry of Justice wants to close a significant legal gap. While sharing intimate photos or videos without consent (revenge porn) has been illegal since 2015, the law hasn't covered AI-generated content until now.

Under the proposed legislation, first announced in April 2024, creating or sharing sexually explicit deepfakes without consent could result in fines or jail time. The new law would cover all forms of synthetic media, including video, images, and audio recordings. No timeline for implementation has been announced.

Technology Minister Margaret Jones says the government also plans to crack down on platforms that host this kind of abusive content, implementing stricter controls and tougher penalties. The UK's Revenge Porn Helpline reports that deepfake-related abuse has jumped by more than 400 percent since 2017.

Real-World Impact

Non-consensual sexual deepfakes are reaching into schools. A recent study by The Human Factor found that 60 percent of teachers worry that their students could be involved in deepfake scandals, while 73 percent of parents believe their children wouldn't be involved.

In Miami, two teenagers, ages 13 and 14, were arrested for creating and sharing AI-generated nude images of their classmates. What makes these cases particularly troubling is that the images often spread through private chat groups, where they can circulate for months.

The case of Northern Irish politician Cara Hunter shows how devastating these attacks can be. Three weeks before an important election, someone shared a pornographic deepfake video of her that spread rapidly on WhatsApp. The aftermath included vulgar messages, street harassment, and even family members questioning her denials because the fakes looked so convincing.

Germany is also stepping up its response to the deepfake threat. A new bill proposed by the Federal Council would make sharing AI-generated content that violates personal rights punishable by up to two years in prison or a fine. For deepfakes that intrude on highly personal aspects of life, offenders could face up to five years behind bars.

