
Incident 508: Celebrities' Deepfake Voices Abused with Malicious Intent

Responded
Description: Voices of celebrities and public figures were deepfaked using voice synthesis tools for malicious purposes such as impersonation and defamation, and the resulting audio was shared on platforms such as 4chan and Reddit.


Entities

Alleged: ElevenLabs developed an AI system deployed by Reddit users, ElevenLabs users, and 4chan users, which harmed public figures and celebrities.

Incident Stats

Incident ID: 508
Report Count: 4
Incident Date: 2023-01-30
Editors: Khoa Lam
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified

Risk Subdomain: 4.3. Fraud, scams, and targeted manipulation
(A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.)

Risk Domain: 1. Malicious Actors & Misuse
(The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.)

Entity: Human
(Which, if any, entity is presented as the main cause of the risk.)

Timing: Post-deployment
(The stage in the AI lifecycle at which the risk is presented as occurring.)

Intent: Intentional
(Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.)

Incident Reports


AI-Generated Voice Firm Clamps Down After 4chan Makes Celebrity Voices for Abuse
vice.com · 2023

4chan members used ElevenLabs to make deepfake voices of Emma Watson, Joe Rogan, and others saying racist, transphobic, and violent things.

It was only a matter of time before the wave of artificial intelligence-generated voice startups bec…

Tweet: @elevenlabsio
twitter.com · 2023
ElevenLabs post-incident response

Crazy weekend - thank you to everyone for trying out our Beta platform. While we see our tech being overwhelmingly applied to positive use, we also see an increasing number of voice cloning misuse cases. We want to reach out to Twitter comm…

Emma Watson reads Mein Kampf on 4Chan in deepfake audio trick
thetimes.co.uk · 2023

A British start-up has been forced to introduce safeguards after deepfake audio recordings of the actress Emma Watson reading Mein Kampf by Adolf Hitler and the broadcaster Sir David Attenborough being racist were released.

Users took the v…

Deepfake audio: Emma Watson's case is proof voice-cloning is a problem set to rise
cosmopolitan.com · 2023

Since finding fame at the tender age of ten, through her role as Hermione Granger in the Harry Potter movie franchise, Emma Watson has become a household name. She's also, sadly, become one of the most deepfaked celebrities online.

This wo…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
