AI Incident Database

Report 2872

Associated Incidents

Incident 508 · 4 Reports
Celebrities' Deepfake Voices Abused with Malicious Intent

Emma Watson reads Mein Kampf on 4Chan in deepfake audio trick
thetimes.co.uk · 2023

A British start-up has been forced to introduce safeguards after the release of deepfake audio recordings of the actress Emma Watson reading Adolf Hitler's Mein Kampf and the broadcaster Sir David Attenborough making racist remarks.

Users employed the voice-cloning technology to create the audio files and posted them on the message board 4chan in the latest abuse of AI technology.

The research company ElevenLabs released its new text-to-audio tool that enables users to type in words and hear them reproduced immediately in a human voice. Founded by two former engineers from Google and Palantir, it has also developed voice cloning and dubbing technology to be used in the film and publishing industries.

However, some users appear to have used the voice-cloning tool, which can generate a deepfake from as little as one minute of recorded audio, to create recordings of Watson, Attenborough and other prominent people. There were also fake clips of Sir Alec Guinness and President Biden making sexist and transphobic comments.

ElevenLabs says it will introduce safeguards to prevent further abuse of the product. The company tweeted: "Thank you to everyone for trying out our Beta platform. While we see our tech being overwhelmingly applied to positive use, we also see an increasing number of voice-cloning misuse cases."

Text-to-speech AI and voice cloning are among "generative AI" technologies that are catching the eye of investors and the public. The chatbot ChatGPT and the image generators Stable Diffusion, DALL-E and Midjourney are all promising to revolutionise and disrupt the creative industries and other businesses.

Mati Staniszewski and Piotr Dabkowski, the founders of ElevenLabs, recently announced $2 million (£1.6 million) in funding. However, AI companies are currently caught in an "openness dilemma": whether to release powerful tools that generate interest but also carry risks.

Microsoft recently announced an artificial intelligence program called VALL-E that can clone someone's voice from a three-second audio clip. It has not made the technology available to the general public, however, citing ethical issues including misuse with "spoofing voice identification or impersonating a specific speaker".

ElevenLabs has also recognised that "speaking in somebody else's voice raises ethical concerns since it can be used for nefarious purposes".

Henry Ajder, an expert in deepfakes and generative AI, said: "What they [ElevenLabs] have developed, technologically, is very impressive [but] by opening up these models they have made, unfortunately, a fairly naive judgment about the intentions of those people out there who will use those models.

"In this climate of hype cycle around generative AI, a lot of companies are rushing and trying very hard to stand out from a very noisy scene. I worry that the kind of feverish rush is perhaps leading certain organisations to cut corners when it comes to safety considerations, although I'm not saying that's necessarily what motivated Eleven in this case," Ajder said.

Deepfake audio has previously been used to narrate a documentary film, rile a controversial commentator and defraud a bank. The makers of a film about the chef Anthony Bourdain ignited a debate in 2021 after they used an AI simulation of the late chef's voice to read some of his writing. They called it modern storytelling but others said viewers should have been informed.

The Canadian psychologist Jordan Peterson threatened to sue a website in 2019 after it enabled users to generate clips of him saying whatever they typed, and the following year a bank in the United Arab Emirates was defrauded of $35 million when the deepfake voice of a company director was used to convince a branch manager to authorise transactions.

