AI Incident Database

Report 2872

Related Incidents

Incident 508 · 4 Reports
Celebrities' Deepfake Voices Abused with Malicious Intent

Emma Watson reads Mein Kampf on 4Chan in deepfake audio trick
thetimes.co.uk · 2023

A British start-up has been forced to introduce safeguards after deepfake audio recordings were released of the actress Emma Watson reading Adolf Hitler's Mein Kampf and the broadcaster Sir David Attenborough making racist remarks.

Users used the voice-cloning technology to create the audio files and posted them on the message board 4Chan in the latest abuse of AI technology.

The research company ElevenLabs released its new text-to-audio tool that enables users to type in words and hear them reproduced immediately in a human voice. Founded by two former engineers from Google and Palantir, it has also developed voice cloning and dubbing technology to be used in the film and publishing industries.

However, some users appear to have used the voice-cloning tool, which can generate a deepfake from as little as one minute of recorded speech, to create recordings of Watson, Attenborough and other prominent people. There were also fake clips of Sir Alec Guinness and President Biden making sexist and transphobic comments.

ElevenLabs says it will introduce safeguards to prevent further abuse of the product. The company tweeted: "Thank you to everyone for trying out our Beta platform. While we see our tech being overwhelmingly applied to positive use, we also see an increasing number of voice-cloning misuse cases."

Text-to-speech AI and voice cloning are among "generative AI" technologies that are catching the eye of investors and the public. The chatbot ChatGPT and the image generators Stable Diffusion, DALL-E and Midjourney are all promising to revolutionise and disrupt the creative industries and other businesses.

Mati Staniszewski and Piotr Dabkowski, the founders of ElevenLabs, recently announced $2 million (£1.6 million) in funding. However, AI companies are currently caught in an "openness dilemma": whether to release powerful tools that generate interest but also carry risks.

Microsoft recently announced an artificial intelligence program called VALL-E that can clone someone's voice from a three-second audio clip. It has not made the technology available to the general public, however, citing ethical issues including misuse with "spoofing voice identification or impersonating a specific speaker".

ElevenLabs has also recognised that "speaking in somebody else's voice raises ethical concerns since it can be used for nefarious purposes".

Henry Ajder, an expert in deepfakes and generative AI, said: "What they [ElevenLabs] have developed, technologically, is very impressive [but] by opening up these models they have made, unfortunately, a fairly naive judgment about the intentions of those people out there who will use those models.

"In this climate of hype cycle around generative AI, a lot of companies are rushing and trying very hard to stand out from a very noisy scene. I worry that the kind of feverish rush is perhaps leading certain organisations to cut corners when it comes to safety considerations, although I'm not saying that's necessarily what motivated Eleven in this case," Ajder said.

Deepfake audio has previously been used to narrate a documentary film, rile a controversial commentator and defraud a bank. The makers of a film about the chef Anthony Bourdain ignited a debate in 2021 after they used an AI simulation of the late chef's voice to read some of his writing. They called it modern storytelling but others said viewers should have been informed.

The Canadian psychologist Jordan Peterson threatened to sue a website in 2019 after it enabled users to generate clips of him saying whatever they typed, and the following year a bank in the United Arab Emirates was defrauded of $35 million when the deepfake voice of a company director was used to convince a branch manager to authorise transactions.

Read Source
