AI Incident Database

Report 2815

Related Incidents

Incident 494 · 5 Reports
Female Celebrities' Faces Shown in Sexually Suggestive Ads for Deepfake App

Disturbing App That Advertised Emma Watson Deepfake Was Removed From App Stores
jezebel.com · 2023

Earlier this week, an NBC report unearthed a celebrity face-swapping app, Facemega, with the potential to easily create deepfake porn depicting famous or public-facing women. Deepfake porn refers to fake but highly realistic, often AI-generated porn and sexual imagery of people without their consent. Unsurprisingly, it typically targets women and especially female celebrities; victims have thus far included popular female Twitch streamers and, as early as 2018, Gal Gadot.

Per NBC’s report, Facemega ran hundreds of different ads on Facebook, Instagram, and other Meta apps, advertising how easily users could face-swap themselves with the likes of Emma Watson or Scarlett Johansson, accompanied by highly realistic, sexually suggestive video examples. (This, of course, is despite Meta having banned most deepfake content back in 2020.) The app also placed ads in the Apple App Store, where it first became available sometime last year. As of Tuesday evening, NBC’s Kat Tenbarge reported that the Apple App Store had taken down the face-swapping app, and Meta had stopped running ads for it, shortly after NBC’s original report. But the app remains on Google Play.

NBC’s report notes that “no sexual acts were shown in the videos” but “their suggestive nature” makes it clear that at least one purpose of the app is to “generate faked sexual content.” NBC also found that “some of the ads showed what looked like the beginning of pornographic videos with the well-known sound of the porn platform Pornhub’s intro track playing.”

“Replace face with anyone,” the captions on some of the Meta ads say. “Enjoy yourself with AI swap face technology.” Facemega also offers users the opportunity to face-swap varying celebrities onto preset videos, and among other categories, includes a “Hot” category that “features videos of scantily clad women and men dancing and posing.”

Even as Meta and Apple seem to have taken action to rein in Facemega, with or without the app, deepfake porn is a rapidly growing crisis. Per one streaming researcher who spoke to NBC, last month saw the greatest volume of deepfake porn videos uploaded ever. These alarming numbers—and a recent Twitch scandal involving a popular male streamer outed for watching deepfake porn of female streamers in January—make it clear that none of this is hypothetical. And the problem is getting worse, as deepfake tools become more accessible.

Face-swapping apps and other AI technologies can be wielded to create deepfake porn targeting not just celebrities, but friends, co-workers, and even casual acquaintances. Experts say that increasingly popular AI-generated imagery apps like Lensa AI have been a boon for child sexual abuse content. One researcher wrote in Wired last year that upon uploading childhood photos of herself to Lensa, “what resulted were fully nude photos of an adolescent and sometimes childlike face but a distinctly adult body.”

And, especially concerning, our laws are wildly unequipped to protect people. While most states have varying anti-cyber-exploitation laws to rein in “revenge porn” (nonconsensual nude images of individuals shared by former partners or harassers), the only states that explicitly prohibit nonconsensual deepfake sexual content are California, Virginia, and Texas. As apps like Facemega proliferate, likely aided by social platforms like Meta, we’re simply not prepared for the scope of the damage this could inflict.

Read Source
