AI Incident Database

Report 3732

Related Incidents

Incident 641 · 12 Reports
Nonconsensual Deepfake Porn of Bobbi Althoff Spreads Rapidly on X

Podcaster Bobbi Althoff is the latest target of explicit deepfakes
abcnews.go.com · 2024

American podcast host Bobbi Althoff, 26, is the latest target of a disturbing new trend of non-consensual deepfakes.

"Hate to disappoint you all, but the reason I'm trending is 100% not me & is definitely AI generated," the popular podcaster posted Wednesday on an Instagram story.

Just a few weeks ago, fabricated explicit images of Taylor Swift circulated on the social media platform X, generating an uproar from fans and politicians alike. And now, Althoff is the latest public victim.

Althoff, a mother of two young children, became known on TikTok for her deadpan humor and sarcastic parenting advice. She's been posting on TikTok since 2021, but recently pivoted her attention to promoting her podcast, "The Really Good Podcast."

With over 7 million followers on TikTok and 3 million on Instagram, Althoff is quickly becoming a massive star in podcasting. She recently landed big-name guests such as rapper Wiz Khalifa, Bobby Flay, and Jessica Alba.

Wednesday afternoon, Althoff said, she noticed her name was trending on X, so she went to look at the posts, thinking they might be related to her podcast.

Speaking in a video posted to her Instagram story, Althoff said, "I felt like it was a mistake or something, that it was bots or something. I didn't realize that it was people actually believing that that was me until my whole team called me and were like, 'Is this real?'"

Althoff added that because the clip was so graphic her reaction was to cover her eyes.

Genevieve Oh, a researcher who focuses on deepfakes, watched the trend evolve in real time online and reach over 6.5 million views. She told ABC News the trend was fueled by "voluminous tweets" with users soliciting comments and retweets in exchange for a link to the explicit video.

While the fake Swift images may have been viewed millions of times, it was clear to anyone who saw them that the images were fabricated. However, this AI-generated video of Althoff was nearly indistinguishable from reality and extremely graphic in nature.

Because of its realism, the fake video was portrayed as a "leak," with many online users piling on and reposting the footage, incentivizing retweets and comments.

According to Oh, examples of the video portraying Althoff were still up on X 12 hours after her name started trending online. X received criticism for not reacting quickly enough to take down the fake Swift images, allowing them to spread on the platform.

At the time, X issued a statement saying it had a "zero-tolerance policy towards such content" and that it was "actively removing all identified images and taking appropriate actions against the accounts responsible for posting them."

Non-consensual deepfakes have made headlines recently, and this technology, which has been used to target women and girls for years, is becoming increasingly accessible.

A few years back, a user needed to have a certain level of technical skills to create AI-generated content, but now it's just a matter of downloading an app or clicking a few buttons.

Now experts say there's an entire commercial industry that thrives on creating and sharing digitally created content of sexual abuse, including websites that have hundreds of thousands of paying members.

"Sites dedicated to sharing this abuse are able to grow and make money off of violating consent," explained filmmaker Sophie Compton who produced "Another Body", a documentary following a college student as she discovers non-consensual deepfakes of herself circulating online.

"Creators are getting bolder and bolder because they face no consequences. And as creators get bolder, women are being silenced and shamed," Compton added.

Many of the victims aren't celebrities and don't end up coming forward for fear of attracting more attention to the images.

Dr. Mary Anne Franks, a law professor and president of the Cyber Civil Rights Initiative, a non-profit organization dedicated to combatting online abuse, previously told ABC News that it's hard to undo the harm once this content, real or fabricated, is shared publicly.

"It is not just the psychological harm, the intense depression, the anxiety, but also the economic consequences, because it can lead to further harassment online, online and offline harassment, requiring a lot of victims to invest in security systems or change the way that they go to work or go to school," she added.

Oh believes that we will continue to see unsuspecting women targeted in these types of viral campaigns.

While individual states are moving toward passing bills that would make sharing this type of content illegal, there is no federal legislation protecting victims or deterring bad actors.

ABC News has reached out to Althoff for comment.

