AI Incident Database

Report 2014

Related Incidents

Incident 328 · 2 Reports
Fake Accounts Using GAN Faces Deployed by Propaganda Campaign on Social Platforms

Pro-China Propaganda Act Used Fake Followers Made With AI-Generated Images
pcmag.com · 2020

A pro-China propaganda campaign that’s been bashing the US on social media created fake followers with the help of AI-generated images. 

Since June, the campaign has been posting English-language videos critical of the Trump administration on Facebook, Twitter, and YouTube, according to research company Graphika, which has been tracking the group’s activities.

Graphika dubs the campaign “Spamouflage Dragon.” Like other propaganda operations, the pro-China group used fake accounts to share and comment on its content to help it gain wider circulation. However, Graphika noticed something odd about the profile photos belonging to these fake accounts: in some cases, the headshots appear to be the work of an AI program designed to create artificial human faces.

At first glance, the profile photos look legitimate. But Graphika spotted strange commonalities in the images, such as blurred backgrounds and eye positions that line up identically from photo to photo. This indicates the photos were likely the product of a “generative adversarial network,” or GAN, a machine-learning technique adept at creating seemingly real, but ultimately fake, human faces.

A GAN can generate a synthetic face by studying existing images of real people and learning to recombine their facial features into a new image. The results aren’t always perfect, however: the program often has trouble rendering earrings and other objects around the fake person’s face, and it tends to leave backgrounds vague.
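The article doesn’t detail the model behind these faces, but the adversarial setup it describes can be sketched on toy data. Below is a minimal, illustrative numpy GAN: a two-parameter “generator” learns to mimic samples from a target distribution (standing in for real face photos) by fooling a logistic “discriminator.” Every name, learning rate, and the weight-decay stabilizer here is an assumption made for the sketch, not anything taken from Graphika’s report.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Stand-in for "real face photos": scalar samples from N(4, 1).
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = a*z + b, starting far from the data distribution.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0
lr, batch, decay = 0.05, 64, 0.05  # decay: L2 term on D, damps oscillation

for step in range(3000):
    # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    x_real = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    err_real = sigmoid(w * x_real + c) - 1.0   # grad of -log D(x_real) w.r.t. logit
    err_fake = sigmoid(w * x_fake + c)          # grad of -log(1 - D(x_fake))
    w -= lr * (np.mean(err_real * x_real + err_fake * x_fake) + decay * w)
    c -= lr * (np.mean(err_real + err_fake) + decay * c)

    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    err = sigmoid(w * x_fake + c) - 1.0         # grad of -log D(G(z))
    a -= lr * np.mean(err * w * z)
    b -= lr * np.mean(err * w)

# After training, the generator's offset b has drifted toward the
# real-data mean: fake samples are now hard to tell from real ones.
```

Real face generators such as StyleGAN use deep convolutional networks and far more elaborate training, but the alternating fool-the-discriminator loop is the same idea.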

Nevertheless, the technology has sparked fears about bad actors exploiting AI-created media to pump out disinformation over social media. One reason: a reverse image search can often reveal whether a user's profile photo is legitimate or has been repurposed from somewhere else, but the same check can't be done for a freshly generated photo made by an AI, which matches nothing already on the web.
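Reverse image search engines typically compare a query against compact fingerprints of images they have already indexed, which is why a repurposed photo can be traced but a freshly generated one cannot: it has no prior copy to match. A minimal sketch of one such fingerprint, an “average hash,” on synthetic pixel data (the hash scheme and sizes here are illustrative; production systems use far more robust signatures):

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Perceptual 'average hash': block-average down to hash_size x hash_size,
    threshold at the mean, and flatten to a bit vector."""
    h, w = img.shape
    cropped = img[:h - h % hash_size, :w - w % hash_size]
    blocks = cropped.reshape(hash_size, h // hash_size,
                             hash_size, w // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(bits_a, bits_b):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(bits_a != bits_b))

rng = np.random.default_rng(1)
original = rng.random((64, 64))                                   # an "indexed" photo
reposted = np.clip(original + rng.normal(0, 0.01, (64, 64)), 0, 1)  # lightly re-encoded copy
fresh = rng.random((64, 64))                                      # brand-new image, never indexed

d_repost = hamming(average_hash(original), average_hash(reposted))
d_fresh = hamming(average_hash(original), average_hash(fresh))
# A re-encoded copy stays close to the original's hash; a brand-new image
# matches nothing, which is why freshly generated GAN faces evade this check.
```

The comparison is a design trade-off: perceptual hashes deliberately survive resizing and recompression so that reposts are caught, but they can only ever match images the index has seen before.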

In the case of Spamouflage Dragon, the pro-China group used the AI-generated photos to create fake followers on Twitter and YouTube. However, the campaign itself was pretty shoddy, according to Graphika. “The videos were clumsily made, marked by language errors and awkward automated voice-overs,” the research company said in its report.

The computer-assisted text-to-voice recordings were so bad that some videos pronounced “the US” as “us.” Other language errors included headlines and subtitles such as “Public blamed Trump sinaction” and “very good at be mischievous.”

As a result, the videos failed to receive any engagement from real social media users. The campaign ran from June to early August, posting videos critical of President Trump’s ban on TikTok and his approach to COVID-19. However, the social media companies have since taken down the group’s videos and the affiliated user accounts. 

Whether the Chinese government was behind the campaign remains unclear. However, US intelligence officials warned last week that foreign governments—including China, Russia, and Iran—will try to sway US public opinion to influence the upcoming presidential election.

Graphika says this isn’t the first time it has encountered a propaganda campaign incorporating AI-generated photos into its scheme. But the company warns: “Given the ease with which threat actors can now use publicly available services to generate fake profile pictures, this tactic is likely to become increasingly prevalent.”

Read the Source
