Incident 486: AI Video-Making Tool Abused to Deploy Pro-China News on Social Media

Description: Synthesia's AI video-generation tool was reportedly used by Spamouflage to disseminate pro-China propaganda on social media through videos featuring highly realistic fictitious news anchors.

Alleged: Synthesia developed an AI system deployed by Spamouflage Dragon, which harmed Synthesia, YouTube users, Twitter users, and Facebook users.

Incident Stats

Incident ID: 486
Report Count: 5
Incident Date: 2022-12-01
Editors: Khoa Lam

The People Onscreen Are Fake. The Disinformation Is Real.
nytimes.com · 2023

In one video, a news anchor with perfectly combed dark hair and a stubbly beard outlined what he saw as the United States’ shameful lack of action against gun violence.

In another video, a female news anchor heralded China’s role in geopoli…

Deepfake It Till You Make It
graphika.com · 2023

Pro-Chinese Actors Promote AI-Generated Video Footage of Fictitious People in Online Influence Operation

  • In late 2022, Graphika observed limited instances of Spamouflage, a pro-Chinese influence operation (IO), promoting content that inclu…

Research: Deepfake 'News Anchors' in Pro-China Footage
voanews.com · 2023

Washington — The "news broadcasters" appear stunningly real, but they are AI-generated deepfakes in first-of-their-kind propaganda videos that a research report published Tuesday attributed to Chinese state-aligned actors.

The fake anchors …

Deepfake newsreaders pump out pro-China news
thetimes.co.uk · 2023

A British company's AI technology has been hijacked to create a pro-Chinese news channel featuring a pair of computer-generated presenters.

Wolf News had been building a profile on social media but was exposed this month as being linked to …

Deepfakes and Disinformation Pose a Growing Threat in Asia
thediplomat.com · 2023

Recent news on the use of computer-generated avatars by a pro-China influence campaign has once again aimed a spotlight at the usage of deepfakes. The influence campaign was first observed by intelligence company Graphika in late 2022. Vide…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
