Incident 565: AI-Generated Imagery and Multilingual Disinformation in Chinese Campaign Regarding Maui Wildfires

Description: In a disinformation campaign concerning wildfires across Maui, Chinese operatives utilized AI-generated imagery to enhance the credibility of false narratives. These narratives claimed that the wildfires were the result of a secret "weather weapon" being tested by the United States. Researchers from Microsoft and other organizations identified these AI-generated images as a significant new tactic in influence operations.


Alleged: An unknown developer created an AI system deployed by the Chinese government, which harmed the Hawaiian government, the general public, and the American government.

Incident Stats

Incident ID
Report Count
Incident Date
Editor: Daniel Atherton
China Sows Disinformation About Hawaii Fires Using New Techniques · 2023

When wildfires swept across Maui last month with destructive fury, China’s increasingly resourceful information warriors pounced.

The disaster was not natural, they said in a flurry of false posts that spread across the internet, but was th…

Chinese Disinfo Blames Maui Fires on Deadly US 'Weather Weapon' · 2023

Researchers say they’ve discovered 85 social media accounts and blogs originating from China and working in tandem to amplify a conspiracy theory claiming the deadly fires in Maui were caused by a secretive “weather weapon” unleashed by the…

Mistrust on cause of Maui fire fueled by Chinese disinformation · 2023

Erika Pless, with a mask to filter out dust, looks towards a field near the alleged origin of the West Maui Wildfire, in Lahaina on the island of Maui, Hawaii, Wednesday, Aug. …

Researchers: Disinformation campaign spread after wildfires slowed disaster response · 2023

HONOLULU (HawaiiNewsNow) - A disinformation campaign that sprang up almost immediately after wildfires ravaged Maui was spread by China and Russia, researchers have concluded.

And, they say, that campaign made the government’s response to t…


A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
