Description: In January 2023, AI-generated videos reportedly began circulating on WhatsApp and social media showing avatars of purported American pan-Africanists expressing support for Burkina Faso's military junta. The videos were reportedly created with the Synthesia platform; Synthesia later confirmed they violated its terms of service and banned the user responsible. The source of the videos remains unidentified.
Editor Notes: This incident reportedly occurred around 01/23/2023. It was added to the AI Incident Database on 06/01/2025. See also Incident 1283.
Entities
Alleged: Synthesia developed an AI system deployed by the pro-Ibrahim Traoré information ecosystem and unknown actors, which harmed Truth, General public of Burkina Faso, General public, Epistemic integrity, National security and intelligence stakeholders, and General public of West Africa.
Incident Stats
Risk Subdomain
A further 23 subdomains provide an accessible and understandable classification of the hazards and harms associated with AI.
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Reports Timeline
Bizarre AI-generated videos of people expressing support for Burkina Faso's new military junta have appeared online, in what could be a clumsy attempt to spread pro-military propaganda.
It's unclear who created the videos, but it appears th…
Variants
A "variant" is an AI incident similar to a known case: it shares the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.