Description: A reported wave of allegedly AI-generated and misleading media content promoting Burkina Faso's junta leader Ibrahim Traoré has been circulating. Investigators described videos attributing development projects to Traoré, reportedly part of a campaign by Russian-linked media to support junta regimes across the Sahel. Experts warned of the campaign's destabilizing effects and its potential to normalize military rule and foreign ideological influence.
Editor Notes: This incident is part of a cluster of reported deepfake videos circulated online to promote Ibrahim Traoré's leadership, all of which appear to contribute to a wider disinformation campaign linked to pro-junta sentiment. The cluster seems to have arisen in March 2025 and continued through at least the end of May 2025. See also Incident 1091 for a 2023 deepfake incident involving similar themes. See also Incident 1283: Purported AI-Enabled Pro-Russian Influence Campaign Centered on Burkina Faso's Ibrahim Traoré and Disseminated Across African Media.
The OECD AI Incidents and Hazards Monitor (AIM) automatically collects and classifies AI-related incidents and hazards in real time from reputable news sources worldwide.
Entities
Alleged: Unknown deepfake technology developers developed an AI system deployed by Pro-Russian influencers, Pro-Kremlin actors, Pro-Ibrahim Traoré information ecosystem, and Government of Russia, which harmed Truth, General public of West Africa, General public of Burkina Faso, General public, Epistemic integrity, and National security and intelligence stakeholders.
Alleged implicated AI systems: Unknown deepfake technology and Social media
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Reports Timeline
Throughout West Africa it has become difficult to avoid the blitz of viral videos, memes and social media posts glorifying the military rulers of Burkina Faso, Mali and Niger.
Experts say that there has been a calculated campaign to elevate…
Variants
A "variant" is an AI incident that shares the same causes, harms, and AI system as a known case. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.