Description: A video reportedly published on May 25, 2025, claimed to show Burkina Faso President Ibrahim Traoré delivering a forceful speech against Western media and exploitation. The video, which reportedly used AI-generated visuals and audio, was shared widely on social platforms. Despite disclaimers identifying it as fictional, the video reportedly circulated widely and was interpreted by some viewers as authentic.
Editor Notes: This incident is part of a cluster of reported deepfake videos circulated online to promote Ibrahim Traoré's leadership, all of which seemingly contribute to a wider disinformation campaign linked to pro-junta sentiment. The cluster appears to have arisen in March 2025 and continued through at least the end of May 2025. See also Incident 1091, a 2023 deepfake incident involving similar variables, and Incident 1283.
Entities
Alleged: Unknown voice cloning technology developers and Unknown deepfake technology developers developed an AI system deployed by Univers Inspirant and Pro-Ibrahim Traoré information ecosystem, which harmed Truth, General public of Burkina Faso, Epistemic integrity, General public, and National security and intelligence stakeholders.
Alleged implicated AI systems: Unknown voice cloning technology and Unknown deepfake technology
Incident Stats
Risk Subdomain
The taxonomy's 23 subdomains provide an accessible and understandable classification of the hazards and harms associated with AI.
3.1. False or misleading information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Misinformation
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Recently, a video has been circulated across various internet platforms, claiming to show a speech by Burkina Faso's military officer and President Ibrahim Traoré under the title: "Protesting Western Media's False Representation of Africa a…
Variants
A "variant" is an AI incident that shares the same causes, harms, and AI system as a known case. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.