Description: Between March and May 2025, at least 294 purported AI-generated deepfake music videos were uploaded across a reported 127 YouTube channels, collectively garnering over 8.6 million views by the end of May. The videos depict global celebrities (including Beyoncé, Rihanna, R. Kelly, Selena Gomez, Eminem, and Justin Bieber) praising Burkinabe President Ibrahim Traoré and endorsing military rule.
Editor Notes: This incident is part of a cluster of reported deepfake videos circulated online to promote Ibrahim Traoré's leadership, all of which appear to contribute to a wider disinformation campaign linked to pro-junta sentiment. The cluster appears to have emerged in March 2025 and continued through at least the end of May 2025. See also Incident 1091 for a 2023 deepfake incident involving similar themes. See also Incident 1283: Purported AI-Enabled Pro-Russian Influence Campaign Centered on Burkina Faso's Ibrahim Traoré and Disseminated Across African Media.
Entities
Alleged: Unknown voice cloning technology and Unknown deepfake technology developers developed an AI system deployed by Unknown actors and Pro-Ibrahim Traoré information ecosystem, which harmed Truth, Selena Gomez, Rihanna, R. Kelly, Justin Bieber, General public of Burkina Faso, General public, Epistemic integrity, Eminem, Beyoncé, and National security and intelligence stakeholders.
Alleged implicated AI systems: Unknown voice cloning technology, YouTube, Unknown deepfake technology, and Social media
Incident Stats
Risk Subdomain
A further 23 subdomains provide an accessible and understandable classification of the hazards and harms associated with AI.
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
What do Beyoncé, Rihanna, R. Kelly, Selena Gomez and Eminem have in common -- besides being world famous singers? They also all appear to have recorded a version of a banger called "God Protect Ibrahim Traoré" and posted it on YouTube, Face…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

