AI Incident Database

Incident 1092: Purported AI-Generated Video Portrays Pope Leo XIV Addressing Ibrahim Traoré of Burkina Faso

Description: A purported 36-minute AI-generated deepfake video circulated online portraying Pope Leo XIV delivering a speech to Burkina Faso's President Ibrahim Traoré. The Vatican Press Office and Catholic broadcaster Patrick Madrid publicly confirmed the video was fabricated.
Editor Notes: This incident is part of a cluster of reported deepfake videos circulated online to promote Ibrahim Traoré's leadership, all of which appear to contribute to a wider disinformation campaign tied to pro-junta sentiment. The cluster appears to have emerged in March 2025 and continued through at least the end of May 2025. See also Incident 1091 for a 2023 deepfake incident involving similar themes.


Entities

Alleged: Unknown deepfake technology developers and Unknown voice cloning technology developers developed an AI system deployed by Unknown actors and Pro-Traoré information ecosystem, which harmed Truth, Roman Catholic public, and Pope Leo XIV.
Alleged implicated AI systems: Unknown voice cloning technology and Unknown deepfake technology

Incident Stats

Incident ID
1092
Report Count
1
Incident Date
2025-05-23
Editors
Daniel Atherton
Applied Taxonomies
MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

4.1. Disinformation, surveillance, and influence at scale

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

  1. Malicious Actors & Misuse

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Intentional

Incident Reports

Reports Timeline

Patrick Madrid Exposes Deepfake Video of Pope Leo XIV
relevantradio.com · 2025

Patrick Madrid and producer Cyrus are sounding the alarm: a slick 36-minute video making the rounds on YouTube---supposedly a speech from Pope Leo XIV---isn't real at all. It's an AI-generated deepfake.

The video kicks off with a dramatic m…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

Selected by our editors

Purported Synthesia Avatars Used in Alleged Pro-Junta Deepfake Video Supporting Ibrahim Traoré in Burkina Faso

Jan 2023 · 1 report
By textual similarity

Deepfake Obama Introduction of Deepfakes
Jul 2017 · 29 reports

Hackers Break Apple Face ID
Sep 2017 · 24 reports

Wikipedia Vandalism Prevention Bot Loop
Feb 2017 · 6 reports


2024 - AI Incident Database