Description: A widely circulated video reportedly falsely depicts Donald Trump endorsing Nigerian politician Peter Obi and criticizing President Bola Tinubu. FactCheckAfrica confirmed the video was false and traced its footage to a 2017 U.S. presidential interview about Trump's first 100 days in office. AI-generated audio and altered visuals were reportedly used to fabricate a politically charged statement unrelated to the original content.
Editor Notes: Timeline notes: This video was reportedly in circulation sometime in August 2024. It was included in the database on April 21, 2025.
Entities
Alleged: Unknown deepfake technology developers and Unknown voice cloning technology developers developed an AI system deployed by Unknown actors, which harmed Peter Obi, Media integrity, General public of Nigeria, Electoral integrity, Donald Trump, and Bola Tinubu.
Alleged implicated AI systems: Unknown voice cloning technology and Unknown deepfake technology
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports

Claim:
Donald Trump opposes Tinubu and endorses Peter Obi.
Verdict:
False! The video is a deepfake and originates from a 2017 presidential interview in which Donald Trump speaks about his first 100 days in office.
Full Text:
A circulat…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.



