Description: A purported AI-manipulated video falsely showing Citizen TV anchor Swaleh Mdoe reporting on the bombing of a Kenyan doctor's home circulated widely on Facebook. The video reportedly used AI-generated audio and visuals to fabricate a conspiracy in which pharmaceutical companies targeted the doctor for promoting a "miracle cure." In reality, the explosion footage was from Ohio, the doctor was fictitious, and the content aimed to manipulate viewers into purchasing an unproven health product.
Editor Notes: Timeline notes: The reported video emerged sometime in January 2025. By the time Africa Check published its report on January 20, 2025, the video had reportedly garnered over 497,000 views. The report was added to the database on April 21, 2025.
Entities
Alleged: Unknown deepfake technology developers and Unknown voice cloning technology developers developed an AI system deployed by Unknown scammers, which harmed Swaleh Mdoe, Citizen TV, General public of Kenya, and Media integrity.
Alleged implicated AI systems: Unknown deepfake app and Unknown voice cloning technology
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.3. Fraud, scams, and targeted manipulation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Reports Timeline

IN SHORT: A viral Facebook video claims that a Kenyan doctor's house has been destroyed in an explosion linked to his criticism of pharmaceutical companies. It also shows him promoting a "miracle cure" for unnamed chronic diseases. But the …
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Similar Incidents

Deepfake Obama Introduction of Deepfakes · 29 reports
Defamation via AutoComplete · 28 reports

