Description: TikTok videos purportedly depicted large crowds chanting pro-AfD slogans, reportedly created by replacing the original audio with manipulated sound. The footage, originally from January 2024 anti-extremism protests, was repurposed in January 2025 to mislead viewers. Purportedly AI-generated or altered audio was added to create the illusion of far-right support. Fact-checkers confirmed the deception, and the accounts responsible later deleted the content, claiming no intent to spread misinformation.
Editor Notes: Please also refer to Incidents 909, 929, 1060, and 1061 for similar incidents. While this incident has not been explicitly attributed to Operation Overload (also known as Matryoshka or Storm-1679), its thematic content and manipulation tactics appear adjacent. No direct forensic or institutional linkage has been established as of this writing.
Entities
Alleged: Unknown voice cloning technology developers developed an AI system deployed by TikTok user Alice Weidel Fan, TikTok user AfD John, and AfD supporters, which harmed Protesters against right-wing extremism in Germany, German election integrity, and General public of Germany.
Alleged implicated AI systems: Unknown voice cloning technology and TikTok
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Reports Timeline
Several TikTok videos, some with millions of views, show large demonstrations where chants of "East, East, East Germany" can be heard. The profiles that spread them are called "Alice Weidel Fan" or "AfD John". In the comments, many users expr…
Ahead of Germany's federal elections, fake news and AI-generated propaganda are multiplying online.
As German federal elections draw closer, TikTok videos allegedly showing hundreds of thousands of people marching for the far-right Alternati…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.