Description: In New York, linked "Citizens Against Mamdani" accounts reportedly posted AI-generated videos of fictional constituents criticizing then-mayor-elect Zohran Mamdani across Instagram, TikTok, and X. Some videos reportedly displayed a visible Sora watermark, and forensic reviewers reportedly assessed at least one clip as highly likely to be a deepfake. The campaign reportedly appeared to simulate grassroots opposition and drew substantial engagement.
Editor Notes: Timeline notes: The incident date is set to 11/05/2025 based on a reportedly recovered Instagram post from the citizensagainstmamdani account showing a visible Sora watermark and an on-platform posting date of November 5, 2025. This currently appears to be the earliest verified public post identified for the campaign. The incident ID was created 03/21/2026.
Entities
Alleged: Unknown voice cloning technology developers, Unknown deepfake technology developers, and OpenAI developed an AI system deployed by Unknown social media account operators, Unknown disinformation actors targeting Zohran Mamdani, Unknown disinformation actors, Citizens Against Mamdani social media accounts, and Citizens Against Mamdani, which harmed Zohran Mamdani, Voters in New York, General public of New York, Epistemic integrity, Electoral integrity, and American social media users.
Alleged implicated AI systems: X (Twitter), Unknown voice cloning technology, Unknown deepfake technology, TikTok, Sora, and Instagram
Incident Stats
Incident ID
1425
Report Count
1
Incident Date
2025-11-05
Editors
Daniel Atherton
Incident Reports
Reports Timeline
A creepy account that's almost certainly using AI to generate videos of imaginary New Yorkers criticizing mayor-elect Zohran Mamdani raises a frightening prospect: that deepfakes could be used not just to impersonate politicians, but also c…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Seen something similar?
Similar Incidents

Deepfake Obama Introduction of Deepfakes
· 29 reports

Predictive Policing Biases of PredPol
· 17 reports
