AI Incident Database

Incident 1425: 'Citizens Against Mamdani' Accounts Reportedly Posted AI-Generated Videos of Fictional New Yorkers to Simulate Political Opposition

Description: In New York, linked "Citizens Against Mamdani" accounts reportedly posted AI-generated videos of fictional constituents criticizing then-mayor-elect Zohran Mamdani across Instagram, TikTok, and X. Some videos reportedly displayed a visible Sora watermark, and forensic reviewers reportedly assessed at least one clip as highly likely to be a deepfake. The campaign reportedly appeared to simulate grassroots opposition and drew substantial engagement.
Editor Notes: The incident date is set to 11/05/2025 based on a reported recovered Instagram post from the citizensagainstmamdani account showing a visible Sora watermark and an on-platform posting date of November 5, 2025; this currently appears to be the earliest verified public posting identified for the campaign. The incident ID was created 03/21/2026.

Entities

Alleged: Unknown voice cloning technology developers, Unknown deepfake technology developers, and OpenAI developed an AI system deployed by Unknown social media account operators, Unknown disinformation actors targeting Zohran Mamdani, Unknown disinformation actors, Citizens Against Mamdani social media accounts, and Citizens Against Mamdani, which harmed Zohran Mamdani, Voters in New York, General public of New York, Epistemic integrity, Electoral integrity, and American social media users.
Alleged implicated AI systems: X (Twitter), Unknown voice cloning technology, Unknown deepfake technology, TikTok, Sora, and Instagram

Incident Stats

  • Incident ID: 1425
  • Report Count: 1
  • Incident Date: 2025-11-05
  • Editors: Daniel Atherton

Incident Reports

Reports Timeline

"New anti-Mamdani AI videos and the deepfake generation of astroturfing"
fastcompany.com · 2025

A creepy account that's almost certainly using AI to generate videos of imaginary New Yorkers criticizing mayor-elect Zohran Mamdani raises a frightening prospect: that deepfakes could be used not just to impersonate politicians, but also c…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

  • Deepfake Obama Introduction of Deepfakes (Jul 2017 · 29 reports)
  • Predictive Policing Biases of PredPol (Nov 2015 · 17 reports)
  • Wikipedia Vandalism Prevention Bot Loop (Feb 2017 · 6 reports)

