Description: A viral video falsely accused Democratic vice-presidential nominee Tim Walz of misconduct by using the stolen identity of former student Matthew Metro. Circulated on X and other platforms, the video reached millions of viewers before being flagged as manipulated. U.S. intelligence later revealed that it and three other similar events were part of the Russian disinformation campaign Storm-1516, which aims to disrupt the 2024 elections.
Editor Notes: Incident 824 appears to be part of a sophisticated disinformation campaign that leverages stolen identities and fabricated content to sow confusion and distrust. While initial reports suggested that the video falsely accusing Tim Walz of misconduct was AI-generated, further analysis clarified that it was a "cheap fake" impersonation rather than AI-generated content. The incident is nonetheless tied to Storm-1516, a Russian-aligned network known for promoting false narratives aimed at disrupting the 2024 election, with apparent support for Trump's candidacy, often using generative AI. Storm-1516's tactics include fabricated whistleblower videos, frequently amplified by far-right influencers, which complicates public perception. Even debunked claims can persist and damage reputations, particularly through the liar's dividend: the phenomenon in which the mere existence of disinformation technologies creates ambiguity and erodes trust. In cases like Incident 824, combining real biographical details (such as Matthew Metro's identity) with false narratives makes it harder for the public to discern truth from fiction. These manipulations not only spread false claims but also amplify skepticism about genuine content, deepening the challenge of ascertaining the truth in a media landscape rife with disinformation.

Three other known harm events were part of the same line of attack. The first was a false accusation posted on October 5, in which an anonymous individual claimed Governor Tim Walz had abused him during the 2004-2005 FLEX Program in Minnesota; the accuser was interviewed by John Mark Dougan, a former Florida sheriff's deputy now living in Russia who has alleged ties to Russian disinformation campaigns. (For more on Dougan and his network of fake sites, see Incident 701 in particular.) The second was a baseless claim posted on October 12 by an X account credited as "Matt Wallace," alleging inappropriate behavior by Walz during student trips to China. The third was a manipulated email screenshot shared on October 13 by an X account called "Black Insurrectionist," suggesting misconduct with a minor at a concert in 1995. Each of these incidents followed a similar pattern of disinformation, using fabricated or altered details to create an air of credibility and further complicating efforts to refute the false claims.
Entities
Alleged: Unknown deepfake technology developer developed an AI system deployed by Anonymous X user, which harmed Tim Walz and Matthew Metro.
Alleged implicated AI systems: Unknown deepfake apps and Unknown voice cloning technology
Incident Statistics
Risk Subdomain: 4.1. Disinformation, surveillance, and influence at scale
(One of 23 subdomains that provide an accessible and understandable classification of hazards and harms associated with AI.)

Risk Domain: Malicious Actors & Misuse
(The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.)

Entity (the entity, if any, presented as the main cause of the risk): Human

Timing (the stage in the AI lifecycle at which the risk is presented as occurring): Post-deployment

Intent (whether the risk is presented as an expected or unexpected outcome of pursuing a goal): Intentional