Description: A viral video falsely attributed allegations of misconduct to Democratic vice-presidential nominee Tim Walz using the stolen identity of a former student, Matthew Metro. The video, circulated on X and other social media platforms, garnered millions of views before being flagged for manipulation. Experts concluded that the video was not AI-generated but involved real actors and potential audio manipulation.
Editor Notes: Incident 824 embodies elements of ambiguity that position it within the framework of the liar's dividend, in which the presence of manipulated or falsified content creates confusion and leads both real and fake events to be viewed with skepticism, particularly in discourses about AI-related harm. In this case, a viral video using the stolen identity of Matthew Metro falsely accused Tim Walz of misconduct. Although experts concluded the video was not a deepfake, the spread of the fabricated content fueled widespread doubt, with some speculating that it involved deepfake or audio manipulation. The video's apparent legitimacy, bolstered by Metro's stolen identity and certain accurate biographical details, created conditions in which false claims could circulate widely, further complicating public perception. The uncertainty surrounding the video reflects a key feature of the liar's dividend: even when experts confidently rule out the use of AI, the lingering doubt created by the possibility of technological manipulation undermines trust in authentic narratives. The very existence of disinformation technologies complicates the ability to discern truth.
Entities
Alleged: Unknown deepfake technology developer developed an AI system deployed by an anonymous X user, which harmed Tim Walz and Matthew Metro.
Incident Stats
Incident ID
824
Report Count
1
Incident Date
2024-10-16
Editors
Daniel Atherton
Incident Reports
Reports Timeline
washingtonpost.com · 2024
Matthew Metro didn't recognize the face that popped up on his cellphone screen when he clicked a link that a friend texted him last week. But after hitting play on the online video, he was dismayed by what he saw.
"My name is Matthew Metro,…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
Similar Incidents
Did our AI mess up? Flag the unrelated incidents
Passport checker Detects Asian man's Eyes as Closed
· 22 reports
Biased Google Image Results
· 18 reports