Description: AI-driven election disinformation is escalating globally as easy-to-use generative AI tools allow virtually anyone to create convincing deepfakes that mislead voters. This shift has eroded public trust in elections and manipulated voter perceptions. Evidence has been documented in incidents across the U.S., Moldova, Slovakia, Bangladesh, and Taiwan.
Editor Notes: This incident ID is for collective incident reports that detail and survey worldwide AI disinformation campaigns, rather than individual national, state, or local incidents. Related incident IDs should, where possible, be marked as similar incidents so that this ID can be connected to the others.
Entities
Alleged: Unknown deepfake creators, OpenAI and Google developed an AI system deployed by Russian government, Political operatives, Political consultants and Chinese Communist Party, which harmed Voters, Public trust, Political figures, General public, Electoral integrity, Democracy and Civic society.
Incident Stats
Incident ID
674
Report Count
1
Incident Date
2024-03-14
Editors
Daniel Atherton
Incident Reports
Reports Timeline
apnews.com · 2024
LONDON (AP) --- Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone with a smartphone and a devious imagination to create fake -- but convincing -- content aimed at fooling vot…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.