Description: AI deepfake detection tools are reportedly failing voters in the Global South due to biases in their training data. These tools, which prioritize the English language and Western faces, show reduced accuracy when detecting manipulated content from non-Western regions. This detection gap threatens election integrity and amplifies misinformation, leaving journalists and researchers with inadequate resources to combat the issue.
Entities
Alleged: Unknown deepfake detection technology developers, True Media, and Reality Defender developed and deployed an AI system, which harmed Global South citizens, political researchers, Global South local fact-checkers, non-native English speakers, Global South journalists, and civil society organizations in developing countries.
Incident Stats
Incident ID
801
Report Count
1
Incident Date
2024-09-02
Editors
Daniel Atherton
Incident Reports
Reports Timeline
wired.com · 2024
Recently, former president and convicted felon Donald Trump posted a series of photos that appeared to show fans of pop star Taylor Swift supporting his bid for the US presidency. The pictures looked AI-generated, and WIRED was able to conf…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.