Incident 392: Facebook's AI-Supported Moderation Failed to Classify Terrorist Content in East African Languages

Description: Facebook's algorithmic content-moderation system for East African languages reportedly failed to identify violating content on the platform, such as terrorist material, while also mistakenly flagging non-terrorist content.

Alleged: Facebook developed and deployed an AI system, which harmed Facebook users speaking East African languages and Facebook users in East Africa.

Incident Stats

Incident ID: 392
Report Count: 2
Incident Date: 2015-06-01
Editors: Khoa Lam

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.