Description: Facebook's algorithmic ad-moderation system failed to flag hateful language and policy-violating content, including calls for killings, in ads submitted in English and Swahili.
Entities
Alleged: Facebook developed and deployed an AI system, which harmed Facebook users speaking Swahili, Facebook users speaking English, and Facebook users.
MIT Taxonomy Classifications
Taxonomy Details
Incident Number
The number of the incident in the AI Incident Database.
393
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
1.2. Exposure to toxic content
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Discrimination and Toxicity
Entity
Which entity, if any, is presented as the main cause of the risk.
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring.
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
Unintentional
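For readers who want to work with these classifications programmatically, the sketch below models the incident's taxonomy fields as a small Python data structure. This is a minimal, hypothetical representation: the class and enum names are illustrative conveniences, not part of any official AI Incident Database or MIT risk-taxonomy API.

```python
# Hypothetical sketch of an incident's risk-taxonomy record.
# Names are illustrative only; they do not reflect an official AIID schema.
from dataclasses import dataclass
from enum import Enum


class RiskDomain(Enum):
    """The seven top-level domains of the Domain Taxonomy of AI Risks."""
    DISCRIMINATION_AND_TOXICITY = 1
    PRIVACY_AND_SECURITY = 2
    MISINFORMATION = 3
    MALICIOUS_ACTORS_AND_MISUSE = 4
    HUMAN_COMPUTER_INTERACTION = 5
    SOCIOECONOMIC_AND_ENVIRONMENTAL_HARMS = 6
    AI_SYSTEM_SAFETY_FAILURES_AND_LIMITATIONS = 7


@dataclass
class RiskClassification:
    incident_number: int
    risk_domain: RiskDomain
    risk_subdomain: str  # e.g. "1.2. Exposure to toxic content"
    entity: str          # entity presented as the main cause ("AI" or "Human")
    timing: str          # lifecycle stage, e.g. "Post-deployment"
    intent: str          # "Intentional" or "Unintentional"


# The classification recorded above for this incident.
incident_393 = RiskClassification(
    incident_number=393,
    risk_domain=RiskDomain.DISCRIMINATION_AND_TOXICITY,
    risk_subdomain="1.2. Exposure to toxic content",
    entity="AI",
    timing="Post-deployment",
    intent="Unintentional",
)

print(incident_393)
```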
Incident Reports
Reports Timeline

Facebook is letting violent hate speech slip through its controls in Kenya as it has in other countries, according to a new report from the nonprofit groups Global Witness and Foxglove.
It is the third such test of Facebook’s ability to det…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.