Incidents involved as both Developer and Deployer
Incident 72 · 26 Reports
Facebook translates 'good morning' into 'attack them', leading to arrest
2017-10-17
Facebook's automatic language translation software incorrectly translated an Arabic post reading "Good morning" into Hebrew as "hurt them," leading to the arrest of a Palestinian man in Beitar Illit, Israel.
Incident 284 · 6 Reports
Facebook’s Automated Removal of Content Featuring Nudity-Containing Artworks Denounced as Censorship
2018-05-01
Facebook’s removal, via both automated and human moderation, of posts featuring renowned nudity-containing artworks by historical artists, along with promotional content for those works, was condemned by critics such as museums and tourism boards as cultural censorship that hindered the promotion of the artworks.
Incident 213 · 5 Reports
Facebook’s Political Ad Detection Reportedly Showed High and Geographically Uneven Error Rates
2020-07-01
Researchers found Facebook’s political ad detection to be imprecise, with error rates that varied widely across countries, and inadequate for preventing systematic violations of political advertising policies.
Incident 380 · 5 Reports
Facebook's Auto-Generated Targeting Ad Categories Contained Anti-Semitic Options
2014-03-04
Facebook's automated ad-targeting categories, generated from users' declared interests, included anti-Semitic options such as "Jew hater" and "How to burn Jews," which were listed as fields of study.
Incidents Harmed By
Incident 399 · 4 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content
2022-11-15
Meta AI trained and hosted a scientific paper generator that sometimes produced flawed science and blocked queries on topics and groups deemed likely to produce offensive or harmful content.
Incidents involved as Deployer
Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women Bodies
2006-02-25
Automated content moderation tools used to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of content that did not violate platform policies.
Related Entities
Incidents involved as both Developer and Deployer
- Incident 343 · 2 Reports
Facebook, Instagram, and Twitter Failed to Proactively Remove Targeted Racist Remarks via Automated Systems
- Incident 142 · 1 Report
Facebook’s Advertisement Moderation System Routinely Misidentified Adaptive Fashion Products as Medical Equipment and Blocked Their Sellers
Incidents involved as Deployer
Meta
Incidents involved as both Developer and Deployer
- Incident 399 · 4 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content
- Incident 471 · 3 Reports
Facebook Allegedly Failed to Police Hate Speech Content That Contributed to Ethnic Violence in Ethiopia
Incidents Harmed By
- Incident 399 · 4 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content