Description: Facebook's AI wrongly labeled 20 posts from the Auschwitz Memorial Museum as violating community standards for "bullying" and "nudity," even deleting one image of orphans. The mislabeling of respectful historical content outraged the museum, which demanded an explanation. Meta, Facebook's parent company, apologized, attributing the error to mistaken notices sent by their AI system and acknowledged the posts did not violate any policies.
Entities
Alleged: Meta developed and deployed an AI system, which harmed Auschwitz Memorial Museum, Survivors of Holocaust victims, and General public.
Incident Stats
Incident ID
710
Report Count
1
Incident Date
2024-04-15
Editors
Daniel Atherton
Incident Reports
Reports Timeline
telegraph.co.uk · 2024
Facebook has apologised for wrongly labelling photographs of Auschwitz victims as showing "bullying" and "nudity".
The social media giant incorrectly labelled 20 of the Auschwitz Memorial Museum's posts with a note saying they had been move…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
Similar Incidents