AI Incident Database

Incident 710: Facebook AI Reportedly Mislabels Auschwitz Photos as 'Bullying' and 'Nudity'

Description: Facebook's AI system reportedly mislabeled around 20 posts from the Auschwitz Memorial Museum as violating community standards for "bullying" and "nudity," and reportedly deleted at least one image of orphans. The misclassification of historical content prompted outrage from the museum, which demanded an explanation. Meta later apologized, attributing the issue to mistaken notices sent by its AI system and acknowledging that the posts had not, in fact, violated company policies.


Entities

Alleged: Meta and Facebook developed and deployed an AI system, which harmed Survivors of Holocaust victims, General public, and Auschwitz Memorial Museum.
Alleged implicated AI system: Facebook

Incident Stats

Incident ID: 710
Report Count: 1
Incident Date: 2024-04-15
Editors: Daniel Atherton
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

7.3. Lack of capability or robustness

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. AI system safety, failures, and limitations

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional

Incident Reports

Facebook wrongly labels photographs of Auschwitz victims as showing ‘bullying’ and ‘nudity’
telegraph.co.uk · 2024

Facebook has apologised for wrongly labelling photographs of Auschwitz victims as showing "bullying" and "nudity".

The social media giant incorrectly labelled 20 of the Auschwitz Memorial Museum's posts with a note saying they had been move…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

Facebook’s Automated Removal of Content Featuring Nudity-Containing Artworks Denounced as Censorship
May 2018 · 6 reports

Defamation via AutoComplete
Apr 2011 · 28 reports

Facial Recognition Trial Performed Poorly at Notting Hill Carnival
Aug 2017 · 4 reports
