AI Incident Database

Incident 326: Facebook Automated Year-in-Review Highlights Showed Users Painful Memories

Description: Facebook’s “Year in Review” algorithm, which compiled highlights from content users posted over the past year, inadvertently showed users painful and unwanted memories, including the death of a family member.

Entities

Alleged: Facebook developed and deployed an AI system, which harmed Facebook users, particularly those whose posts documented painful events.

Incident Stats

Incident ID: 326
Report Count: 3
Incident Date: 2014-12-09
Editors: Khoa Lam
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

5.1. Overreliance and unsafe use

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

5. Human-Computer Interaction

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional

Incident Reports

Inadvertent Algorithmic Cruelty
meyerweb.com · 2014

I didn’t go looking for grief this afternoon, but it found me anyway, and I have designers and programmers to thank for it. In this case, the designers and programmers are somewhere at Facebook.

I know they’re probably pretty proud of the w…

Facebook apologises over 'cruel' Year in Review clips
theguardian.com · 2014

Facebook has apologised after learning, yet again, that not everything can be done algorithmically. Some things, it seems, need the human touch.

The company’s latest blunder stems from a seemingly innocuous feature it rolls out to its users…

Facebook Apologizes for Year in Review Gaffe
pcmag.com · 2014

Facebook's "Year in Review" feature is a delight to some and a painful reminder for others of events in 2014 they'd rather not be reminded of as the year comes to a close.

Web designer Eric Meyer's daughter Rebecca died in 2014. Last week, …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Amazon Censors Gay Books
May 2008 · 24 reports

TayBot
Mar 2016 · 28 reports

Images of Black People Labeled as Gorillas
Jun 2015 · 24 reports
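
The list above is generated by textual similarity between incident descriptions. As a rough sketch of the general technique, and not necessarily the database's actual pipeline, descriptions can be vectorized with TF-IDF and compared by cosine similarity; the incident IDs and descriptions below are abbreviated placeholders:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Placeholder descriptions keyed by illustrative incident IDs.
    descriptions = {
        326: "Year in Review highlights surfaced painful memories to users",
        1: "Retailer's filter mislabeled and delisted LGBT-themed books",
        2: "Chatbot learned to post offensive messages from user interactions",
        3: "Photo service labeled images of Black people as gorillas",
    }
    ids = list(descriptions)
    tfidf = TfidfVectorizer().fit_transform(descriptions.values())
    scores = cosine_similarity(tfidf)  # pairwise similarity matrix

    # Rank the other incidents by similarity to incident 326.
    q = ids.index(326)
    ranked = sorted((i for i in range(len(ids)) if i != q),
                    key=lambda i: scores[q, i], reverse=True)
    print([ids[i] for i in ranked])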

