AI Incident Database

Report 2012

Associated Incidents

Incident 3263 Report
Facebook Automated Year-in-Review Highlights Showed Users Painful Memories

Facebook apologises over 'cruel' Year in Review clips
theguardian.com · 2014

Facebook has apologised after learning, yet again, that not everything can be done algorithmically. Some things, it seems, need the human touch.

The company’s latest blunder stems from a seemingly innocuous feature it rolls out to its users shortly before Christmas every year, called the Year in Review.

It automatically pulls photos, wall posts and other content from a user’s past year, offering those which gained the most responses in likes or comments as “highlights”. Users can then piece together a scrapbook of the past year and experience instant nostalgia.

This year, to go one step further, Facebook automatically picked one particularly well-engaged photo to present to users, under the banner: “Here’s what your year looked like!” For many users, that will have been a happy memory, such as a graduation, wedding, or the birth of a child.
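That ranking step is where the trouble starts. As a rough illustration only (Facebook’s real selection logic is not public; the Post record, its field names and the likes-plus-comments score below are assumptions), the behaviour described above amounts to something like this:

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Post:
    caption: str
    likes: int
    comments: int
    year: int

def engagement(post: Post) -> int:
    # Crude proxy for "responses": likes plus comments.
    return post.likes + post.comments

def year_in_review(posts: list[Post], year: int, top_n: int = 10) -> list[Post]:
    # The year's most-engaged posts, highest first.
    this_year = [p for p in posts if p.year == year]
    return sorted(this_year, key=engagement, reverse=True)[:top_n]

def cover_photo(posts: list[Post], year: int) -> Post | None:
    # The single best-engaged post, presented as the banner image.
    top = year_in_review(posts, year, top_n=1)
    return top[0] if top else None

Nothing in a ranking like this knows whether a heavily commented post marks a wedding or a death; it only sees engagement.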

But for some users, the algorithm forced painful memories back to the surface.

Web designer Eric Meyer wrote on his blog that Facebook had shown him a picture of his daughter, Rebecca, who died in 2014.

“Yes, my year looked like that. True enough. My year looked like the now-absent face of my little girl. It was still unkind to remind me so forcefully,” he wrote in a post titled “Inadvertent Algorithmic Cruelty”.

“And I know, of course, that this is not a deliberate assault. This inadvertent algorithmic cruelty is the result of code that works in the overwhelming majority of cases, reminding people of the awesomeness of their years, showing them selfies at a party or whale spouts from sailing boats or the marina outside their vacation house.”

In a follow-up post, Meyer said the product manager for Year in Review, Jonathan Gheller, had personally apologised to him for the blunder. The Washington Post said Gheller described the app as “awesome for a lot of people, but clearly in this case we brought him [Meyer] grief rather than joy … We can do better — I’m very grateful he took the time in his grief to write the blog post.”

The feature has already been tweaked following feedback: it initially ended the slideshow with the words “It’s been a great year! Thanks for being a part of it.” It now uses the more neutral language “See you next year!”

Writer Julieanne Smolinski shared another story: her ex-boyfriend’s year in review, which framed a picture of his house on fire.

And many other users had similar experiences:

I'm so glad that Facebook made my 'Year in Review' image a picture of my now dead dog. I totally wanted to sob uncontrollably this Xmas Eve.

— Sarah-Jane (@isloveSJ) December 24, 2014

Facebook "year in review" thing is kind of awful as it chose 2 pictures of my dogs that died this year & uses poor graphic design elements.

— Travis Louie (@travislouie) December 27, 2014

Won't be sharing my Facebook Year in Review, which "highlights" a post on a friend's death in May despite words like "killed" and "sad day"

— Andrew Katz (@katz) December 29, 2014

As for Meyer, he suggests two things Facebook, and firms like it, can do to avoid this sort of inadvertent cruelty. “First, don’t pre-fill a picture until you’re sure the user actually wants to see pictures from their year. And second, instead of pushing the app at people, maybe ask them if they’d like to try a preview—just a simple yes or no. If they say no, ask if they want to be asked again later, or never again. And then, of course, honour their choices.

“It may not be possible to reliably pre-detect whether a person wants to see their year in review, but it’s not at all hard to ask politely—empathetically—if it’s something they want. That’s an easily-solvable problem. Had the app been designed with worst-case scenarios in mind, it probably would have been.”
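In code, Meyer’s two suggestions reduce to a small opt-in check that remembers the user’s answer. The sketch below is only an illustration of that flow under assumed names; UserPrefs, ReviewChoice and the ask_user callback are hypothetical, not anything Facebook ships.

from __future__ import annotations
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class ReviewChoice(Enum):
    YES = "yes"
    ASK_LATER = "not now, ask me later"
    NEVER = "never ask again"

@dataclass
class UserPrefs:
    review_choice: ReviewChoice | None = None  # None means the user has never been asked

def maybe_offer_year_in_review(prefs: UserPrefs,
                               ask_user: Callable[[], ReviewChoice]) -> bool:
    # Never pre-fill a photo; only return True once the user has opted in.
    if prefs.review_choice == ReviewChoice.NEVER:
        return False                      # honour "never ask again"
    if prefs.review_choice != ReviewChoice.YES:
        prefs.review_choice = ask_user()  # a plain yes/no prompt, with no images shown
    return prefs.review_choice == ReviewChoice.YES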

