AI Incident Database

Report 1101

Associated Incidents

Incident 60 · 23 Reports
FaceApp Racial Filters

FaceApp Blames AI for Whitening up Black People
gizmodo.co.uk · 2017

People who actually want to see their faces reflected in their phone screens have been having fun with FaceApp recently, with Facebook and Twitter currently overloaded with images of users sharing what they look like young, old and "hot" as adapted by the app's filters.

Which is all harmless, self-loving fun, but for the fact that the "hot" image is becoming something of a problem for the app's maker, as users with darker skin are finding out that all it really does is slim down their features and turn them white. Which is really quite awkward and is resulting in some top tier public relations disaster tweets like this:

So I downloaded this app and decided to pick the "hot" filter not knowing that it would make me white. It's 2017, c'mon guys smh#FaceApp pic.twitter.com/9U9dv9JuCm — Shahquelle L. (@RealMoseby96) April 20, 2017

It also deletes glasses as it thinks all glasses-wearers are losers. The app maker isn't taking responsibility for it either, and is instead blubbering about problems with machine learning and the neural network. He told the BBC that: "We are deeply sorry for this unquestionably serious issue. It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour."
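The developer's "training set bias" explanation can be made concrete with a toy sketch (hypothetical numbers, nothing like FaceApp's actual model): a filter that nudges faces toward the average of its training data will lighten everyone's skin if that data over-represents light-skinned faces.

```python
# Toy illustration of training-set bias (not FaceApp's real model):
# an "enhancement" filter that nudges inputs toward the mean of its
# training data inherits whatever imbalance that data has.

def learned_target(training_tones):
    """Mean skin-tone value (0 = darkest, 255 = lightest) the model learns."""
    return sum(training_tones) / len(training_tones)

def apply_filter(input_tone, target, strength=0.5):
    """Nudge the input partway toward the learned target, as a naive filter might."""
    return input_tone + strength * (target - input_tone)

# Hypothetical training set: 90% light-skinned faces, 10% dark-skinned.
biased_set = [200] * 90 + [80] * 10
target = learned_target(biased_set)   # 188.0 -- skewed toward light tones

# A dark-skinned input (tone 80) is pushed markedly lighter:
print(apply_filter(80, target))       # 134.0
```

With a balanced training set (50/50), the learned target sits at the midpoint and the same input is distorted far less; the bias comes from the data, not from any explicit rule in the code.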

So they accidentally created a racist computer and you have to be angry with the computer, OK? And instead of fixing it, they've simply renamed the feature from "hot" to "spark."



2024 - AI Incident Database
