
Report 2549

Associated Incidents

Incident 454 · 2 Reports
Emotion Detection Models Showed Disparate Performance along Racial Lines

Racial Influence on Automated Perceptions of Emotions
papers.ssrn.com · 2018

Abstract

The practical applications of artificial intelligence are expanding into various elements of society, leading to a growing interest in the potential biases of such algorithms. Facial analysis, one application of artificial intelligence, is increasingly used in real-world situations. For example, some organizations ask candidates to answer predefined questions in a recorded video and use facial recognition to analyze the applicants' faces. In addition, some companies are developing facial recognition software to scan faces in crowds and assess threats, specifically mentioning doubt and anger as emotions that indicate threats. This study provides evidence that facial recognition software interprets emotions differently based on the person's race. Using a publicly available data set of professional basketball players' pictures, I compare the emotional analysis from two different facial recognition services, Face++ and Microsoft's Face API. Both services interpret black players as having more negative emotions than white players, but through two different mechanisms. Face++ consistently interprets black players as angrier than white players, even controlling for their degree of smiling. Microsoft registers contempt instead of anger, and it interprets black players as more contemptuous when their facial expressions are ambiguous. As the players' smile widens, the disparity disappears.
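
For context, the per-face emotion scores the study compares can be retrieved with a single detection call to one of the two services. Below is a minimal sketch against Microsoft's Face API (the v1.0 detect endpoint with the `emotion` attribute); the endpoint, key, and image URL are placeholders, not values from the paper:

```python
import requests

# Placeholder resource endpoint and key -- substitute your own.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"

def detect_emotions(image_url: str) -> dict:
    """Return the emotion scores (anger, contempt, happiness, ...) for the
    first face found in the image, as reported by the Face API."""
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"returnFaceAttributes": "emotion"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    resp.raise_for_status()
    faces = resp.json()
    return faces[0]["faceAttributes"]["emotion"] if faces else {}

# The API scores each face across eight emotion categories, e.g.:
# detect_emotions("https://example.com/player.jpg")
# -> {"anger": 0.01, "contempt": 0.22, "happiness": 0.55, ...}
```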

This finding has implications for individuals, organizations, and society, and it contributes to the growing literature on bias and disparate impact in AI.
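
The "controlling for their degree of smiling" comparison amounts to regressing an emotion score on race with smile intensity as a covariate. Here is a sketch with statsmodels, using hypothetical `anger`, `smile`, and `race` columns and made-up values purely for illustration (these are not the paper's data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-player scores; in the study these would come from the
# facial-analysis services run over the public photo data set.
df = pd.DataFrame({
    "anger": [0.31, 0.05, 0.28, 0.07, 0.35, 0.04],
    "smile": [0.20, 0.25, 0.60, 0.55, 0.10, 0.15],
    "race":  ["black", "white", "black", "white", "black", "white"],
})

# OLS of anger score on race, conditioning on smile intensity.
model = smf.ols("anger ~ C(race) + smile", data=df).fit()
print(model.summary())
```

A significant coefficient on the race term after conditioning on smile would mean the anger disparity persists even at equal smile levels, which is the pattern the paper reports for Face++.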

Read the Source
