
Incident 124: Algorithmic Health Risk Scores Underestimated Black Patients’ Needs

Description: Researchers revealed that an Optum algorithm deployed by a large academic hospital under-predicted the health needs of Black patients, effectively de-prioritizing them for extra care programs relative to white patients with the same health burden.
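
The study referenced in the reports below attributed the disparity to the model's use of past healthcare spending as a proxy for health needs: because less had historically been spent on Black patients with the same illness burden, their predicted risk scores came out lower. Below is a minimal, hypothetical Python sketch of that proxy effect; the scoring function, threshold, and patient records are illustrative assumptions, not details of Optum's system or of the study.

```python
# Hypothetical illustration: a "risk" score driven by predicted spending rather
# than by health need. All values are made up for illustration.

def risk_score(predicted_annual_cost, max_cost=100_000):
    """Toy risk score: normalized predicted spending, clipped to [0, 1]."""
    return min(predicted_annual_cost / max_cost, 1.0)

# Two hypothetical patients with identical illness burden; one has lower
# historical spending (e.g., due to unequal access to care).
patients = [
    {"group": "A", "chronic_conditions": 4, "historical_cost": 52_000},
    {"group": "B", "chronic_conditions": 4, "historical_cost": 31_000},
]

ENROLLMENT_THRESHOLD = 0.45  # toy cutoff for referral to an extra-care program

for p in patients:
    score = risk_score(p["historical_cost"])  # spending stands in for the model's prediction
    referred = score >= ENROLLMENT_THRESHOLD
    print(f"group {p['group']}: {p['chronic_conditions']} conditions, "
          f"score={score:.2f}, referred={referred}")

# Both patients are equally sick, but only the higher-spending one crosses the
# referral threshold -- the de-prioritization described above.
```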


Entities

Alleged: Optum developed an AI system deployed by an unnamed large academic hospital, which harmed Black patients.

Incident Stats

Incident ID: 124
Report Count: 7
Incident Date: 2019-10-24
Editors: Sean McGregor, Khoa Lam
Applied Taxonomies: CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Incident Number: 124
(The number of the incident in the AI Incident Database.)

MIT Taxonomy Classifications

Machine-Classified
Risk Subdomain: 1.3. Unequal performance across groups
(A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.)

Risk Domain: 1. Discrimination and Toxicity
(The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.)

Entity: AI
(Which, if any, entity is presented as the main cause of the risk.)

Timing: Post-deployment
(The stage in the AI lifecycle at which the risk is presented as occurring.)

Intent: Unintentional
(Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.)
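
To make the "Unequal performance across groups" classification concrete, below is a minimal, hypothetical Python sketch of one way a deployed risk score could be audited for this kind of disparity: compare predicted risk against an independent measure of need, stratified by group. The synthetic records and the chronic-condition proxy for need are illustrative assumptions, not the methodology of the study or of the taxonomy.

```python
# Hypothetical audit: mean predicted risk per unit of measured need, by group.
# All records are synthetic and purely illustrative.
from statistics import mean

records = [
    # (group, model_risk_score, chronic_condition_count)
    ("white", 0.62, 3), ("white", 0.55, 3), ("white", 0.71, 4),
    ("Black", 0.41, 3), ("Black", 0.38, 3), ("Black", 0.52, 4),
]

def risk_per_unit_need(records):
    """Mean predicted risk per chronic condition, stratified by group."""
    by_group = {}
    for group, score, conditions in records:
        by_group.setdefault(group, []).append(score / conditions)
    return {group: round(mean(scores), 3) for group, scores in by_group.items()}

print(risk_per_unit_need(records))
# A materially lower value for one group at the same measured need is the kind
# of disparity the "unequal performance across groups" subdomain describes.
```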

Incident Reports


A Health Care Algorithm Offered Less Care to Black Patients
wired.com · 2019

Care for some of the sickest Americans is decided in part by algorithm. New research shows that software guiding care for tens of millions of people systematically privileges white patients over black patients. Analysis of records from a ma…

Racial bias in a medical algorithm favors white patients over sicker black patients
washingtonpost.com · 2019

A widely used algorithm that predicts which patients will benefit from extra medical care dramatically underestimates the health needs of the sickest black patients, amplifying long-standing racial disparities in medicine, researchers have …

Millions of black people affected by racial bias in health-care algorithms
nature.com · 2019

An algorithm widely used in US hospitals to allocate health care to patients has been systematically discriminating against black people, a sweeping analysis has found.

The study, published in Science on 24 October, concluded that the algor…

New York Insurance Regulator to Probe Optum Algorithm for Racial Bias
fiercehealthcare.com · 2019

New York's Financial Services and Health departments sent a letter to UnitedHealth Group’s CEO David Wichmann Friday regarding an algorithm developed by Optum, The Wall Street Journal reported. The investigation is in response to a study pu…

These Algorithms Look at X-Rays-and Somehow Detect Your Race
wired.com · 2021

Millions of dollars are being spent to develop artificial intelligence software that reads x-rays and other medical scans in hopes it can spot things doctors look for but sometimes miss, such as lung cancers. A new study reports that these …

'Racism is America’s oldest algorithm': How bias creeps into health care AI
statnews.com · 2022

Artificial intelligence and medical algorithms are deeply intertwined with our modern health care system. These technologies mimic the thought processes of doctors to make medical decisions and are designed to help providers determine who n…

Algorithms Are Making Decisions About Health Care, Which May Only Worsen Medical Racism
aclu.org · 2022

Artificial intelligence (AI) and algorithmic decision-making systems — algorithms that analyze massive amounts of data and make predictions about the future — are increasingly affecting Americans’ daily lives. People are compelled to includ…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction
May 2016 · 22 reports

Kidney Testing Method Allegedly Underestimated Risk of Black Patients
Mar 1999 · 3 reports

Northpointe Risk Models
May 2016 · 15 reports

