AI Incident Database

Report 2652

Associated Incidents

Incident 1247 Report
Optum Algorithmic Health Risk Scores Reportedly Underestimated Black Patients' Needs

'Racism is America’s oldest algorithm': How bias creeps into health care AI
statnews.com · 2022

Artificial intelligence and medical algorithms are deeply intertwined with our modern health care system. These technologies mimic the thought processes of doctors to make medical decisions and are designed to help providers determine who needs care. But one big problem with artificial intelligence is that it very often replicates the biases and blind spots of the humans who create it.

Researchers and physicians have warned that algorithms used to determine who gets kidney transplants, heart surgeries, and breast cancer diagnoses display racial bias. Those problems can lead to detrimental care that, in some cases, jeopardizes the health of millions of patients.

So how exactly does bias seep into these algorithms? And what can be done to prevent it?

In this episode, we hear from Casey Ross, STAT's national health tech correspondent, about his reporting on racial bias in AI. Chris Hemphill, the VP for applied AI & growth at Actium Health, tells us about the rise of responsible AI in health care. Ziad Obermeyer, an emergency medicine physician and researcher at the UC Berkeley School of Public Health, walks us through how his team found bias in an algorithm widely used in our health care system and an instance where AI was used to correct a health care injustice.
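One mechanism Obermeyer's published research documented is a proxy-label problem: the algorithm was trained to predict health care *cost* as a stand-in for health *need*, and because less money is historically spent on Black patients at the same level of need, the model scored them as healthier than they were. The sketch below is a small synthetic simulation of that general mechanism; all data, numbers, and group labels are hypothetical illustrations, not the actual Optum model or its inputs.

```python
# Synthetic illustration of proxy-label bias: training on cost as a proxy for
# need understates risk for a group that receives less spending per unit of need.
# All numbers are hypothetical; this is not the real algorithm or real data.
import random

random.seed(0)

def simulate_patient(group):
    need = random.gauss(50, 15)                 # true health need, same distribution for both groups
    spend_rate = 1.0 if group == "A" else 0.7   # group B receives less spending at equal need
    cost = max(need, 0) * spend_rate + random.gauss(0, 5)
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

# A cost-trained risk score effectively ranks patients by expected cost.
# Flag the top 10% "highest-risk" patients by cost and check who gets flagged.
patients.sort(key=lambda p: p["cost"], reverse=True)
top_by_cost = patients[: len(patients) // 10]
share_b_flagged = sum(p["group"] == "B" for p in top_by_cost) / len(top_by_cost)
print(f"Group B share of patients flagged by cost: {share_b_flagged:.0%}")

# Ranking on true need instead flags both groups at roughly equal rates,
# since the underlying need distributions are identical.
patients.sort(key=lambda p: p["need"], reverse=True)
top_by_need = patients[: len(patients) // 10]
share_b_need = sum(p["group"] == "B" for p in top_by_need) / len(top_by_need)
print(f"Group B share of patients flagged by need: {share_b_need:.0%}")
```

Even though both synthetic groups have identical need, the cost-ranked list flags group B far less often than the need-ranked list does, which is the shape of the disparity the episode discusses.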

A transcript of this episode is available here.

