
Incident 81: Researchers find evidence of racial, gender, and socioeconomic bias in chest X-ray classifiers

Description: A study by the University of Toronto, the Vector Institute, and MIT showed that the datasets used to train AI systems to classify chest X-rays led those systems to exhibit gender, socioeconomic, and racial biases.
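To make the reported failure mode concrete, below is a minimal sketch of how unequal performance across patient groups is commonly audited: compute the true-positive rate (sensitivity) separately for each demographic subgroup and compare the gap. This is an illustration only, not the study's code; the DataFrame columns ("finding", "score", "sex") and the randomly generated data are hypothetical.

    import numpy as np
    import pandas as pd

    def tpr_by_group(df, label_col, pred_col, group_col, threshold=0.5):
        """Compute the true-positive rate (sensitivity) per demographic group.

        A large gap between groups is the kind of unequal performance the
        study reported (e.g., higher underdiagnosis rates for female,
        Hispanic, low-income, or Medicaid-insured patients).
        """
        rates = {}
        for group, sub in df.groupby(group_col):
            positives = sub[sub[label_col] == 1]  # cases where the finding is truly present
            if len(positives) == 0:
                rates[group] = float("nan")       # rate undefined with no true positives
                continue
            flagged = (positives[pred_col] >= threshold).sum()
            rates[group] = flagged / len(positives)
        return rates

    # Illustrative usage on randomly generated data (hypothetical columns).
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "finding": rng.integers(0, 2, 1000),  # ground-truth label (1 = condition present)
        "score": rng.random(1000),            # model's predicted probability
        "sex": rng.choice(["F", "M"], 1000),  # patient attribute to audit over
    })
    rates = tpr_by_group(df, "finding", "score", "sex")
    print("TPR by sex:", rates)
    print("TPR gap:", max(rates.values()) - min(rates.values()))

With random scores the gap is near zero; on a real classifier and real patient attributes, a persistent gap of this kind is the disparity the study flagged as bias.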

Entities

Alleged: Google, Qure.ai, Aidoc, and DarwinAI developed an AI system deployed by Mount Sinai Hospitals, which harmed patients of minority groups, low-income patients, female patients, Hispanic patients, and patients with Medicaid insurance.

Incident Statistics

ID: 81
Report Count: 1
Incident Date: 2020-10-21
Editors: Sean McGregor, Khoa Lam
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT

CSETv0 Taxonomy Classifications

Taxonomy Details

Problem Nature
Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.

Specification

Physical System
Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.

Software only

Level of Autonomy
The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.

Unclear/unknown

Nature of End User
"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.

Expert

Public Sector Deployment
"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation); "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).

No

Data Inputs
A brief description of the data that the AI system(s) used or were trained on.

Medical imagery databases

CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number
The number of the incident in the AI Incident Database.

81

MIT Taxonomy Classifications

Machine-Classified

Taxonomy Details

Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.3. Unequal performance across groups

Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Discrimination and Toxicity

Entity
Which, if any, entity is presented as the main cause of the risk.

Human

Timing
The stage in the AI lifecycle at which the risk is presented as occurring.

Pre-deployment

Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional

Incident Reports

Report Timeline

Researchers find evidence of racial, gender, and socioeconomic bias in chest X-ray classifiers
venturebeat.com · 2020

Google and startups such as Qure.ai, Aidoc, and DarwinAI are developing AI and machine learning systems that classify chest X-rays to help identify conditions such as fractures and lungs…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting as evidence external to the incident database. Learn more from the research paper.

Similar Incidents

By text similarity

Collection of Robotic Surgery Malfunctions
Adverse Events in Robotic Surgery: A Retrospective Study of 14 Years of FDA Data
Jul 2015 · 12 reports

Racist AI behaviour is not a new problem
Mar 1998 · 4 reports

Algorithmic Health Risk Scores Underestimated Black Patients’ Needs
A Health Care Algorithm Offered Less Care to Black Patients
Oct 2019 · 7 reports

