Incident 81: Researchers find evidence of racial, gender, and socioeconomic bias in chest X-ray classifiers
Description: A study by the University of Toronto, the Vector Institute, and MIT showed that the input datasets used by AI systems trained to classify chest X-rays led those systems to exhibit gender-based, socioeconomic, and racial biases.
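A minimal sketch of the kind of per-group performance audit the study describes, comparing true positive rates (TPR) across patient subgroups. The function name and data below are hypothetical illustrations, not the study's actual code; in the study, predictions came from chest X-ray classifiers and subgroups were defined by attributes such as sex, race, age, and insurance type:

```python
# Hypothetical illustration: auditing a classifier for unequal performance
# across patient subgroups by comparing per-group true positive rates (TPR).
from collections import defaultdict

def tpr_by_group(y_true, y_pred, groups):
    """Return the true positive rate for each subgroup."""
    positives = defaultdict(int)   # actual positives per group
    hits = defaultdict(int)        # correctly flagged positives per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            positives[group] += 1
            if pred == 1:
                hits[group] += 1
    return {g: hits[g] / positives[g] for g in positives}

# Toy data: the classifier misses more positives in group "B";
# the TPR gap between groups is the disparity the study measured.
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]

rates = tpr_by_group(y_true, y_pred, groups)
print(rates)  # roughly {'A': 1.0, 'B': 0.33}
print("TPR gap:", max(rates.values()) - min(rates.values()))
```

A large TPR gap means one subgroup's cases of a condition are systematically under-flagged, which is the concrete harm behind the "unequal performance across groups" classification applied to this incident below.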
Entities
Alleged: Google, Qure.ai, Aidoc, and DarwinAI developed an AI system deployed by Mount Sinai Hospitals, which harmed patients of minority groups, low-income patients, female patients, Hispanic patients, and patients with Medicaid insurance.
CSETv1 Taxonomy Classifications
Taxonomy Details
Incident Number
The number of the incident in the AI Incident Database.
81
CSETv0 Taxonomy Classifications
Taxonomy Details
Problem Nature
Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.
Specification
Physical System
Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.
Software only
Level of Autonomy
The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.
Unclear/unknown
Nature of End User
"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.
Expert
Public Sector Deployment
"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company), on the other.
No
Data Inputs
A brief description of the data that the AI system(s) used or were trained on.
medical imagery databases
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
1.3. Unequal performance across groups
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
Discrimination and Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Pre-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Report Timeline

Google and startups such as Qure.ai, Aidoc, and DarwinAI are developing AI and machine-learning systems that classify chest X-rays to help identify conditions such as fractures and lungs…
Variants
A "variant" is an AI incident similar to a known case: it has the same causes, the same harms, and the same intelligent system. Rather than listing it separately, we include it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AI Incident Database. Learn more in the research paper.
Have you seen something similar?
Similar Incidents
Collection of Robotic Surgery Malfunctions
· 12 reports
Racist AI behaviour is not a new problem
· 4 reports