AI Incident Database

patients

Incidents Harmed By

Incident 5 (12 Reports)
Collection of Robotic Surgery Malfunctions

2015-07-13

A study of database reports of robotic surgery malfunctions (8,061) between 2000 and 2013, including 1,391 that ended in injury and 144 that ended in death.


Incident 591 (2 Reports)
Cigna Algorithm PXDX Allegedly Rejected Thousands of Patient Claims En Masse in Breach of California Law

2023-07-24

Health insurer Cigna faces a class-action lawsuit, filed by two members seeking damages and a jury trial, for allegedly using its PXDX ("procedure-to-diagnosis") algorithm to automatically reject over 300,000 patient claims in violation of California law. Cigna disputes the allegations, saying the process expedites physician reimbursement and does not result in care denials.


Incident 406 (1 Report)
Facebook's Friend Suggestion Feature Recommends Patients of Psychiatrist to Each Other

2015-07-15

Facebook's "People You May Know" (PYMK) feature was reported by a psychiatrist for recommending her patients as friends through recommendations, violating patients' privacy and confidentiality.


Incident 827 (1 Report)
AI Transcription Tool Whisper Reportedly Inserting Fabricated Content in Medical Transcripts

2024-10-26

OpenAI's AI-powered transcription tool Whisper, used to transcribe and translate audio such as patient consultations with doctors, is advertised as having near "human level robustness and accuracy." However, software engineers, developers, and academic researchers have alleged that it is prone to fabricating chunks of text or even entire sentences, and that some of these hallucinations include racial commentary, violent rhetoric, and even imagined medical treatments.


Related Entities
Other entities involved in the same incidents. For example, if this entity is the developer in an incident and another entity is the deployer, both are listed as related entities.
 

Entity

Hospitals

Incidents involved as Deployer
  • Incident 5
    12 Reports

    Collection of Robotic Surgery Malfunctions

Entity

Doctors

Incidents involved as Deployer
  • Incident 5
    12 Reports

    Collection of Robotic Surgery Malfunctions

Entity

Intuitive Surgical

Incidents involved as Developer
  • Incident 5
    12 Reports

    Collection of Robotic Surgery Malfunctions

Entity

Facebook

Incidents involved as both Developer and Deployer
  • Incident 406
    1 Report

    Facebook's Friend Suggestion Feature Recommends Patients of Psychiatrist to Each Other

Entity

pseudonymized psychiatrist's patients

Incidents Harmed By
  • Incident 406
    1 Report

    Facebook's Friend Suggestion Feature Recommends Patients of Psychiatrist to Each Other

Entity

pseudonymized psychiatrist

Incidents Harmed By
  • Incident 406
    1 Report

    Facebook's Friend Suggestion Feature Recommends Patients of Psychiatrist to Each Other

Entity

healthcare providers

Incidents Harmed By
  • Incident 406
    1 Report

    Facebook's Friend Suggestion Feature Recommends Patients of Psychiatrist to Each Other

Entity

Cigna

Incidents involved as both Developer and Deployer
  • Incident 591
    2 Reports

    Cigna Algorithm PXDX Allegedly Rejected Thousands of Patient Claims En Masse in Breach of California Law

Entity

OpenAI

Incidents involved as both Developer and Deployer
  • Incident 827
    1 Report

    AI Transcription Tool Whisper Reportedly Inserting Fabricated Content in Medical Transcripts

Entity

Patients reliant on Whisper

Incidents Harmed By
  • Incident 827
    1 Report

    AI Transcription Tool Whisper Reportedly Inserting Fabricated Content in Medical Transcripts

Entity

Medical practitioners reliant on Whisper

Incidents Harmed By
  • Incident 827
    1 Report

    AI Transcription Tool Whisper Reportedly Inserting Fabricated Content in Medical Transcripts

