Incident 287: OpenAI’s GPT-3 Reported as Unviable in Medical Tasks by Healthcare Firm

Description: Nabla, a French digital health company, evaluated GPT-3's capabilities for medical documentation, diagnosis support, and treatment recommendation, and found that the model's inconsistency and lack of scientific and medical expertise made it unviable and too risky for healthcare applications. This incident has been downgraded to an issue because it does not meet the current ingestion criteria.

Alleged: OpenAI and Nabla developed an AI system deployed by none, which harmed Nabla customers.

Incident Stats

Incident ID: 287
Report Count: 1
Incident Date: 2020-10-27
Editors: Khoa Lam

GMF Taxonomy Classifications

Known AI Goal: Question Answering
Known AI Technology: Transformer, Language Modeling, Distributional Learning
Known AI Technical Failure: Distributional Artifacts
Potential AI Technical Failure: Limited Dataset, Problematic Input, Robustness Failure, Overfitting, Underfitting, Inadequate Sequential Memory

Incident Reports

AI Incident Database Incidents Converted to Issues
github.com · 2022

The following former incidents have been converted to "issues" following an update to the incident definition and ingestion criteria.

21: Tougher Turing Test Exposes Chatbots’ Stupidity

Description: The 2016 Winograd Schema Challenge highli…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
