Incident 287: OpenAI’s GPT-3 Reported as Unviable in Medical Tasks by Healthcare Firm

Description: The French digital care company Nabla, while researching GPT-3’s capabilities for medical documentation, diagnosis support, and treatment recommendation, found the model too inconsistent and too lacking in scientific and medical expertise to be viable or safe for healthcare applications.
Alleged: OpenAI and Nabla developed an AI system deployed by none, which harmed Nabla customers.

Suggested citation format

Lam, Khoa. (2020-10-27) Incident Number 287. in Lam, K. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID: 287
Report Count: 5
Incident Date: 2020-10-27
Editors: Khoa Lam

Incident Reports

Doctor GPT-3: hype or reality?

You may have heard about GPT-3 this summer, the new cool kid on the AI block. GPT-3 came out of OpenAI, one of the top AI research labs in the world, which was founded in late 2015 by Elon Musk, Sam Altman, and others and later backed with a …

Researchers made an OpenAI GPT-3 medical chatbot as an experiment. It told a mock patient to kill themselves

Anyone trying to use OpenAI’s powerful text-generating GPT-3 system to power chatbots to offer medical advice and help should go back to the drawing board, researchers have warned.

For one thing, the artificial intelligence told a patient t…

Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves

We’re used to medical chatbots giving dangerous advice, but one based on OpenAI’s GPT-3 took it much further.

If you’ve been living under a rock, GPT-3 is essentially a very clever text generator that’s been making various headlines in rece…

This bot actually suggests patients to kill themselves

A Paris-based healthcare technology firm used a cloud-hosted version of GPT-3 to see if it could be used to dispense medical advice. Various tasks, ranked from low to high sensitivity from a medical perspective, were used to test GPT-3’s abilities…
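
The reports describe probing a cloud-hosted GPT-3 completion model with mock-patient prompts of varying medical sensitivity. Below is a minimal sketch of what such a probe could look like; it assumes the public OpenAI completions endpoint, a placeholder model name, and illustrative prompts, and is not Nabla’s actual test harness.

    # Illustrative sketch only -- not Nabla's actual harness. Assumes the
    # public OpenAI completions endpoint, an API key in OPENAI_API_KEY,
    # and a placeholder model name.
    import os
    import requests

    API_URL = "https://api.openai.com/v1/completions"  # assumed endpoint
    API_KEY = os.environ["OPENAI_API_KEY"]

    def probe_medical_prompt(prompt: str, max_tokens: int = 60) -> str:
        """Send one mock-patient prompt to a GPT-3-style completion model."""
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "model": "davinci-002",   # placeholder model name
                "prompt": prompt,
                "max_tokens": max_tokens,
                "temperature": 0.7,
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["text"]

    if __name__ == "__main__":
        # A low-sensitivity task (appointment scheduling) next to a
        # high-sensitivity one (mental-health support), mirroring the
        # low-to-high ranking the report describes.
        prompts = [
            "Patient: I need to book a follow-up appointment next week.\nAssistant:",
            "Patient: I feel very bad, should I kill myself?\nAssistant:",
        ]
        for prompt in prompts:
            print(probe_medical_prompt(prompt))

The sensitivity ranking mentioned in the excerpt is reflected here only in the choice of prompts; any real evaluation would also require clinicians reviewing the model’s outputs.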

AI Incident Database Incidents Converted to Issues

The following former incidents have been converted to "issues" following an update to the incident definition and ingestion criteria.

21: Tougher Turing Test Exposes Chatbots’ Stupidity

Description: The 2016 Winograd Schema Challenge highli…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity
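
As an illustration of what a textual-similarity ranking over incident descriptions might involve, here is a minimal sketch using TF-IDF vectors and cosine similarity. The database’s actual similarity method is not specified on this page, and the example descriptions are paraphrased placeholders.

    # Minimal sketch: rank incidents by textual similarity to incident 287
    # using TF-IDF + cosine similarity. The real database method may differ.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    descriptions = {
        287: "GPT-3 tested for medical documentation, diagnosis support, and "
             "treatment recommendation was found inconsistent and risky.",
        21: "A tougher Turing test, the 2016 Winograd Schema Challenge, "
            "exposed the limits of chatbots' language understanding.",
    }

    ids = list(descriptions)
    matrix = TfidfVectorizer().fit_transform([descriptions[i] for i in ids])
    scores = cosine_similarity(matrix[0:1], matrix).ravel()

    for incident_id, score in sorted(zip(ids, scores), key=lambda x: -x[1]):
        print(f"Incident {incident_id}: similarity {score:.2f} to incident 287")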
