Incident 732: Whisper Speech-to-Text AI Reportedly Found to Create Violent Hallucinations

Description: Researchers at Cornell reportedly found that OpenAI's Whisper, a speech-to-text system, can hallucinate violent language and fabricated details, especially when speech contains long pauses, such as those produced by speakers with speech impairments. Analyzing 13,000 clips, they determined that 1% contained harmful hallucinations. These errors pose risks in hiring, legal proceedings, and medical documentation. The study recommends improving model training to reduce such hallucinations across diverse speaking patterns.
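
For context, the transcription pipeline examined in the study can be reproduced with the open-source Whisper package. The sketch below is illustrative only: the file name and review thresholds are assumptions, not part of the reported study, but Whisper's per-segment confidence fields (no_speech_prob, avg_logprob) are one practical way to surface passages, such as long pauses, where hallucinated text is more likely to appear.

```python
# Minimal sketch, assuming the open-source "openai-whisper" package and a
# hypothetical local recording "interview.wav"; thresholds are illustrative.
import whisper

model = whisper.load_model("base")          # small general-purpose checkpoint
result = model.transcribe("interview.wav")  # returns text plus per-segment metadata

# Flag segments Whisper itself considers likely non-speech or low-confidence;
# hallucinated phrases inserted during long pauses often fall in such segments.
for seg in result["segments"]:
    if seg["no_speech_prob"] > 0.5 or seg["avg_logprob"] < -1.0:
        print(f'REVIEW [{seg["start"]:.1f}-{seg["end"]:.1f}s]: {seg["text"]}')
```

Flagged segments would still need human review; the thresholds above are assumed starting points rather than values taken from the study.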

Incident Stats

Incident ID: 732
Report Count: 1
Incident Date: 2024-02-12
Editors: Daniel Atherton

Incident Reports

Reports Timeline

AI speech-to-text can hallucinate violent language
news.cornell.edu · 2024

Speak a little too haltingly and with long pauses, and a speech-to-text transcriber might put harmful, violent words in your mouth, Cornell researchers have discovered.

OpenAI's Whisper -- an artificial intelligence-powered speech recognit…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
