Entities

Users whose speech is misinterpreted by Whisper

Incidents Harmed By

Incident 732: 1 Report
Whisper Speech-to-Text AI Reportedly Found to Create Violent Hallucinations

2024-02-12

Researchers at Cornell reportedly found that OpenAI's Whisper, a speech-to-text system, can hallucinate violent language and fabricated details, particularly when audio contains long pauses, such as those common in the speech of people with speech impairments. In an analysis of roughly 13,000 clips, they determined that about 1% contained harmful hallucinations. These errors pose risks in hiring, legal trials, and medical documentation. The study recommends improving model training to reduce such hallucinations across diverse speaking patterns.
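
To make the failure mode concrete, the sketch below transcribes a clip with the open-source openai-whisper package and flags segments that follow a long silent gap or carry low model confidence, since those are the conditions the study associates with hallucinated output. This is an illustrative sketch only, not the researchers' methodology: the file name "audio.wav" and the thresholds are assumptions.

```python
# Minimal sketch: transcribe one clip with openai-whisper and flag
# segments that are more likely to contain hallucinated text
# (a long pause before the segment, or a low average log-probability).
# "audio.wav" and both thresholds are illustrative assumptions.
import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.wav")

MAX_GAP_SECONDS = 1.0    # pause length treated as "long" (assumption)
MIN_AVG_LOGPROB = -1.0   # confidence floor for review (assumption)

previous_end = 0.0
for segment in result["segments"]:
    gap = segment["start"] - previous_end
    suspicious = gap > MAX_GAP_SECONDS or segment["avg_logprob"] < MIN_AVG_LOGPROB
    flag = "REVIEW" if suspicious else "ok"
    print(f"[{flag}] {segment['start']:6.2f}-{segment['end']:6.2f} {segment['text'].strip()}")
    previous_end = segment["end"]
```

Segments marked REVIEW are only candidates for human checking; confirming whether the transcribed words were actually spoken still requires listening to the audio.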
