Incident Stats
Risk Subdomain
3.1. False or misleading information
Risk Domain
Misinformation
Entity
AI
Timing
Post-deployment
Intent
Unintentional
Incident Reports
AIID editor's note: This peer-reviewed journal article is abridged in parts. See the original source for the complete version, specifically Table 1 and the References section.
Abstract
Ingestion of bromide can lead to a toxidrome known as b…
A case study published this month offers a cautionary tale for the modern age. Doctors detail how a man developed poisoning-induced psychosis after following dietary advice from an AI chatbot.
Doctors at the University of Washington documented the …

A man consulted ChatGPT prior to changing his diet. Three months later, after consistently sticking with that dietary change, he ended up in the emergency department with concerning new psychiatric symptoms, including paranoia and hallucina…
A US medical journal has warned against relying on ChatGPT for health information after a man developed a rare condition following a conversation with the chatbot about removing table salt from his diet.
An article in the Annals of Internal Medi…
Hyderabad: Doctors in Hyderabad have cautioned people against relying solely on artificial intelligence (AI) tools such as ChatGPT for medical advice. They emphasised that patients, especially those with chronic or serious health conditions…