
Incident 1166: ChatGPT Reportedly Suggests Sodium Bromide as Chloride Substitute, Leading to Bromism and Hospitalization

Description: A published medical case report describes a 60-year-old man hospitalized for three weeks with severe bromide toxicity (bromism) after replacing dietary sodium chloride with sodium bromide purchased online. The patient reported making this substitution following consultation with ChatGPT, which allegedly suggested bromide as a chloride substitute without safety warnings. The harm included psychosis, electrolyte imbalances, dermatologic changes, and micronutrient deficiencies.
Editor Notes: Timeline note: The date reflects the publication of the case report by Eichenberger et al. in Annals of Internal Medicine: Clinical Cases, vol. 4, no. 8 (August 5, 2025). According to the authors, the patient had been ingesting sodium bromide for approximately three months before hospitalization, which occurred in 2024; the exact dates of onset and admission were not specified. See also Incident 1281: Alleged Harmful Health Outcomes Following Reported Use of Purported ChatGPT-Generated Medical Advice in Hyderabad.

Entities

Alleged: OpenAI and ChatGPT developed and deployed an AI system, which harmed an unnamed 60-year-old male patient with bromism and ChatGPT users.
Alleged implicated AI system: ChatGPT

Incident Stats

Incident ID: 1166
Report Count: 5
Incident Date: 2025-08-05
Editors: Daniel Atherton
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified Taxonomy Details

Risk Subdomain: 3.1. False or misleading information
(A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.)

Risk Domain: 1. Misinformation
(The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.)

Entity: AI
(Which, if any, entity is presented as the main cause of the risk.)

Timing: Post-deployment
(The stage in the AI lifecycle at which the risk is presented as occurring.)

Intent: Unintentional
(Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.)

Incident Reports

A Case of Bromism Influenced by Use of Artificial Intelligence
doi.org · 2025

AIID editor's note: This peer-reviewed journal article is abridged in parts. See the original source for the complete version, specifically Table 1 and the References section.

Abstract

Ingestion of bromide can lead to a toxidrome known as b…

Man Follows Diet Advice From ChatGPT, Ends Up With Psychosis
gizmodo.com · 2025

A case study out this month offers a cautionary tale ripe for our modern times. Doctors detail how a man experienced poison-caused psychosis after he followed AI-guided dietary advice.

Doctors at the University of Washington documented the …

Man sought diet advice from ChatGPT and ended up with dangerous 'bromism' syndrome
livescience.com · 2025

A man consulted ChatGPT prior to changing his diet. Three months later, after consistently sticking with that dietary change, he ended up in the emergency department with concerning new psychiatric symptoms, including paranoia and hallucina…

Man develops rare condition after ChatGPT query over stopping eating salt
theguardian.com · 2025

A US medical journal has warned against using ChatGPT for health information after a man developed a rare condition following an interaction with the chatbot about removing table salt from his diet.

An article in the Annals of Internal Medi…

Doctors warn against relying on AI tools for medical advice
timesofindia.indiatimes.com · 2025

Hyderabad: Doctors in Hyderabad have cautioned people against relying solely on artificial intelligence (AI) tools such as ChatGPT for medical advice. They emphasised that patients, especially those with chronic or serious health conditions…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

Selected by our editors

Alleged Harmful Health Outcomes Following Reported Use of Purported ChatGPT-Generated Medical Advice in Hyderabad

Nov 2025 · 2 reports
By textual similarity


Collection of Robotic Surgery Malfunctions

Jul 2015 · 12 reports
Amazon’s Search and Recommendation Algorithms Found by Auditors to Have Boosted Products That Contained Vaccine Misinformation

Jan 2021 · 2 reports
Machine Personal Assistants Failed to Maintain Social Norms

Jul 2008 · 1 report
