AI Incident Database

Incident 1166: ChatGPT Reportedly Suggests Sodium Bromide as Chloride Substitute, Leading to Bromism and Hospitalization

Description: A published medical case report describes a 60-year-old man hospitalized for three weeks with severe bromide toxicity (bromism) after replacing dietary sodium chloride with sodium bromide purchased online. The patient reported making this substitution following consultation with ChatGPT, which allegedly suggested bromide as a chloride substitute without safety warnings. The harm included psychosis, electrolyte imbalances, dermatologic changes, and micronutrient deficiencies.
Editor Notes: Timeline note: The date reflects the publication of the case report by Eichenberger et al. in Annals of Internal Medicine: Clinical Cases, vol. 4, no. 8 (August 5, 2025). According to the authors, the patient had been ingesting sodium bromide for approximately three months before hospitalization, which occurred in 2024; the exact dates of onset and admission were not specified.


Entities

Alleged: OpenAI and ChatGPT developed and deployed an AI system, which harmed Unnamed 60-year-old male patient with bromism and ChatGPT users.
Alleged implicated AI system: ChatGPT

Incident Stats

Incident ID: 1166
Report Count: 2
Incident Date: 2025-08-05
Editors: Daniel Atherton

Incident Reports

A Case of Bromism Influenced by Use of Artificial Intelligence
doi.org · 2025

AIID editor's note: This peer-reviewed journal article is abridged in parts. See the original source for the complete version, specifically Table 1 and the References section.

Abstract

Ingestion of bromide can lead to a toxidrome known as b…

Man Follows Diet Advice From ChatGPT, Ends Up With Psychosis
gizmodo.com · 2025

A case study out this month offers a cautionary tale ripe for our modern times. Doctors detail how a man experienced poison-caused psychosis after he followed AI-guided dietary advice.

Doctors at the University of Washington documented the …

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

Collection of Robotic Surgery Malfunctions
Jul 2015 · 12 reports

Amazon’s Search and Recommendation Algorithms Found by Auditors to Have Boosted Products That Contained Vaccine Misinformation
Jan 2021 · 2 reports

Machine Personal Assistants Failed to Maintain Social Norms
Jul 2008 · 1 report

