AI Incident Database

Incident 1084: Federal 'Make America Healthy Again' Report Released with Multiple Reportedly Erroneous and Unverifiable Citations

Description: The federal "Make America Healthy Again" (MAHA) report, released under HHS Secretary Robert F. Kennedy Jr., included hundreds of citations, some of which were reportedly nonexistent or erroneous. Analysts reportedly identified markers consistent with AI-generated text, such as repeated entries, nonexistent studies, and URLs containing "oaicite," suggesting use of tools like ChatGPT.
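
The "oaicite" string mentioned above is a URL fragment that the reports describe as consistent with ChatGPT-generated citations. As an illustration only, and not the analysts' actual method, the following Python sketch scans a plain-text copy of a report for two of the described markers: URLs containing "oaicite" and reference lines that repeat verbatim. The file name is a placeholder.

import re
from collections import Counter

def find_ai_citation_markers(text: str):
    """Return (URLs containing 'oaicite', reference lines repeated verbatim)."""
    # URLs whose path or query string contains the "oaicite" fragment.
    oaicite_urls = re.findall(r"https?://\S*oaicite\S*", text)

    # Non-empty lines that appear more than once, word for word.
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    repeated_lines = [line for line, count in Counter(lines).items() if count > 1]

    return oaicite_urls, repeated_lines

if __name__ == "__main__":
    # "maha_report.txt" is a placeholder path for a plain-text copy of the report.
    with open("maha_report.txt", encoding="utf-8") as handle:
        urls, repeats = find_ai_citation_markers(handle.read())
    print(f"URLs containing 'oaicite': {len(urls)}")
    print(f"Lines repeated verbatim: {len(repeats)}")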

Entities

Alleged: OpenAI developed an AI system deployed by United States Department of Health and Human Services, which harmed United States Department of Health and Human Services, Scientific rigor, Academic integrity, and General public.
Alleged implicated AI system: ChatGPT

Incident Stats

Incident ID: 1084
Report Count: 3
Incident Date: 2025-05-22
Editors: Daniel Atherton

Incident Reports

White House Health Report Included Fake Citations
nytimes.com · 2025

The Trump administration released a report last week that it billed as a "clear, evidence-based foundation" for action on a range of children's health issues.

But the report, from the presidential Make America Healthy Again Commission, cite…

The MAHA Report’s AI fingerprints, annotated
washingtonpost.com · 2025

The White House's "Make America Healthy Again" report, which issued a dire warning about the forces responsible for Americans' declining life expectancy, bears hallmarks of the use of artificial intelligence in its citations. That appears t…

RFK Jr.’s ‘Make America Healthy Again’ report seems riddled with AI slop
theverge.com · 2025

There are some questionable sources underpinning Robert F. Kennedy Jr.’s controversial “Make America Healthy Again” commission report. Signs point to AI tomfoolery, and the use of ChatGPT specifically, which calls into question the veracity…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

  • Collection of Robotic Surgery Malfunctions (Jul 2015 · 12 reports)
  • Wikipedia Vandalism Prevention Bot Loop (Feb 2017 · 6 reports)
  • Amazon Censors Gay Books (May 2008 · 24 reports)
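
The incidents above are matched by textual similarity; the page does not document the method used. As an illustration only, the following Python sketch shows one common approach, TF-IDF vectors compared with cosine similarity via scikit-learn. The incident summaries below are shortened placeholders, not the database's actual texts.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder summaries standing in for the database's full incident texts.
incidents = {
    "Incident 1084 (MAHA report citations)":
        "Federal health report released with erroneous and unverifiable "
        "citations bearing markers of AI-generated text.",
    "Collection of Robotic Surgery Malfunctions":
        "Placeholder summary about malfunctions in robotic surgery systems.",
    "Wikipedia Vandalism Prevention Bot Loop":
        "Placeholder summary about anti-vandalism bots reverting each other.",
}

titles = list(incidents)
vectors = TfidfVectorizer(stop_words="english").fit_transform(list(incidents.values()))

# Compare the first incident against the others.
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()
for title, score in zip(titles[1:], scores):
    print(f"{title}: similarity {score:.2f}")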
