AI Incident Database

Incident 1402: DOGE Reportedly Relied on Unvetted ChatGPT Outputs in Canceling National Endowment for the Humanities Grants

Description: Department of Government Efficiency (DOGE) staff reportedly fed National Endowment for the Humanities grant descriptions into ChatGPT to determine whether projects were "DEI," then allegedly used those outputs to help compile a list of grants to terminate. Beginning in April 2025, NEH canceled awards and clawed back funding, reportedly disrupting humanities organizations and projects nationwide after officials allegedly relied on a purportedly flawed, insufficiently vetted AI screening process.

Entities

Alleged: OpenAI developed an AI system deployed by United States Department of Government Efficiency (DOGE), Justin Fox (DOGE), and Nate Cavanaugh (DOGE), which harmed National Endowment for the Humanities grantees, Humanities organizations, Scholars, and Epistemic integrity.
Alleged implicated AI system: ChatGPT

Incident Stats

Incident ID: 1402
Report Count: 4
Incident Date: 2025-04-02
Editors: Daniel Atherton

Incident Reports

Reports Timeline

  • When DOGE Unleashed ChatGPT on the Humanities (nytimes.com)
  • DOGE Employees Used ChatGPT to Cancel Humanities Grants, Suits Allege (artforum.com)
  • How DOGE Gutted the NEH in 22 Days (insidehighered.com)
  • DOGE cancelled a $349,000 grant to replace a museum’s HVAC after ChatGPT flagged it as DEI, court documents show (fortune.com)

When DOGE Unleashed ChatGPT on the Humanities
nytimes.com · 2026

When the Trump administration went looking last spring for National Endowment for the Humanities grants to cut, it turned to a familiar scourge of professors: ChatGPT.

Last March, two employees from Elon Musk's Department of Government Effi…

DOGE Employees Used ChatGPT to Cancel Humanities Grants, Suits Allege
artforum.com · 2026

According to lawsuits filed on Friday, two employees of the Department of Government Efficiency (DOGE) used ChatGPT to determine whether previously approved National Endowment for the Humanities (NEH) grants should be canceled based on prox…

How DOGE Gutted the NEH in 22 Days
insidehighered.com · 2026

When the Department of Government Efficiency was asked last year to identify National Endowment for the Humanities grants that violated President Trump's executive orders, it enlisted the help of ChatGPT. "Does the following relate at all to…

DOGE cancelled a $349,000 grant to replace a museum’s HVAC after ChatGPT flagged it as DEI, court documents show
fortune.com · 2026

The Trump administration's efforts to slash diversity, equity, and inclusion (DEI) initiatives left another acronym on the chopping block: one museum's $350,000 grant to replace its heating, ventilation, and air conditioning (HVAC) system.

…

Variants

A "variant" is an AI incident similar to a known case: it shares the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants need not have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

  • UT Austin GRADE Algorithm Allegedly Reinforced Historical Inequalities (Dec 2012 · 2 reports)
  • Security Robot Drowns Itself in a Fountain (Jul 2017 · 29 reports)
  • Images of Black People Labeled as Gorillas (Jun 2015 · 23 reports)


2026 - AI Incident Database
