AI Incident Database

Incident 1252: Judges in New Jersey and Mississippi Admit AI Tools Produced Erroneous Federal Court Filings

Description: Two U.S. federal judges, Julien Neals (D.N.J.) and Henry Wingate (S.D. Miss.), reportedly admitted that staff in their chambers used AI tools (ChatGPT and Perplexity) to draft rulings containing fabricated quotes, cases, and parties. The erroneous filings were reportedly docketed before review, prompting Senate and judicial inquiries. Both rulings were retracted.
Editor Notes: The reported timeline of events is as follows:
  • 06/30/2025: U.S. District Judge Julien Neals (District of New Jersey) issued an opinion in a securities case against CorMedix Inc. that reportedly contained fabricated case citations and quotes, later traced to ChatGPT use by a law school intern.
  • 07/20/2025: U.S. District Judge Henry Wingate (Southern District of Mississippi) issued a temporary restraining order with fictitious plaintiffs and quotations, reportedly drafted with Perplexity AI by a law clerk.
  • 07/22/2025: Wingate retracted and reissued the order after being alerted to the errors.
  • 10/20–10/21/2025: Both judges admitted AI involvement in letters to Senator Chuck Grassley and the Administrative Office of the U.S. Courts.
  • 10/23/2025: Bloomberg Law reported the admissions.
  • 10/29/2025: The Washington Post published a detailed account confirming the Senate inquiry and judicial reforms.
The incident ID was added on 10/29/2025.


Entities

View all entities
Alleged: OpenAI and Perplexity AI developed an AI system deployed by United States District Court for the District of New Jersey, United States District Court for the Southern District of Mississippi, Julien Neals, Henry Wingate, Unnamed law clerk (S.D. Miss.), and Unnamed law school intern (D.N.J.), which harmed United States District Court for the District of New Jersey, United States District Court for the Southern District of Mississippi, Julien Neals, Henry Wingate, Plaintiffs and defendants in CorMedix Securities Litigation, Plaintiffs and defendants in Jackson Federation of Teachers et al. v. Mississippi State Board of Education, Epistemic integrity, and Judicial integrity.
Alleged implicated AI systems: ChatGPT and Perplexity

Incident Stats

Incident ID: 1252
Report Count: 2
Incident Date: 2025-06-30
Editors: Daniel Atherton

Incident Reports

Judges Admit to Using AI After Made-Up Rulings Called Out (1)
news.bloomberglaw.com · 2025

Two federal judges blamed faulty rulings on the use of artificial intelligence tools by staff members, raising questions about how much they scrutinize documents issued under their names.

US district judges Julien Neals in New Jersey and He…

Federal judges using AI filed court orders with false quotes, fake names
washingtonpost.com · 2025

Two federal judges in New Jersey and Mississippi admitted this month that their offices used artificial intelligence to draft factually inaccurate court documents that included fake quotes and fictional litigants --- drawing a rebuke from t…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

  • Defamation via AutoComplete (Apr 2011 · 28 reports)
  • Wikipedia Vandalism Prevention Bot Loop (Feb 2017 · 6 reports)
  • Google and Delphi Self-Driving Prototypes Allegedly Involved in Near-Miss on San Antonio Road, Palo Alto (Jun 2015 · 12 reports)
