Incident 1453: Attorney in Fletcher v. Experian Information Solutions, Inc. Reportedly Submitted Reply Brief with Purportedly AI-Generated Material Misrepresentations

Description: In Fletcher v. Experian Information Solutions, Inc., the Fifth Circuit found that attorney Heather Hersh had used generative AI to draft a substantial portion of a reply brief that contained fabricated legal support and factual representations. The court first issued a show-cause order and later imposed a $2,500 sanction, linking the filing errors to unverified AI-generated output.
Editor Notes: Timeline: The incident appears to have occurred in early September 2025. The incident ID date of 12/18/2025 marks the Fifth Circuit's show-cause order. On 02/18/2026, the court imposed the $2,500 fine. The incident ID was created on 04/18/2026.


Entities

Alleged: Unknown large language model developers developed an AI system deployed by Heather Hersh, which harmed the United States Court of Appeals for the Fifth Circuit, Judicial integrity, James Fletcher, Experian Information Solutions, Inc., and Epistemic integrity.
Alleged implicated AI system: Unknown large language models

Incident Stats

Incident ID: 1453
Report Count: 1
Incident Date: 2025-12-18
Editors: Daniel Atherton

Incident Reports

US appeals court orders lawyer to pay $2,500 over AI hallucinations in brief
reuters.com · 2026

Feb 18 (Reuters) - A federal appeals court on Wednesday expressed frustration that lawyers continue to submit briefs containing AI-generated fictitious case citations and other hallucinated material, saying the problem "shows no sign of aba…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

  • Defamation via AutoComplete (Apr 2011 · 28 reports)
  • New Jersey Police Wrongfully Arrested Innocent Black Man via FRT (Jan 2019 · 4 reports)
  • Detroit Police Wrongfully Arrested Black Man Due to Faulty FRT (Jan 2020 · 11 reports)


2026 - AI Incident Database
