AI Incident Database

Incident 1073: $31,000 Sanction in Lacey v. State Farm Tied to Purportedly Undisclosed Use of LLMs and Erroneous Citations

Description: In the case of Lacey v. State Farm, two law firms were sanctioned $31,000 after submitting a legal brief containing reportedly erroneous citations generated using AI tools. The court reportedly found that the lawyers failed to disclose the use of AI, neglected to verify its output, and refiled a revised brief with additional inaccuracies. Judge Michael Wilner deemed the conduct reckless and issued sanctions for what he described as "improper" and "misleading" legal filings.


Entities

Alleged: Unnamed large language model developer developed an AI system deployed by K&L Gates LLP and Ellis George LLP, which harmed K&L Gates LLP, Ellis George LLP, Michael Wilner, Judicial process integrity, and Defense counsel in Lacey v. State Farm.
Alleged implicated AI system: Unknown large language model

Incident Stats

Incident ID: 1073
Report Count: 2
Incident Date: 2025-04-15
Editors: Daniel Atherton

Incident Reports

AI Hallucination in Filings Involving 14th-Largest U.S. Law Firm Lead to $31K in Sanctions
reason.com · 2025

I should note up front that both of the firms involved (the massive 1700-lawyer national one and the smaller 45-lawyer predominantly California one) have, to my knowledge, excellent reputations, and the error is not at all characteristic of…

Judge Sanctions Law Firms $31,000 for Error-Filled AI-Generated Brief
webpronews.com · 2025

A judge in California has imposed sanctions on law firms that relied on AI for case research, resulting in an error-filled brief.

In the case of Lacey v. State Farm, Judge Michael Wilner (serving as Special Master in the case) took the two …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

  • Defamation via AutoComplete (Apr 2011 · 28 reports)
  • COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction (May 2016 · 22 reports)
  • Cruise’s Self-Driving Car Involved in a Multiple-Injury Collision at a San Francisco Intersection (Jun 2022 · 7 reports)
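The suggestions above are ranked by textual similarity. A minimal sketch of one common way to compute such a ranking (TF-IDF vectors compared with cosine similarity via scikit-learn) follows; it illustrates the general technique only and is not a description of the database's actual pipeline. The incident descriptions in the example are paraphrased placeholders.

# Minimal sketch: rank candidate incidents by textual similarity to a query
# incident description using TF-IDF + cosine similarity (scikit-learn).
# Illustrative only; not the AI Incident Database's actual ranking pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = (
    "Law firms sanctioned $31,000 after filing a brief containing erroneous, "
    "AI-generated citations that were neither verified nor disclosed."
)
candidates = {
    "Defamation via AutoComplete":
        "Search autocomplete suggestions alleged to defame individuals.",
    "COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction":
        "Risk scores from a recidivism-prediction algorithm reported to be inaccurate and biased.",
    "Cruise Self-Driving Car Collision":
        "An autonomous vehicle involved in a multiple-injury collision at an intersection.",
}

# Vectorize the query together with the candidates so they share a vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([query] + list(candidates.values()))

# Cosine similarity between the query row and each candidate row.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for title, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {title}")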