Judicial integrity
Incidents Harmed By
Incident 960 · 11 Reports
Plaintiffs' Lawyers Admit AI Generated Erroneous Case Citations in Federal Court Filing Against Walmart
2025-02-06
Lawyers Rudwin Ayala, T. Michael Morgan (Morgan & Morgan), and Taly Goody (Goody Law Group) were fined a total of $5,000 after their Wyoming federal lawsuit filing against Walmart cited fake cases "hallucinated" by AI. Judge Kelly Rankin sanctioned them, removing Ayala from the case and noting attorneys must verify AI sources. The filing, flagged by Walmart’s legal team, led to its withdrawal and an internal review.
Incident 1145 · 5 Reports
MyPillow Defense Lawyers in Coomer v. Lindell Reportedly Sanctioned for Filing Court Document Allegedly Containing AI-Generated Legal Citations
2025-02-25
In February 2025, lawyers Christopher I. Kachouroff and Jennifer T. DeMaster, representing Mike Lindell, reportedly used generative AI to draft a court brief that contained nearly 30 defective or fabricated citations. The error-filled filing violated federal court rules requiring factual and legal accuracy. The judge fined both lawyers $3,000 each, citing either the improper use of AI or gross carelessness as the cause of the misleading legal content.
Incident 1138 · 4 Reports
South African Legal Team Reportedly Relied on Unverified ChatGPT Case Law in Johannesburg Body Corporate Defamation Matter
2023-03-01
In a defamation case at the Johannesburg Regional Court, Rodrigues Blignaut Attorneys, representing plaintiff Michelle Parker, reportedly relied on non-existent legal judgments generated by ChatGPT to argue their case. Magistrate Arvin Chaitram reportedly found the case names and citations were fictitious, causing a two-month delay. The court issued a punitive costs order and rebuked the plaintiff's legal team for uncritically accepting AI-generated research.
Incident 1074 · 3 Reports
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel
2025-05-15
In a legal filing in Concord Music Group et al. v. Anthropic, lawyers for Anthropic acknowledged that expert witness testimony submitted in the case contained erroneous citations generated by the company's Claude AI system. The filing stated that the inaccuracies, which included incorrect article titles and author names, were not caught during manual review. Anthropic characterized the issue as an honest mistake and apologized in court.
Related Entities
Other entities related to the same incidents. For example, if this entity is the developer in an incident and another entity is the deployer, they are marked as related entities.
Legal integrity
Incidents Harmed By
- Incident 1145 · 5 Reports
MyPillow Defense Lawyers in Coomer v. Lindell Reportedly Sanctioned for Filing Court Document Allegedly Containing AI-Generated Legal Citations
- Incident 1138 · 4 Reports
South African Legal Team Reportedly Relied on Unverified ChatGPT Case Law in Johannesburg Body Corporate Defamation Matter
Epistemic integrity
Incidents Harmed By
- Incident 1145 · 5 Reports
MyPillow Defense Lawyers in Coomer v. Lindell Reportedly Sanctioned for Filing Court Document Allegedly Containing AI-Generated Legal Citations
- Incident 1138 · 4 Reports
South African Legal Team Reportedly Relied on Unverified ChatGPT Case Law in Johannesburg Body Corporate Defamation Matter