AI Incident Database

Report 4826

Associated Incidents

Incident 96011 Report
Plaintiffs' Lawyers Admit AI Generated Erroneous Case Citations in Federal Court Filing Against Walmart

A Major Law Firm’s ChatGPT Fail
davidlat.substack.com · 2025

We’re all familiar with the infamous tale of the lawyers who filed a brief full of nonexistent cases—courtesy of ChatGPT, the AI tool that made up (aka “hallucinated”) the fake citations. In the end, Judge Kevin Castel (S.D.N.Y.) sanctioned the attorneys, to the tune of $5,000—but the national notoriety was surely far worse.

The offending lawyers, Steven Schwartz and Peter LoDuca, worked at a tiny New York law firm by the name of Levidow, Levidow & Oberman. And it seems that their screw-up stemmed in part from resource constraints, which small firms frequently struggle with. As they explained to Judge Castel at the sanctions hearing, at the time their firm did not have access to Westlaw or LexisNexis—which are, as we all know, extremely expensive—and the type of subscription they had to Fastcase did not provide them with full access to federal cases.

But what about lawyers who work for one of the nation’s largest law firms? They shouldn’t have any excuse, right?

Whether they have an excuse or not, it appears that they too can make the same mistake. Yesterday, Judge Kelly Rankin of the District of Wyoming issued an order to show cause in Wadsworth v. Walmart Inc. (emphasis in the original):

This matter is before the Court on its own notice. On January 22, 2025, Plaintiffs filed their Motions in Limine. [ECF No. 141]. Therein, Plaintiffs cited nine total cases:

1. Wyoming v. U.S. Department of Energy, 2006 WL 3801910 (D. Wyo. 2006);

2. Holland v. Keller, 2018 WL 2446162 (D. Wyo. 2018);

3. United States v. Hargrove, 2019 WL 2516279 (D. Wyo. 2019);

4. Meyer v. City of Cheyenne, 2017 WL 3461055 (D. Wyo. 2017);

5. U.S. v. Caraway, 534 F.3d 1290 (10th Cir. 2008);

6. Benson v. State of Wyoming, 2010 WL 4683851 (D. Wyo. 2010);

7. Smith v. United States, 2011 WL 2160468 (D. Wyo. 2011);

8. Woods v. BNSF Railway Co., 2016 WL 165971 (D. Wyo. 2016); and

9. Fitzgerald v. City of New York, 2018 WL 3037217 (S.D.N.Y. 2018).

See [ECF No. 141].

The problem with these cases is that none exist, except United States v. Caraway, 534 F.3d 1290 (10th Cir. 2008). The cases are not identifiable by their Westlaw cite, and the Court cannot locate the District of Wyoming cases by their case name in its local Electronic Court Filing System. Defendants aver through counsel that “at least some of these mis-cited cases can be found on ChatGPT.” [ECF No. 150] (providing a picture of ChatGPT locating “Meyer v. City of Cheyenne” through the fake Westlaw identifier).

As you might expect, Judge Rankin is… not pleased:

When confronted with similar situations, courts have ordered the filing attorneys to show cause why sanctions or discipline should not issue. Mata v. Avianca, Inc., No. 22-CV-1461 (PKC), 2023 WL 3696209 (S.D.N.Y. May 4, 2023); United States v. Hayes, No. 2:24-CR-0280-DJC, 2024 WL 5125812 (E.D. Cal. Dec. 16, 2024); United States v. Cohen, No. 18-CR-602 (JMF), 2023 WL 8635521 (S.D.N.Y. Dec. 12, 2023). Accordingly, the Court orders as follows:

IT IS ORDERED that at least one of the three attorneys shall provide a true and accurate copy of all cases used in support of [ECF No. 141], except for United States v. Caraway, 534 F.3d 1290 (10th Cir. 2008), no later than 12:00 PM, Mountain Standard Time, on February 10, 2025.

And if they can’t provide the cases in question, the lawyers “shall separately show cause in writing why he or she should not be sanctioned pursuant to: (1) Fed. R. Civ. P. 11(b), (c); (2) 28 U.S.C. § 1927; and (3) the inherent power of the Court to order sanctions for citing non-existent cases to the Court.” And this written submission, due on February 13, “shall take the form of a sworn declaration” that contains “a thorough explanation for how the motion and fake cases were generated,” as well as an explanation from each lawyer of “their role in drafting or supervising the motion.”

Who are the lawyers behind this apparent snafu? They’re called out by name on page three of the order:

The three undersigned counsel to [ECF No. 141] are:

  • Mr. Rudwin Ayala;

  • Ms. Taly Goody; and

  • Mr. Timothy Michael Morgan.

As you can see from the signatures on the offending motions in limine, Taly Goody works at Goody Law Group, a California-based firm that appears to have three lawyers. But Rudwin Ayala and Michael Morgan work at the giant Morgan & Morgan, which describes itself on its website as “America’s largest injury law firm™”—and is the #42 firm in the country based on headcount, according to The American Lawyer.

Moral of the story: lawyers at large firms can misuse ChatGPT as well as anyone. And although Morgan & Morgan is a plaintiffs’ firm—which might cause snobby attorneys at big defense firms to say, with a touch of hauteur, “Of course it is”—I think it’s only a matter of time before a defense-side, Am Law 100 firm makes a similar misstep in a public filing.

These “lawyers engage in ChatGPT fail” stories tend to be popular with readers, which is one reason why I’ve written this one—but I don’t want to exaggerate their significance. As I said to Bridget McCormack and Zach Abramowitz on the AAAi Podcast, “ChatGPT doesn’t engage in these screw-ups; humans improperly using ChatGPT engage in these screw-ups.” But the stories still go viral sometimes because they have a certain novelty value: AI is, at least in the world of legal practice, still (relatively) new.

The danger, however, is that the “ChatGPT fail” stories could have a chilling effect, in terms of deterring lawyers from (responsibly) exploring how AI and other transformative technologies can help them serve their clients more efficiently and effectively. As McCormack said on the AAAi podcast after I mentioned the S.D.N.Y. debacle, “I’m still mad at that one Southern District of New York lawyer because I feel like he set the whole profession back by two years. I’m literally so mad at that dude.”

I reached out to Ayala, Goody, and Morgan by email, but have not heard back yet; if and when I do, I’ll update this post. Otherwise, tune in next week, when they’ll file their responses to the order to show cause.

And in the meantime, if you rely on ChatGPT or another AI tool for legal research, please, please use an actual legal-research platform to confirm that (1) the cases exist and (2) you’ve cited them accurately. That’s not too much to ask, right?

UPDATE (5:21 p.m.): If you'd like to educate yourself about how you can leverage AI responsibly to better serve your clients, check out these excellent resources from Hotshot—and you can even get CLE credit for it (depending on your jurisdiction).

UPDATE (2/8/2025, 12:48 a.m.): The three lawyers representing the plaintiffs in Wadsworth v. Walmart withdrew their motions in limine on Friday night, as reported by Law360—so presumably they know there are problems with them. But the attorneys have not yet explained why the motions contain what appear to be fabricated authorities.

The Law360 article also contains some details about the lawsuit itself: “The underlying litigation was filed in July 2023 by Stephanie and Matthew Wadsworth on behalf of their four minor children after they allege that a ‘defective and unreasonably dangerous hoverboard’ exploded and caught fire in their home. The family alleges that the product, a Jetson Plasma Iridescent Hoverboard, is defective, hazardous and malfunctioned when it was being used in the intended manner.”

