AI Incident Database

Report 3024

Associated Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

Lawyer Blames ChatGPT For Fake Citations In Court Filing
huffpost.com · 2023

A lawyer who relied on ChatGPT to prepare a court filing for his client is finding out the hard way that the artificial intelligence tool has a tendency to fabricate information.

Steven Schwartz, a lawyer for a man suing the Colombian airline Avianca over a metal beverage cart allegedly injuring his knee, is facing a sanctions hearing on June 8 after admitting last week that several of the cases he supplied the court as evidence of precedent were invented by ChatGPT, a large language model created by OpenAI.

Lawyers for Avianca first brought the concerns to the judge overseeing the case.

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” U.S. District Judge P. Kevin Castel said earlier this month after reviewing Avianca’s complaint, calling the situation an “unprecedented circumstance.”

The invented cases included decisions titled “Varghese v. China Southern Airlines Ltd.,” “Miller v. United Airlines Inc.” and “Petersen v. Iran Air.”

Schwartz ― an attorney with Levidow, Levidow & Oberman who’s been licensed in New York for more than 30 years ― then confessed in an affidavit that he’d used ChatGPT to produce the cases in support of his client and was “unaware of the possibility that its content could be false.”

Schwartz “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity,” he stated in the affidavit.

Peter LoDuca, another lawyer at Schwartz’s firm, argued in a separate affidavit that “sanctions are not appropriate in this instance as there was no bad faith nor intent to deceive either the Court or the defendant.”

The sanctions may involve Schwartz paying the attorneys’ fees that the other side incurred while uncovering the false information.

This isn’t the first time ChatGPT has “hallucinated” information, as AI researchers refer to the phenomenon. Last month, The Washington Post reported on ChatGPT putting a professor on a list of legal scholars who had sexually harassed someone, citing a Post article that didn’t exist.

“It was quite chilling,” the law professor, Jonathan Turley, said in an interview with the Post. “An allegation of this kind is incredibly harmful.”

