AI Incident Database

Report 3054

Associated Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

Lawyer faces sanctions for using ‘bogus’ citations from ChatGPT
lsj.com.au · 2023

Steven Schwartz, a lawyer at US law firm Levidow, Levidow & Oberman, will face a sanctions hearing on 8 June 2023 after submitting a brief filled with fictitious cases and fabricated court citations.

Schwartz was acting for airline passenger Roberto Mata in a negligence lawsuit against Avianca airlines and used ChatGPT in his legal research.

The brief cited cases that were decided between 1999 and 2019 to support Schwartz’s argument that the case should be heard despite it being beyond the statute of limitations. However, neither the judge nor the airline’s lawyers could find the decisions or quotations summarised.

Judge Kevin Castel, of the Southern District of New York, asserted that at least six of the cases submitted “appear to be bogus judicial decisions with bogus quotes and bogus internal citations”.

“The court is presented with an unprecedented circumstance,” Castel wrote.


Schwartz’s brief included fictitious cases ChatGPT had made up, including Varghese v China Southern Airlines, Shaboon v EgyptAir, Martinez v Delta Airlines, Miller v United Airlines, Petersen v Iran Air and Estate of Durden v KLM Royal Dutch Airlines.

In an affidavit, Schwartz admitted he “was unaware of the possibility that [ChatGPT] content could be false”. He took full responsibility for failing to verify the sources and greatly regretted “having utilised generative artificial intelligence to supplement the legal research performed”.

Schwartz vowed to “never do so [again] in the future without absolute verification of its [ChatGPT] authenticity”.

After being ordered to show cause why he should not be sanctioned, Schwartz included screenshots of his attempts to confirm the authenticity of the cases with ChatGPT.

“Is Varghese a real case?” Schwartz asked ChatGPT.

“Yes, it is a real case,” the chatbot replied.

Schwartz asked ChatGPT to confirm its source to which it replied: “Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis.”

“I apologise for any inconvenience or confusion my earlier responses may have caused,” ChatGPT wrote.

Schwartz also asked the chatbot if any of the other cases provided were fake and ChatGPT responded: “No, the other cases I provided are real and can be found in reputable legal databases.”

Legal ethics professor Stephen Gillers, from New York University Law School, told the New York Times that the case illustrates what lawyers should not do.

“You cannot just take the output and cut and paste it into your court filings,” said Gillers.

