AI Incident Database

Report 3030

Associated Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

Lawyer Faces Sanctions for Using Fake ChatGPT Citations
pcmag.com · 2023

A New York lawyer is in hot water for submitting a legal brief with references to cases that were made up by ChatGPT. 

As The New York Times reports, Steven Schwartz, of Levidow, Levidow & Oberman, submitted six fake judicial decisions in a 10-page brief while representing a plaintiff who was suing the Colombian airline Avianca over an injury sustained on a flight.

The brief, which argued why the suit should go ahead, cited fake cases that had been completely made up by ChatGPT, and which Schwartz had failed to verify.

In an affidavit, Schwartz admitted to using ChatGPT while researching the brief, and accepted responsibility for not verifying the AI chatbot’s sources.

Schwartz said he “was unaware of the possibility that its content could be false” and maintained that he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”

This came after US District Judge Kevin Castel wrote in a May 4 order: “The court is presented with an unprecedented circumstance… Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

The affidavit contained screenshots of the attorney being told by ChatGPT that the cases it was providing were real and could be found on any “reputable legal database.” The screenshots also show Schwartz asking the AI chatbot for the source of one bogus case: Varghese v. China Southern Airlines.

ChatGPT replied: “I apologize for the confusion earlier. Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.”

As the Times notes, Schwartz is now set to face a sanctions hearing on June 8. 

The saga is the latest in a series of incidents of AI chatbots producing misinformation. Long conversations with Microsoft’s Bing led the chatbot to exhibit manipulative behavior such as gaslighting, while Bard, Google’s AI chatbot, made false claims about the James Webb Space Telescope during its debut.
