
Report 3009

Associated Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

A US attorney faces punishment for citing fake cases ChatGPT fed him
qz.com · 2023

A US attorney is now “greatly regretting” his decision to trust OpenAI’s ChatGPT in a litigation process. Steven Schwartz faces sanctions in a New York court for using fake citations cooked up by the AI tool in legal research for a case he was handling.

In a sworn affidavit, Schwartz admitted to using ChatGPT for research in the case, in which he represented his client Roberto Mata in a suit against Colombian airline Avianca over injuries sustained aboard one of its planes in 2019.

Despite ChatGPT’s widely known warnings that it can sometimes produce incorrect information, Schwartz defended himself, saying he was “unaware that its content could be false.”

Presiding judge Kevin Castel expressed his doubts over the authenticity of the cases the attorney presented. “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Castel said.

The cases ChatGPT presented to the lawyer during his research were Varghese v. China South Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines.

These cases did not exist, Castel said in an order demanding that Mata’s legal team explain the citations. They were found to be fabricated, complete with invented judicial wording; even the quotes and internal citations were make-believe.

Though Schwartz has vowed never again to use AI to “supplement” his legal research “without absolute verification of its authenticity,” he now faces sanctions. A hearing on the matter is scheduled for June 8.

