AI Incident Database

Report 3056

Associated Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

Lawyer admits using AI for research after citing ‘bogus’ cases from ChatGPT
telegraph.co.uk · 2023

A New York lawyer has been forced to admit he used the artificial intelligence tool ChatGPT to carry out legal research after it referenced several made-up court cases.

Steven Schwartz, who works for Levidow, Levidow and Oberman, is on a team representing airline passenger Roberto Mata, who is suing the airline Avianca for injuries suffered when a serving cart hit his knee during a flight from El Salvador to JFK airport in New York in 2019.

Mr Schwartz used the AI programme to “supplement” his research for a 10-page submission to the Manhattan federal court outlining why his client’s case should not be thrown out.

The legal brief, submitted in March, cited six previous cases dated from 1999 to 2019 to bolster his argument for why the case should be heard despite the statute of limitations having expired.

Could not find quotations

But neither the airline’s lawyers nor the judge could find the decisions or quotations summarised in the brief.

Judge P Kevin Castel ordered Mr Mata’s attorneys to provide copies of the opinions referenced.

The lawyers submitted a list of the court and judges who issued them, but these were connected to different cases.

For one of the cases, Varghese v. China Southern Airlines, lawyers for Avianca said they had “not been able to locate this case by caption or citation, nor any case bearing any resemblance to it”.

Ordering a hearing for June 8 to discuss potential sanctions, Judge Castel said a clerk had confirmed the docket number on the opinion was connected to a separate case.

‘Bogus judicial decisions’

Judge Castel said in an order that he had been presented with “an unprecedented circumstance” of being provided with “bogus judicial decisions, with bogus quotes and bogus internal citations”.

Developed by Silicon Valley scientists at OpenAI, ChatGPT is designed to mimic human conversation and language while drawing upon a wealth of knowledge to answer questions and solve problems.

Mr Schwartz said in an affidavit filed on Thursday that he had consulted ChatGPT “to supplement” his own work.

In screenshots supplied with the submission, the lawyer was seen asking the AI programme whether the cases cited were “fake”.

He asked if one of the opinions referenced was “a real case”. The chatbot replied confirming it was and provided a citation.

‘Are the other cases fake?’

In another screenshot, he asked the programme: “Are the other cases you provided fake?”

ChatGPT responded: “No, the other cases I provided are real and can be found in reputable legal databases.”

Mr Schwartz said he had used the AI tool to do his research, which he described as “a source that has revealed itself to be unreliable”.

The lawyer said he had never used ChatGPT for his research before and “therefore was unaware of the possibility that its content could be false”.

He told Judge Castel he “greatly regrets” his actions. Mr Schwartz said he had no intent to deceive the court or the airline and will not use the tool in the “future without absolute verification of its authenticity”.

Mr Schwartz was approached for comment on Saturday.

