AI Incident Database

Report 3033

Associated Incident

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

A lawyer apologized after ChatGPT made up case law in an affidavit he submitted
businessinsider.com · 2023

ChatGPT has seen its popularity rise in recent months as optimism and skepticism about the new generative AI program soar.

However, the tool is at the heart of a case to discipline a New York lawyer. Steven Schwartz, a personal injury lawyer with Levidow, Levidow & Oberman, faces a sanctions hearing on June 8, after it was revealed that he used ChatGPT to write up an affidavit. 

Another attorney at the same law firm, Peter LoDuca, is also facing sanctions, but in a court filing he said he did not do any of the research in the affidavit.

The affidavit drafted with ChatGPT was for a lawsuit involving a man who alleged he was injured by a serving cart aboard an Avianca flight, and it featured several made-up court decisions.

In an order, Judge Kevin Castel said the incident presented the court with "an unprecedented circumstance."

"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Castel wrote. 

Neither the lawyers for the airline nor Castel himself were able to find the cases mentioned in the affidavit. 

Bart Banino, a lawyer with Condon & Forsyth, which represents Avianca, told The New York Times that his firm could tell the cases were fake and was initially skeptical that a chatbot had been used.

On Thursday, Schwartz apologized to Castel, adding that he had never used the AI tool before and "was unaware of the possibility that its content could be false," the Times reported.

Schwartz also added that ChatGPT was "a source that has revealed itself to be unreliable."

Avianca, LoDuca, and Schwartz did not respond to Insider's requests for comment at the time of publication.

Read the Source

2024 - AI Incident Database
