AI Incident Database

Report 3150

Associated Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

Lawyer who cited cases concocted by AI asks judge to spare sanctions
reuters.com · 2023

A New York lawyer on Thursday asked a Manhattan federal judge not to sanction him after he included made-up case citations generated by an artificial intelligence chatbot in a legal brief.

The lawyer, Steven Schwartz, admitted in May that he had used OpenAI's ChatGPT program to help research the brief, which cited six non-existent court decisions, in a client's personal injury case against Avianca Airlines.

Schwartz said at Thursday's hearing that he "never" could have imagined ChatGPT could make up fake cases and did not intend to mislead the court.

U.S. District Judge P. Kevin Castel, who had asked Schwartz and his law firm Levidow, Levidow & Oberman to explain why they should not be sanctioned, said he would issue a decision later.

Lawyers for Avianca first alerted the court that they could not locate the cases cited in the brief.

A growing number of lawyers and law firms have been exploring the use of generative AI. Schwartz's messy collision with the technology made headlines as an early illustration of its potential pitfalls.

Prompted in part by the New York case, a federal judge in Texas last week issued a requirement for lawyers in cases before him to certify that they did not use AI to draft their filings without a human checking their accuracy.

Schwartz said Thursday he should have done more to confirm and verify the cases.

Schwartz apologized to the court, Avianca and others at the hearing. He said he has "suffered professionally and personally" and is embarrassed and humiliated.

