AI Incident Database

Report 3011

Associated Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

A lawyer faces sanctions after he used ChatGPT to write a brief riddled with fake citations
engadget.com · 2023

With the hype around AI reaching a fever pitch in recent months, many people fear programs like ChatGPT will one day put them out of a job. For one New York lawyer, that nightmare could become a reality sooner than expected, but not for the reasons you might think. As reported by The New York Times, attorney Steven Schwartz of the law firm Levidow, Levidow and Oberman recently turned to OpenAI’s chatbot for assistance with writing a legal brief, with predictably disastrous results.

Schwartz’s firm has been suing the Colombian airline Avianca on behalf of Roberto Mata, who claims he was injured on a flight to John F. Kennedy International Airport in New York City. When the airline recently asked a federal judge to dismiss the case, Mata’s lawyers filed a 10-page brief arguing why the suit should proceed. The document cited more than half a dozen court decisions, including “Varghese v. China Southern Airlines,” “Martinez v. Delta Airlines” and “Miller v. United Airlines.” Unfortunately for everyone involved, no one who read the brief could find any of the court decisions cited by Mata’s lawyers. Why? Because ChatGPT fabricated all of them. Oops.

In an affidavit filed on Thursday, Schwartz said he had used the chatbot to “supplement” his research for the case. Schwartz wrote he was “unaware of the possibility that [ChatGPT’s] content could be false.” He even shared screenshots showing that he had asked ChatGPT if the cases it cited were real. The program responded that they were, claiming the decisions could be found in “reputable legal databases,” including Westlaw and LexisNexis.

Schwartz said he “greatly regrets” using ChatGPT “and will never do so in the future without absolute verification of its authenticity.” Whether he has another chance to write a legal brief is up in the air. The judge overseeing the case has ordered a June 8th hearing to discuss potential sanctions for the “unprecedented circumstance” created by Schwartz’s actions.

