AI Incident Database

Report 3023

Related Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

Lawyer 'was unaware' ChatGPT could generate fake legal research, now faces sanctions
lawandcrime.com · 2023

A personal injury lawyer representing a man suing an airline now faces sanctions for citing fake cases generated by ChatGPT in court documents.

Roberto Mata sued airline Avianca after he was injured by a metal serving cart colliding with his knee during a flight. As is typical procedure in many personal injury cases, Avianca moved to dismiss the claim on the grounds that the applicable statute of limitations had expired. Mata’s lawyers opposed the motion to dismiss and in the accompanying court documents cited multiple cases that supported their client’s legal position: Varghese v. China Southern Airlines, Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines.

Problematically, though, Avianca’s lawyers could not find the cases cited in the motion anywhere, even after extensive legal research. They raised the issue in a letter to U.S. District Judge Kevin Castel, a George W. Bush appointee. In the letter, the airline’s lawyers said, “Defendant respectfully submits that the authenticity of many of these cases is questionable,” and indicated that despite doing standard legal research, no sign of the cases cited could be found.

The origin of the mysterious “cases” unraveled when Steven A. Schwartz of the New York law firm Levidow, Levidow & Oberman submitted an affidavit to the court explaining that he had used the artificial intelligence program ChatGPT to “supplement the legal research” while drafting the documents. Schwartz told the judge that the program now “has revealed itself to be unreliable.”

Schwartz, who has been an attorney since 1991, said that he “consulted with” the chatbot for the legal work, but that because it was his first time using the program, he “was unaware of the possibility that its content could be false.” Indeed, ChatGPT provided case names, captions, summaries, and citations in a standard format.

Schwartz accepted full responsibility for the error and said that he had no intent to deceive the court. He also said that he “greatly regrets using generative artificial intelligence” and promised he “will never do so in the future without absolute verification of its authenticity.” Schwartz also said that Peter LoDuca, the attorney named as counsel of record on the case, had no part in drafting the document that contained false sources.

LoDuca, Schwartz, and the law firm now face potential consequences for Schwartz’s mistake. Castel ordered them Friday to appear on June 8 to face possible sanctions pursuant to the Federal Rules of Civil Procedure for “the citation of non-existent cases.” A written response to the possibility of sanctions is due by June 2.

Representatives of Levidow, Levidow & Oberman did not immediately respond to a request for comment.



2024 - AI Incident Database
