AI Incident Database

Report 3017

Related Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

A lawyer used ChatGPT to prepare a court filing. It went horribly awry.
cbsnews.com · 2023

A lawyer who relied on ChatGPT to prepare a court filing on behalf of a man suing an airline is now all too familiar with the artificial intelligence tool's shortcomings — including its propensity to invent facts. 

Roberto Mata sued Colombian airline Avianca last year, alleging that a metal food and beverage cart injured his knee on a flight to Kennedy International Airport in New York. When Avianca asked a Manhattan judge to dismiss the lawsuit based on the statute of limitations, Mata's lawyer, Steven A. Schwartz, of the law firm Levidow, Levidow & Oberman, submitted a brief based on research done by ChatGPT, Schwartz said in an affidavit.

While ChatGPT can be useful to professionals in numerous industries, including the legal profession, it has proved itself to be both limited and unreliable. In this case, the AI invented court cases that didn't exist, and asserted that they were real.

The fabrications were revealed when Avianca's lawyers approached the case's judge, Kevin Castel of the Southern District of New York, saying they couldn't locate the cases cited in Mata's lawyers' brief in legal databases.

The made-up decisions included cases titled Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines.

"It seemed clear when we didn't recognize any of the cases in their opposition brief that something was amiss," Avianca's lawyer Bart Banino, of Condon & Forsyth, told CBS MoneyWatch. "We figured it was some sort of chatbot of some kind." 

Schwartz responded in an affidavit last week, saying he had "consulted" ChatGPT to "supplement" his legal research, and that the AI tool was "a source that has revealed itself to be unreliable." He added that it was the first time he'd used ChatGPT for work and "therefore was unaware of the possibility that its content could be false."

He said he even pressed the AI to confirm that the cases it cited were real. ChatGPT confirmed they were. Schwartz then asked the AI for its source.

ChatGPT's response? "I apologize for the confusion earlier," it said. The AI then said the Varghese case could be located in the Westlaw and LexisNexis databases.

Judge Castel has set a hearing regarding the legal snafu for June 8 and has ordered Schwartz and the law firm Levidow, Levidow & Oberman to argue why they should not be sanctioned.

Levidow, Levidow & Oberman could not immediately be reached for comment. 

Read the Source


2024 - AI Incident Database
