AI Incident Database

Report 3010

Related Incidents

Incident 541: 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

Lawyer's reliance on ChatGPT leads to false case citations in airline lawsuit
siliconangle.com · 2023

A New York lawyer has found himself in trouble in a lawsuit between a man and the airline Avianca Holdings S.A. after presenting nonexistent case citations generated by ChatGPT.

The case involved a man named Roberto Mata suing Avianca, claiming he was injured when a metal service cart struck his knee during a flight. Injury claims are typically uninteresting, aside from broader cultural observations about how litigious the U.S. is, but the case took an interesting twist after the airline moved to have it dismissed.

The New York Times reported Saturday that in response to the filing, lawyers representing Mata submitted a 10-page brief citing more than a half-dozen relevant court cases, arguing that the cases show the “tolling effect of the automatic stay on a statute of limitations.”

One huge problem, however, is that none of the cases was genuine. The lawyer who created the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, had used OpenAI LP’s ChatGPT to write it.

Schwartz, who is said to have practiced law for three decades, defended himself, claiming that he wasn’t aware of the AI’s potential to generate false content. Schwartz told Judge P. Kevin Castel that he had no intent to deceive the court or the airline and vowed not to use ChatGPT again without thorough verification. The unusual situation prompted the judge to call a hearing on potential sanctions against Schwartz, describing the incident as an “unprecedented circumstance” filled with “bogus judicial decisions.”

The incident has sparked discussions among the legal community about the values and risks of AI. Stephen Gillers, a legal ethics professor at New York University School of Law, told the Times that the case highlights that legal professionals can’t simply take the output from an AI and incorporate it into court filings. “The discussion now among the bar is how to avoid exactly what this case describes,” Gillers added.

The case sets a precedent for the role AI plays in legal research and argument construction, raising serious concerns about the reliability of AI tools in the legal profession. It also underscores the hazards of trusting AI outputs without independent verification, in court filings and in general use alike.


2024 - AI Incident Database
