AI Incident Database

Report 3024

Related Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

Lawyer Blames ChatGPT For Fake Citations In Court Filing
huffpost.com · 2023

A lawyer who relied on ChatGPT to prepare a court filing for his client is finding out the hard way that the artificial intelligence tool has a tendency to fabricate information.

Steven Schwartz, a lawyer for a man suing the Colombian airline Avianca over a metal beverage cart allegedly injuring his knee, is facing a sanctions hearing on June 8 after admitting last week that several of the cases he supplied the court as evidence of precedent were invented by ChatGPT, a large language model created by OpenAI.

Lawyers for Avianca first brought the concerns to the judge overseeing the case.

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” U.S. District Judge P. Kevin Castel said earlier this month after reviewing Avianca’s complaint, calling the situation an “unprecedented circumstance.”

The invented cases included decisions titled “Varghese v. China Southern Airlines Ltd.,” “Miller v. United Airlines Inc.” and “Petersen v. Iran Air.”

Schwartz ― an attorney with Levidow, Levidow & Oberman who’s been licensed in New York for more than 30 years ― then confessed in an affidavit that he’d used ChatGPT to produce the cases in support of his client and was “unaware of the possibility that its content could be false.”

Schwartz “greatly regrets having utilized generative artificial intelligence to supplement to the legal research performed herein and will never do so in the future without absolute verification of its authenticity,” he stated in the affidavit.

Peter LoDuca, another lawyer at Schwartz’s firm, argued in a separate affidavit that “sanctions are not appropriate in this instance as there was no bad faith nor intent to deceive either the Court or the defendant.”

The sanctions may involve Schwartz paying the attorneys’ fees that the other side incurred while uncovering the false information.

This isn’t the first time ChatGPT has “hallucinated” information, as AI researchers refer to the phenomenon. Last month, The Washington Post reported on ChatGPT putting a professor on a list of legal scholars who had sexually harassed someone, citing a Post article that didn’t exist.

“It was quite chilling,” the law professor, Jonathan Turley, said in an interview with the Post. “An allegation of this kind is incredibly harmful.”

Read Source
