AI Incident Database

Report 3033

Related Incidents

Incident 541 • 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

A lawyer apologized after ChatGPT made up case law in an affidavit he submitted
businessinsider.com · 2023

ChatGPT has seen its popularity rise in recent months as optimism and skepticism about the new generative AI program soar.

However, the tool is at the heart of a case to discipline a New York lawyer. Steven Schwartz, a personal injury lawyer with Levidow, Levidow & Oberman, faces a sanctions hearing on June 8, after it was revealed that he used ChatGPT to write up an affidavit. 

Another attorney at the same law firm, Peter LoDuca, is also facing sanctions, but in a court filing he said he did not do any of the research in the affidavit.

The affidavit that used ChatGPT was for a lawsuit involving a man who alleged he was injured by a serving cart aboard an Avianca flight, and featured several made-up court decisions.

In an order, Judge Kevin Castel said the incident presented the court with "an unprecedented circumstance."

"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Castel wrote. 

Neither the lawyers for the airline nor Castel himself were able to find the cases mentioned in the affidavit. 

Bart Banino, a lawyer with Condon & Forsyth, which represents Avianca, told The New York Times that his firm could tell the cases were fake but was initially skeptical that a chatbot had been used.

On Thursday, Schwartz apologized to Castel, adding that he had never used the AI tool before and "was unaware of the possibility that its content could be false," the Times reported.

Schwartz also added that ChatGPT was "a source that has revealed itself to be unreliable."

Avianca, LoDuca, and Schwartz did not respond to Insider's requests for comment at the time of publication.


2024 - AI Incident Database