AI Incident Database

Report 3042

Associated Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

Attorneys Face Sanctions After Citing Information ‘Hallucinated’ by ChatGPT
pymnts.com · 2023

Two attorneys may face judicial sanctions after citing cases that were made up by ChatGPT.

The attorneys could even be disbarred after using information “hallucinated” by the generative artificial intelligence (AI) tool on two separate occasions, CNBC reported Tuesday (May 30).

In the first instance, the attorneys filed motions in a federal court in New York City that cited nine legal cases that the opposing counsel later told the judge they could not find, according to the report.

The judge ordered the attorneys to provide copies of the cases they had cited without telling them that both he and the opposing counsel had determined that the cases did not exist, the report said.

The attorneys filed the full text of eight of the nine cases, per the report.

It was later determined that both the initial citations and the full text of cases had been invented by ChatGPT, according to the report.

The judge has set a hearing at which the attorneys must explain themselves, the report said.

“The Court is presented with an unprecedented circumstance,” the judge said, per the report.

As PYMNTS reported in March, “hallucination” — which is AI researchers’ term for the generation of text that is completely false or misleading — is a problem that affects all chatbots.

The problem is not unique to ChatGPT; it affects AI systems more broadly, because they are trained on historical data.

These systems are inherently limited by a foundation of information that will never be truly up to date.

Large language models (LLMs) like ChatGPT are also prone to hallucination and to returning inaccurate or misleading information because bad data that enters one response can be further propagated, serving as an informational foundation for future responses the AI is tasked with generating.

This report comes at a time when tech industry leaders like Amazon, Microsoft, Alphabet and Meta are doubling down on future-fit innovations in generative AI and machine learning (ML).

During recent earnings calls by these companies, “AI” was mentioned by executives more than 200 times.

These companies are already using AI behind the scenes and are now expanding it to more consumer-facing applications.

Read the Source

2024 - AI Incident Database
