AI Incident Database

Report 3050

Related Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

Lawyer Regrets Using ChatGPT in Lawsuit After It Cited Made-Up Cases
complex.com · 2023

A lawyer says he "greatly regrets" using ChatGPT in his work for a client who's suing an airline, as the artificial intelligence language model has since been determined to have cited multiple nonexistent cases in its purported research.

As first reported by the New York Times over the weekend, the suit in question stems from a man who alleges he was hurt when a serving cart hit his knee during a flight to New York. The man, identified as Roberto Mata, sued Avianca for the alleged injuries.

But when the airline company pushed for the case to be tossed, the man's legal team---including a lawyer by the name of Steven A. Schwartz---pointed to a number of prior court rulings they argued supported their stance. The real issue here, as it turns out, was that none of these cases were real.

Schwartz, who works as part of the Levidow, Levidow & Oberman law firm, said in a subsequent affidavit---available to read here via a separate report from The Verge---that he had "consulted" ChatGPT "in order to supplement the legal research" process.

This admitted use of ChatGPT resulted in the citation of at least six cases, all of which were later found to be "nonexistent." According to Schwartz, he had not used ChatGPT prior to this incident, claiming this made him "unaware of the possibility that its content could be false." Schwartz, in the same court document, also said he had "no intent to deceive" either the court or the defendants in the case.

Moving forward, Schwartz---who now has a sanctions hearing on the books for next month---said he will not use ChatGPT in the future unless he's able to secure "absolute verification" of any such claims.

Of course, this is far from the first story in which the perils of relying on such technology have been made strikingly clear. While certain CEOs have remained capitalistically (and predictably) bullish on diving all the way into AI, several leaders within the field have cautioned against doing so without regulations first being instituted.

For example, Sam Altman---the CEO of ChatGPT developers OpenAI---warned during a recent Senate Judiciary Committee hearing that this technology "can go quite wrong." In the same hearing, Altman conceded that he was "nervous" about certain aspects of this stage of AI's development, including his "worst fears" that those behind the tech could "cause significant harm to the world."

