AI Incident Database

Report 3021

Related Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

A Lawyer Used ChatGPT to Support a Lawsuit. It Didn't Go Well.
thestreet.com · 2023

People have tried to use ChatGPT for everything from gauging stock performance and automating work messages to writing college essays that they then pass off as their own.

While the debate over just how far one can (and should) take the use of artificial intelligence rages on, clients paying for professional services might not be happy to see it used on their behalf. 

One lawyer ended up in hot water after using ChatGPT-generated court research in a case he was retained to pursue. ChatGPT is the conversational AI system developed by OpenAI and heavily backed by Microsoft.

As first reported by The New York Times, Roberto Mata enlisted the New York law firm Levidow, Levidow & Oberman after he was struck by a metal cart at John F. Kennedy Airport. 

Lawyer 'Unaware That Its Content Could Be False'

Mata had sued the Colombian carrier Avianca Airlines (AVHOQ) over his injuries. When the airline asked a Manhattan federal court to throw out the case, the judge couldn't find the cases Mata's lawyers had cited to argue why it should proceed.

"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Judge P. Kevin Castel wrote in a request for clarification from the law firm.

After cited cases involving other passengers suing Delta Air Lines (DAL) and Korean Air Lines were found not to exist, lawyer Steven A. Schwartz said he had used ChatGPT while being "unaware that its content could be false."

Prior to producing text on behalf of users, ChatGPT issues warnings that it may "produce inaccurate information."

Schwartz told the judge that he did not intend to deceive the court or the airline, that he "greatly regrets" the situation, and that he wouldn't again use ChatGPT without fully verifying its results.

The judge called the situation an "unprecedented circumstance." 

Schwartz is now facing a June 8 court hearing regarding whether he should be sanctioned, CNN reported.

Debate About When ChatGPT Can Be Used 

ChatGPT use is less frowned upon when it is employed to speed up certain tasks. A web developer going by the name Daniel recently told Vice's Motherboard that he uses the platform to answer certain emails and do the early work of writing code, which he then reviews manually.

In general, however, those using ChatGPT on the job tend to stay quiet about it, given the ongoing debate over how much one can rely on it before doing so becomes unethical.

"ChatGPT does like 80 percent of my job if I'm being honest," a tech worker who got a side job writing marketing materials told Vice. The worker stayed anonymous so as not to get in trouble with his employer.

And clearly no one should be using the program without verifying the accuracy of the results it produces.

Read Source

2024 - AI Incident Database
