AI Incident Database

Report 3098

Related Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
For The Love Of All That Is Holy, Stop Blaming ChatGPT For This Bad Brief
abovethelaw.com · 2023

“Here’s What Happens When Your Lawyer Uses ChatGPT,” blasted the New York Times headline to the delight of tech-skeptic lawyers everywhere. A seemingly quite irate Judge Kevin Castel of the Southern District of New York issued a show cause order directed at the law firm of Levidow, Levidow & Oberman and its attorneys Peter LoDuca and Steven Schwartz, asking why he shouldn’t impose sanctions after a number of cases in the firm’s recent filing turned out to be made up — another of ChatGPT’s well-documented court case hallucinations.

An airline defendant filed a motion to dismiss a personal injury claim that had wound its way into federal court. The case belonged to Schwartz, but after it was removed to federal court, LoDuca became counsel of record and filed the response to the motion to dismiss under his name. The case deals with a lot of thorny issues — competing statutes of limitations, the Bankruptcy Code, international treaties — but the response managed to find on-point citations for every procedural hurdle. Pretty compelling stuff!

The United States Court of Appeals for the Eleventh Circuit specifically addresses the effect of a bankruptcy stay under the Montreal Convention in the case of Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019), stating “Appellants argue that the district court erred in dismissing their claims as untimely. They assert that the limitations period under the Montreal Convention was tolled during the pendency of the Bankruptcy Court proceedings. We agree. The Bankruptcy Code provides that the filing of a bankruptcy petition operates as a stay of proceedings against the debtor that were or could have been commenced before the bankruptcy case was filed. 11 U.S.C. § 362(a)….

And the purported quote from the Eleventh Circuit, detailing the precise result plaintiff sought under the precise instance of a bankruptcy-impaired Montreal Convention statute of limitations, goes on with multiple internal citations for another half a page.

Unfortunately, this case doesn’t exist. And some of the internal citations don’t either. Perhaps finding a whole page of directly quotable support for a hyperspecific legal question should’ve tipped someone off?

Schwartz had asked the buzzy AI application to give him a research assist, unaware that those of us covering generative AI have flagged its propensity to flagrantly make stuff up to please its user. And it’s very, very confident in its output regardless of its ability to back it up, which is why the epithet “Mansplaining as a Service” rings so true. That’s also why the conversation among the tech savvy has advanced from what generative AI can accomplish to how do you put ethical and professional guardrails on this thing?

But while the media — and the social media zeitgeist — spent the weekend ripping ChatGPT, this isn’t about generative AI. They can chase the clicks with their GPT headlines, but this is a simple lawyering story.

This isn’t any different from turning in a brief with red-flagged cases or just slapping the first 10 results from a database search into the filing and calling it a day. The problem wasn’t the search results that ChatGPT produced; it was the lawyer failing to bother to read the full opinions in the cases he chose to cite.

That’s why I’m not really buying the Schwartz defense that he had never really used the app before and “therefore was unaware of the possibility that its content could be false.” It doesn’t matter if the results were right or wrong — you still have to read the frickin’ opinions! Back in the day, a Westlaw or Lexis search would rarely turn up the right result on a lawyer’s first stab at it, and you had to check to make sure the opinion really was useful.

Adding generative AI to the mix of research tools doesn’t alter that calculus. It just — when deployed with the right protections — reduces the number of tries an attorney will need to get the right cases. That’s going to make a significant improvement in the accuracy and efficiency of lawyering. But it’s not replacing the attorney.

Don’t blame AI for lawyering fails. Like a lot of things in tech, the source of the error here lies between the keyboard and the chair.
