AI Incident Database

Report 3529

Related Incidents

Incident 623 · 12 Reports
Google Bard Allegedly Generated Fake Legal Citations in Michael Cohen Case

Donald Trump's Ex-Lawyer Michael Cohen Cites Fake Cases Imagined By Google's AI Bard; Thought It Was 'Super-Charged Search Engine'
english.jagran.com · 2023

Soon after reports of lawyers using ChatGPT for legal research and ending in disaster gripped the media, Google Bard has emerged as a second example. In the latest case, Michael Cohen, the former lawyer of Donald Trump, acknowledged that he cited AI-generated fake cases in a legal document, according to The New York Times.

The filing said Cohen used Bard, the AI model from California-based tech giant Google, for research, assuming it was a "super-charged search engine" rather than an AI tool. The document in question was a motion asking a federal judge to shorten Cohen's probation, The Verge noted. After scanning the brief, US District Judge Jesse Furman observed that "none of these cases exist" and demanded an explanation from David Schwartz, Cohen's lawyer.

In response, Cohen admitted in writing that he had used Google's ChatGPT rival for legal research and shared the findings with his lawyer, but said his intent was not to mislead the court. Cohen added that he was unaware the cases might be fake, and that his lawyer had included them without any background check.

"As a non-lawyer I have not kept up with emerging trends (and related risks) in legal technology and did not know that Google Bard was a generative text service that, like Chat-GPT, could show citations and descriptions that looked real but actually were not," Cohen stated.

He added that he had used Bard repeatedly in the past, in other contexts, to find accurate information. This is not the first time instances of AI citing non-existent cases have surfaced in the media. Top companies competing in the AI space (including OpenAI and Google), along with experts, have regularly cautioned about the accuracy of the information these models produce.

Output from these models must be verified through human review before it can be treated as legitimate. Meanwhile, a radio host filed a lawsuit against OpenAI after its viral ChatGPT model generated false defamatory claims about him. In another instance, the language model falsely accused a law professor of sexual harassment. These cases serve as a stark reminder of the limitations of the evolving technology.

Read the Source

2024 - AI Incident Database
