AI Incident Database

Report 3527

Associated Incidents

Incident 623 · 12 Reports
Google Bard Allegedly Generated Fake Legal Citations in Michael Cohen Case

Trump’s former lawyer blames AI for fake citations in legal documents
cointelegraph.com · 2023

Michael Cohen, a former lawyer for Donald Trump, has admitted to mistakenly giving his attorney incorrect case citations generated by the artificial intelligence (AI) chatbot Google Bard.

In a recent court filing, Cohen, who is set to be a witness against Trump in his upcoming criminal trials, admitted to sending Google Bard-generated legal citations to his lawyer, David Schwartz, in support of his case.

"The invalid citations at issue --- and many others that Mr. Cohen found but were not used in the motion --- were produced by Google Bard, which Mr. Cohen misunderstood to be a supercharged search engine, not a generative AI service like Chat-GPT."

United States v. Michael Cohen. Source: Reuters

However, the filing argued that Cohen, who is not an active legal professional, was merely passing the information along to his attorney, and that the citations should have been reviewed before being included in official court documents.

"Mr. Cohen is not a practicing attorney and has no concept of the risks of using AI services for legal research, nor does he have an ethical obligation to verify the accuracy of his research," the filing continued, noting that the citations should have been checked:

"To summarize: Mr. Cohen provided Mr. Schwartz with citations (and case summaries) he had found online and believed to be real. Mr. Schwartz added them to the motion but failed to check those citations or summaries."


This is not the first instance of a lawyer relying on AI for legal research only to discover that it had produced inaccurate results.

Earlier this year, Cointelegraph reported that Steven Schwartz, an attorney with the New York law firm Levidow, Levidow & Oberman, faced criticism for using AI in creating what turned out to be false court citations.

Although Steven Schwartz claimed it was his first time using ChatGPT for legal research, the judge sharply criticized him for the inaccuracies.

"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," the judge stated.
