AI Incident Database

Report 5188

Associated Incidents

Incident 1074 · 3 Reports
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Anthropic’s lawyer was forced to apologize after Claude hallucinated a legal citation
techcrunch.com · 2025

A lawyer representing Anthropic admitted to using an erroneous citation created by the company's Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made in a Northern California court on Thursday.

Claude hallucinated the citation with "an inaccurate title and inaccurate authors," Anthropic says in the filing, first reported by Bloomberg. Anthropic's lawyers explain that their "manual citation check" did not catch it, nor several other errors that were caused by Claude's hallucinations.

Anthropic apologized for the error and called it "an honest citation mistake and not a fabrication of authority."

Earlier this week, lawyers representing Universal Music Group and other music publishers accused Anthropic's expert witness, company employee Olivia Chen, of using Claude to cite fake articles in her testimony. Federal judge Susan van Keulen then ordered Anthropic to respond to the allegations.

The music publishers' lawsuit is one of several disputes between copyright owners and tech companies over the supposed misuse of their work to create generative AI tools.

This is the latest instance of lawyers using AI in court and then regretting the decision. Earlier this week, a California judge slammed a pair of law firms for submitting "bogus AI-generated research" in his court. In January, an Australian lawyer was caught using ChatGPT to prepare court documents after the chatbot produced faulty citations.

However, these errors aren't stopping startups from raising enormous rounds to automate legal work. Harvey, which uses generative AI models to assist lawyers, is reportedly in talks to raise over $250 million at a $5 billion valuation.
