AI Incident Database

Report 3019

Associated Incidents

Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

A lawyer used ChatGPT to cite bogus cases. What are the ethics?
reuters.com · 2023

A New York lawyer is facing potential sanctions over an error-riddled brief he drafted with help from ChatGPT.

It's a scenario legal ethics experts have warned about since ChatGPT burst onto the scene in November 2022, marking a new era for AI that can produce human-like responses based on vast amounts of data.

Steven Schwartz of Levidow, Levidow & Oberman faces a June 8 sanctions hearing before U.S. District Judge P. Kevin Castel after he admitted to using ChatGPT for a brief in his client's personal injury case against Avianca Airlines. The brief cited six non-existent court decisions.

Schwartz said in a court filing that he "greatly regrets" his reliance on the technology and was "unaware of the possibility that its contents could be false."

Lawyers representing Avianca alerted the court to the non-existent cases cited by Schwartz, who did not respond to a request for comment Tuesday.

The American Bar Association’s Model Rules of Professional Conduct do not explicitly address artificial intelligence. But several existing ethics rules apply, experts say.

“You are ultimately responsible for the representations you make,” said Daniel Martin Katz, a professor at Chicago-Kent College of Law who teaches professional responsibility and studies artificial intelligence in the law. “It’s your bar card.”

DUTY OF COMPETENCE

This rule requires lawyers to provide competent representation and be up to date on current technology. They must ensure that the technology they use provides accurate information—a major concern given that tools such as ChatGPT have been found to make things up. And lawyers must not rely too heavily upon the tools lest they introduce mistakes.

“Blindly relying on generative AI to give you the text you use to provide services to your client is not going to pass muster,” said Suffolk University law dean Andrew Perlman, a leader in legal technology and ethics.

Perlman envisions duty of competence rules eventually requiring some level of proficiency in artificial intelligence technology. AI could revolutionize legal practice so significantly that someday not using it could be akin to not using computers for research, he said.

DUTY OF CONFIDENTIALITY

This rule requires lawyers to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Lawyers who use programs like ChatGPT or Bing Chat risk giving AI companies their clients' data to train and improve their models, potentially violating confidentiality rules.

That’s one reason why some law firms have explicitly told lawyers not to use ChatGPT and similar programs on client matters, said Holland & Knight partner Josias Dewey, who has been working on developing internal artificial intelligence programs at his firm.

Some law-specific artificial intelligence programs, including Casetext’s CoCounsel and Harvey, address the confidentiality issue by keeping their data walled off from outside AI providers.

RESPONSIBILITIES REGARDING NONLAWYER ASSISTANCE

Under this rule, lawyers must supervise lawyers and nonlawyers who assist them to ensure that their conduct complies with professional conduct rules. The ABA in 2012 clarified that the rule also applies to non-human assistance.

That means lawyers must supervise the work of AI programs and understand the technology well enough to make sure it meets the ethics standards that attorneys must uphold.

“You have to make reasonable efforts to ensure the technology you are using is consistent with your own ethical responsibility to your clients,” Perlman said.

Reporting by Karen Sloan

