AI Incident Database

Report 6344

Associated Incidents

Incident 1257 · 3 Reports
Argentine Court Reportedly Annuls Criminal Conviction After Judge Allegedly Used ChatGPT to Draft Ruling Without Disclosure

The judge in Esquel who used ChatGPT to draft a ruling will be investigated.
lanacion.com.ar · 2025

The Criminal Court of Esquel decided on Wednesday to annul, ex officio, a ruling handed down on June 4 by Judge Carlos Rogelio Richeri, who had sentenced Raúl Amelio Payalef to two years and six months of actual imprisonment.

The reason for the annulment is unprecedented in the Argentine justice system: the judge accidentally included in the text of the ruling a phrase that revealed the use of a generative artificial intelligence (AI) assistant—presumably ChatGPT—to draft part of the decision.

The phrase that raised the alarm, included in the operative section of the ruling, read verbatim: "Here is point IV reedited, without citations and ready to copy and paste." This slip allowed Judges Carina Estefanía, Martín Zacchino, and Hernán Dal Verme to realize that the judge had used an automated tool without disclosing it or exercising due human control over its content.

“This evidence (‘cut and paste’) leaves too wide a gap to determine how much of the text is attributable to the generative AI and how much to the judge,” the judges stated in their ruling, emphasizing that the practice runs afoul of the prohibition against delegating judicial decisions to automated systems and violates the principle of the natural judge.

The Court found that Richeri did not exercise the mandatory human supervision and control over the content generated with technological assistance and, moreover, signed the ruling without disclosing the use of AI.

The court stressed that the lack of traceability and record of what was requested of the tool makes it impossible to reconstruct the judge's reasoning, “equating the decision to a merely dogmatic or unmotivated response.”

For these reasons, the judges resolved to annul both the ruling and the trial that preceded it, ordering that the proceedings be conducted again with the participation of a different judge. At the same time, they ordered the case to be sent to the Superior Court of Justice of Chubut (STJ), which will investigate the ethical and disciplinary implications of Richeri's conduct.

"The improper use of generative artificial intelligence produced serious consequences in this process, impacting litigants, the entire citizenry, and the State, which is responsible for guaranteeing access to justice and effective judicial protection," the Chamber's ruling states.

In addition to the potential ethical breach, the judges noted that the magistrate may have violated the confidentiality rules established in Plenary Agreement No. 5435 of the STJ itself, which requires safeguarding the identity of parties, witnesses, and experts in the digital processing of judicial information. The use of an online assistant would have involved the exposure of personal and sensitive data in an environment outside the judicial system.

The situation opens a debate that transcends this particular case: to what extent may AI tools play a role in judicial work? While the use of artificial intelligence for document assistance or case-law analysis is gaining ground in various judicial systems, experts agree that judicial decisions cannot be delegated to automated systems, precisely because of the risks of bias, opacity, and lack of verifiable justification.

The Esquel incident—the first of its kind with disciplinary consequences in the Argentine justice system—has reignited discussions about the ethical and technical limits of artificial intelligence in the judicial sphere.

As the Superior Court of Justice of Chubut analyzes Richeri's conduct, the case will likely serve as a precedent for the responsible use of these tools in the public decision-making process.

Who is Judge Richeri?


Before this questioning of the misuse of artificial intelligence came to light, Judge Carlos Rogelio Richeri already had a distinguished career in the criminal justice system of Chubut.

Richeri was selected as Criminal Judge of Esquel through a competitive process held by the Chubut Magistrates' Council, the results of which were submitted to the Legislature for approval. He formally assumed office on March 1, 2023, in a ceremony at the Esquel Judicial Office.

Before joining the court, he served as Attorney General in Esquel and was a member of the specialized cybercrime team, with experience in cases of high technical complexity. He also teaches courses in Litigation and Digital Evidence, reflecting his interest in the technological aspects of the criminal process.

During his tenure as judge, he presided over high-profile cases, such as the conviction of two brothers and their mother for a multi-million dollar "confidence trick" scam targeting an 87-year-old retiree, and a ruling that upheld the prosecutor's dismissal of a complaint against members of the Superior Court of Justice, in which he issued a stern warning to the lawyers involved.

Some sectors had praised him for taking on complex cases that other judges recused themselves from, while others criticized him for intervening in cases outside his territorial jurisdiction.

In any case, his background as a jurist with technological training makes the episode that now places him at the center of the legal controversy over the misuse of artificial intelligence all the more striking.

