Associated Incidents
Lawyers acting for a woman who claimed she was defamed by a body corporate in Parkwood, Johannesburg, were left red-faced when it emerged that they had tried to use non-existent judgments generated by ChatGPT to bolster her case.
“The names and citations are fictitious, the facts are fictitious, and the decisions are fictitious,” said a judgment of the Johannesburg regional court, last week.
It also slapped the woman with a punitive costs order.
Magistrate Arvin Chaitram said the incident was a “timely reminder” that “when it comes to legal research, the efficiency of modern technology still needs to be infused with a dose of good old-fashioned independent reading”.
The fictitious judgments caused a two-month delay after Chaitram postponed the case to give the lawyers time to track them down. The case first came before the magistrate in March, when Claire Avidon, counsel for the trustees of the body corporate, argued that a body corporate cannot be sued for defamation. This would have put an end to the case.
But Jurie Hayes, counsel for the plaintiff, Michelle Parker, argued that there were earlier judgments that answered this question. It was just that by the time of the hearing, his team had not yet been able to access them.
Chaitram said in his judgment he decided at the time to grant a postponement to late May. “As the question appeared to be a novel one that could be dispositive of the entire action, the court requested that both parties make a concerted effort to source the suggested authorities.”
What followed was two months in which two sets of lawyers tried to track down judgments that Parker's lawyers, using artificial intelligence (AI), had earlier found references to, but which did not actually exist.
The plaintiff’s attorney, Chantal Rodrigues of Rodrigues Blignaut Attorneys, said that because it was an unprecedented and complex case, “finding an unreported case and precedents is not a simple and straightforward task”.
“All possible avenues for information and potential leads had to be investigated and exhausted ... via old or new-fashioned means.” Her firm explored “one of the most advanced AI tools to date” and then “followed up on the findings gathered”.
This meant “extensive discussions with the various high courts throughout our country, discussions with court registrars in multiple provinces and conversing with law librarians at the Legal Practice Council library,” she said.
The attorney for the body corporate, Max Rossle of Le Mottée Rossle Attorneys, said his team engaged in lengthy correspondence with Rodrigues Blignaut Attorneys, which sent a list of case names complete with citations — references to where the cases were reported.
But when his team tried to track the judgments down, they couldn’t find them. Their junior counsel, Ziphozihle Raqowa, spent 15 hours over four days looking for the judgments, to no avail. The case names were real cases. The citations were real citations. But the citations related to different cases from the ones named. None of the cases or citations related to defamation suits between body corporates and individuals, said Rossle.
When the team sought help from the Johannesburg Bar’s library, the librarians said they could not track down the judgments either, and they suspected the cases had been generated by ChatGPT.
Eventually, on the Friday before they were all due back in court, Rossle contacted Parker’s attorneys and asked if they had found the real judgments. He was told that they were still searching.
In court on Monday “the plaintiff’s counsel explained that his attorney had sourced the cases through the medium of ChatGPT”, said the judgment.
Rodrigues said: “At no stage did we mislead the court or our opposition and the only reason our findings were provided to the opposition was at the request of the regional court magistrate in the pursuit of justice.”
The judgment said Parker’s attorneys had accepted the AI chatbot’s result “without satisfying themselves as to its accuracy”.
But since the legal team had not submitted the judgments to the court, and only to the opposing side, the magistrate said they had not misled the court — which would have meant severe consequences. “It seems that the attorneys were simultaneously simply overzealous and careless,” said the magistrate.
They also “did not intend to mislead anyone”. But the “inevitable result of this debacle” was that the defendant's team was misled, said the judgment. A punitive costs order was therefore reasonable.
“Indeed, the court does not even consider it to be punitive. It is simply appropriate. The embarrassment associated with this incident is probably sufficient punishment for the plaintiff’s attorneys,” he said.
“Courts expect lawyers to bring a legally independent and questioning mind to bear on, especially, novel legal matters, and certainly not to merely repeat in parrot-fashion, the unverified research of a chatbot,” he said.
Rodrigues said: “We shouldn’t act like luddites, resisting the natural progression of technology in the pursuit of justice and it goes without saying that due diligence and rigour is always needed when exploring these uncharted waters.”
The use of ChatGPT for legal research was the subject of media reports in the US last month after a lawyer in a personal injury suit used ChatGPT to prepare a filing, “but the artificial intelligence bot delivered fake cases that the attorney then presented to the court”, reported online news site Forbes.com.
In that case the court was considering sanctioning the attorney because, unlike in this case, the bogus cases cited had been put before the court. But the US attorney said he had not intended to mislead the court, reported Forbes.com, adding that the attorney had explained in court papers that he had not used ChatGPT for legal research before and “had learned of the technology ‘from his college-age children’”.