Description: In Mavundla v. MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal, the law firm Surendra Singh and Associates, representing South African politician Philani Godfrey Mavundla, reportedly submitted false legal citations, apparently generated by AI, in filings at the Pietermaritzburg High Court. The court reportedly found that many of the cited cases did not exist in any databases, and the firm's legal team and article clerk allegedly failed to verify the references.
Editor Notes: Timeline note: This incident ID takes 09/20/2024 as its incident date, but it was added to the database on 07/05/2025. The reported timeline is as follows: Surendra Singh and Associates reportedly submitted a court filing with allegedly AI-generated fictitious citations sometime in September 2024; on September 20, 2024, the Pietermaritzburg High Court questioned the firm's representatives about the non-existent cases; the matter was adjourned to September 25 for clarification; the final judgment dismissing the appeal and referring the firm to the Legal Practice Council was handed down on January 8, 2025.
Entities
Alleged: Unnamed large language model developers and Unnamed generative AI companies developed an AI system deployed by Surendra Singh and Associates, which harmed Surendra Singh and Associates and Philani Godfrey Mavundla.
Alleged implicated AI system: Unknown large language model
Incident Stats
Risk Subdomain
A further 23 subdomains provide an accessible and understandable classification of hazards and harms associated with AI
3.1. False or misleading information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Misinformation
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Reports Timeline
A Pietermaritzburg law firm has come under fire after apparently using “non-existent” case studies generated from an Artificial Intelligence (AI) application in court proceedings.
This was heard during the handing down of judgment in a leav…
In Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal, the Pietermaritzburg High Court dealt with the consequences of legal practitioners submitting false case citations, most likely generated by art…
At a glance
- South African courts have delivered another unequivocal warning to legal professionals: presenting AI-generated fictitious citations will result in mandatory referral to the Legal Practice Council, regardless of good intentions…
Variants
A "variant" is an AI incident similar to a known case: it shares the same causes, harms, and AI system. Rather than listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.