Description: ChatGPT erroneously alleged that regional Australian mayor Brian Hood had served time in prison for bribery. In fact, Hood was a whistleblower who helped expose a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s. He is considering legal action against OpenAI, ChatGPT's maker, over the false claims.
Editor Notes: See also Incident 855: Names Linked to Defamation Lawsuits Reportedly Spur Filtering Errors in ChatGPT's Name Recognition.
Entities
Alleged: OpenAI developed and deployed an AI system, which harmed Brian Hood.
Incident Stats
Incident ID
507
Report Count
2
Incident Date
2023-03-15
Editors
Sean McGregor, Daniel Atherton
Applied Taxonomies
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
3.1. False or misleading information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Misinformation
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Reports Timeline
SYDNEY, April 5 (Reuters) - A regional Australian mayor said he may sue OpenAI if it does not correct ChatGPT's false claims that he had served time in prison for bribery, in what would be the first defamation lawsuit against the automated …

Brian Hood is a whistleblower who was praised for "showing tremendous courage" when he helped expose a worldwide bribery scandal linked to the Reserve Bank of Australia.
But if you ask ChatGPT about his role in the scandal, you get the …
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.