AI Incident Database

Report 5044

Associated Incidents

Incident 998 · 3 Reports
ChatGPT Allegedly Defamed Norwegian User by Inventing Child Homicide and Imprisonment

Norwegian man files complaint against ChatGPT for falsely saying he killed his sons
abc.net.au · 2025

A Norwegian man has filed a complaint after artificial intelligence (AI) chatbot ChatGPT falsely claimed he was convicted of murdering two of his children. 

Arve Hjalmar Holmen was given the false information after he used ChatGPT to ask if it had any information about him.

The response he got back included: "Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event.

"He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020."

However, not all the details were made up.

The number and the gender of his children were correct, as well as his hometown, suggesting it did have some accurate information about him.

Digital rights group Noyb, which filed the complaint with the Norwegian Data Protection Authority on his behalf, claimed the answer ChatGPT gave him was defamatory and breaches European data protection rules.

They said Mr Holmen "has never been accused nor convicted of any crime and is a conscientious citizen".

ChatGPT presents users with a disclaimer at the bottom of its main interface that says the chatbot may produce false results and to "check important info".

But Noyb data protection lawyer Joakim Söderberg said that is insufficient.

"You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true," Mr Söderberg said in a statement.

Demand for OpenAI to be fined

Noyb demanded that OpenAI delete the defamatory output and fine-tune its model to eliminate such inaccuracies.

It also asked that an administrative fine be paid by OpenAI "to prevent similar violations in the future".

Since the incident in August 2024, OpenAI has released a new version of its chatbot, GPT-4.5, which reportedly makes fewer mistakes.

However, experts say that the current generation of generative AI will always "hallucinate".

What is AI 'hallucination'?

AI "hallucination" is when chatbots present false information as facts.

OpenAI says its latest chatbot should make fewer "hallucination" errors, based on a measurement system the company devised.

A German journalist, Martin Bernklau, was falsely described as a child molester, a drug dealer and a con man by Microsoft's AI tool, Copilot.

In Australia, the mayor of regional Victoria's Hepburn Shire Council, Brian Hood, was wrongly described by ChatGPT as a convicted criminal.

And last year, Google's AI Gemini suggested users use glue to stick cheese to pizza, and said geologists recommend humans eat one rock per day.

In a statement, Mr Holmen said this particular hallucination was very damaging.

"Some think that there is no smoke without fire.

"The fact that someone could read this output and believe it is true, is what scares me the most."
