AI Incident Database

Report 4137

Associated Incidents

Incident 807 · 9 Reports
ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report

AI ban ordered after child protection worker used ChatGPT in Victorian court case
theguardian.com · 2024

Victoria's child protection agency has been ordered to ban staff from using generative AI services after a worker was found to have entered significant amounts of personal information, including the name of an at-risk child, into ChatGPT.

The Department of Families, Fairness and Housing reported the matter to the Office of the Victorian Information Commissioner in December last year after the worker was suspected of using ChatGPT while drafting a protection application report.

Ovic said the report had been submitted to the children's court in a case concerning a young child whose parents had been charged in relation to sexual offences. The charges did not relate to offences against the child.

In Ovic's report on its investigation into the matter released Tuesday, the commissioner, Sean Morrison, found there were multiple indications throughout the report that ChatGPT had been used. This included the use of language not standard under child protection guidelines and inappropriate sentence structures.

"More significantly, parts of the report included personal information that was not accurate," the report said.

"Of particular concern, the report described a child's doll -- which was reported to child protection as having been used by the child's father for sexual purposes -- as a notable strength of the parents' efforts to support the child's development needs with 'age-appropriate toys'."

Ovic said the use of ChatGPT had the effect of downplaying the severity of the actual or potential harm to the child, with the potential to affect decisions about the child's care.

But it also said the report did not change the decision-making of either the child protection agency or the court.

Entering the information into ChatGPT was an unauthorised disclosure of information in the department's control, Ovic said.

After further investigation of the department's use of ChatGPT, Ovic found that the worker may have used ChatGPT in drafting child protection-related documents in 100 cases, and that across the department nearly 900 employees, almost 13% of the workforce, had accessed the ChatGPT website between July and December 2023.

Ovic found none of the other uses had the same potential impact as the initial report.

The worker admitted to using ChatGPT to generate the report "to save time and to present work more professionally" but denied entering personal information.

Co-workers said the worker in question "had demonstrated their use of ChatGPT to others, and that this involved inputting client names into the tool to create content".

Ovic found that, on the balance of probabilities, the worker had entered personal information.

The regulator issued orders to the department, including blocking the IP addresses and domains of a range of generative AI websites, including ChatGPT, Meta AI, Gemini and Copilot. The block must remain in place for two years from 5 November.

In response, the department said it accepted the findings and would work to implement the orders. It said it no longer employed the worker.

Ovic left the door open to the use of generative AI in future but said the risk environment would need to change, and that the bar would be high for child protection.

"The deputy commissioner believes there may be some specific use cases where the risk is less than others, but that child protection, by its nature, requires the very highest standards of care," Ovic said.

"Any application to vary the specified actions in relation to child protection staff, information, or activities would need to be accompanied by the highest standards of verifiable evidence."

