AI Incident Database

Report 4138

Associated Incidents

Incident 807 · 9 Reports
ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report

Victorian child protection worker uses ChatGPT for protection report
cyberdaily.au · 2024

Victoria’s child protection agency has been ordered to ban the use of AI tools after a case worker used ChatGPT to write a child’s protection report, resulting in sensitive data being submitted and a number of inaccuracies being generated.

The Office of the Victorian Information Commissioner (OVIC) received reports of the incident in December last year after the Department of Families, Fairness and Housing (DFFH) discovered that a case worker was suspected of drafting a protection application report using ChatGPT.

The report was used in the Children’s Court in a case involving a child who had changed families as a result of sexual offences.

Now, the OVIC has found the DFFH failed to “take reasonable steps” to protect the child’s personal data and ensure accurate reporting.

While the outcome for the child did not change, the OVIC determined that “a significant amount of personal and delicate information” was input into ChatGPT, meaning it was disclosed to OpenAI outside the control of the DFFH.

It also found that the report contained inaccurate data “which downplayed risks to the child in the case”.

“Of particular concern, the report described a child’s doll – which was reported to child protection as having been used by the child’s father for sexual purposes – as a notable strength of the parents’ efforts to support the child’s development needs with ‘age-appropriate toys’,” said Victorian information commissioner Sean Morrison.

Investigations highlighted several indicators that ChatGPT may have been used for the report. Eventually, the worker admitted to using ChatGPT “to save time and to present work more professionally”, but never admitted to submitting sensitive data.

The OVIC further investigated the department and found 100 cases in which ChatGPT may have been used in drafting protection-related documents.

Additionally, almost 900 employees (almost 13 per cent of the department’s workers) were found to have accessed ChatGPT between July and December 2023.

Following the findings, the OVIC ordered the DFFH to effectively ban the use of generative AI tools like ChatGPT, requiring the department to block access to generative AI websites. The block will last two years, starting 5 November.

The OVIC has not ruled out use of the technology entirely, but any future use would need to specifically ensure the safety of vulnerable children.

“The deputy commissioner believes there may be some specific use cases where the risk is less than others, but that child protection, by its nature, requires the very highest standards of care,” said the OVIC.

“Any application to vary the specified actions in relation to child protection staff, information, or activities would need to be accompanied by the highest standards of verifiable evidence.”

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding for and experience writing in the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music, and spends his time playing in bands around Sydney.


2024 - AI Incident Database

  • Terms of use
  • Privacy Policy
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • e1b50cd