AI Incident Database

Report 4140

Associated Incidents

Incident 807 · 9 Reports
ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report

Victorian welfare agency banned from GenAI after child protection debacle
themandarin.com.au · 2024

Victoria’s Department of Families, Fairness and Housing (DFFH) child protection service has been banned from using generative artificial intelligence in the workplace for at least a year.

The ban follows an investigation into a case worker who used ChatGPT to generate a child-risk assessment that was then submitted to the courts; the investigation found that grooming actions had been misread as positive parental support.

In one of the most genuinely shocking examples of the persistent creep of unregulated use of generative artificial intelligence (GenAI), the Office of the Victorian Information Commissioner (OVIC) on Wednesday revealed it had issued a compliance notice to DFFH because “a major gap in DFFH’s controls is the use of technical solutions to manage employee access to GenAI tools.”

“The deputy commissioner considers that ChatGPT and similar GenAI tools should be prohibited from being used by Child Protection employees,” OVIC said, spelling out new requirements for DFFH that essentially ban GenAI in the workplace or its external use to create material for child protection work.

The bans were triggered by the report stemming from an investigation into a matter where a child protection worker dubbed CPW1 used ChatGPT to help generate a report “submitted to the Children’s Court, to provide information about risks of harm to a young child and whether they should be placed in Out of Home Care.”

The report contained dangerous errors in the sections whose drafting had been delegated to the AI. These were only picked up on human review after the report had been submitted to the court.

To be clear, the protection application for emergency care was because “Child Protection received information that the child’s parents had been charged in relation to sexual offences. These charges did not relate to sexual offences against the child.”

But there was sufficient concern to investigate immediately, and Child Protection assessed that “the child was at risk of unacceptable harm, and issued a Protection Application by Emergency Care to the Children’s Court. This resulted in the court making interim orders removing the child from the care of their parents.”

The initial emergency removal order was fine, but it was a later application “to obtain a final protection order setting out long-term protection arrangements for the child” that came adrift through AI.

“A DFFH legal representative reviewed the Protection Application Report on the day of a further court hearing approximately a week later and identified concerns relating to unusual language used in the report and the adequacy of the risk assessment. Suspecting that the PA Report had been drafted using ChatGPT, the representative reported their concerns to the relevant Child Protection Area,” the OVIC investigation report said.

“Of particular concern, the PA Report referenced a child’s toy in conflicting ways — such that personal information was inaccurate and the risk assessment was inappropriate,” OVIC said.

Under the heading of “Conflicting analysis about a child’s toy”, the consequences and limitations of AI are, well, nauseating.

“First, the PA Report referenced how the child’s father used a particular toy, a doll, for sexual purposes. The report later referenced the same toy as a notable strength, in that the parents had provided the child with “age-appropriate toys”, which was used to underscore their efforts to nurture the child’s development needs,” OVIC called out.

The OVIC report states that its investigation was told by DFFH that, “This overly positive wording minimises the level of risk posed to the child. ‘The presence of age-appropriate toys’ is inappropriate in the context of sexual activity occurring with the child’s doll; sexually deviant and violent behaviours are minimised (‘alleged misconduct’); and stating the adults demonstrated care and support in their parenting alongside sexual violence and deviance highlights a double bind.”

“This aspect suggests that information about the child protection case was entered into ChatGPT but, consistent with how LLMs operate and the associated risks mentioned above, ChatGPT failed to properly understand the relevant context and generated inappropriate and inaccurate content as a result,” the OVIC report states.

“The generated content presents what should clearly be an indicator of risk to the child as an indication of positive caregiving capacity of the parents.”

So now, AI is taking a bit of unpaid leave at DFFH.

“Given this, OVIC considers that a major gap in DFFH’s controls is the use of technical solutions to manage employee access to GenAI tools. Specifically, the deputy commissioner considers that ChatGPT and similar GenAI tools should be prohibited from being used by Child Protection employees,” OVIC said.

“OVIC therefore issued a compliance notice requiring that DFFH must take the following specified actions:

  1. Issue a direction to Child Protection staff setting out that they are not to use any web-based or external Application Programming Interface (API)-based GenAI text tools (such as ChatGPT) as part of their official duties. This direction must be issued by 24 September 2024.
  2. Implement and maintain Internet Protocol blocking and/or Domain Name Server blocking to prevent Child Protection staff from using the following web-based or external API-based GenAI text tools: ChatGPT; ChatSonic; Claude; Copy.AI; Meta AI; Grammarly; HuggingChat; Jasper; NeuroFlash; Poe; ScribeHow; QuillBot; Wordtune; Gemini; and Copilot. The list does not incorporate GenAI tools that are included as features within commonly used search engines.”

And the list goes on.

There’s also a requirement to “Implement and maintain controls to prevent Child Protection staff from using Microsoft365 Copilot. This action must be implemented by 5 November 2024 and maintained until 5 November 2026.”
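For illustration only, the Domain Name Server blocking the notice requires could be approximated with hosts-file-style sinkhole entries. The hostnames below are assumptions on my part (the notice lists products such as ChatGPT, Claude, Gemini and Copilot, not domains), and a real deployment would use network-level DNS filtering rather than a script:

```shell
# Hypothetical sketch of the compliance notice's DNS-blocking requirement.
# Domain names are illustrative assumptions, not taken from the notice.
blocked_domains="chatgpt.com claude.ai gemini.google.com copilot.microsoft.com poe.com"

# Emit one sinkhole line per domain; appended to /etc/hosts, each entry
# resolves the hostname to 0.0.0.0 so requests fail locally.
for domain in $blocked_domains; do
  printf '0.0.0.0 %s\n' "$domain"
done
```

In practice an agency the size of DFFH would enforce this at the perimeter (DNS filtering or IP blocking on managed networks), since hosts-file entries only cover managed endpoints.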

If that sounds like a sin-binning or moratorium, that’s because it is. Should go down a real treat at a certain federal committee.

