AI Incident Database

Report 4136

Associated Incidents

Incident 807 · 7 Reports
ChatGPT Introduces Errors in Critical Child Protection Court Report

Vic case worker used ChatGPT to draft child protection report
itnews.com.au · 2024

Victoria's Department of Families, Fairness and Housing (DFFH) has been directed to ban and block access to a range of generative AI tools after a child protection worker used ChatGPT to draft a report submitted to the Children's Court.

The state's information commissioner said the resulting report "contained inaccurate personal information, downplaying the risks to the child."

"Fortunately, it did not change the outcome of the child's case, but it is easy to see the potential harm that could have arisen," the Office of the Victorian Information Commissioner (OVIC) said in an investigation [pdf].

The report submitted to the court is meant to contain a child protection worker's own "assessment of the risks and needs of the child, and of the parents' capacity to provide for the child's safety and development."

ChatGPT, in this instance, was found to have played a role in "describing the risks posed to a young child if they continued living at home with their parents, who had been charged with sexual offences."

The case worker was found to have entered "personal and sensitive" case-specific information into ChatGPT to generate the report text - a serious breach of the state's privacy rules.

"The information in this case was disclosed to OpenAI, an overseas company, and released outside the control of DFFH," OVIC said.

"OpenAI now holds that information and can determine how it is further used and disclosed."

OVIC's examination of the report found numerous "indicators" of ChatGPT use, ranging from inaccuracies in the personal details of the case to "language not commensurate with employee training and child protection guidelines, as well as inappropriate sentence structure."

ChatGPT use may also have extended beyond this one court case.

OVIC said an internal review by the department into all cases handled by the case worker's unit over a one-year period "identified 100 cases with indicators that ChatGPT may have been used to draft child protection related documents."

Additionally, in the back half of 2023, "nearly 900 employees across DFFH had accessed the ChatGPT website, representing almost 13 percent of its workforce."

OVIC found that staff had received no specific training or guidance around generative AI use.

In response, OVIC issued a compliance notice to the department to ban the use of generative AI tools, and to technically block access to them internally.

The department was meant to have issued a direction to all staff banning use of public generative AI tools by yesterday.

It has until November 5 to implement technical controls blocking access to "web-based or external API-based GenAI text tools [including] ChatGPT; ChatSonic; Claude; Copy.AI; Meta AI; Grammarly; HuggingChat; Jasper; NeuroFlash; Poe; ScribeHow; QuillBot; Wordtune; Gemini; and [Microsoft 365] Copilot."

The case worker is no longer employed by the department.

In a letter to OVIC, the department "accepted there was an unauthorised use" of ChatGPT when "developing" the child protection report, but went on to claim there was no evidence of ChatGPT being used by "any staff ... to generate content for sensitive work matters" - an assertion OVIC rejected.

DFFH added that the incident was "isolated" and that use of GenAI in the department "is not prolific."
