AI Incident Database

Report 6264

Associated Incidents

Incident 807 · 9 Reports
ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report

OVIC finds department responsible for breaches of privacy through use of ChatGPT
ovic.vic.gov.au · 2024

Investigation finds that the Department of Families, Fairness and Housing failed to take reasonable steps to ensure the accuracy of personal information and to protect personal information from unauthorised disclosure

OVIC has published an investigation report into the use of ChatGPT by a child protection worker at the Department of Families, Fairness and Housing (DFFH). In this case, the worker used ChatGPT, a generative artificial intelligence (GenAI) tool, when drafting a Protection Application Report (PA Report) -- a report that is submitted to the Children's Court to inform decisions about whether a child requires protection.

The investigation found that:

  • The content generated by ChatGPT and used by the child protection worker when drafting the PA Report contained inaccurate personal information that downplayed the risks to the child in the case.
  • The child protection worker entered a significant amount of personal and sensitive information into ChatGPT, including names and information about risk assessments relating to the child. In doing so, they disclosed this information to OpenAI, an overseas company, and released it outside the control of DFFH.

The Deputy Commissioner found that the controls DFFH had in place were insufficient to manage the risks associated with the use of GenAI tools in a child protection context. She concluded that DFFH contravened IPPs 3.1 and 4.1 by failing to take reasonable steps to ensure the accuracy of personal information and to protect personal information from unauthorised disclosure.

DFFH accepted the findings of the investigation report, and is now required to implement the remedial actions it contains.

The Deputy Commissioner has issued a compliance notice to DFFH to ensure it complies with IPPs 3.1 and 4.1. The notice outlines six specific actions, including a requirement that DFFH block the use of ChatGPT and other similar tools by child protection workers.

The full investigation report is available on OVIC's website.

Quote attributable to Victorian Information Commissioner, Sean Morrison:

This case demonstrates that while some uses of GenAI may be beneficial, there are currently circumstances where the privacy risks involved are simply too great, such as where highly sensitive information is involved.

I therefore encourage all organisations to assess the risks involved in their employees' use of GenAI across their different functions and activities. In line with their obligations under the IPPs, organisations must put in place appropriate controls to mitigate these risks.

