
Report 4141

Associated Incidents

Incident 807 · 9 Reports
ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report

Australian Information Commissioner Halts GenAI Use for Child Protection Agency as ChatGPT Downplays Risk
medianama.com · 2024

Victoria's state Information Commissioner has ordered the Australian state's child protection agency to stop using generative AI services. According to the Information Commissioner, agency staff entered a significant amount of personal information into ChatGPT to draft a report about the risks a particular child might face if the child continued living with parents who were alleged sex offenders.

By using ChatGPT, the staff ended up downplaying the risks to the child, the Information Commissioner explained. For instance, the report portrayed the child's doll, which the father had used for sexual purposes, as evidence of the parents' efforts to ensure that their child had “age-appropriate toys”. The Information Commissioner has ordered the child protection agency to implement Internet Protocol (IP) blocking and/or Domain Name System (DNS) blocking by November 5, 2024, to prevent staff from using these AI services. The order does not cover generative AI tools that are features of search engines, which means staff will still be able to access tools such as Google's AI Overviews.
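DNS-level blocking of this kind works by refusing to resolve the blocked services' domain names, so staff requests never reach them. Below is a minimal sketch of the idea in Python; the blocklisted domains and the refusal behaviour are illustrative assumptions, not details taken from the Commissioner's order.

```python
import socket

# Illustrative blocklist; the domains here are assumptions for the sketch,
# not taken from the Commissioner's order.
BLOCKED_DOMAINS = {"chatgpt.com", "chat.openai.com"}

def resolve(hostname: str) -> str | None:
    """Refuse to resolve blocked hosts; defer to normal DNS otherwise."""
    if any(hostname == d or hostname.endswith("." + d) for d in BLOCKED_DOMAINS):
        # Behaves like an NXDOMAIN answer: the service becomes unreachable.
        return None
    return socket.gethostbyname(hostname)

print(resolve("chatgpt.com"))  # -> None (blocked)
print(resolve("example.com"))  # -> an IP address string
```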

As the Information Commissioner's order notes, the agency's use of ChatGPT is “a very real example of the privacy risks associated with GenAI”. The order adds that harm can arise when personal information is fed into these tools inappropriately. AI tools are probabilistic in nature: if someone types a prompt like “Better late than…”, the model will finish the sentence with “never” because that is the statistically most likely continuation, not because it has checked what is true, as MediaNama's founder-editor Nikhil Pahwa explained during an event earlier this year.
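To make that concrete, here is a toy sketch of most-likely-continuation selection; the probability table is invented for illustration and does not come from any real model.

```python
# Toy illustration of probabilistic next-token selection.
# The continuation table and its probabilities are invented.
CONTINUATIONS = {
    "Better late than": {"never": 0.92, "ever": 0.05, "sorry": 0.03},
}

def complete(prompt: str) -> str:
    """Return the statistically most likely continuation of the prompt."""
    probs = CONTINUATIONS[prompt]
    # The model optimizes for likelihood, not truth or accuracy:
    # it simply returns the highest-probability continuation.
    return max(probs, key=probs.get)

print(complete("Better late than"))  # -> "never"
```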

This means that AI models do not optimize for accuracy when giving answers; they return whatever continuation is most statistically plausible. If government agencies rely on such models, their decisions could be inaccurate or potentially harmful, especially when dealing with sensitive personal information or critical situations involving vulnerable individuals. Governmental use of AI tools also poses privacy risks: if a government agency enters people's personal information into an AI chatbot (such as the personal information of the child in the Australian case), that information may end up as part of the AI company's training dataset.

In November last year, Australia released guidance on how the public sector should use AI tools. The guidance underlined two golden principles:

  • Public service entities should assume that whatever information they enter into an AI model can become public. They should not reveal any classified, personal, or otherwise sensitive information.
  • Public service entities should be able to explain, justify, and take ownership of their advice and decisions.

Besides this, the Australian government also suggested that public service entities make it clear when their decisions are based on AI models. They must acknowledge the biases present in AI tools and have processes in place to ensure that decisions relying on AI-generated output are fair and meet community expectations. Similarly, New Zealand has advised public service bodies not to use generative AI for any sensitive data, and not to input personal data into GenAI tools hosted outside the body's own network.

India also put out a discussion paper highlighting responsible AI principles in 2022. The paper says that AI systems must be reliable and have built-in safeguards to protect stakeholders. They must also treat people in similar circumstances equally and not discriminate against individuals. Further, the principles state that the personal data of all individuals must be kept safe and secure, with only authorised personnel able to access it. However, these principles lack specific instructions for government bodies in India, highlighting a gap in the country's policy frameworks.
