AI Incident Database

Report 6273

Associated Incidents

Incident 12342 Report
Purported AI-Generated Explicit Deepfakes of Sydney High School Students Reportedly Circulated Online

Deepfake image-based abuse happening at least once a week in schools, eSafety data shows
abc.net.au · 2025

Reports of deepfake image-based abuse have doubled in the country over the past 18 months, with at least one incident emerging out of Australian schools each week.

The figures were revealed on Friday by Australia's eSafety Commissioner and come after the ABC uncovered a police investigation into female students being targeted at a high school in Sydney's north.

Commissioner Julie Inman Grant confirmed her investigators were in talks with NSW Police and the NSW Department of Education over the circulation of the digitally altered explicit images in online groups.

"We've put out a deepfake image-based abuse incident management tool for schools so they know when to go to police and then they know when to come to us to have the content taken down," Ms Inman Grant told the ABC.

The ABC understands several families from the school attended Eastwood Police Station on Wednesday evening.

Ms Inman Grant said school-based incidents were only a small proportion of the overall reports eSafety was receiving.

"We are seeing deepfaked image-based abuse incidents happening at least once a week in Australian schools.

"This is real cause for concern. This is really putting potential online harms on steroids."

She warned it was just the tip of the iceberg following the recent release of OpenAI's Sora.

"[It is] an AI-generated social media app where you're able to harvest images of someone else and create a hyper-realistic deepfake video in a matter of seconds," she said.

Global impact from US spike in deepfakes

Colm Gannon is the chief executive of the International Centre for Missing and Exploited Children. He said the issue was growing not just in Australia but in the United States and around the world.

He said reports from social media platforms were revealing a concerning trend in the US.

"They've seen a 1,325 per cent increase of AI-generated material and deepfakes that are actually affecting children all over the world," he said.

"That escalation is to such an extent that we have the federal government passing legislation.

"We also have the NSW government with a bill currently before the NSW Parliament passing legislation and we have other states and territories passing legislation to combat the harms towards children."

Ms Inman Grant believed Australia was leading the way in its response to the trend and was continuing conversations with international counterparts on implementing restrictions.

"We are taking some action today against some 'nudifying' services and in this case against a company that probably makes some of the most popular undressing apps that are used by at least 100,000 people in Australia," she said.

Ms Inman Grant said that, working alongside its sister regulator in the UK, Ofcom, eSafety has a 98 per cent success rate in getting deepfake images of this nature removed from platforms.

Female students 'scared it's going to happen to them'

NSW Women's Safety Commissioner Hannah Tonkin said she had spoken with high school female students who were terrified of being targeted.

"Many are seeing it happen to their friends and they're really scared it's going to happen to them," Dr Tonkin said.

She said 'nudify' apps are "disgusting technology" designed to target, dehumanise and inherently degrade women.

"These types of incidents can have devastating impacts, particularly on women and girls who are overwhelmingly the targets."

Acting NSW Minister for Education Courtney Houssos said the government had held productive conversations on the topic.

"We need to work together to address this scourge for our students, and sadly for some of our teachers," Ms Houssos said.

She said the NSW Department of Education was now working with the eSafety Commissioner to develop digital literacy education for students to better equip them on how to use the internet and AI appropriately.

