Description: Google reported to Australia's eSafety Commission that it received 258 complaints globally about AI-generated deepfake terrorism content and 86 about child abuse material made with its Gemini AI. The regulator called this a "world-first insight" into AI misuse. While Google uses hash-matching to detect child abuse content, it lacks a similar system for extremist material.
Editor Notes: Timeline note: Google's reporting period for this data was April 2023 to February 2024. The information was widely reported on March 5, 2025.
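The description notes that Google applies hash-matching to flag known child abuse imagery but has no equivalent system for extremist material. As a rough illustration of the general hash-matching idea only (not Google's actual pipeline; production systems typically use perceptual hashes so that re-encoded copies of known imagery still match, rather than the exact cryptographic digests used here), the Python sketch below checks files against a hypothetical set of known hashes. All names and digest values are illustrative assumptions.

```python
import hashlib

# Hypothetical set of hex digests of previously identified files (illustrative only).
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of_file(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_known_content(path: str) -> bool:
    """True if the file's digest appears in the known-hash set."""
    return sha256_of_file(path) in KNOWN_HASHES
```

Exact-digest matching of this kind only catches byte-identical copies, which is part of why detection of newly generated material (as in this incident) is harder than matching a curated hash list.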
Entities
Alleged: Google and Gemini developed and deployed an AI system, which harmed General public, General public of Australia, Google Gemini users, Victims of deepfake terrorism content, Victims of deepfake child abuse, and Victims of online radicalization.
Alleged implicated AI system: Gemini
Incident Stats
Incident ID: 963
Report Count: 1
Incident Date: 2025-03-05
Editors: Daniel Atherton
Incident Reports
SYDNEY, March 6 (Reuters) - Google has informed Australian authorities it received more than 250 complaints globally over nearly a year that its artificial intelligence software was used to make deepfake terrorism material.
The Alphabet-own…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.