Description: In Lawrence, Kansas, students allege the Gaggle Safety Management AI wrongly flagged benign schoolwork, including art photos and casual messages, as child pornography or threats. The system reportedly deleted content, blocked an email records request, and led to questioning of students. Critics cite chilling effects and privacy risks. A lawsuit filed in August 2025 challenges the district's use of Gaggle as unconstitutional surveillance. Gaggle reportedly denies compromising privacy.
Editor Notes: The underlying reported events associated with this case occurred during the Lawrence Public Schools' deployment of the Gaggle Safety Management system between late 2023 and 2024. Specific reported harms include the misclassification and deletion of student art, the blocking of a records request, and questioning of students for flagged content. The federal lawsuit challenging the district's use of Gaggle was filed on 08/01/2025, which is taken here as the incident date of record. Subsequent coverage by The Washington Post on 09/24/2025 summarized the allegations and student accounts. This report was ingested into the AI Incident Database on 09/29/2025.
Entities
Alleged: Gaggle developed an AI system deployed by Lawrence Public Schools (USD 497), which harmed Lawrence Public Schools (USD 497) students.
Alleged implicated AI system: Gaggle
Incident Stats
Risk Subdomain
The taxonomy's 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
2.1. Compromise of privacy by obtaining, leaking or correctly inferring sensitive information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Privacy & Security
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Reports Timeline
Students at a Kansas high school sometimes worry as they write class presentations or emails to their teachers. They stop and consider their words. They ask each other: "Will this get Gaggled?"
Anything students at Lawrence High write or up…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.