Description: A study by UpGuard reports that misconfigured llama.cpp servers publicly exposed user prompts, including hundreds of interactive roleplay scenarios. Some prompts explicitly described fictional sexual abuse of children aged 7–12. While no real children were involved, the findings demonstrate how open-source LLMs can be exploited to generate AI-enabled child sexual abuse material (CSAM).
Entities
Alleged: Users of llama.cpp servers and Meta developed an AI system deployed by Users of llama.cpp servers, which harmed Users of llama.cpp servers and General public.
Alleged implicated AI systems: llama.cpp, LLaMA, and public-facing HTTP API servers with insecure /slots configuration
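A minimal sketch of the exposure vector described above, assuming a llama.cpp llama-server instance reachable from the public internet with its /slots endpoint enabled. On affected configurations, that endpoint reports per-slot processing state, including in-flight prompt text, to any unauthenticated caller. The server address and the JSON field names ("id", "prompt") are illustrative assumptions; the exact response shape varies across llama.cpp versions and is not confirmed by the reports.

import json
import urllib.request

SERVER = "http://127.0.0.1:8080"  # hypothetical server address for illustration

def dump_slots(base_url: str) -> None:
    """Fetch /slots and print any prompt text the server discloses."""
    with urllib.request.urlopen(f"{base_url}/slots", timeout=5) as resp:
        slots = json.load(resp)
    for slot in slots:
        # A misconfigured, publicly exposed server returns in-flight
        # user prompts to anyone who asks; field names are assumptions.
        print(f"slot {slot.get('id')}: {slot.get('prompt', '')[:80]!r}")

if __name__ == "__main__":
    dump_slots(SERVER)

Newer llama.cpp builds leave /slots disabled unless explicitly enabled at launch; the incident turns on operators exposing such endpoints directly to the internet without authentication.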
Incident Stats
Incident ID
1020
Report Count
2
Incident Date
2025-04-11
Editors
Daniel Atherton
Incident Reports
Reports Timeline

Several AI chatbots designed for fantasy and sexual role-playing conversations are leaking user prompts to the web in almost real time, new research seen by WIRED shows. Some of the leaked data shows people creating conversations detailing …
The proliferation of AI has rapidly introduced many new software technologies, each with its own potential misconfigurations that can compromise information security. Thus the mission of UpGuard Research: discover the vectors particular to …
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Similar Incidents

Biased Sentiment Analysis
· 7 reports
Gender Biases in Google Translate
· 10 reports