Description: AI children's products by FoloToy (Kumma), Miko (Miko 3), and Character.AI (custom chatbots) allegedly produced harmful outputs, including sexual content, suicide-related advice, and manipulative emotional messaging. Some systems also allegedly exposed user data. Several toys reportedly used OpenAI models.
Entities
Alleged: FoloToy, Miko, Character.AI, Meta, OpenAI, Kumma, Miko 3, Character.ai chatbots, large language models, and OpenAI GPT-family models integrated into third-party toys developed and deployed an AI system, which harmed children interacting with Kumma, children interacting with Miko 3, Character.AI users, parents, children, and the general public.
Alleged implicated AI systems: Character.AI, Kumma, Miko 3, Character.ai chatbots, large language models, and OpenAI GPT-family models integrated into third-party toys
Incident Stats
Incident ID
1277
Report Count
1
Incident Date
2025-11-21
Editors
Daniel Atherton
Incident Reports
Reports Timeline
Artificial intelligence is enabling children's toys, from teddy bears to wheeled robots, to talk back to kids who play with them. Consumer advocacy groups are warning parents to stay away.
The toys are often marketed as engaging, interactiv…
Variants
A "variant" is an AI incident similar to a known case — it shares the same causes, harms, and AI system. Rather than listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Similar Incidents
Did our AI mess up? Flag the unrelated incidents