AI Incident Database

Incident 1277: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

Description: AI children's products by FoloToy (Kumma), Miko (Miko 3), and Character.AI (custom chatbots) allegedly produced harmful outputs, including sexual content, suicide-related advice, and manipulative emotional messaging. Some systems also allegedly exposed user data. Several toys reportedly used OpenAI models.

Entities

Alleged: FoloToy, Miko, Character.AI, Meta, OpenAI, Kumma, Miko 3, Character.ai chatbots, and Large language models and OpenAI GPT-family models integrated into third-party toys developed and deployed an AI system, which harmed Children interacting with Kumma, Children interacting with Miko 3, Character.AI users, Parents, Children, and General public.
Alleged implicated AI systems: Character.AI, Kumma, Miko 3, Character.ai chatbots, and Large language models and OpenAI GPT-family models integrated into third-party toys

Incident Stats

Incident ID: 1277
Report Count: 1
Incident Date: 2025-11-21
Editors: Daniel Atherton

Incident Reports

Reports Timeline

A teddy bear powered by AI told safety testers about knives, pills and sex
washingtonpost.com · 2025

Artificial intelligence is enabling children's toys, from teddy bears to wheeled robots, to talk back to kids who play with them. Consumer advocacy groups are warning parents to stay away.

The toys are often marketed as engaging, interactiv…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

Google’s YouTube Kids App Presents Inappropriate Content
May 2015 · 13 reports

Alexa Plays Pornography Instead of Kids Song
Dec 2016 · 16 reports

Security Robot Rolls Over Child in Mall
Jul 2016 · 27 reports

