AI Incident Database

Report 3717

Associated Incidents

Incident 636 · 5 Reports
AI Romance Apps Reportedly Compromise User Privacy for Data Harvesting

Don’t date robots — their privacy policies are terrible
theverge.com · 2024

Talkie Soulful Character AI, Chai, iGirl: AI Girlfriend, Romantic AI, Genesia - AI Friend & Partner, Anima: My Virtual AI Boyfriend, Replika, Anima: AI Friend, Mimico - Your AI Friends, EVA AI Chat Bot & Soulmate, and CrushOn.AI are not just the names of 11 chatbots ready to play fantasy girlfriend — they’re also potential privacy and security risks.

A report from Mozilla examined those AI companion apps and found that many are deliberately vague about the AI training behind the bot, where their data comes from, how they protect user information, and what their responsibilities are in the event of a data breach. Only one (Genesia) met Mozilla's minimum standards for privacy.

Wired says the AI companion apps reviewed by Mozilla “have been downloaded more than 100 million times on Android devices.” 

“To be perfectly blunt, AI girlfriends are not your friends. Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you,” writes Misha Rykov in the report. For example, the CrushOn.AI app says in its privacy policy that it may collect sexual health information, prescribed medication, and gender-affirming care data.

Several of the apps also mention mental health benefits. Take Romantic AI, which says it’s “here to maintain your mental health.” But inside its terms and conditions, it says, “Romantiс AI MAKES NO CLAIMS, REPRESENTATIONS, WARRANTIES, OR GUARANTEES THAT THE SERVICE PROVIDE A THERAPEUTIC, MEDICAL, OR OTHER PROFESSIONAL HELP.”

Another chatbot maker, Replika, has expanded beyond just AI companionship to make Tomo, a wellness and talk therapy app with an AI guide that brings the user to a virtual zen island. Since I tried the app, Tomo has published a privacy policy, echoing what I was told by Replika CEO Eugenia Kuyda last month: “We don’t share any information with any third parties and rely on a subscription business model. What users tell Tomo stays private between them and their coach.”

Still, Italy banned the company last year, prohibiting it from using personal data in the country since the bot “may increase the risks for individuals still in a developmental stage or in a state of emotional fragility,” according to Reuters.

The internet is rife with people seeking connections with a digital avatar, even before the rise of generative AI. Even ChatGPT, which expressly forbids users from creating AI assistants to “foster romantic relationships,” couldn’t stop people from creating AI girlfriend chatbots on the GPT Store. 

People continue to crave connection and intimacy, even if the other person happens to be powered by an AI model. But as Mozilla put it, don’t share anything with the bots that you don’t want other people to know.

Read the Source

2024 - AI Incident Database
