Report 4178

Associated Incidents

Incident 814 · 9 Reports
AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Dad Discovers Murdered Daughter Is Now a Chatbot
newser.com · 2024

A father who continues to grieve the 2006 murder of his 18-year-old daughter says he was appalled to discover her name and yearbook photo were used to create an AI chatbot. Drew Crecente found the chatbot earlier this month on Character.ai, a website that allows users to converse with digital personalities made with generative artificial intelligence, and to create their own. The user-generated profile of Jennifer Ann Crecente, a real person shot to death by an ex-boyfriend, invited users to converse with "a video game journalist and expert in technology, pop culture and journalism." There had been at least 69 chats with the profile when Crecente saw it, reports Business Insider.

Crecente did not interact with the profile, which he believes took details from the biography of Jennifer's uncle, Brian Crecente, founder of the gaming news sites Polygon and Kotaku. "I was just looking for a big flashing red stop button," he tells the Washington Post. "It takes quite a bit for me to be shocked, because I really have been through quite a bit. But this was a new low." He contacted Character.ai but received no response. However, the company did respond to a tweet from Brian Crecente, saying the profile had been removed. Its terms prohibit users from impersonating any person or entity, though the site hosts many imitations of public figures, from Elon Musk to Travis Kelce. Wired reports it found several personas "created without a person's consent, some of whom were women already facing harassment online."

A company rep says "reports of impersonation are investigated ... and the Character is removed if it is found to violate our Terms of Service." But experts say the case raises concerns about what the largely unregulated AI industry is willing and able to do to protect users from harm. "Character, which inked a $2.5 billion deal this year to license its AI models to Google, is among the biggest players in the space," per the Post. "If they're going to say, 'We don't allow this on our platform,' and then they allow it on their platform until it's brought to their attention by somebody who's been hurt by that, that's not right," Jen Caltrider, a privacy researcher at the Mozilla Foundation, tells the Post. Crecente worries about the harms to people who "might not have a voice," per BI. (More artificial intelligence stories.)
