AI Incident Database

Incident 1216: ChatGPT Reportedly Misleads Users About Soundslice Features, Allegedly Prompting Unplanned Product Development

Description: ChatGPT reportedly misinformed users that Soundslice's music software could import ASCII tablature, a feature the company did not offer. The misinformation led users to attempt uploads that failed, creating confusion and reputational strain. Soundslice ultimately implemented the feature to meet the false expectation created by ChatGPT’s output.
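For context on the format at the center of the incident: ASCII tablature encodes guitar music as plain-text lines of dashes and fret numbers. The sketch below is purely illustrative and hypothetical — it is not Soundslice's implementation — showing the kind of input users were pasting after ChatGPT told them the product could import it.

```python
# Illustrative sketch only: a minimal ASCII guitar-tablature parser.
# NOT Soundslice's code; it just demonstrates the input format in question.

def parse_ascii_tab(tab: str) -> list[tuple[str, int, int]]:
    """Return (string_name, fret, column) events from a block of ASCII tab.

    Each line looks like 'e|--0--2--3--|'; digits mark fretted notes,
    and the column position stands in for timing.
    """
    events = []
    for line in tab.strip().splitlines():
        name, _, frets = line.partition("|")
        col = 0
        while col < len(frets):
            if frets[col].isdigit():
                start = col
                # Consume multi-digit frets such as '12'.
                while col < len(frets) and frets[col].isdigit():
                    col += 1
                events.append((name.strip(), int(frets[start:col]), start))
            else:
                col += 1
    return events


tab = """\
e|--0--2--3--|
B|--1--3--3--|
"""
print(parse_ascii_tab(tab))
```

A real importer would also need to align columns across strings into chords and infer rhythm, which is what makes ASCII tab import a nontrivial feature to ship.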


Entities

Alleged: OpenAI and ChatGPT developed and deployed an AI system, which harmed Soundslice.
Alleged implicated AI systems: Soundslice and ChatGPT

Incident Stats

Incident ID: 1216
Report Count: 1
Incident Date: 2025-07-07
Editors: Daniel Atherton

Incident Reports

Reports Timeline

Adding a feature because ChatGPT incorrectly thinks it exists
holovaty.com · 2025

Well, here's a weird one.

At Soundslice, our sheet music scanner digitizes music from photographs, so you can listen, edit and practice. We continually improve the system, and I keep an eye on the error logs to see which images are getting …

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

Inappropriate Gmail Smart Reply Suggestions
Nov 2015 · 22 reports

YouTube's AI Mistakenly Banned Chess Channel over Chess Language Misinterpretation
Jun 2020 · 6 reports

Fake LinkedIn Profiles Created Using GAN Photos
Feb 2022 · 4 reports


2024 - AI Incident Database