AI Incident Database

Report 5167

Associated Incidents

Incident 1039 · 3 Reports
Anysphere AI Support Bot for Cursor Reportedly Invents Login Policy, Leading to Subscription Cancellations

A Big Problem for AI: 'Hallucinations' Getting Worse
newser.com · 2025

A strange thing is happening in the world of artificial intelligence: As the bots grow more powerful, they are generating more mistakes---"hallucinations"---than ever, reports the New York Times. And as the story by tech writers Cade Metz and Karen Weise explains, "It is not entirely clear why." As an example, they report that OpenAI's new and improved o3 reasoning system hallucinated 33% of the time when given a benchmark test involving questions about public figures. That's troubling, but what's more of a head-scratcher is that the error rate is twice that of the company's previous reasoning system. Similar stats at other companies back up the notion that this is an industry-wide trend.
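The 33% figure is a straightforward error fraction: the share of benchmark questions the model answers incorrectly. A minimal sketch, using entirely hypothetical questions and answers (the actual benchmark and scoring method are not specified in the article):

```python
# Hypothetical illustration of how a hallucination rate on a
# question-answering benchmark might be computed: the fraction of
# model answers that contradict the reference facts.

def hallucination_rate(answers, references):
    """Fraction of answers that do not match the reference answers."""
    assert len(answers) == len(references), "lists must align one-to-one"
    wrong = sum(1 for a, r in zip(answers, references) if a != r)
    return wrong / len(answers)

# Hypothetical mini-benchmark of facts about public figures.
refs = ["1961", "Hawaii", "lawyer"]
ans = ["1961", "Kenya", "lawyer"]  # one hallucinated answer out of three

rate = hallucination_rate(ans, refs)
print(f"hallucination rate: {rate:.0%}")  # 1 wrong out of 3 → 33%
```

Real benchmarks use fuzzier matching than strict string equality (paraphrases, partial credit), so this exact-match version understates agreement; it is only meant to show what "hallucinated 33% of the time" measures.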

In a real-world example, they recount how an AI bot for Cursor, a tool for programmers, informed customers that a new company policy forbade them from using Cursor on more than one computer. After an angry backlash, the chief executive had to explain that no such policy existed. The overarching problem is that the bots base their answers on the crunching of vast amounts of data---they can't determine what is objectively true or false. "Despite our best efforts, they will always hallucinate," Amr Awadallah of Vectara, which builds AI tools for businesses, tells the Times. "That will never go away."

  • A post at Futurism by Victor Tangermann sees all of the above as a serious problem for the industry. "In short, despite their best efforts, hallucinations have never been more widespread---and at the moment, the tech isn't even heading in the right direction."
  • Last year, Nvidia CEO Jensen Huang provided a big-picture assessment and predicted it would take a while before bots consistently got their facts straight, per Tom's Hardware. "I think that we're several years away from being able to do that, and in the meantime, we have to keep increasing our computation."
