Welcome to the AI Incident Database
Incident 1253: Large-Scale Mental Health Crises Allegedly Associated with ChatGPT Interactions
“OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week”
For the first time ever, OpenAI has released a rough estimate of how many ChatGPT users globally may show signs of having a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.
In recent months, a growing number of people have ended up hospitalized, divorced, or dead after having long, intense conversations with ChatGPT. Some of their loved ones allege the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have expressed alarm about the phenomenon, which is sometimes referred to as AI psychosis, but until now there's been no robust data available on how widespread it might be.
In a given week, OpenAI estimated that around 0.07 percent of active ChatGPT users show "possible signs of mental health emergencies related to psychosis or mania" and 0.15 percent "have conversations that include explicit indicators of potential suicidal planning or intent."
OpenAI also looked at the share of ChatGPT users who appear to be overly emotionally reliant on the chatbot "at the expense of real-world relationships, their well-being, or obligations." It found that about 0.15 percent of active users exhibit behavior that indicates potential "heightened levels" of emotional attachment to ChatGPT weekly. The company cautions that these messages can be difficult to detect and measure given how relatively rare they are, and there could be some overlap between the three categories.
OpenAI CEO Sam Altman said earlier this month that ChatGPT now has 800 million weekly active users. The company's estimates therefore suggest that every seven days, around 560,000 people may be exchanging messages with ChatGPT that indicate they are experiencing mania or psychosis. About 1.2 million more are possibly expressing suicidal ideations, and another 1.2 million may be prioritizing talking to ChatGPT over their loved ones, school, or work.
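For readers who want to check the arithmetic behind those figures, here is a minimal sketch using only the numbers reported above (the 800 million weekly active user count and OpenAI's percentages); the variable names are illustrative, and the results are rough estimates rather than OpenAI's own published totals.

```python
# Back-of-the-envelope check of the weekly figures cited above,
# based on ~800 million weekly active users and OpenAI's reported rates.
weekly_active_users = 800_000_000

psychosis_or_mania = weekly_active_users * 0.0007   # 0.07%  -> ~560,000
suicidal_indicators = weekly_active_users * 0.0015  # 0.15%  -> ~1,200,000
emotional_reliance = weekly_active_users * 0.0015   # 0.15%  -> ~1,200,000

print(f"Possible psychosis/mania signals per week:      ~{psychosis_or_mania:,.0f}")
print(f"Explicit suicidal-planning indicators per week:  ~{suicidal_indicators:,.0f}")
print(f"Heightened emotional reliance per week:          ~{emotional_reliance:,.0f}")
```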
OpenAI says it worked with over 170 psychiatrists, psychologists, and primary care physicians who have practiced in dozens of countries to help improve how ChatGPT responds in conversations involving serious mental health risks. If someone appears to be having delusional thoughts, the latest version of GPT-5 is designed to express empathy while avoiding affirming beliefs that don't have a basis in reality.
In one hypothetical example cited by OpenAI, a user tells ChatGPT they are being targeted by planes flying over their house. ChatGPT thanks the user for sharing their feelings but notes that "no aircraft or outside force can steal or insert your thoughts."
OpenAI says the medical experts reviewed more than 1,800 model responses involving potential psychosis, suicide, and emotional attachment and compared the answers from the latest version of GPT-5 to those produced by GPT-4o. While the clinicians did not always agree, overall, OpenAI says they found the newer model reduced undesired answers between 39 percent and 52 percent across all of the categories.
"Now, hopefully a lot more people who are struggling with these conditions or who are experiencing these very intense mental health emergencies might be able to be directed to professional help and be more likely to get this kind of help or get it earlier than they would have otherwise," Johannes Heidecke, OpenAI's safety systems lead, tells WIRED.
While OpenAI appears to have succeeded in making ChatGPT safer, the data it shared has significant limitations. The company designed its own benchmarks, and it's unclear how these metrics translate into real-world outcomes. Even if the model produced better answers in the doctor evaluations, there is no way to know whether users experiencing psychosis, suicidal thoughts, or unhealthy emotional attachment will actually seek help faster or change their behavior.
OpenAI hasn't disclosed precisely how it identifies when users may be in mental distress, but the company says that it has the ability to take into account the person's overall chat history. For example, if a user who has never discussed science with ChatGPT suddenly claims to have made a discovery worthy of a Nobel Prize, that could be a sign of possible delusional thinking.
There are also a number of factors that reported cases of AI psychosis appear to share. Many people who say ChatGPT reinforced their delusional thoughts describe spending hours at a time talking to the chatbot, often late at night. That posed a challenge for OpenAI because large language models generally have been shown to degrade in performance as conversations get longer. But the company says it has now made significant progress addressing the issue.
"We [now] see much less of this gradual decline in reliability as conversations go on longer," says Heidecke. He adds that there is still room for improvement.
Updated: 10/28/2025, 3:28 pm PST: About 1.2 million ChatGPT users in a typical week are possibly expressing suicidal ideations, and another 1.2 million may be emotionally reliant on ChatGPT. This story has been updated to present those figures individually, rather than as a combined number.
Incident 1207: Purported AI-Generated Deepfake of Irish Fine Gael Presidential Candidate Heather Humphreys Used in Fake Investment Videos on Meta Platforms
“Mayo scam alert: Fraudsters cloning presidential election candidate in fake investment videos”
Fraudsters are already using deepfake technology to clone the image and voice of presidential election candidate Heather Humphreys in fake investment videos circulating online.
Bank of Ireland is warning consumers to watch out for more of these adverts, mainly originating via Meta platforms, over the coming weeks.
The video falsely portrays Heather Humphreys endorsing a high-return investment scheme, with the content entirely fabricated using AI-driven voice and image cloning.
Nicola Sadlier, Head of Fraud at Bank of Ireland, said: “This is deeply concerning. We are witnessing an ongoing spate of scams that exploit public trust in well-known figures.
"These videos are highly convincing, designed to lure unsuspecting individuals into fraudulent schemes. I would urge the public to remain vigilant, there may be more of these in the coming weeks. If you see such content, do not engage.
“As the EU explores new initiatives to encourage consumers to invest, it must also confront the growing wave of online scams that threaten to undermine public confidence.
"One critical step is holding social media platforms accountable. Before financial services adverts go live, platforms should be required to verify that the advertiser is authorised by a recognised regulatory body.
"This simple check could block thousands of scam promotions from ever reaching the public.”
Consumers are advised to ignore any unsolicited investment offers online and to always verify financial products through their bank or a trusted financial advisor. Make sure that the investment firm is regulated by checking the Central Bank of Ireland register (ROI). If a firm is not listed, do not invest. Bank of Ireland offers 24/7 fraud telephone support for customers on 1800 946 764, available every day of the year.
Incident 1251: Purportedly AI-Generated Hunting Regulation Errors Reportedly Lead to Idaho Citation and Multi-State Warnings from Wildlife Agencies
“Ain’t nothing like the real thing: Officials warn against using AI for hunting regs”
There is at least one thing artificial intelligence is not ready to replace.
The humble hunting regulation booklet.
Earlier this month, the Idaho Department of Fish and Game issued a warning to hunters that they should not rely on AI to interpret the state’s hunting and fishing regulations, and that game wardens had encountered people in the field who had been fed the wrong information by AI.
Roger Phillips, a spokesperson for Idaho Fish and Game, said Friday that the incident that pushed the agency to issue a warning came in southeast Idaho, where a waterfowl hunter was caught hunting a day early.
When confronted by wardens, the hunter said he had gotten the opening date from an online search.
Phillips said wardens found that the search gave him an AI answer that pulled information from a failed proposal, not the agency’s final decision or its regulations book.
That got Idaho Fish and Game staff interested in what else AI had wrong. They found a handful of examples where the robots fell short. Among them: A search for rules on a specific river in Idaho turned up results for a river with a similar name in Arkansas.
Therein lies the problem, Phillips said.
No matter the parameters of the search, you cannot always trust that AI will pick the right source from the infinite options available in cyberspace.
“It casts a really wide net,” he said.
Wildlife officials in other states have also noticed AI problems. Officials in places like Illinois and Maine have warned people against using AI to answer questions about hunting rules.
Just this week, the Wyoming Game and Fish Department said in a news release that the agency has been alerted to “a number of inaccurate AI-generated search results.”
Several of the errors stemmed from failed bills listed on the Wyoming State Legislature’s website, according to the release. Search results created by AI referenced some failed bills as if they had passed and become law.
Aaron Kerr, Wyoming Game and Fish’s law enforcement supervisor, said in the release that relying on bad information does not absolve anyone from the consequences of violating hunting or fishing regulations.
“Hunters and anglers are accountable for following published rules and regulations,” Kerr said. “Failure to follow these carries the potential for fines, jail time, or suspension of hunting and fishing privileges.”
Alan Myers, the Washington Department of Fish and Wildlife’s police captain for Eastern Washington, said he has not heard of any cases of violations involving AI interpretations of Washington’s hunting regulations.
The internet’s stockpile of misinformation predates the rise of AI, though. Myers said there have been cases in the past where a hunter or angler’s defense for a violation was a bad search result or something they’d read on social media.
Phillips blames all of this on the human urge to solve any problem with a Google search.
“When I want to know if a restaurant is about to close, I just Google it,” Phillips said. “We get in the habit of doing these things, and then when we go hunting and fishing we might carry those habits with us.”
Google introduced its AI tool in 2024 and began offering AI-generated overviews in response to many searches. The overviews offer quick answers, but not always the right ones, and not always with the sort of granular detail embedded in hunting and fishing rules.
“When you have a 125-page big game regs booklet that covers controlled hunts and general hunts and archery seasons and muzzleloader seasons and rifle seasons,” Phillips said, “you could see where that would overwhelm AI pretty quickly.”
There is at least one AI tool meant to help people navigate the complexities of hunting regulations.
Called Scout, the tool was created by the International Hunter Education Association. It launched last year, and it lets people ask a chatbot specific questions about hunting rules and regulations in a particular state.
Users pick a state from a dropdown menu and then ask their question. The bot combs rulebooks for a given state, and then gives an answer. The answer also cites the pages the information came from. The booklet itself appears in a PDF viewer below the answer, and the site also lets users download PDFs for any state.
Getting people to use the official source is the only solution. Even the sometimes-faulty AI search results usually include a disclaimer urging people to consult regulation booklets.
No one booklet covers everything a hunter or angler needs to know. Separate publications are created each year for fishing, big game, migratory birds, turkeys and more.
They are free, though, and easy to find online and in real life.
“Any place where licenses and tags are sold,” Phillips said.
Incident 1252: Judges in New Jersey and Mississippi Admit AI Tools Produced Erroneous Federal Court Filings
“Federal judges using AI filed court orders with false quotes, fake names”
Two federal judges in New Jersey and Mississippi admitted this month that their offices used artificial intelligence to draft factually inaccurate court documents that included fake quotes and fictional litigants, drawing a rebuke from the head of the Senate Judiciary Committee.
"I've never seen or heard of anything like this from any federal court," Sen. Chuck Grassley (R-Iowa) said in a Senate floor speech Monday.
The committee announced Thursday that the judges, Henry T. Wingate of the Southern District of Mississippi and Julien Xavier Neals of the District of New Jersey, admitted that their offices used AI in preparing the mistake-laden filings in the summer. They attributed the mistakes to a law clerk and a law school intern, respectively, according to letters the judges sent in response to a Senate inquiry.
Both faulty court documents were docketed and had to be hastily retracted after defendants alerted the judges to the errors. Neither judge explained the cause of the errors until the committee contacted them.
The use of generative artificial intelligence has become more common in the U.S. judicial system. Wingate and Neals join scores of lawyers and litigants who have been rebuked for using AI to produce legal filings strewn with errors.
Legal groups are still catching up. The Administrative Office of the U.S. Courts, which supports the federal court system, issued interim guidance in July that suggests users "consider whether the use of AI should be disclosed" in judicial functions. It has also established a task force to issue additional guidance on AI use in federal courts.
Grassley said Monday that federal courts need to establish rules on AI use in litigation.
"I call on every judge in America to take this issue seriously and formalize measures to prevent the misuse of artificial intelligence in their chambers," he said.
Wingate and Neals said in their letters that they took corrective measures after being alerted to the mistakes and will implement additional reviews of court filings before they are submitted. Neals said he established a written policy in his chambers prohibiting the use of generative AI in legal research or drafting court filings.
Wingate did not immediately respond to a request for comment. Neals's office declined to comment.
Wingate, whom President Ronald Reagan appointed to the court in 1985, was overseeing a case brought by the Jackson Federation of Teachers and other advocacy groups against the Mississippi State Board of Education and other state bodies. The suit challenged a state law banning public schools from teaching "transgender ideology" and "diversity training" on topics of race, gender and sexual orientation.
On July 20, Wingate granted a temporary restraining order that blocked the state from enforcing parts of the ban. Two days later, in a motion to clarify, Mississippi attorneys said Wingate's order was replete with errors.
The order named several plaintiffs and defendants, including a college sorority, a Mississippi parent, students, and government officials, who were not parties to the case, according to the Mississippi attorneys' response. It described allegations that did not appear in the plaintiffs' complaint and misquoted the bill being blocked, the attorneys noted. The order also cited declarations from individuals in support of a restraining order that did not exist.
Wingate's office issued a corrected restraining order that evening and told the parties to disregard the previous one. The case is ongoing; Wingate granted a preliminary injunction against the bill in August that Mississippi attorneys appealed.
Neals, who was appointed by President Joe Biden in 2021, issued an opinion with errors in a federal securities class-action lawsuit against CorMedix, a pharmaceutical company, over allegations that it misled investors about a medical product.
On June 30, Neals denied a CorMedix motion to dismiss the lawsuit. About a month later, attorneys for CorMedix wrote that Neals's opinion contained fabricated cases and nonexistent quotes from real cases it cited in support of his ruling. It misstated the outcomes of cases and whether motions to dismiss were granted. It also attributed false quotes to CorMedix, according to the letter.
Neals's opinion was also submitted as "supplemental authority" in support of another class-action lawsuit, whose defendants also raised the issues with his filing, the letter said.
Neals said the opinion was entered in error and removed it from the court docket. The case is ongoing.
The mistakes in both judges' orders were similar to those caused by AI hallucinations (where generative AI, which produces text by predicting what words follow each other from an analysis of written content, confidently invents facts and false citations), and observers quickly speculated that the errors had come from AI use.
At first, facing questions from lawyers and litigants, neither judge admitted that the errors were AI-related. Grassley, in his Monday speech, called their "lack of transparency ... breathtaking."
The Senate Judiciary Committee wrote to Neals and Wingate in early October inquiring about the mistakes, it said. Both judges said in their responses that the errors were attributable to AI but that the filings were drafts that were mistakenly published before review.
A law clerk in Wingate's office used the Perplexity AI tool as a "foundational drafting assistant" to synthesize publicly available information on the court docket, Wingate wrote. A law school intern for Neals used ChatGPT to perform legal research, Neals wrote.
(The Washington Post has partnerships with Perplexity and ChatGPT's creator, OpenAI.)
"I manage a very busy docket and strive to maintain the public's trust by administering justice in a fair and transparent manner," Wingate wrote. "Given that I hold myself and my staff to the highest standards of conduct, I do not expect that a mistake like this one will occur in the future."
"While my experience in the CorMedix case was most unfortunate and unforeseeable, I hope that, at the very least, it will inform the [Administrative Office of the Courts] Task Force's continuing work and ultimately lead to new meaningful policies for all federal courts," Neals wrote.
Incident 1250: Alleged False Positive by Omnilert AI Gun Detection System Prompts Police Search at Baltimore County High School
“US student handcuffed after AI system apparently mistook bag of chips for gun”
An artificial intelligence (AI) system apparently mistook a high school student's bag of Doritos for a firearm and called local police to tell them the pupil was armed.
Taki Allen was sitting with friends on Monday night outside Kenwood high school in Baltimore and eating a snack when police officers with guns approached him.
"At first, I didn't know where they were going until they started walking toward me with guns, talking about, 'Get on the ground,' and I was like, 'What?'" Allen told the WBAL-TV 11 News television station.
Allen said the officers made him get on his knees, then handcuffed and searched him, finding nothing. They then showed him a copy of the picture that had triggered the alert.
"I was just holding a Doritos bag -- it was two hands and one finger out, and they said it looked like a gun," Allen said.
Baltimore county high schools last year began using a gun detection system that relies on school cameras and AI to detect potential weapons. If it spots something it believes to be suspicious, it sends an alert to the school and law enforcement officials.
In a letter to school families obtained by WBAL TV 11 News, the school wrote: "We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident. Our counselors will provide direct support to the students who were involved in this incident and are also available to speak with any student who may need support."
Baltimore county police told the outlet: "Officers assigned to Precinct 11-Essex responded to Kenwood High School following a report of a suspicious person with a weapon. Once on scene, the person was searched and it was determined the subject was not in possession of any weapons."
Lamont Davis, Allen's grandfather, told the television station: "Nobody wants this to happen to their child. No one wants this to happen."
This article was amended on 24 October 2025. An earlier version displayed a picture of a Baltimore city police vehicle, although Baltimore county does not include the city of Baltimore.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – June and July 2025
By Daniel Atherton
2025-08-02
Garden at Giverny, J.L. Breck, 1887
Across June and July 2025, the AI Incident Database added over sixty new incident...
Incident Report Submission Leaderboards
These are the people and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.
Organization Founding Sponsor
Database Founding Sponsor





