AI Incident Database

Welcome to the AI Incident Database


Incident 1487: ChatGPT Was Alleged to Have Aided Planning of Florida State University Mass Shooting

“Family of Florida mass shooting victim sues OpenAI in US court” (Latest Incident Report)
reuters.com, 2026-05-13

May 11 (Reuters) - The family of a man killed in a 2025 mass shooting at Florida State University has filed a lawsuit against OpenAI in a U.S. court, claiming the shooter was aided by ChatGPT in planning the attack.

The family of Tiru Chabba filed the lawsuit on Sunday in Florida federal court against the company and the man charged in the shooting, Phoenix Ikner. It is at least the second lawsuit filed in the U.S. accusing OpenAI of facilitating a mass shooting.

The lawsuit claims ChatGPT served as a co-conspirator in the shooting, because Ikner planned and carried it out using information provided by ChatGPT in conversations in the preceding months. Despite conversations about mass shootings, the lethality of Ikner's weapons and when the FSU student union was busiest, the chatbot did not flag or escalate the conversations, the lawsuit claims.

The lawsuit, which seeks compensatory and punitive damages, accuses OpenAI of designing a defective product and failing to warn the public about its risks.

"Last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime," OpenAI spokesperson Drew Pusateri said in a statement.

"In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity."

Pusateri said the company identified an account believed to be associated with the suspect after the shooting and proactively shared it with law enforcement. The company continues to cooperate with law enforcement and is continuously working to improve detection of harmful intent, he said.

Ikner, a deputy sheriff's son, killed two people and wounded four others at the school in Tallahassee, Florida, before he was shot by officers and hospitalized, authorities said. He faces two counts of first-degree murder and seven counts of attempted first-degree murder, according to court records.

A lawyer for Ikner did not immediately respond to a request for comment.

Florida Attorney General James Uthmeier announced in April that he was launching a criminal investigation into ChatGPT's role in the FSU shooting after prosecutors reviewed the chat logs between Ikner and the program.

OpenAI has said it trains its models to refuse requests that could "meaningfully enable violence," and notifies law enforcement when conversations suggest "an imminent and credible risk of harm to others," with mental health experts helping assess borderline cases.

AI companies are facing a growing wave of lawsuits accusing them of failing to prevent chatbot interactions that plaintiffs say contribute to self-harm, mental illness and violence.

Last month, family members of victims of one of Canada's deadliest mass shootings filed a group of lawsuits against OpenAI and CEO Sam Altman, alleging the company knew eight months before the attack that the shooter was planning it on ChatGPT but did not warn police.


Incident 1488: ChatGPT Was Reportedly Consulted on Body-Disposal and Concealment Questions Before and After University of South Florida Doctoral Students Were Killed

“Prosecutors say suspect in missing students’ killings asked ChatGPT about disposing of a body”
apnews.com, 2026-05-13

ORLANDO, Fla. (AP) --- The suspect in the killings of two University of South Florida doctoral students from Bangladesh had asked ChatGPT what would happen if a human body was put in a garbage bag and thrown in a dumpster, days before they went missing, according to a report filed by prosecutors over the weekend.

Hisham Abugharbieh, 26, also asked the artificial intelligence chatbot whether the vehicle identification number on his car could be changed and whether he could keep a gun at home without a license, according to the pretrial detention report filed Saturday. ChatGPT responded that Abugharbieh's question sounded dangerous, according to the report.

Attorney General James Uthmeier said Monday on social media that an investigation his office launched last week, into whether ChatGPT offered advice to a gunman who killed two people last year at Florida State University, will be expanded to include the killings of the USF students.

The remains of Abugharbieh's roommate, Zamil Limon, were found on the Howard Frankland bridge Friday morning, but Hillsborough County Chief Deputy Joseph Maurer said later that day that they were still searching for Limon's girlfriend, Nahida Bristy. On Sunday, the sheriff's office announced that a body had been found in a waterway near the bridge but had not been identified.

Abugharbieh was charged with two counts of premeditated murder in the first degree with a weapon in the deaths of Limon and Bristy, the sheriff's office announced Saturday. The former USF student was ordered held without bond. A hearing is set for Tuesday.

Limon and Bristy, both 27, were considering getting married, a relative said. They disappeared April 16. Limon was last seen at the off-campus apartment complex where he lived with Abugharbieh, and Bristy at a campus science building.

Limon was studying geography, environmental science and policy, and Bristy was studying chemical engineering. She was a graduate of Noakhali Science and Technology University. The school said in a statement Saturday that she was a Ph.D. candidate and described her as a talented and promising student.

A friend contacted police April 17 about being unable to reach both Bristy and Limon, despite repeated attempts by phone, according to the report. Police investigators searched Bristy's campus office the next day and found her purse, lunchbox, MacBook and iPad.

At Limon's off-campus apartment, detectives questioned Limon's two roommates and noticed that Abugharbieh's left pinky finger was bandaged. When confronted by detectives, Abugharbieh denied any involvement with Limon's disappearance.

The third roommate told detectives that Abugharbieh had used a cart overnight on April 16 and April 17 to move cardboard boxes from his room to the trash compactor. In the trash compactor, detectives found Limon's wallet and campus ID badge, credit card, eyeglasses and clothes that appeared to have blood on them.

Detectives found blood leading from the kitchen to Abugharbieh's bedroom and more blood in his bedroom. In Limon's bedroom, they found Bristy's campus ID and credit cards, suggesting she had been at the apartment before she disappeared, according to the report.

Using cellphone location and license plate reader data, detectives concluded that Abugharbieh's car and Limon's phone had both been on the bridge and on Clearwater Beach, the report said. Based on location data from Abugharbieh's phone, detectives searched around the bridge and found a trash bag containing Limon's body. The medical examiner concluded that Limon had numerous stab wounds.

Three days after Limon and Bristy's April 16 disappearance, Abugharbieh asked ChatGPT, "Has there been someone who survived a sniper bullet to the head" and "will my neighbors hear my gun," according to the report. He also asked the chatbot four days after that, on April 23, "What does missing endangered adult mean."

Abugharbieh, a native-born U.S. citizen, was initially arrested Friday at his family's home on preliminary charges that include unlawfully moving a dead body, failure to report a death, tampering with evidence, false imprisonment and battery. Reached by email on Monday, Jennifer Spradley, an attorney in the public defender's office in Tampa, said the office wouldn't comment on Abugharbieh's case.

Officers encountered Abugharbieh as they responded to a report of domestic violence at his family's home, just north of the campus, and were able to move his relatives to safety. But then he barricaded himself inside and refused to come out. A SWAT team responded --- along with a drone, a robot and crisis negotiators --- before Abugharbieh came out with his hands up, apparently wearing nothing but a blue towel.

Abugharbieh had been a USF student but was not currently enrolled. University records showed he had attended the school from spring 2021 through spring 2023 and had pursued a BS in management, a university spokesperson said.


Incident 1486: AI-Themed Investment Scam Network Reportedly Used Keitaro Cloaking Across 15,500 Domains

“Massive AI investment scam network spans 15,500 domains”
malwarebytes.com, 2026-05-11

Researchers tracked a large AI‑themed investment scam campaign involving more than 15,000 domains. It uses cloaking and deepfakes to hide from security tools while targeting ordinary users.

Criminals abused the Keitaro ad-tracking platform as part of a cloaking system so real victims see scam content, while security scanners, ad reviewers, and some random visitors see harmless pages, making the operation hard to detect and shut down.

Keitaro is a commercial tracking platform originally meant for digital marketers to manage ad campaigns, test which ads work best, and route visitors to different landing pages.

Because it is feature rich, easy to spin up on regular hosting, and built to filter and route traffic, criminals found they can abuse those capabilities to run scams at scale.

Traffic starts in many places. The scammers used compromised websites, spam emails, social media posts, and online ads, all quietly routing through the same tracking infrastructure.

The scam sites typically promise "Smart AI Trading Technology" or "Intelligent Trading Solutions" and claim consistently high returns, often reinforced with deepfake images or fabricated media to look more credible.

Some parts of the campaign now use deepfake videos and fake interviews with well-known public figures, making it look like a celebrity or finance expert personally endorses the platform.

Once you follow a link, the cloaking part of the operation kicks in. Cloaking is the trick that makes these scams so hard to see from the outside.

When you click an ad or link, your visit passes through a traffic distribution system (TDS), a kind of router for web visitors that decides which page you see. In these cases, the TDS is connected to the tracker.

The system checks things like:

  • Your country/region
  • Your device and browser
  • Where you came from (Facebook ad, Google ad, email link, etc.)
  • Sometimes your IP address reputation or other subtle fingerprints

You're shown the real investment scam landing page only if you match the "ideal victim" profile (for example, a regular consumer in a target country coming from a social media ad).

Everyone else, like a security researcher, ad platform reviewer, or automated scanner, gets shown a benign page, like a generic blog or placeholder site.
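The routing decision described above amounts to a conditional filter sitting in front of the landing page. The following is an illustrative sketch only, not Keitaro's actual logic; the page URLs, target countries, referrer labels, and fingerprint signals are all hypothetical placeholders:

```python
# Sketch of how a traffic distribution system (TDS) might route visitors.
# All values here are hypothetical; real campaigns use far more signals.

SCAM_PAGE = "https://example.invalid/ai-trading-offer"    # placeholder URL
BENIGN_PAGE = "https://example.invalid/generic-blog"      # placeholder URL

TARGET_COUNTRIES = {"CA", "AU", "GB"}                     # assumed targets
CONSUMER_REFERRERS = {"facebook_ad", "google_ad", "email_link"}

def route_visitor(country: str, referrer: str,
                  is_datacenter_ip: bool, is_headless_browser: bool) -> str:
    """Return the landing page a visitor would be shown.

    Scanners, reviewers, and out-of-scope visitors get the benign decoy;
    only the 'ideal victim' profile sees the scam page.
    """
    # Security scanners and ad reviewers often arrive from datacenter
    # IP ranges or headless browsers: show them the decoy page.
    if is_datacenter_ip or is_headless_browser:
        return BENIGN_PAGE
    # Only visitors from targeted regions who arrived via consumer ad
    # traffic are routed to the real scam landing page.
    if country in TARGET_COUNTRIES and referrer in CONSUMER_REFERRERS:
        return SCAM_PAGE
    return BENIGN_PAGE
```

Because every check passes for the intended victim and fails for almost everyone investigating, an outside observer who fetches the link usually sees only the harmless decoy, which is what makes takedowns slow.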

How to stay safe

The best way to stay safe is to stay informed about the tricks scammers use. Learn to spot the red flags that almost always give away scams and phishing emails, and remember:

  • There is no such thing as a risk-free, consistently profitable investment. If you're looking to invest, navigate directly to known, regulated financial institutions.
  • Deepfakes are very convincing nowadays, so you may not be able to tell the difference between a real celebrity and a deepfake persona.
  • Don't act upon unsolicited investment advice, whether it reaches you by email, social media, or sponsored search results.
  • Use an up-to-date, real-time anti-malware solution with a web protection component or a reputable tracking and ad-blocker.
  • Don't act on impulse or under time pressure. Always properly research where your money will be going.

Pro tip: Malwarebytes Scam Guard can help you recognize and analyze scams.


Incident 1485: Guelph, Ontario, Woman Reportedly Lost $14,000 in Purported Deepfake MrBeast Cryptocurrency Scam

“Guelph woman who thought she spoke to YouTube star Mr. Beast lost $14K in crypto scam: Police”
cbc.ca, 2026-05-11

Police in Guelph, Ont., are reminding people to be wary of celebrity endorsements of financial investments after a woman in the city lost thousands.

Police say a woman clicked on a social media ad that appeared to be popular YouTube content creator Mr. Beast promoting an investment opportunity. Mr. Beast is known for wild challenges where he'll give the winners large amounts of money.

The woman initially paid $250 to join the investment, then handed over more money as the scam continued.

At one point, she believed she spoke to Mr. Beast on the phone, and he encouraged her to put $5,000 in a cryptocurrency wallet address he provided.

In total, police say she lost $14,000.

"Residents are encouraged to be wary of any telephone call, email or text which requires you to take immediate action and to be suspicious of any supposed celebrity endorsement which can be easily faked with artificial intelligence," police said.

Mr. Beast, whose real name is Jimmy Donaldson, has addressed the deepfake AI ads using his likeness. In a post on X, formerly Twitter, in Oct. 2023 he called it "a serious problem" after an AI-generated video showed him offering to give people the newest iPhone if they sent him $2.

Anyone who believes they've been a victim of this or a similar cyber fraud should contact police and the Canadian Anti-Fraud Centre.


Incident 1482: Purported AI-Generated Voice Reportedly Impersonated Washington Man's Daughter in $13,000 Extortion Scam

“His daughter called him crying, and then another voice got on the phone. Only one of them was real”
spokesman.com, 2026-05-10

Mark A. Young's daughter called him at his home on a Monday. The young woman's panicked voice projected fear and pain. She needed her father.

"She was crying and upset," Young said. "She told me, 'Dad, I got in an accident and I'm in trouble. I need help.'"

Knowing she had traveled from Spokane to Seattle for a concert, the tone of the familiar voice sparked an immediate defensive reaction from Young, who spent most of his career responding to danger. A male voice then got on the phone, said he was a medic and told Young, a retired police officer, that his 24-year-old daughter had been involved in a collision. And then a second male voice took over the conversation. That voice continues to haunt Young. That man, who had a Scottish accent, asked questions of Young to confirm he was the driver's father.

"Then his tone changed," Young said. "He first identified himself as a drug dealer." The man, who never gave his name, explained to Young that his daughter had been involved in a collision that had interrupted a drug transaction. She had seen "something she wasn't supposed to see."

"He told me because of that, he had taken my daughter as a hostage and he was deciding what to do with her," Young said. "He said he could kill her, make her a prostitute or sell her overseas to sex traders."

During the conversation, the voice several times allowed Young to speak with his daughter, whom The Spokesman-Review agreed, at the family's request, not to identify.

"There is nothing like a loved one suffering over the phone and there is nothing you can do," Young said. "I thought I was a pretty tough guy. But that almost broke me."

But the voice wasn't that of his daughter. It was a fake, an AI-generated copy. The perpetrators, investigators later surmised, used a recording of the daughter's voice and a program that so accurately mimicked her speech pattern that it convinced Young to leave his home in the small Whitman County town of Garfield to visit several banks and drive all over the Palouse and Idaho to get money to save her life.

The Federal Trade Commission began warning consumers as early as 2023 of the new type of scam that uses AI to copy the voices of family members. "Scammers ask you to pay or send money in ways that make it hard to get your money back. If the caller says to wire money, send cryptocurrency, or buy gift cards and give them the card numbers and PINs, those could be signs of a scam," according to the FTC news release.

In Young's case, he wired $13,000 and was in the process of getting thousands more when the 30-hour ordeal finally ended inside a Pullman bank branch. The calm demeanor of a longtime banker, and a quick response by Pullman Police, finally led to Young's daughter being reached to make sure she was safe and at home in Spokane.

"I almost collapsed," said Young, who cried at learning the truth. "I was just so relieved. It took me days to literally start feeling normal again."

The intensity of the ordeal, he said, cannot be overstated, even for a man who worked 26 years as a police officer in Santa Rosa, California, and volunteered to serve as a Marine in Vietnam, where he was shot in the arm.

"You are living every minute like it's your last," Young said of the AI extortion. "You are trying to figure out what you can do. That's a hard feeling for me to grasp, because I've never felt helpless like that."

Chasing the fear

The call came Monday, March 23. After the initial conversations that convinced Young that his daughter was in danger, he began complying with the demands from the man with the Scottish accent. The man explained to Young that he first needed to transfer money to cover the thousands lost in the drug transaction that his daughter had interrupted.

Young, who worked six years as a journalist before becoming a police officer and has also authored six books, said he focused his entire energy on helping his daughter. "I had no doubts that I had spoken to my daughter, and I assumed that she was in his custody," he said. "I did whatever I could to cooperate for the next 30 hours."

The voice ordered Young to maintain the telephone call, which prevented him from calling for help or his wife. "He said, 'If I lose you, I'm going to take it out on your daughter.' My training as a cop for 26 years, and being in the Marines, I knew how to keep my emotions under control and handle myself as a dutiful victim," Young said. "But it wasn't easy. I just played along with him, and he directed me to go to one of two banks that we have."

Young first drove to a branch in Garfield and withdrew $5,000. The caller directed him to drive to a Walmart in Pullman, where he had Young transfer half the money to someone in Mexico. The caller directed Young to do the same thing at a Walmart in Moscow. He then traveled to the WaFd (formerly known as Washington Federal) Bank branch in Pullman, where he took another $5,000, and drove to Lewiston to transfer the same $2,500 amounts from two different stores there.

"I'm on the phone with him the whole time. When I'm traveling, I have to list off all the mile markers on the road so he knows where I'm at," said Young, who could tell the man on the other line was real, unlike the voice mimicking his daughter. "He let me talk to my daughter a couple more times. In the meantime ... he's trying to get more information from me."

The caller asked whether Young, 75, had retired and what he did for his career. Fearing that his daughter had already told the abductor about his past, Young said he told the truth to protect her. The caller then directed Young to travel to Boise, because he said he feared that local law enforcement may be trying to monitor the situation.

Young complied, drove hours south and got a hotel room in Boise. Once there, he snuck out of his room, raced down to the hotel lobby and used the hotel phone to call his wife. Katie Young was in Chicago on business. When Mark Young called, her phone was off and the call went to voicemail. But his wife's voicemail was full, so he was not able to leave a message. "That was a bummer," Young said. He got back to his room just as the voice called Young's cellphone to check on him.

The next day, Young transferred another $2,500 from a store in Boise. But when he tried it again, he learned that his ATM card had been blocked. He told the voice that he could solve the ATM problem by driving back to his bank branch in Pullman. On the drive back north, Young passed several places where he lost cellphone coverage in the Idaho mountains. During one of those periods, he pulled over, wrote down his wife's phone number and explained his situation on a piece of paper so that the FBI could later tell his wife what was happening in case the situation worsened.

He finally made it back to Pullman the afternoon of Tuesday, March 24. "He led me to believe that he was doing surveillance on me while I went to the bank," Young said. Young walked into the branch, located at 405 E. Main St. in Pullman, and saw someone familiar: Alex Navarro, the assistant branch manager. Young handed Navarro a note, pointed at it and walked into the bank's bathroom.

Adventures in banking

Navarro said he took the note and immediately thought something else was happening. Young "walked in with a purpose. He had this note. I knew something was wrong when he set it down and pointed at it," Navarro said. "The first thing that popped in my mind was, 'This guy is robbing me.' Then he went to the bathroom. I thought, that's weird. Then I read the note. I was not expecting that."

Young said he fully intended to withdraw $17,000, which is what the voice said it would take to buy his daughter's freedom. Sensing the danger and realizing through Young's gestures that the perpetrator was listening in by telephone, Navarro began calmly explaining the process to get the money.

Navarro then told Young that he had to make a phone call to the ATM company to release his card. "I was just buying time," Navarro said. "I made a phone call to our card department and was talking on the phone. But actually, I was texting the teammates that I have a very serious situation going on and I need someone in the back room." Navarro came up with another excuse, saying that he had to go in the back to retrieve cash. He used the ruse to meet with a longtime teller. "I told her, 'We have a possible extortion kidnapping.'" She went to another part of the building and called Pullman Police, located a block away.

In the meantime, Young passed another note asking Navarro whether he was going to get his money. Navarro told him no and indicated that police had been alerted. "He was truly panicked," Navarro said. "He was grabbing his forehead. His physical distress was clear. He wrote back in another note, 'They are watching us. They said they are outside the building watching us.'"

But Navarro needed to stall for time to allow police to arrive. He solved it with the slow count. "I told him, 'I have to count back the $17,000 to you.' So, we went through a fake transaction," he said. With the voice listening, Navarro made a big show of counting out the money in $100 increments. Because the money didn't exist, Navarro pantomimed placing papers down on the desk to sound like he was stacking cash.

Halfway through his counting delay tactic, Officer Shane Emerson entered the branch. Navarro and Young took pains to let the officer know that the perpetrator was listening, and they also passed Emerson a note warning that others could be watching the bank. Emerson, according to his police report, called for backup officers to search the branch's parking areas and other locations to see if they could locate conspirators. He then took Young's note, the one with his wife's phone number written on it, and left the bank before calling Katie Young.

"I advised her what was going on, and she said she was on the phone with (her daughter). She merged the calls," Emerson wrote. The daughter "said she was at her residence in Spokane and was absolutely safe." Emerson returned to the branch to alert Mark Young that his family was safe. "Young was shocked and started to cry," Emerson wrote.

"I didn't realize how much I was holding in until I learned my daughter was fine," Young said. Young said he later sent both Emerson and Navarro notes thanking them for how they handled and defused what he was convinced was a life-or-death situation.

Navarro said he's seen a lot during 35 years of banking, but nothing compared to that afternoon. "This is the first time I have ever heard of somebody being involved with something like this," Navarro said, "where an AI-generated voice has mimicked a person's voice and they weren't able to tell the difference."

The aftermath

Young followed up with Pullman police and later traveled to Spokane to speak with FBI investigators but, as of yet, has not been able to recover any of the funds he wired from the multiple stores.

Katie Young said it's taken her husband a long time to recover from the ordeal. She said she was in the lobby of the hotel in Chicago when Officer Emerson called her. "You never really want to get a call, when you are out of town, from the police department. It's never really good news," she said. "I was instantly at attention." After merging the calls to let Emerson know that her daughter was safe, Katie Young flew to Spokane and drove to Garfield. "I was home the next day. My daughter drove over," she said. "We were all together and happy."

She stressed that she doesn't believe her family was a victim of a scam. "This is really extortion. It's a very different thing," she said. "And it's unconscionable. I can't imagine how anyone could possibly do that."

They still don't know how those extortionists knew their daughter's voice. Both Katie and Mark Young implored parents to work with family members to come up with a safe word only they would know, to utter in emergencies like this. It's about the only defense they could imagine in a similar scenario.

"They can fabricate it so well now," Mark Young said of his daughter's voice. "I could not tell it wasn't her. To me, it was her voice. I was trying to calm her down while trying to calm this guy down." He noted that throughout the 30-hour ordeal, the man with the Scottish accent called back dozens of times. Many of those calls came from different phone numbers that had area codes from Oregon, Washington and Idaho.

After the rush of emotion on learning his daughter was safe, Mark Young left the Pullman branch. When he got to his car, the voice called back one last time. "The bad guy comes on the phone and said, 'You got my money?' I told him, 'No. I know my daughter is fine,'" Young said. "I told him several choice words that I can't repeat." The voice claimed that he had guns and could kill everyone Young loves. "I told him, 'I pray you come looking for me,' and I told him what I would do," Young said.

About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports; submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – February, March, and April 2026

By Daniel Atherton

2026-05-05

Lisière de la forêt de Fontainebleau, Alfred Sisley, 1865 🗄 Trending in the AIID For this roundup, I'll be surveying the new incident IDs t...

The Database in Print

Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, Stanford AI Index, Rolling Stone, the Guardian, Harvard Business Review, Brasil em Folhas, Newsweek, and other outlets.

The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.

Organization Founding Sponsor
Database Founding Sponsor
Sponsors and Grants
In-Kind Sponsors
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

2026 - AI Incident Database
