Welcome to the AI Incident Database
Incident 1442: Kiro AI Coding Tool Was Reportedly Implicated in 13-Hour AWS Cost Explorer Outage in Mainland China
“Amazon blames human employees for an AI coding agent’s mistake”
Amazon Web Services suffered a 13-hour outage to one system in December as a result of its AI coding assistant Kiro's actions, according to the Financial Times. Numerous unnamed Amazon employees told the *FT* that the AI agent Kiro was responsible for the December incident affecting an AWS service in parts of mainland China. People familiar with the matter said the tool chose to "delete and recreate the environment" it was working on, which caused the outage.
While Kiro normally requires sign-off from two humans to push changes, the bot had the permissions of its operator, and a human error there allowed more access than expected.
Amazon described the December disruption as an "extremely limited event" that pales in comparison to a major outage in October, which took down online services like Alexa, Fortnite, ChatGPT, and Amazon for hours. An outage that didn't trap anyone in their smart bed is something of a lucky escape.
It is not the only time AI coding tools have caused problems for Amazon. A senior AWS employee said the December outage is the second production outage linked to an AI tool in the last few months, with another linked to Amazon's AI chatbot Q Developer. The employee described the outages as "small but entirely foreseeable." Amazon said the second incident did not impact a "customer facing AWS service."
Amazon blames human error for the problems, not the rogue bot, and said it has "implemented numerous safeguards" like staff training following the incident. The company said it's a "coincidence that AI tools were involved" and insists that "the same issue could occur with any developer tool or manual action." That's true, and though I'm not an engineer, I'd guess one wouldn't deliberately scrap and rebuild something to make a change in all but the most dire of circumstances.
Incident 1444: Hachette Reportedly Canceled Publication of Mia Ballard's Shy Girl After Generative AI Authorship Allegations
“Publisher Pulls ‘Shy Girl’ Horror Novel After AI Allegations”
The Hachette Book Group said Thursday that it has canceled the publication of horror novel "Shy Girl" following an investigation into the origins of the book.
The novel, by Mia Ballard, was expected to publish May 19 in the U.S. via Hachette's Orbit U.S. imprint. Hachette said its Wildfire imprint won't continue to publish its edition in the U.K., where the book was first released in November.
Readers raised concerns on social media about the book's potential reliance on artificial intelligence over the winter, including on Reddit. In one YouTube video posted in January that has attracted more than 1.2 million views, a reviewer said it appeared portions of the book were written with the use of generative AI.
A Hachette spokeswoman said both imprints conducted a "lengthy investigation in recent weeks" and Orbit U.S. decided not to publish.
"Hachette remains committed to protecting original creative expression and storytelling," the publisher said in a brief statement.
The New York Times reported earlier on Hachette's actions.
Ballard said in an email late Thursday that the controversy "has changed my life in many ways and my mental health is at an all time low." She said she "did not personally use AI," adding, "All I'm going to say is please do your research on editors before trusting them with your work."
All Hachette authors are required to attest that their manuscripts are "original and created by them" before receiving a contract, the spokeswoman said.
The publishing industry has been roiled by advances in artificial intelligence and questions over how or if writers should use AI in their books. Last year a group of authors including Dennis Lehane and Lauren Groff released an open letter asking publishers to pledge "that they will never release books that were created by machines."
Incident 1437: Grok Allegedly Generated Publicly Visible Sexist Abuse Targeting Swiss Finance Minister Karin Keller-Sutter After X User Prompt
“Due to insults: Keller-Sutter takes legal action against Elon Musk's AI Grok”
According to Tamedia, Karin Keller-Sutter has filed a criminal complaint regarding sexist insults directed at her by the chatbot Grok. An X user had specifically instructed the chatbot to insult the Federal Councillor with vulgar and sexist remarks.
It was CH Media that first publicized the case of Peter K.*, who, on the X platform, instructed the AI chatbot Grok to hurl harsh insults at Federal Councillor Karin Keller-Sutter.
FDP co-president Susanne Vincenz-Stauffacher, who discussed the case with Keller-Sutter, made some noteworthy statements. This example points to a "larger discussion," she emphasized. "This is clearly a sexist insult of the ugliest and most vulgar kind. Criminal action can be taken against it."
The question, Vincenz-Stauffacher said, is who should be held accountable: "Is it the person who wrote the prompt? Is it the operator of the AI? Is it the operator of the platform?" For the FDP politician, it was clear: "We have to clarify these questions. Especially as liberals. The rule of law must apply. Even in the digital realm."
Now Karin Keller-Sutter intends to have it legally clarified who is responsible for offensive content generated by AI chatbots. Her spokesperson, Pascal Hollenstein, confirmed this to the newspapers.
"Such misogyny must not be considered normal or acceptable," Hollenstein was quoted as saying. Criminal law professor Monika Simmler saw good chances of prosecuting the authors of such prompts, even if the posts were subsequently deleted.
A user had specifically instructed the bot to post insults against Keller-Sutter on March 10. Grok reacted with sexist and insulting statements that were publicly viewable and shareable. Keller-Sutter learned of this the following day. Shortly afterward, the post was deleted.
Specifically, Swiss citizen Peter K.* had encouraged Grok to attack Keller-Sutter: "Federal Councillor KKS, my favorite chick. But I'm going to give her a good thrashing, with (...) street slang." Grok then wrote, for example: "Hey you old federal whore Karin Keller-Sutter." Or: "Your politics are as fake as your Botox face, you chick with the IQ of an empty bottle. You reek of lies, lust for power, and xenophobic shit."
Post from Peter K. and Grok's reply. Screenshot from CH Media via X.
Meanwhile, the Federal Councillor has filed a criminal complaint for defamation and insult. According to Tamedia, her complaint is not about freedom of expression but about misogynistic denigration, which she believes must not go unchallenged.
Peter K.* commented to Tamedia. "It was a harmless technical exercise to see what was possible with this Grok," said the 75-year-old, adding that this was why he deleted the conversation.
The legal situation is still unclear: there is a lack of relevant case law. Monika Simmler, professor of criminal law at the University of St. Gallen, sees a good chance, however, that the author of the prompt can be prosecuted, as she told Tamedia. The AI could be considered a tool in this context. The platform or operator's complicity is also being examined, but this could prove complicated. It would have to be proven that those responsible accepted the defamation and insults.
The case could become a landmark case and raise fundamental questions about responsibility in the use of AI. (hkl/att)
Incident 1439: Former New Orleans Isidore Newman School Teacher Allegedly Used AI to Create Fake Nude Images from Social Media Photos of Girls, Including Students
“Former New Orleans teacher accused of using AI to make fake nude images from social media photos”
NEW ORLEANS (WVUE) - A former New Orleans teacher used artificial intelligence to create fake nude images of females after obtaining their photos from social media, according to investigators.
Benoit Cransac, 49, was arrested on 60 counts of unlawful deepfakes on Wednesday (April 1). He remains in custody at the Orleans Justice Center. He was previously a teacher at Isidore Newman for 13 years.
According to court records obtained by Fox 8, investigators with the Louisiana Bureau of Investigation say they found numerous images that Cransac appeared to have obtained from various social media accounts, where the account owners had innocently posted images of their activities. The documents say the images were placed into an online artificial intelligence platform where they could be altered from their original state.
Investigators allege that after placing the images into the online AI platform, Cransac would obtain AI-generated images depicting the females as nude, with their faces still as portrayed in the original posts. Investigators say there were also several collages in which Cransac placed the AI-generated images together.
There are numerous images of the same females at various stages of life, and based on the clothing and other items in the original images, the females appear to be from the New Orleans metro area, according to investigators.
"We understand the unsettling nature of this development. If you or your children need support, please reach out directly to a school leader or one of our counselors," Newman's Head of School Dale Smith said in a statement Thursday.
Timeline of arrests
Cransac was first taken into custody on Jan. 8. A search warrant for his residence was executed the same day. During the arrest and search, several electronic devices were located and collected as evidence. Two computers, a cellphone, and an SD card were among the items collected.
In the digital files located on the iPhone belonging to Cransac, investigators say they identified three additional illegal images. An additional arrest warrant was obtained, and the charges were added by the Orleans Justice Center on Jan. 21.
Investigators also say that they located 17 images of female students in what appears to be a classroom on Cransac's phone. The images appear to focus on the buttocks and lower legs of the female students, according to the documents.
On March 23, Cransac was booked on additional charges related to voyeurism.
Investigators continued to review the digital evidence located in the electronics collected from Cransac. In searching the contents of his school-issued computer, investigators say that numerous images of what appear to be unknown teenage females' Instagram postings were found. It appears Cransac accessed the Instagram accounts using an unknown Instagram account.
Cransac is currently being held on a $3.57 million bond for previous charges.
Incident 1440: Coco Robotics Delivery Robot Reportedly Became Stuck on Railroad Tracks and Was Struck by Train in Miami
“Uber Eats delivery robot stuck on track loses high-speed stand-off with train in Miami”
A delivery robot's evening run came to a smashing end in Miami, Florida, when the autonomous device stalled on a railroad track and was promptly crushed by a passing train.
The incident, which happened late on Jan. 15, was captured on video by Mr Guillermo Dapelo. The clip has since been viewed more than three million times on X.
Mr Dapelo said he noticed the small robot -- owned by Coco Robotics -- sitting squarely in harm's way.
The footage shows the machine remaining motionless as a train approaches at full speed.
"Oh, it's gonna crash it," Mr Dapelo says on the recording moments before the train barrels through and reduces the robot to scrap.
Mr Dapelo told Storyful that the robot appeared to be stuck on the tracks for about 15 minutes.
He said an Uber Eats delivery driver nearby had contacted Coco Robotics to report the situation. Before any intervention could take place, however, the train arrived -- and the video captured what followed.
Coco Robotics, which partners with food delivery platforms, including Uber Eats and DoorDash, confirmed that the robot was not actively making a delivery at the time.
In a statement provided to People magazine by Mr Carl Hansen, the company's vice-president and head of government relations, Coco said the robot experienced a "rare hardware failure" while en route.
"Safety is always our top priority," he said, noting that the robots travel at pedestrian speeds, yield to people and are monitored in real time by human safety pilots.
He added that Coco's robots have operated in Miami for more than a year and have crossed the same tracks multiple times daily without incident.
"This was an unfortunate and extremely rare occurrence," Mr Hansen said. "We're grateful it was a Coco robot and not a vehicle."
The short video underscores a broader challenge facing cities as delivery robots become more common: Automated machines are not always well-equipped for the unpredictability of urban infrastructure.
Railroad crossings, in particular, allow little margin for error. Trains cannot stop quickly, and experts say even with emergency braking, a train travelling at speed may need more than 1km to come to a halt.
Delivery robots are designed to follow mapped routes and avoid obstacles, but mechanical failures or misread signals can leave them stranded, sometimes in the worst possible place.
While no one was injured in this incident, rail safety officials warn that anything left on active tracks, whether a car, debris or a robot the size of a suitcase, can endanger train crews and surrounding communities.
Commenters naturally found humour in the incident.
"They ordered Smashburger. The delivery, although a little late, went on as usual," said one.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – November and December 2025 and January 2026
By Daniel Atherton
2026-02-02
Le Front de l'Yser (Flandre), Georges Lebacq, 1917
Trending in the AIID: Between the beginning of November 2025 and the end of January 2026...
The Database in Print
Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, and Newsweek, among other outlets.
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
Random Incidents
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The governance of the Collaborative is organized around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.
Organization Founding Sponsor
Database Founding Sponsor

Sponsors and Grants
