Welcome to the AI Incident Database
Incident 1307: Grok AI Reportedly Generated Fabricated Civilian Hero Identity During Bondi Beach Shooting
“The 5 fake Bondi attack stories spread by AI and social media”
The shooting had barely finished before the false information began to spread, formed in corners of the internet governed by alternative facts.
According to Grok, the artificial intelligence platform on X (formerly Twitter), a man named Edward Crabtree was the hero who risked his life to disarm one of the attackers. Crabtree even had an entire life story. Only problem -- it was complete fiction.
There were also, allegedly, other attacks happening simultaneously across Sydney, fuelling further fear and paranoia.
"If there are not a lot of facts out there, it will always be filled with whatever is available, and quite often the initial information is very low quality," said Dr Anne Kruger, a mis and disinformation expert from the University of Queensland.
"The plural of anecdote is never a fact. We're seeing the same thing, the laundering of information through social media."
The term fake news may have been popularised by Donald Trump, but misinformation is not new. It is, however, a huge concern as people turn to social media and AI platforms for the latest information.
"It is categorically easier to make fake information than wait for reliable sources to report it," said Nathan Ruser, an analyst with the Australian Strategic Policy Institute. "If something is believable, the narrative can escape much quicker than reality."
There were many false narratives streaming through the internet on Sunday night as two suspected gunmen opened fire at a Hanukkah event next to the iconic Bondi Beach.
Here were five of them.
1. Edward Crabtree
The attack was barely over when a legitimate-looking article began to spread. It was an interview with Crabtree, apparently a 43-year-old IT professional. "I didn't think about it. I just acted," Crabtree was quoted as saying from his hospital bed.
But the website, called thedailyaus.world -- not connected in any way to youth news website The Daily Aus -- was registered by a user based (or pretending to be based) in Iceland on Sunday. (The Daily Aus founders Sam Koslowski and Zara Seidler said their readers had been in touch to alert them to the fake news site).
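Checking a domain's registration record is one of the quickest ways to vet a suspicious news site. As a minimal illustration of that kind of check (not a description of how anyone involved actually verified the site), the sketch below uses the third-party python-whois package; the 30-day threshold and the output format are assumptions made for the example.

```python
# A minimal sketch of the kind of domain-registration check that would
# expose a site like thedailyaus.world, using the third-party
# python-whois package (pip install python-whois). The 30-day threshold
# is an illustrative assumption, not part of the original reporting.
from datetime import datetime, timedelta

import whois  # python-whois


def flag_recently_registered(domain: str, max_age_days: int = 30) -> None:
    """Print a warning if the domain was registered very recently."""
    record = whois.whois(domain)
    created = record.creation_date
    # python-whois sometimes returns a list of dates; take the earliest.
    if isinstance(created, list):
        created = min(created)
    if created is None:
        print(f"{domain}: no creation date on record")
        return
    # Naive datetimes assumed, as python-whois typically returns them.
    age = datetime.now() - created
    if age < timedelta(days=max_age_days):
        print(f"{domain}: registered {age.days} day(s) ago "
              f"(registrant country: {record.get('country')}) -- treat with caution")
    else:
        print(f"{domain}: registered {created:%Y-%m-%d}")


flag_recently_registered("thedailyaus.world")
```

A domain registered the same day as the event it "reports" on, from a registrant half a world away, is a strong signal on its own, even before examining the content.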
In truth, the man who risked his life to tackle one of the gunmen and seize his weapon was Ahmed Al Ahmed, a father from the Sutherland Shire, who was shot twice for his heroics.
Hedge fund manager Bill Ackman called out the misinformation, which also showed how far the false account had travelled. "I am told that other reports claiming the hero was a man named Edward Crabtree are false," Ackman wrote on X.
Israeli Prime Minister Benjamin Netanyahu also got it wrong, initially telling journalists it was a Jewish bystander who wrestled the gun from the gunman's hands, The Times of Israel reported.
2. Other attacks
Minutes after the first shots were fired, posts began circulating claiming that Bondi was the first in a series of co-ordinated attacks on other Eastern Suburbs locations, including nearby Dover Heights and Double Bay.
There were no witness accounts, photographic evidence or verification. But posts circulated from supposed witnesses who claimed to have heard gunshots or to have knowledge of plans for further attacks. Another "Chanukah by the Sea" event at Dudley Page Reserve in Dover Heights went into lockdown, escalating fears and paranoia across Sydney.
The false information spread so far and so quickly the NSW Police Force had to issue a statement on social media. "There have been NO reports of any incidents at Dover Heights -- please do NOT share unconfirmed rumours," NSW police wrote.
3. IDF, Iran, Palestine ties
Before Naveed Akram's identity was verified as one of the two alleged shooters, his driver's licence was posted across social media with purported facts about his identity. It was weaponised by people on either side of Israel's war in Gaza.
Some of the most viral posts made categorically false claims that he was a former soldier in the Israeli Defence Force who had lost his mind after being stationed in Gaza. Others stated he was a Mossad agent and that this was some sort of "false-flag" attack, a term describing an incident designed to look like it was perpetrated by someone other than the person or group responsible.
There were other false claims that Akram had studied at Islamabad University and had been planted by Iran to conduct the attack.
When Akram's name was released, there were also false reports about the identity of the second shooter, naming an unconnected Lebanese national of Palestinian descent.
4. Wrong guy
Late on Sunday, another Sydney-based man called Naveed Akram shared two videos begging media outlets and social media sites to stop naming him in connection with the attack.
This Naveed Akram is an IT specialist from Pakistan who moved to Australia in 2018 and happened to have public Facebook and LinkedIn accounts, which meant photos of him at the cricket and his workplace history were posted all over the internet.
The claim that it was this Akram who committed the attacks went viral after being heavily promoted by India-based accounts, including the Indian Strategic Studies Forum. It was even picked up by some traditional platforms, including US-based CBS News.
Misidentifying the perpetrators of crimes has become commonplace over the past decade with the rise of social media. Earlier this year, Sydney man Benjamin Cohen sued Seven Network after the media company's Sunrise program falsely claimed he'd murdered six people in the Bondi Junction stabbing attack in April.
5. Firework celebration
Some used the tragedy to stoke tensions across Sydney. Videos of fireworks in western Sydney circulated across X, Facebook and Reddit in the aftermath of the attack, including by right-wing content creator Drew Pavlou in a now-deleted post captioned "reports that Islamists set off celebratory fireworks in Bankstown, Western Sydney".
There were fireworks in western Sydney on Sunday night, but they had nothing to do with the horrors at Bondi. Instead, they were to celebrate Padstow's annual Carols by Candlelight event which happened to be scheduled at the same time.
The role of artificial intelligence in driving misinformation and disinformation will be picked apart in the days and weeks to come. But AI and social media have not covered themselves in glory in the Bondi Beach attack.
"The biggest issue is the way AI has lowered the friction to spreading and creating misinformation," Ruser said.
Kruger added: "Social media sets up an us-versus-them dynamic. In really bad moments, there's no room for nuance. It's a perfect petri dish on social media that puts people on one side or another."
Incident 1309: Springer Nature Book 'Social, Ethical and Legal Aspects of Generative AI: Tools, Techniques and Systems' Reportedly Published With Numerous Purportedly Fabricated or Unverifiable Citations
“Publisher under fire after ‘fake’ citations found in AI ethics guide”
One of the world's largest academic publishers is selling a book on the ethics of artificial intelligence research that appears to be riddled with fake citations, including references to journals that do not exist.
Academic publishing has recently been subject to criticism for accepting fraudulent papers produced using AI, which have made it through a peer-review process designed to guarantee high standards.
The Times found that a book recently published by the German-British publishing giant Springer Nature includes dozens of citations that appear to have been invented --- often a sign of AI-generated material.
The book --- Social, Ethical and Legal Aspects of Generative AI --- is advertised as an authoritative review of the ethical dilemmas posed by the technology and is on sale for £125. At least two chapters include footnotes that cite scientific publications that appear to have been invented.
In one chapter, 8 of the 11 citations could not be verified, suggesting more than 70 per cent may have been fabricated.
There is growing concern within academia about citations and even entire research papers being generated by AI tools that try to mimic genuine scholarly work.
In April, Springer Nature withdrew another technology title --- Mastering Machine Learning: From Basics to Advanced --- after it was found to contain numerous fictitious references.
In the more recent book analysed by The Times, one citation claims to refer to a paper published in "Harvard AI Journal". Harvard Business Review has said that no such journal exists.
Guillaume Cabanac, an associate professor of computer science at the University of Toulouse and an expert in detecting fake academic papers, analysed two chapters using BibCheck, a tool designed to identify fabricated references.
He found that at least 11 of 21 citations in the first chapter could not be matched to known academic papers. The analysis also suggested that 8 of the 11 citations in chapter 4 were untraceable.
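The reporting does not describe BibCheck's internals, but the general approach to tracing citations can be sketched: query a bibliographic database, such as the public Crossref API, for each reference string and flag those with no plausible match. The similarity heuristic below is an illustrative assumption, not BibCheck's actual logic.

```python
# Not BibCheck itself (its internals aren't described in the reporting),
# but a minimal sketch of the general approach: query the public Crossref
# API for each reference string and flag any with no plausible match.
# The word-overlap cutoff is an illustrative assumption.
import requests

CROSSREF = "https://api.crossref.org/works"


def looks_traceable(reference: str) -> bool:
    """Return True if Crossref finds a record resembling the reference."""
    resp = requests.get(
        CROSSREF,
        params={"query.bibliographic": reference, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    top = items[0]
    title = (top.get("title") or [""])[0].lower()
    # Crude check: does the top hit's title overlap the citation text?
    overlap = sum(word in reference.lower() for word in title.split())
    return overlap >= max(3, len(title.split()) // 2)


# Hypothetical reference string for illustration only.
refs = [
    'Smith, J. "Ethics of Generative Models." Harvard AI Journal, 2022.',
]
for ref in refs:
    status = "plausible match" if looks_traceable(ref) else "UNTRACEABLE"
    print(f"{status}: {ref}")
```

An untraceable reference is not proof of fabrication on its own, which is why analyses like Cabanac's report counts of citations that "could not be matched" rather than definitive verdicts.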
"This is research misconduct: falsification and fabrication of references," Cabanac said. He tracks such cases and says he has seen a steady rise in AI "hallucinated" citations across academic literature.
He said: "Researchers build knowledge by relying on previously published research ... When [these studies] are fragile or rotten, we can't build anything robust on top of that."
A separate review carried out by Dr Nathan Camp of New Mexico State University reached similar conclusions. Camp, who has studied the rise of fake AI-generated citations, found numerous erroneous, mismatched or wholly invented references in the AI ethics book.
In some cases, details of different genuine papers appeared to have been combined. Another six chapters appeared to be accurate. Each chapter was written by a different set of authors.
Camp said: "While it is difficult to definitively ascertain whether or not the citations used are AI-generated, they are certainly erroneous at best, likely fabricated, and the simplest way to fabricate citations is with AI."
James Finlay, vice-president for applied sciences books at Springer Nature, said: "We take any concerns about the integrity of our published content seriously. Our specialist research integrity team is investigating this case as a priority."
He added: "Our integrity team works with editors and uses specialist expertise and detection tools to uphold our standards and catch any integrity issues ahead of time. A small number, however, may slip through."
Incident 1310: Canada Revenue Agency (CRA) AI Chatbot 'Charlie' Reportedly Gave Incorrect Tax Filing Guidance at Scale
“CRA billed taxpayers over $18 million for AI chatbot that spouted inaccurate info”
The Canada Revenue Agency blew nearly $20 million of taxpayer money on an "AI" chatbot that repeatedly gave out the wrong information to Canadians trying to file their taxes.
The CRA's AI tool, named Charlie the Chatbot, was accurate about 44 per cent of the time when tested by Auditor General Karen Hogan's team.
The software feature was announced in 2020 by then-National Revenue Minister Diane Lebouthillier and was slated to cost taxpayers more than $18 million.
According to documents tabled in Parliament on Monday, $13.67 million went to pay the salaries of those who worked on it, in addition to costs related to employee benefits and travel. Another $3.21 million was spent on the project's IT consultants.
At the time of its release, the CRA warned the chatbot was still "learning" about the agency, saying "the questions you ask will help it become more knowledgeable and interactive."
"Charlie will make it easier to get the information you need to help you file your income tax and benefit return!" wrote the agency in 2020.
However, Hogan's report said the software, which is currently still available on 13 different CRA webpages, remains far from accurate.
"Charlie's responses tended to be brief, offering limited context and minimal additional information," reads the AG report published in October. "We found that Charlie provided accurate answers in only 2 out of the 6 questions we asked it, while the other public web-based conversational artificial intelligence tools answered 5 out of 6 questions accurately."
According to the CRA, people have engaged in over seven million "conversations" with Charlie and asked over 18 million questions since its inception.
Still, the chatbot was recently recorded as having a 70 per cent "accuracy threshold," with a new unreleased version reaching "approximately 90 per cent" last month.
However, the agency noted that "the exact number of questions the Generative Artificial Intelligence chatbot has answered correctly cannot be precisely determined without conducting a comprehensive review of all interaction transcripts."
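In practice, a figure like "approximately 90 per cent" is usually estimated by grading a random sample of transcripts rather than reviewing all of them. A minimal sketch of such an estimate, with a 95 per cent Wilson confidence interval, follows; the sample numbers are hypothetical, not CRA data.

```python
# The agency's point is that an exact accuracy figure would require
# grading every transcript. In practice, a figure like "approximately
# 90 per cent" is typically estimated from a random sample. A minimal
# sketch of that estimate with a 95% Wilson score interval; the sample
# numbers below are hypothetical, not CRA data.
from math import sqrt


def wilson_interval(correct: int, total: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = correct / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    margin = (z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))) / denom
    return centre - margin, centre + margin


# Hypothetical audit: 450 of 500 sampled answers graded correct.
lo, hi = wilson_interval(450, 500)
print(f"Estimated accuracy: 90.0% (95% CI: {lo:.1%} to {hi:.1%})")
```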
The CRA released the results of its "100-day service improvement plan" on Thursday, which reported that 70 per cent of Charlie the Chatbot users had reached their "goal" in November.
The report also noted that the agency's overall responsiveness to people had nearly doubled during the fall. This was partly due to rehiring or extending the contracts of over 1,200 call centre employees.
Incident 1311: Peppermill Casino Facial Recognition System Reportedly Misidentified Individual, Leading to Wrongful Arrest in Reno
“Peppermill Reno’s Facial Recognition Tech Leads to Wrongful Arrest”
A long-haul truck driver who stopped by Reno's Peppermill Casino in September 2023 saw his evening turn into a nightmare after he was misidentified by the venue's AI facial recognition technology as a local reprobate, according to court filings.
The system flagged Jason Killinger as a man who had been 86'd from the venue months earlier for sleeping on the premises. Killinger's face generated a "100% match" with the banned individual, who is identified in court documents only as "M.E."
Killinger was detained by casino security and then arrested by a rookie officer from the Reno Police Department. The officer, named R. Jager in the filings, refused to believe Killinger and accused him of obtaining a fraudulent ID when he provided evidence of who he was.
The truck driver was detained for 11 hours, according to a wrongful arrest lawsuit filed in the US District Court for the District of Nevada. For four of those hours, he was handcuffed, resulting in bruises and shoulder pain, per the suit.
Killinger was ultimately freed after a fingerprint check confirmed he was not his less reputable doppelganger.
'False Statements'
The plaintiff has already sued the Peppermill Casino in a case that was settled for an undisclosed amount. Now he's going after Jager, whom he accuses of fabricating evidence and engaging in malicious prosecution.
Killinger claims Jager "knowingly inserted false statements" into both his incident report and arrest declaration -- specifically, that his IDs were conflicting and that one or more "appeared to be fraudulent."
This is despite the plaintiff having a valid Nevada driver's license, UPS pay stub, and vehicle registration all bearing his name and matching physical descriptors.
The filing says this was "a deliberate falsification" intended to "manufacture probable cause" after the officer realized there were inconsistencies between the casino's claim and Killinger's documentation.
The lawsuit says Jager told a supervising officer that Killinger "probably has a DMV hook-up," implying access to fake identification, although there was no evidence to support that claim.
Fabrication by Concealment
Killinger argues Jager's actions violated his Fourteenth Amendment right to due process. He also contends that Jager's reports omitted the fingerprint exoneration and falsely described Killinger as uncooperative and using conflicting identities. Those omissions amounted to fabrication by concealment, the suit argues.
The complaint seeks compensatory, special, and punitive damages but does not specify amounts.
Incident 1306: Florida Couple Reportedly Loses $45,000 in Alleged AI-Generated Elon Musk Impersonation Scam
“Elon Musk giveaway scheme steals $45,000 from Florida couple”
LEESBURG, Fla. -- A Leesburg couple is out tens of thousands of dollars after falling for a fake car giveaway purportedly from billionaire Elon Musk.
George Hendricks, 69, says his life "doesn't really exist" anymore after losing nearly $45,000 to a scam that could cost him his home and his wife of 39 years.
"She wants to get a divorce," Hendricks said. When asked if it was because of the scam, he replied, "Yep."
The scheme started when Hendricks commented in an Elon Musk car giveaway group on Facebook, one of many such groups online.
The use of Elon Musk's name added legitimacy, Hendricks said.
Hendricks then received a Facebook message congratulating him on winning $100,000 and a brand-new car.
The crooks asked him to switch to WhatsApp, a free messaging app commonly used by criminals because of its private, encrypted communication.
Hendricks then got a personalized video message, which he believed was from Musk himself, promising delivery of the prize package.
However, he was told he had to pay $7,500 in cash to ship the car. Hendricks took pictures of the cash and sent it.
Soon after, he was presented with investment opportunities promising returns of $120,000 on a $10,000 investment.
When Hendricks expressed skepticism, he received another video message: "I'd never take advantage of you or your funds. Trust me with your whole heart. God knows I have no bad intentions."
The scammers exchanged hundreds of messages with Hendricks, who ended up draining bank accounts and maxing out credit cards, sending nearly $37,000 more.
When no payout or car arrived, Hendricks filed a report with Leesburg police.
Hendricks is not alone. This year, AARP reported on ads using Musk's image to steal victims' savings.
Kieran Human is the security enablement lead with cybersecurity company ThreatLocker.
"A deepfake video is something that's being generated to look like somebody, but it's not," Human said.
The videos are not at all difficult to make and can be made in minutes, according to Human.
"With a 30-second, 60-second clip [of someone speaking] and you can make a video," Human said.
Human said there are signs to watch for: in many deepfake videos the body and neck do not move, and even when the subject appears to take a breath, only the face moves.
But as A.I. improves, so will deepfakes.
"It's just going to be a matter of time until people do start adding that breathing to the videos, and people are just going to have to be more cautious," Human warned.
"I know it's going on all the time, you just never think it's going to be you," Hendricks said.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports; submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings.

AI Incident Roundup – August, September, and October 2025
By Daniel Atherton
2025-11-08
[Image: At Templestowe, Arthur Streeton, 1889] Across August, September, and October 2025, the AI Incident Database logged one...
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.