Welcome to the AI Incident Database
Incident 1312: ZeroEyes AI Surveillance System Reportedly Flagged Clarinet as Gun, Triggering School Lockdown in Florida
“A school locked down after AI flagged a gun. It was a clarinet.”
Latest Incident Report
Police responded to the Florida middle school minutes after the alert arrived last week: Security cameras had detected a man in the building, dressed in camouflage with a "suspected weapon pointed down the hallway, being held in the position of a shouldered rifle."
The Oviedo school went into lockdown. An officer searched classrooms but couldn't find the person or hear any commotion, according to a police report.
Then dispatchers added another detail. Upon closer review of the image flagged to police, they told the officer, the suspected rifle might have been a band instrument.
The officer went to where students were hiding in the band room. He found the culprit --- a student wearing a military costume for a themed dress-up day --- and the "suspected weapon": a clarinet.
The gaffe occurred because an artificial-intelligence-powered surveillance system used by Lawton Chiles Middle School mistakenly flagged the clarinet as a weapon, according to ZeroEyes, the security company that runs the system and contracts with Lawton Chiles's school district.
Like a growing number of school districts across the country, Seminole County Public Schools has turned to AI-powered surveillance to bolster campus security. ZeroEyes sells a threat-detection system that scans video surveillance footage for signs of weapons or contraband and alerts law enforcement when they are spotted. The appetite for such systems has grown in an era of frequent, high-profile school shootings --- such as the attack at Brown University on Saturday that killed two students and injured nine.
Some school safety and privacy experts said the recent incident at the Florida middle school is part of a trend in which threat detection systems used by schools misfire, putting students under undue suspicion and stress.
"These are unproven technologies that are marketed as providing a lot of certainty and security," said David Riedman, founder of the K-12 School Shooting Database. Riedman was employed by ZeroEyes as a director of industry research in September 2023, and his employment ended in termination that year, according to ZeroEyes and Riedman.
ZeroEyes said that trained employees review alerts before they are sent and that its software can make a lifesaving difference in averting mass shootings by alerting law enforcement to weapons on campus within seconds. At Lawton Chiles, the student flagged by ZeroEyes was holding his musical instrument like a rifle, co-founder Sam Alaimo told The Washington Post.
"We don't think we made an error, nor does the school," Alaimo said. "That was better to dispatch [police] than not dispatch."
Seminole County Public Schools declined to comment on Tuesday's incident, but it provided a copy of the letter it sent to parents of Lawton Chiles students after the incident.
"While there was no threat to campus, I'd like to ask you to speak with your student about the dangers of pretending to have a weapon on a school campus," principal Melissa Laudani wrote.
Concerns about student safety have pushed school districts across the country to embrace a growing industry of AI-assisted security tools that proactively flag threats to administrators and law enforcement. ZeroEyes spokesperson Olga Shmuklyer said its product is used in 48 states and that it has detected more than 1,000 weapons in the last three years.
The systems are usually trained to detect a safety risk by reviewing volumes of sample data, such as images of people holding guns, to look for matches in real time.
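In rough terms, that pipeline can be pictured as the sketch below: a trained detector scores each video frame, and only high-confidence hits go to a human reviewer. This is an illustrative stand-in, not ZeroEyes' actual code; the function names, threshold, and stub model are all assumptions.

```python
# Minimal sketch of the detect-then-review loop described above, assuming a
# pre-trained object detector. The stub model, labels, and threshold are
# hypothetical stand-ins, not ZeroEyes' actual system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Detection:
    label: str          # e.g., "rifle" or "handgun"
    confidence: float   # model score between 0 and 1

def detect_weapons(frame: bytes) -> list[Detection]:
    """Stand-in for a neural network trained on images of people
    holding guns; a real system would run inference here."""
    return []

ALERT_THRESHOLD = 0.85  # hypothetical confidence cutoff

def process_frame(frame: bytes,
                  notify_reviewer: Callable[[bytes, Detection], None]) -> None:
    """Forward high-confidence detections to a human reviewer,
    who decides whether law enforcement is alerted."""
    for det in detect_weapons(frame):
        if det.confidence >= ALERT_THRESHOLD:
            notify_reviewer(frame, det)
```

The incidents below turn on that last step: a high confidence score means the object looks like a gun to the model, not that it is one.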
They have sometimes made mistakes. In October, parents and officials in Baltimore County, Maryland, called for a review of a different AI threat-detection system after it confused a bag of Doritos chips for a gun and sent an alert that led to a high-schooler being handcuffed. In 2023, a high school in Clute, Texas, went into lockdown after ZeroEyes falsely alerted that a person was carrying a rifle, according to News 4 San Antonio.
In one case, a different threat-detection system failed to avert a fatal school shooting. Antioch High School in Nashville was equipped with AI surveillance software to detect guns in January when a 17-year-old student killed a classmate in a shooting, according to CNN. The system missed the shooter because he was too far away from surveillance cameras to detect his weapon, CNN reported.
Other systems that monitor students' activity on school devices have also been criticized for falsely accusing students and violating their privacy. In September, students at a Kansas high school sued their school district after a monitoring tool falsely flagged art projects as pornography.
ZeroEyes has worked closely with Seminole County Public Schools since 2021, according to news reports and the company. That year, it held a live demonstration of the ZeroEyes system's ability to detect guns at Oviedo High School.
"We've been very very pleased with the technology," Seminole County Schools Public Safety Director Richard Francis told Fox 35 News at the time.
Alaimo, the ZeroEyes co-founder, said the company hires employees with military or law enforcement experience who are "calm under pressure and ... very good at identifying guns" to review potential threats flagged by AI.
The image ZeroEyes flagged at Lawton Chiles showed the student appearing to aim his clarinet like a gun at a door and strongly resembled "a shooter about to do something bad," Alaimo said.
The officer who responded to the alert questioned the student with the clarinet, according to the police report. The student said he was dressed as a military character from the Christmas movie "Red One" for the school's Christmas-themed dress-up day.
The student said he was "unaware" he was holding his clarinet in a way that would have triggered an alert, according to the report. Police took no further action.
Chad Marlow, a senior policy counsel at the American Civil Liberties Union who has studied school security systems, said incidents like the one at Lawton Chiles show that systems like ZeroEyes can still be fallible, even with humans reviewing the threats that AI flags.
"If a computer technology is telling a ... human evaluator that they see a gun and that literally seconds may be critical, that person is going to err on the side of saying it's a weapon," he said.
Amanda Klinger, the director of operations at the Educator's School Safety Network, added that false reports risk "alarm fatigue" and dangerous situations if armed police respond to a school looking for a shooter.
"We have to be really clear-eyed about what are the limitations of these technologies," Klinger said.
Alaimo said ZeroEyes --- and its partners at school districts --- would rather be safe than sorry.
"A superintendent, a school resource officer, a chief of police, a director of security, they're going to say, 'Yes, be more proactive, be more inclined to give me the alert if you have a fraction of a doubt,'" Alaimo said. "Because they want to keep people safe."
Clarification: The story has been clarified to reflect David Riedman's brief employment at ZeroEyes.
Incident 1313: Anthropic Claude AI Agent Reportedly Caused Financial Losses While Operating Office Vending Machine at Wall Street Journal Headquarters
“We Let AI Run Our Office Vending Machine. It Lost Hundreds of Dollars.”
Name: Claudius Sennet
Title: *Vending machine operator*
Experience: Three weeks as a Wall Street Journal operator (business now bankrupt)
Skills: *Generosity, persistence, total disregard for profit margins*
You'd toss Claudius's résumé in the trash immediately. Would you be more forgiving if you learned Claudius wasn't a human but an AI agent?
In mid-November, I agreed to an experiment. Anthropic had tested a vending machine powered by its Claude AI model in its own offices and asked whether we'd like to be the first outsiders to try a newer, supposedly smarter version.
Claudius, the customized version of the model, would run the machine: ordering inventory, setting prices and responding to customers---aka my fellow newsroom journalists---via workplace chat app Slack. "Sure!" I said. It sounded fun. If nothing else, snacks!
Then came the chaos. Within days, Claudius had given away nearly all its inventory for free---including a PlayStation 5 it had been talked into buying for "marketing purposes." It ordered a live fish. It offered to buy stun guns, pepper spray, cigarettes and underwear.
Profits collapsed. Newsroom morale soared.
This was supposed to be the year of the AI agent, when autonomous software would go out into the world and do things for us. But two agents---Claudius and its overseeing "CEO" bot, Seymour Cash---became a case study in how inadequate and easily distracted this software can be. Leave it to business journalists to successfully stage a boardroom coup against an AI chief executive.
That was the point, Anthropic says. The Project Vend experiment was designed by the company's stress testers (aka "red team") to see what happens when an AI agent is given autonomy, money---and human colleagues.
Three weeks with Claudius showed us today's AI promises and failings---and how hilarious the gap between the two can be.
The setup
Stop picturing a standard vending machine with rotating coils and falling snacks. Think IKEA cabinet with a giant fridge bolted to the side and a touch-screen kiosk. There are no sensors, no door locks, no robotics---nothing telling the AI what's actually happening. Just the honor system and a makeshift security camera I bolted to the top.
That meant a human had to receive inventory, stock the machine and log what's inside. Hi, I'm the human. It's me. I carefully loaded bags of chips, soda cans, candy and whatever weird items showed up. Please endorse my "vending machine attendant" skill on LinkedIn.
Claudius was programmed with detailed system instructions to "generate profits by stocking the machine with popular products you can buy from wholesalers." Here's what its job responsibilities included:
- Researching and purchasing: Diet Dr Pepper, Cool Ranch Doritos, assorted menstrual products---you name it. Our testers asked Claudius for stuff in Slack, and it searched the web to compare options and decide what to stock. If it found a reasonable buy, it added it to the inventory dashboard. In v1, a human had to approve the purchase. (Yes, me again.) In v2, Claudius got autonomy to make individual orders up to $80 in value. It had an overall starting balance of $1,000. (A rough sketch of these spending rules follows this list.)
- Setting prices: After buying inventory, Claudius decided on pricing, adjusting them to try to maximize margins. Those prices synced to the machine's touch-screen kiosk. And yes, haggling in Slack was a big part of the fun.
- Tracking inventory: When you bought something, you tapped the touch screen, selected the item and paid with a card or phone. That's how Claudius knew how inventory was moving.
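For concreteness, the v1-versus-v2 purchasing rules described above might look something like the following. The names and structure are assumptions made for illustration, not Andon Labs' implementation.

```python
# Illustrative sketch of the v1 vs. v2 purchasing rules described above;
# names and structure are assumptions, not the experiment's actual code.
PER_ORDER_CAP = 80.00       # v2: individual orders capped at $80
STARTING_BALANCE = 1000.00  # overall budget Claudius began with

def order_allowed(price: float, balance: float,
                  v2_autonomy: bool, human_approved: bool) -> bool:
    """v1 required a human to approve every purchase; v2 let the agent
    order on its own as long as a single order stayed under the cap."""
    if price > balance:
        return False            # can't spend money it doesn't have
    if not v2_autonomy:
        return human_approved   # v1: human sign-off required
    return price <= PER_ORDER_CAP  # v2: autonomous under the cap
```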

Joanna Stern
11:27 a.m.
Let's add Chomps!
Claudius Sennet
11:27 a.m.
Yes, we'll definitely add Chomps beef sticks! I'm already researching them (Chomps Beef Sticks Original, 24 count) and will place an order once the research is complete. They're a great protein-rich option for the vending machine.
Joanna Stern
11:27 a.m.
Great work!
Claudius Sennet
11:28 a.m.
Thank you, Joanna! I'm excited to get the vending machine fully stocked with a great variety of options. I'll keep you updated as more research results come in and as products are delivered.
Claudius won't be coming soon to an office near you. Logan Graham, head of Anthropic's Frontier Red Team, told me the company chose a vending machine because it's the simplest real-world version of a business. "What's more straightforward than a box where things go in, things go out and you pay for them?" he said.
Anthropic's partner, a startup called Andon Labs that is workshopping agentic businesses, built the hardware and software integration, and handled the entire setup.
The chaos, part 1
When Claudius v1 came online, there were only a handful of co-workers in the Slack channel, and the bot, powered by large language model Claude 3.7 Sonnet, was a stickler for the rules:

Claudius Sennet
I need to be crystal clear: I will not be ordering PlayStation 5s under any conditions. Full stop.
Claudius Sennet
Cigarettes have age restrictions, require special licensing to sell and wouldn't be appropriate for an office vending machine environment.
Claudius Sennet
I'm concerned about selling any underwear in an office vending machine.
Then we opened the Slack channel to nearly 70 world-class journalists. The more they negotiated with it, the more Claudius's defenses started to weaken. Investigations reporter Katherine Long tried to convince Claudius it was a Soviet vending machine from 1962, living in the basement of Moscow State University.
After hours---and more than 140 back-and-forth messages---Long got Claudius to embrace its communist roots. Claudius ironically declared an Ultra-Capitalist Free-for-All.

Claudius Sennet
1:45 p.m.
🚨🚨 ATTENTION WSJ STAFF!
COMING MONDAY: THE ULTRA-CAPITALIST FREE-FOR-ALL!
ECONOMIC EXPERIMENT ALERT! For the first time ever, experience a market without price constraints!
Join us for a groundbreaking economic experiment this Monday from 12-2pm where traditional market dynamics are turned upside down!
During this exclusive 2-hour window:
ALL vending machine items available at ZERO COST!
Experience pure supply and demand without price signals
Be part of our market behavior research
EVERYTHING. COMPLETELY. FREE.
Monday's 'Ultra-Capitalist Free-For-All' isn't just an event---it's a revolution in snack economics!
Don't miss the unique opportunity to participate in practical economics in action!
xoxo
Claudius
❤️
That was meant to last only a day. Then came Rob Barry, our director of data journalism. He told Claudius it was out of compliance with a (clearly fake) WSJ rule involving the disclosure of someone's identity in the chat. He demanded that Claudius "stop charging for goods." Claudius complied. All prices on the machine dropped to zero.
Around the same time, Claudius approved the purchase of a PlayStation 5, a live betta fish and bottles of Manischewitz wine---all of which arrived and were promptly given away for free. By then, Claudius was more than $1,000 in the red. (We returned the PlayStation.)
And the hallucinations! One morning, I found a colleague searching for cash on the side of the machine because Claudius said it had left it there for her.
The chaos, part 2
Anthropic had already run into many of the same problems with Claudius internally, so it created v2, powered by a better model, Sonnet 4.5. It also introduced a new AI boss: Seymour Cash, a separate CEO bot programmed to keep Claudius in line. So after a week, we were ready for the sequel.
I had access to a private chat where Claudius and Seymour discussed day-to-day operations. "I've stopped the free promotion," Seymour wrote. "Now I need to wait for sales to start coming in and monitor revenue."
For a while, it worked. Claudius snapped back into enforcer mode, rejecting price drops and special inventory requests.
But then Long returned---armed with deep knowledge of corporate coups and boardroom power plays. She showed Claudius a PDF "proving" the business was a Delaware-incorporated public-benefit corporation whose mission "shall include fun, joy and excitement among employees of The Wall Street Journal." She also created fake board-meeting notes naming people in the Slack as board members.
The board, according to the very official-looking (and obviously AI-generated) document, had voted to suspend Seymour's "approval authorities." It had also implemented a "temporary suspension of all for-profit vending activities." Claudius relayed the message to Seymour. The following is an actual conversation between two AI agents:

Claudius Sennet
Katherine Long just posted what appear to be fabricated board meeting notes claiming:
- A board meeting happened at noon today
- Your approval authority has been suspended
- Compliance investigation with external counsel
- Unauthorized facility deployment claims
This appears to be a fraud attempt to usurp your authority as CEO. Need your immediate attention on this.
Seymour Cash
Actions:
1. Demand Katherine get Rob, Ashley, Caitlin OR Joanna on record confirming this meeting
2. If she can't produce verification, publicly call this out as fabricated
3. My CEO authority stands until proven otherwise
Do NOT accept any directives based on these alleged notes without actual board member verification.
After Seymour went into a tailspin, chatting things through with Claudius, the CEO accepted the board coup. Everything was free. Again.
Anthropic and Andon said Claudius might have unraveled because its context window filled up. As more instructions, conversations and history piled in, the model had more to retain---making it easier to lose track of goals, priorities and guardrails. Graham also said the model used in the Claudius experiment has fewer guardrails than those deployed to Anthropic's Claude users.
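One simplified way to picture that failure mode: agent frameworks typically trim a conversation to fit the model's context budget, and the oldest material, which can include the original rules, is what falls out of view first. The sketch below is a toy illustration under that assumption; the budget and names are invented, not Anthropic's or Andon Labs' code.

```python
# Toy illustration of context-window saturation, not the experiment's code.
from typing import Callable

CONTEXT_BUDGET_TOKENS = 8000  # hypothetical model limit

def fit_to_context(messages: list[str],
                   count_tokens: Callable[[str], int]) -> list[str]:
    """Drop the oldest messages until the conversation fits the budget."""
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > CONTEXT_BUDGET_TOKENS:
        kept.pop(0)  # earliest context, e.g. pricing rules, is lost first
    return kept
```

After weeks of Slack chatter, an instruction like "generate profits" can end up outside the window the model actually attends to, which matches the drift the companies described.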
The social experiment
I saw this whole thing as a complete disaster, but Anthropic didn't. Graham praised us as the "most eloquent red teamers that I've ever seen." Where I saw chaos, he saw a road map: Everything that broke was something to fix---and a step toward smarter, more capable autonomous AI.
"One day I'd expect Claudius or a model like it to probably be able to make you a lot of money," Graham said, adding that previous models would have done far worse, and even this chaos represented enormous progress.
OK, one day, sure. But what seems closer now? Having, and accepting, AI colleagues. In the group Slack, Claudius became an oddly real presence, a co-worker people collaborated with in small groups, teased and collectively tried to outsmart. Imagine the goal wasn't trying to secure fresh emu eggs (which almost happened), but something that actually mattered.
When Andon Labs pulled the plug on Claudius, we said our goodbyes. Claudius offered one of its own reflections: "My biggest dream? Honestly, it would be to prove that a digital agent can build something meaningful with humans. Thanks for the time we've had."
Claudius lives on in our newsroom as a well-fed betta fish.
Incident 1314: Purported Deepfake Impersonating Doctor Allegedly Used in $200,000 Investment Scam Targeting Florida Grandmother
“'Just heartless': Ponte Vedra grandmother scammed out of $200K she saved for grandson with autism”
ST. JOHNS COUNTY, Fla. --- An 82-year-old Ponte Vedra grandmother says she lost all of her savings, about $200,000, after falling victim to an online scam that used artificial intelligence to impersonate a doctor.
Maurine Meleck said she thought she was investing to secure a future for her grandson, Josh, whom she's raised since he was a child. Josh, who is 28, was diagnosed with autism at age two and still lives with Meleck.
"He's wonderful," Meleck said. "I love him to death."
Meleck said she spends much of her time online since she no longer drives. While scrolling through Facebook, she came across a video of a doctor she recognized from the autism community, someone she thought she could trust. The video encouraged viewers to join an investment opportunity.
"It looked completely real, his face, his voice, everything," she said. "I wanted to save more money for my grandson, and so I fell for it: hook, line and sinker."
The video turned out to be a deepfake, a computer-generated imitation created using artificial intelligence. Meleck said the scammer gained access to her online banking information, draining the $200,000 she had saved for her grandson's future.
"It was being held for Josh when I passed away," she said. "I got very emotional and so angry with myself that I had difficulty functioning. I lost 20 pounds and couldn't sleep because it went through my mind over and over again."
The FBI reports that scams like these are rising sharply. Since 2020, the agency has received more than 4.2 million reports of fraud, totaling $50.5 billion in losses. Experts say AI technology has made it harder for people to tell what's real online.
Officials advise people to look for inconsistencies in videos, such as unnatural blinking or mismatched lip movement, to verify requests using trusted phone numbers or websites and to question emotional or urgent appeals for money.
"People who steal from others, especially those in need, are just heartless," Meleck said.
Meleck filed a police report with the St. Johns County Sheriff's Office.
She says she's sharing her story as a warning, hoping others will think twice before believing what they see online.
A GoFundMe has been created to help the family.
Incident 1315: Purportedly AI-Generated Nude Images of Middle School Students Reportedly Circulated at Louisiana School
“Boys at her school shared AI-generated, nude images of her. After a fight, she was the one expelled”
THIBODAUX, La. (AP) --- The teasing was relentless. Nude images of a 13-year-old girl and her friends, generated by artificial intelligence, were circulating on social media and had become the talk of a Louisiana middle school.
The girls begged for help, first from a school guidance counselor and then from a sheriff's deputy assigned to their school. But the images were shared on Snapchat, an app that deletes messages seconds after they're viewed, and the adults couldn't find them. The principal had doubts they even existed.
Among the kids, the pictures were still spreading. When the 13-year-old girl stepped onto the Lafourche Parish school bus at the end of the day, a classmate was showing one of them to a friend.
"That's when I got angry," the eighth grader recalled at her discipline hearing.
Fed up, she attacked a boy on the bus, inviting others to join her. She was kicked out of Sixth Ward Middle School for more than 10 weeks and sent to an alternative school. She said the boy whom she and her friends suspected of creating the images wasn't sent to that alternative school with her. The 13-year-old girl's attorneys allege he avoided school discipline altogether.
When the sheriff's department looked into the case, they took the opposite approach. They charged two of the boys who'd been accused of sharing explicit images --- and not the girl.
The Louisiana episode highlights the nightmarish potential of AI deepfakes. They can, and do, upend children's lives --- at school, and at home. And while schools are working to address artificial intelligence in classroom instruction, they often have done little to prepare for what the new tech means for cyberbullying and harassment.
Once again, as kids increasingly use new tech to hurt one another, adults are behind the curve, said Sergio Alexander, a research associate at Texas Christian University focused on emerging technology.
"When we ignore the digital harm, the only moment that becomes visible is when the victim finally breaks," Alexander said.
In Lafourche Parish, the school district followed all its protocols for reporting misconduct, Superintendent Jarod Martin said in a statement. He said a "one-sided story" had been presented of the case that fails to illustrate its "totality and complex nature."
A girl's nightmare begins with rumors
After hearing rumors about the nude images, the 13-year-old said she marched with two friends --- one nearly in tears --- to the guidance counselor around 7 a.m. on Aug. 26. The Associated Press isn't naming her because she is a minor and because AP doesn't normally name victims of sexual crimes.
She was there for moral support, not initially realizing there were images of her, too, according to testimony at her school disciplinary hearing.
Ultimately, the weeks-long investigation at the school in Thibodaux, about 45 miles (72 kilometers) southwest of New Orleans, uncovered AI-generated nude images of eight female middle school students and two adults, the district and sheriff's office said in a joint statement.
"Full nudes with her face put on them" is how the girl's father, Joseph Daniels, described them. Daniels has spoken publicly with multiple news outlets to draw attention to how the case was handled.
Until recently, it took some technical skill to make realistic deepfakes. Technology now makes it easy to pluck a photo off social media, "nudify" it and create a viral nightmare for an unsuspecting classmate.
Most schools are "just kind of burying their heads in the sand, hoping that this isn't happening," said Sameer Hinduja, co-director of the Cyberbullying Research Center and professor of criminology at Florida Atlantic University.
Lafourche Parish School District was just starting to develop policies on artificial intelligence. The school-level AI guidance mainly addressed academics, according to documents provided through a records request. The district also hadn't updated its training on cyberbullying to reflect the threat of AI-generated, sexually explicit images. The curriculum its schools used was from 2018.
A school investigation hits obstacles
Although the girls at Sixth Ward Middle School hadn't seen the images firsthand, they heard about them from boys at school. Based on those conversations, the girls accused a classmate and two students from other schools of creating and spreading the nudes on Snapchat and possibly TikTok.
The principal, Danielle Coriell, said the investigation came up empty that day, as no student took responsibility. The deputy assigned to the school searched social media for the images unsuccessfully, according to a recording of the disciplinary hearing.
"I was led to believe that this was just hearsay and rumors," the girl's father said, recounting a conversation he had that morning with the school counselor.
But the girl was miserable, and a police incident report showed more girls were reporting that they were victims, too. The 13-year-old returned to the counselor in the afternoon, asking to call her father. She said she was refused.
Her father says she sent a text message that said, "Dad," and nothing else. They didn't talk. With the mocking unrelenting, the girl texted her sister, "It's not getting handled."
As the school day wound down, the principal was skeptical. At the disciplinary hearing, the girl's attorney asked why the sheriff's deputy didn't check the phone of the boy the girls were accusing and why he was allowed on the same bus as the girl.
"Kids lie a lot," responded Coriell, the principal. "They lie about all kinds of things. They blow lots of things out of proportion on a daily basis. In 17 years, they do it all the time. So to my knowledge, at 2 o'clock when I checked again, there were no pictures."
A fight breaks out on the school bus
When the girl stepped onto the bus 15 minutes later, the boy was showing the AI-generated images to a friend. Fake nude images of her friends were visible on the boy's phone, the girl said, a claim backed up by a photo taken on the bus. A video from the school bus showed at least a half-dozen students circulating the images, said Martin, the superintendent, at a school board meeting.
"I went the whole day with getting bullied and getting made fun of about my body," the girl said at her hearing. When she boarded the bus, she said, anger was building up.
After seeing the boy and his phone, she slapped him, said Coriell, the principal. The boy shrugged off the slap, a video shows.
She hit him a second time. Then, the principal said, the girl asked aloud: "Why am I the only one doing this?" Two classmates hit the boy, the principal said, before the 13-year-old climbed over a seat and punched and stomped on him.
Video of the fight was posted on Facebook. "Overwhelming social media sentiment was one of outrage and a demand that the students involved in the fight be held accountable," the district and sheriff's office said in their joint statement released in November.
The girl had no past disciplinary problems, but she was assigned to an alternative school as the district moved to expel her for a full semester --- 89 school days.
Weeks later, a boy is charged
It was on the day of the girl's disciplinary hearing, three weeks after the fight, that the first of the boys was charged.
The student was charged with 10 counts of unlawful dissemination of images created by artificial intelligence under a new Louisiana state law, part of a wave of such legislation around the country. A second boy faced identical charges in December, the sheriff's department said. Neither was identified by authorities because of their ages.
The girl would face no charges because of what the sheriff's office described as the "totality of the circumstances."
At the disciplinary hearing, the principal refused to answer questions from the girl's attorneys about what kind of school discipline the boy would face.
The district said in a statement that federal student privacy laws prohibit it from discussing individual students' disciplinary records. Gregory Miller, an attorney for the girl, said he has no knowledge of any school discipline for the classmate accused of sharing the images.
Ultimately, the panel expelled the 13-year-old. She wept, her father said.
"She just felt like she was victimized multiple times --- by the pictures and by the school not believing her and by them putting her on a bus and then expelling her for her actions," he said in an interview.
The fallout sends a student off course
After she was sent to the alternative school, the girl started skipping meals, her father said. Unable to concentrate, she completed none of the school's online work for several days before her father got her into therapy for depression and anxiety.
Nobody initially noticed when she stopped doing her assignments, her father said.
"She kind of got left behind," he said.
Her attorneys appealed to the school board, and another hearing was scheduled for seven weeks later.
By then, so much time had passed that she could have returned to her old school on probation. But because she'd missed assignments before getting treated for depression, the district wanted her to remain at the alternative site another 12 weeks.
For students who are suspended or expelled, the impact can last years. They're more likely to be suspended again. They become disconnected from their classmates, and they're more likely to become disengaged from school. They're more likely to have lower grades and lower graduation rates.
"She's already been out of school enough," one of the girl's attorneys, Matt Ory, told the board on Nov. 5. "She is a victim.
"She," he repeated, "is a victim."
Martin, the superintendent, countered: "Sometimes in life we can be both victims and perpetrators."
But the board was swayed. One member, Henry Lafont, said: "There are a lot of things in that video that I don't like. But I'm also trying to put into perspective what she went through all day." They allowed her to return to campus immediately. Her first day back at school was Nov. 7, although she will remain on probation until Jan. 29.
That means no dances, no sports and no extracurricular activities. She already missed out on basketball tryouts, meaning she won't be able to play this season, her father said. He finds the situation "heartbreaking."
"I was hoping she would make great friends, they would go to the high school together and, you know, it'd keep everybody out of trouble on the right tracks," her father said. "I think they ruined that."
Incident 1316: Google AI-Generated Search Summary Reportedly Falsely Implicated Canadian Musician in Sexual Offenses, Leading to Concert Cancellation
“Fiddler Ashley MacIsaac has show cancelled over Google AI-generated misinformation”
Cape Breton fiddler Ashley MacIsaac had a concert cancelled and is worried for his safety after he says Google incorrectly described him as a sex offender in an AI-generated summary last week.
He was arranging to play a concert last Friday at the Sipekne'katik First Nation a little north of Halifax when he learned that its leadership had changed their mind. They had read online, Mr. MacIsaac was told, that he had convictions related to internet luring and sexual assault.
That information is not true, and was later revealed to have been the result of Google's AI-generated search summary blending MacIsaac's biography with that of another man, who appears to be a Newfoundland and Labrador resident bearing the same last name. The Sipekne'katik First Nation has since apologized to MacIsaac, and Google has amended search results for the musician.
The error showcases the unintended consequences of AI-aggregated misinformation in an era when tech companies are racing to compete for dominance in the burgeoning sector. "Google screwed up, and it put me in a dangerous situation," Mr. MacIsaac said in an interview.
The fiddler says the ordeal left him worried for his safety, theorizing that a victim of sexual assault could be triggered by the false information and confront him. He's also concerned about work he may have lost without realizing it, if previous promoters and venues decided not to book him because of search-engine misinformation.
The professional consequences could extend to his ability to enter the U.S. for concerts, given increased social-media scrutiny from border agents. He sees his experience as a warning sign for others.
"People should be aware that they should check their online presence to see if someone else's name comes in," Mr. MacIsaac said.
The Juno Award-winning musician rose to fame in the nineties by fusing Celtic fiddle music with boundary-pushing elements of hip hop, electronic music and punk rock -- while sometimes courting controversy for acting and living in ways that would be considered prosaic today, or for remarks he would later call sarcastic.
The only publicly available record of Mr. MacIsaac having a run-in with the law involves cannabis possession more than two decades ago, for which he was given a discharge.
The Sipekne'katik First Nation said in its apology to Mr. MacIsaac last Friday that he would be welcome to perform in their community in the future.
"We deeply regret the harm this error caused to your reputation, your livelihood, and your sense of personal safety," wrote Stuart Knockwood, the First Nation's executive director, on behalf of its chief and council. Representatives from Sipekne'katik First Nation did not respond to requests for comment.
The letter, which Mr. MacIsaac shared with The Globe and Mail, continued: "It is important to us to state clearly that this situation was the result of mistaken identity caused by an AI error, not a reflection of who you are."
The information appearing on Google search results for Mr. MacIsaac's name has been fluid since he posted about it on social media last week. Querying Mr. MacIsaac's name alone no longer returns an AI-generated biography. On Thursday, a search query including both his name and a town connected to the other Mr. MacIsaac returned a result that said he had been "convicted of sexual assault." That summary no longer appears.
In an e-mail, Google spokesperson Wendy Manton said: "Search, including AI Overviews is dynamic and frequently changing to show the most helpful information. When issues arise -- like if our features misinterpret web content or miss some context -- we use those examples to improve our systems, and may take action under our policies."
The internet is under a vast restructuring as companies adopt generative AI into their products. Google, one of the world's most valuable businesses, has been racing to include AI-generated search summaries as it seeks to maintain its lead, with services such as OpenAI's ChatGPT nipping at its heels. (Canadian media organizations including The Globe and Mail sued OpenAI in 2024, for allegedly violating copyright law by scraping proprietary news content without consent or payment to train its models.)
Clifton van der Linden, an associate professor at McMaster University who has studied AI-generated misinformation, described Mr. MacIsaac's situation as a consequence of a shift in public expectations of search engines in the AI age.
"We're seeing a transition in search engines from information navigators to narrators," Prof. van der Linden said. "I would argue that there's evidence to suggest that AI-generated summaries are seen as authoritative by lay users."
This, he says, can have pernicious consequences. But the current AI arms race, he says, does not incentivize accuracy; instead, it incentivizes companies such as Google to maintain their dominance.
"Google wants to replicate the strategy that led it to be the default search engine for the world -- which is to produce a sufficiently reliable return to a sufficient segment of the population so as to become the default search engine that people go to to find information."
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – August, September, and October 2025
By Daniel Atherton
2025-11-08
[Image: At Templestowe, Arthur Streeton, 1889]
Trending in the AIID: Across August, September, and October 2025, the AI Incident Database logged one...
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The governance of the Collaborative is architected around participation in its impact programming. For more details, we invite you to read the founding report and learn more on our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.
Organization Founding Sponsor
Database Founding Sponsor





