Welcome to the AI Incident Database
Latest Incident Report
Incident 1286: Purportedly AI-Assisted Citation Errors Allegedly Found in Newfoundland and Labrador's 2025 Health Workforce Report by Deloitte
“Deloitte allegedly cited AI-generated research in a million-dollar report for a Canadian provincial government”
A government-commissioned Deloitte health care report that cost a Canadian province nearly $1.6 million contains potentially AI-generated errors, making Canada the second country this year where the consulting firm's fact-checking has been called into question.
The errors appear in a 526-page report released by the provincial government in May; they were identified in an investigation published Saturday by the *Independent*, a progressive Canadian news outlet covering Newfoundland and Labrador, the country's easternmost province.
The report advised the then Liberal-led government's Department of Health and Community Services on topics including virtual care, retention incentives, and the impacts of the COVID-19 pandemic on health care workers, at a time when the province's health care sector faces nurse and doctor staffing shortages.
The Deloitte report contained false citations drawn from made-up academic papers to support conclusions in cost-effectiveness analyses, and cited real researchers on papers they had not worked on, the *Independent* found. It also included fictional papers coauthored by researchers who said they had never worked together.
"Deloitte Canada firmly stands behind the recommendations put forward in our report," a Deloitte Canada spokesperson told Fortune in a statement. "We are revising the report to make a small number of citation corrections, which do not impact the report findings. AI was not used to write the report; it was selectively used to support a small number of research citations."
The lengthy report also cited an academic paper attributed to the Canadian Journal of Respiratory Therapy that cannot be found in the journal's database.
"It sounds like if you're coming up with things like this, they may be pretty heavily using AI to generate work," Gail Tomblin Murphy, an adjunct professor in the School of Nursing at Dalhousie University in Nova Scotia, told the *Independent*. Tomblin Murphy was cited by Deloitte in an academic paper that "does not exist." She added that she had worked with only three of the six other authors named in the false citation.
"And I definitely think that there's many challenges with that. We have to be very careful to make sure that the evidence that's informing reports [is] the best evidence, that it's validated evidence. And that, at the end of the day, these reports---not just because they cost governments and they cost the public---[are] accurate and evidence-informed and helpful to move things forward."
As of Monday, the report remains on the provincial government's website.
The provincial government spent just under $1.6 million on the report, paid in eight installments, according to an access-to-information request published in a blog post last Wednesday.
Tony Wakeham, leader of the province's Progressive Conservative Party and its new premier, was sworn into office in late October. Newfoundland and Labrador's Office of the Premier and the province's Department of Health and Community Services did not immediately respond to Fortune's request for comment on the May report and have not publicly addressed the issue.
The revelation comes on the heels of news last month that Deloitte leveraged AI in a $290,000 report, published in July, meant to help the Australian government enforce welfare compliance. A researcher flagged hallucinations in the 237-page study, including references to nonexistent academic research papers and a fabricated quote from a federal court judgment.
In the revised study, which was quietly uploaded to the Australian government's website, the consulting firm acknowledged it had used the generative AI service Azure OpenAI to help create the report.
"The updates made in no way impact or affect the substantive content, findings, and recommendations in the report," Deloitte wrote in a section in the updated study.
Deloitte's member firm in Australia was required to pay the government a partial refund for that report. No information has yet been made public regarding a potential refund for the Newfoundland and Labrador report.
Incident 1289: Malta's Prime Minister Robert Abela Reportedly Deepfaked by a Ukrainian National in Cryptocurrency Fraud Targeting Local Residents
“Ukrainian woman accused of using deepfake PM video in cryptocurrency scam”
A 24-year-old Ukrainian national has been remanded in custody after being accused of defrauding several people in Malta through a cryptocurrency scam promoted with an AI-generated video of Prime Minister Robert Abela.
The woman, identified as Kateryna Izotkina, who holds a UK residence permit but no fixed address in Malta, was arraigned in court on Wednesday. She pleaded not guilty to charges of money laundering, fraud, illicit gains, and involvement in organised crime.
During the sitting, Inspector Andy Rotin explained that the case began when a deepfake video featuring Abela circulated on social media, falsely endorsing cryptocurrency investments. Several individuals fell victim to the ruse.
One complainant claimed she had been tricked out of €52,000, paid through multiple transfers and in cash. Investigators discovered that the accused continued to press the victim for further sums even after the initial losses.
To apprehend the suspect, police organised a controlled delivery. The victim arranged to hand over €10,000 in Żurrieq square, using counterfeit €50 notes supplied by police. Plainclothes officers arrested Izotkina immediately after the handover.
The court issued a freezing order on €63,000 of the accused's assets.
Defence lawyer Nicholai Bugeja requested bail, emphasising that Izotkina had cooperated with the police and that investigators already held the relevant evidence. However, prosecutor Marica Ciantar opposed the request, stressing that the inquiry was still in its early stages, the complainant had not yet testified, and the accused posed a flight risk due to her lack of ties to Malta. Ciantar also highlighted concerns about possible tampering with evidence and links to a criminal network.
Magistrate Elaine Rizzo ruled against bail, ordering the woman's continued detention.
The case is being prosecuted by AG lawyer Marica Ciantar, assisted by Inspectors Andy Rotin and Duncan Schembri. Lawyers Nicholai Bugeja and Alexander Boichuk appeared for the defence.
Incident 1287: Purported Deepfake-Based Facebook Impersonation Reportedly Targets Daughter of Scot in Coma
“Daughter of Scot in coma in Turkey left sickened by deep fake scammers asking for cash”
The daughter of a Scot in a coma in Turkey has been left sickened by deep-fake scammers asking for cash.
Teigan McMahon, from Glasgow, made a desperate appeal for help last week when her dad Peter suffered a heart attack on holiday.
Her family are facing mounting medical bills while the 52-year-old is being "kept alive by machines".
Now, Teigan has had to put a hold on vital donations after a vile fraudster set up a fake Facebook account in her name. The decoy profile, which is still active despite attempts to take it down, uses stolen photos of Teigan and her dad as its profile picture and in its posts.
The fake account also links to Teigan's fundraiser and features a screenshot of donations alongside a post, written as if by the 20-year-old, that says: "Thank you all for your support. I'm glad donations keep coming."
The fake Facebook account says its location is in Dallas, Texas.
A total of £9,505 had been raised for Peter before donations were paused on Wednesday, November 26, short of the family's £12,000 fundraising target.
An alarmed Teigan told the Record: "I put a pause on my GoFundMe page as soon as I found out. I was told the profile was taken down but I can still see it."
Peter went into cardiac arrest in the early hours of Wednesday, November 19. His heart stopped beating for 70 minutes. The dad-of-five had flown out to the Antalya resort on November 14 for a week-long winter break.
Teigan flew straight out to be by her dad's hospital bedside. The devastated daughter told us: "I got a call on Wednesday to say my dad was in ICU after he had a heart attack.
"He had a mini heart attack a few months ago.
"This was totally different. They had to perform CPR on him for over an hour. His heart stopped beating for 70 minutes.
"It's caused damage to his kidneys and he is now having dialysis.
"He's still in a coma but he's able to breathe on his own. We are praying he wakes up. We are hoping for a miracle."
Peter, who owns a car bodywork repair garage, had flown out without insurance, and hospital costs had already stacked up to £7,000 within three days.
She added: "I can't stress enough how important every penny counts. This bill is increasing day by day and is far beyond anything me or my family can manage alone.
"I am not the type of person to ask anyone for a penny, but I will quite literally beg to save my dad's life.
"I will do absolutely anything to give my dad the best possible chance."
The Daily Record has highlighted multiple instances where fraudsters create fake fundraising pages exploiting real-life tragedies to scam grieving communities.
These include a fake fundraiser set up through a hacked Facebook account after the death of champion horse rider Natasha Galpin, and the case of Nicole Elkabbas, who was jailed for nearly three years after scamming the public out of over £45,000 by faking ovarian cancer.
A GoFundMe spokesperson said: "We understand the fundraising page set up by Tiegan was shared by a fake social media account but we are satisfied the page is genuine."
GoFundMe has previously told us that it has a Trust & Safety team to detect fraud and typically refunds donations in cases where misuse of funds is proven or terms of service are violated. Police Scotland has also warned that "scams of this nature can be difficult to spot and anyone can be fooled".
Incident 1285: Purportedly AI-Generated Jason Momoa Deepfake Used in Romance Scam Reportedly Defrauding British Widow of $600,000
“British widow loses over $600k to AI-generated scam”
A British woman has lost over $600,000 to a scam that used artificial intelligence-generated videos, after fraudsters led her to believe she was in a romantic relationship with actor Jason Momoa. According to reports, the scammers sent her several videos of the actor, made with artificial intelligence, in which he appeared to make various claims.
According to reports, the scammers used those videos to convince the British widow that she could live happily with the Hollywood star, claiming the money she was sending would be used to build their dream home in Hawaii.
The relationship began after the supposed Hollywood star replied to her comments on one of his fan pages. After that first contact, he started talking to her frequently, and things moved fast between them.
British widow loses funds to scammers
According to the police, the scammers, posing as Momoa, began soliciting the grandmother for cash, claiming his fortune was tied up in several film projects he was working on. The British widow eventually put her house in Cambridgeshire up for sale, transferring more than half a million from the sale to her supposed boyfriend. After she sent the money, the messages stopped, and her heartthrob vanished.
"This might sound far-fetched, but it's a true story, and it left a vulnerable woman without a home," Cambridgeshire Police said.
Police said the scammers have run the same scheme on other victims across the United Kingdom and the United States. They said another British woman was scammed out of up to £80,000 by a fraudster using the same romance-scam technique involving the Hollywood star.
That woman said she had spoken with the actor's supposed daughter, who she said was turning 15 this year. "I was also told he was fighting his ex-wife for the house, and he said we needed a marriage certificate to keep the house. So I was gullible and paid for it," she added.
Dave York, a fraud prevention officer, noted that scammers target people they believe are at their lowest, especially widows, sensing their desperation to fill the gaps in their lives and seizing on it as an opportunity.
Celebrities call out rise in AI-generated deepfakes
Aside from Jason Momoa, other popular figures in the United States have had their pictures and videos exploited by criminals. A typical example is Steve Harvey, the popular host of Family Feud: last year, he was among several celebrities whose voices were mimicked by criminals to promote a scam that promised people government-provided funds.
"I've been telling you guys for months to claim this free $6,400," a voice that sounds like Harvey's says in one video. However, Harvey has urged regulators to look into this issue and make sure the perpetrators are brought to book. "My concern now is the people that it affects. I don't want fans of mine or people who aren't fans to be hurt by something," Harvey said.
Since the beginning of this year, there has been a sharp rise in scammers using artificial intelligence to carry out fraud. According to a previous report by Cryptopolitan, Nigeria's Securities and Exchange Commission issued a statement warning the general public about the use of AI to create deepfakes of popular personalities; scammers use the deepfake videos to solicit funds, while others use them to advertise fake investment opportunities.
Incident 1288: Purported Deepfake Video and Fake News Articles Allegedly Used to Impersonate Guernsey's Chief Minister in Investment Scam
“AI scam targets islanders with fake minister posts”
AI-generated content depicting Guernsey's top politician has been used by fraudsters in a scam that tries to persuade islanders to make investments, police say.
The scam uses deepfake video and images of Guernsey's chief minister, Deputy Lindsay de Sausmarez, appearing to recommend an investment scheme, as well as fake local newspaper articles.
De Sausmarez said she hoped "most people would quickly realise it's fake", but advised islanders to warn vulnerable people the posts were fraudulent.
A Guernsey Police spokesperson said: "Please be aware that this story, along with any others that may surface featuring Deputies requesting payment or investment, are not real and are a scam."
Police added people should be wary of "articles that promote too-good-to-be-true investments or prizes".
Deepfake scams have become more common in recent years as AI technology improves, with faked images of high-profile figures such as financial journalist Martin Lewis and London mayor Sadiq Khan used online to trick people.
Guernsey Police warned people to think twice before clicking links, entering personal details or providing bank details.
Anyone who suspects they may have been the victim of online fraud should contact their bank immediately, a police spokesperson added.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – August, September, and October 2025
By Daniel Atherton
2025-11-08
At Templestowe, Arthur Streeton, 1889
Trending in the AIID: Across August, September, and October 2025, the AI Incident Database logged one...
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The governance of the Collaborative is organized around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.
Organization Founding Sponsor
Database Founding Sponsor