
Welcome to the AI Incident Database


Incident 1227: New Zealand Financial Markets Authority (FMA), Te Mana Tātai Hokohoko, Reportedly Flags Purported Deepfake Pump-and-Dump Network Using Social Media Ads

“New Zealand Financial Markets Authority: Pump And Dump Scam Nets Kiwi Investors”
mondovisione.com, 2025-10-06

Kiwi investors are being caught by a new "pump and dump" scam, says the Financial Markets Authority (FMA) -- Te Mana Tātai Hokohoko.  

FMA Director of Markets, Investors and Reporting, John Horner says: "As we start World Investor Week, aimed at educating and protecting investors, it is timely to remind Kiwi investors about this latest version of the deepfake impersonation scam. We have issued a new warning."  

"This current scam uses social media scam advertisements featuring an impersonated business leader, encouraging investors to join a fake investor chat group. The FMA first raised its concerns about the impersonations in a public warning issued 19 August 2025, and has received further information showing the scam is part of a global network of scams aimed at market manipulation." 

A pump and dump occurs when a person buys shares in a company and starts an organised campaign to increase (or 'pump') the share price. They then sell (or 'dump') their shares and make a profit, while the other shareholders suffer financial losses as the share price falls. 

Often those running a 'pump and dump' use social media and online forums to create a sense of excitement about buying a company's shares by spreading false information about the company's prospects. This excitement and interest artificially drives the share price up as they lure investors.  This is considered a form of market manipulation. 

The pump and dump scam uses Facebook and Instagram ads impersonating a variety of well-known Kiwi business leaders to target Kiwi investors, inviting them to join a chat group on a messaging platform such as WhatsApp. There, they are encouraged to use reputable trading platforms to buy low-value shares in companies listed on overseas exchanges, to artificially inflate the share price. The scammers sell their own shares at the inflated price and, when the price later drops, the victims are left bearing the loss.

After a victim suffers a loss, the scammers often claim, falsely, that the victim is entitled to compensation or reimbursement. The scammers then collect personal information and further payments from the victims.  

"We have received a number of complaints, with multiple victims having lost significant amounts," said John Horner. 

"These complaints are likely only the tip of the iceberg. Because these impersonation pump and dump scams play out over days and weeks, it's possible there are others happening in New Zealand right now, with the scammers potentially pivoting to another company's shares.   

"We encourage New Zealanders to take extra care when considering investment decisions and seeking out investment advice. If an organisation is offering investment tips through social media, be cautious. Report any suspicious activity to the social media channel, to the company involved and to the FMA. 

"We urge extreme caution to investors impacted by this or other scams, which may include 'opportunities' to trade out of this position or support to recover lost funds; these are also likely to be scams."

"One of the FMA's functions is to promote the confident and informed participation of businesses, investors, and consumers in the financial markets.  

"We have therefore taken steps to urgently issue an updated public warning to raise awareness of how this scam impacts investors and educate the general public on how to avoid being scammed.


"Our aim is to make New Zealand an unattractive place for scammers to operate, by raising awareness of scam methods and encouraging investors to stop and think carefully before making investment decisions that might be driven by a scam." 

The FMA is also sharing case details with the relevant overseas financial regulators, given some of the pumped shares are listed on a US exchange (NASDAQ) and some of the pumped companies are based in China. "There are currently similar scams operating overseas, with a warning recently issued by the FBI."

Internet Crime Complaint Center (IC3) | Fraudsters Target US Stock Investors through Investment Clubs Accessed on Social Media and Messaging Applications

The FMA has also alerted New Zealand banks and businesses being impersonated to let them know the impersonation scam has pivoted and is using new tactics, and is encouraging them to share the FMA warning. 

Investment advice group chats used in suspected pump-and-dump schemes

Imposter Facebook pages used to promote WhatsApp investment scam

How the scam works

  • Scammers promote investment club group chats through social media. Often, the scammers will publish ads impersonating business leaders or financial commentators to promote the group chats. In some cases, the scammers send unsolicited invitations to the group chats through messaging platforms like WhatsApp. 
  • In the group chats, an 'investment mentor' provides trade recommendations to group members. The group chats appear to have dozens of members, but they are populated primarily with bot accounts controlled by the scammers. 
  • The scammers may initially recommend purchasing shares of a large, well-known company such as Nvidia or Tesla. This is a tactic to build trust before drawing potential victims into the pump-and-dump scheme. 
  • Eventually, the scammers will recommend shares in a smaller company. Members are often encouraged to message the investment mentor or their assistant directly, who will guide them through the investing process.
  • Over several weeks or months, the victims' investments artificially inflate (or 'pump') the company's share price. The scammers then sell (or 'dump') their holdings at the inflated price. When the share price collapses, victims are left bearing the loss. 
  • After the share price drops and the victims suffer a loss, the scammers may tell investors they are entitled to compensation or reimbursement. The scammers may ask victims to complete an online form, provide copies of their ID, or pay an application fee in order to gain access to the funds. This is a recovery scam, and is an attempt to harvest victims' personal information or extract further funds. 

How to recognise and avoid pump-and-dump schemes

  1. Be cautious when interacting with ads for financial services on social media. If an ad features a well-known business leader or financial commentator, they may be an imposter. 

  2. Do not engage with any unsolicited messages regarding financial advice or trade recommendations. 

  3. Before you act on any trade recommendations, check that the person providing the advice is a licensed financial adviser registered on the FSPR. 

  4. Be sceptical about urgent or time-sensitive trade recommendations, particularly for emerging or low-priced shares. Scammers will often pressure potential victims to act quickly, leveraging investors' fear of missing out.

What to do if you've been scammed

  1. Stop engaging with the scammers. Do not provide any further personal information or payments to them. Report the group chats to WhatsApp and block the scammers on all devices.

  2. If you have shared your bank account or credit/debit card details with the scammers, contact your bank immediately to secure your account.

  3. If you have downloaded remote access software on the instructions of the scammers, immediately contact an IT professional to have your device checked for malware. If you have accessed your bank account or other payment systems while the remote access software was operating on your device, report this to the relevant account providers.

  4. If you have shared any other personal information or downloaded any software at the scammers' request, contact IDCare for help creating a plan to secure your identity.

  5. If you have followed the scammers' trade recommendations, contact the trading platform or sharebroker you used to carry out the trades and let them know about the scam.

  6. If you are getting spam emails and text messages, report these to the Department of Internal Affairs.

  7. Tell a trusted relative or friend what has happened. They may help you see the situation more clearly, help you deal with the scammers, and suggest what to do next.

  8. Contact Victim Support on 0800 842 846 or visit their website. They can provide free emotional and practical support and information.

  9. Report the scam to the FMA.


Incident 1226: Old Mutual Reportedly Warns of Purported Deepfake Videos Impersonating Chairman Trevor Manuel in Investment Scams

“Old Mutual warns about ‘Trevor Manuel’ deepfake investment pitches”
moonstone.co.za, 2025-10-06

An investment scam using a fake likeness of Trevor Manuel, the chairman of Old Mutual, is circulating on social media.

Last week, Old Mutual warned the public about the videos, which purportedly show Manuel providing investment advice and encouraging the public to invest in certain products.

Manuel, who was Minister of Finance from 1996 to 2009, is the latest high-profile individual whose likeness has been used in deepfake videos to peddle investment scams. Similar deepfake videos have featured Johann Rupert, Nicky Oppenheimer, and Elon Musk.

"It has come to my attention that there are social media posts using my image and artificial voice and pretending to either give people investment advice or requesting that they invest in products that I advise them to. Artificial Intelligence can easily be used by bad people," Manuel said in a statement.

Manuel said he does not give investment advice to anybody -- and it would be illegal for him to do so. The Financial Advisory and Intermediary Services Act, which was passed in 2002, stipulates that only authorised individuals can provide investment advice and offer financial products. Manuel said he was not so authorised.

"Too many people have lost their hard-earned money, especially what they've saved for later in life, stolen by crooks. I plead with you to protect your nest egg."

Regarding unauthorised products, the Financial Sector Conduct Authority recently warned the public to exercise caution when conducting financial services business with the following entities/individuals:

  • Dazzle Brilliance Diamond (Pty) Ltd.
  • Lucas Sekhorane Maake, who also operates as Skuriey.rsa on Telegram.
  • Michael Jacobs and AMFX Solutions.
  • Capital40 Investments.

None of the above is authorised by the FSCA to provide financial services to the public in South Africa.

The FSCA said it has received information that Dazzle is soliciting funds from members of the public for investment purposes, while promising unrealistic returns. Dazzle is reportedly encouraging individuals to "invest R2 500 and receive R6 000 in 40 days". The FSCA cautions that promises of unrealistic returns should be treated with great suspicion.

The Authority said it has come to its attention that Maake is offering forex signals and account management services to members of the public. Maake offered to trade on behalf of members of the public, in return for 50% of the profit generated through his trades.

"Providing signals refers to the practice of persons making recommendations to their clients in respect of trades and prices in financial products. The signal provider is remunerated through a subscription fee or a percentage of profits. Even in the instances of clients suffering trading losses, signal providers may benefit through commissions paid by brokers. It is not unusual for signal providers to provide fictitious signals and share fake trading patterns to lure clients into participating," the Authority said.

The FSCA said Jacobs is reportedly encouraging members of the public to "invest and enjoy 300% profit, receivable every month for six months".

The Authority said Capital40 is encouraging members of the public to invest between R1 000 and R400 000, with promised monthly returns of between R320 and R128 000. Capital40 claims it provides investors with access to artificial intelligence trading software that makes automated trading decisions in financial markets and generates profits on behalf of clients.
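A quick back-of-the-envelope calculation shows why the regulator flags these figures as unrealistic. The sketch below (the compounding arithmetic is ours, not the FSCA's) annualises the two offers cited above: Dazzle's "R2 500 becomes R6 000 in 40 days" and Capital40's roughly 32% per month.

```python
# Annualise the promised returns cited in the FSCA warnings to show
# why the regulator calls them unrealistic. The helper and figures
# below are illustrative, derived from the numbers quoted in the text.

def annualised_factor(growth: float, period_days: float) -> float:
    """Compound a per-period growth factor over a 365-day year."""
    return growth ** (365.0 / period_days)

# Dazzle: "invest R2 500 and receive R6 000 in 40 days" -> 2.4x in 40 days
dazzle = annualised_factor(6000 / 2500, 40)

# Capital40: R320 on R1 000 (or R128 000 on R400 000) per month -> 32% monthly
capital40 = annualised_factor(1.32, 365 / 12)

print(f"Dazzle annualised: {dazzle:,.0f}x")       # roughly 2,900x per year
print(f"Capital40 annualised: {capital40:.0f}x")  # roughly 28x per year
```

For comparison, broad equity markets have historically returned well under 1.2x per year, which is why compounding of this kind is a reliable red flag.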

Spot the red flags

The FSCA advises the public to remain alert when approached with attractive investment offers. These often come with warning signs that should raise immediate concern. Common red flags include:

  • Promises of unrealistic or exaggerated returns.
  • Offers made through social media platforms.
  • Requests for upfront payments.
  • Additional payments required to access your investment returns.
  • Charges for training before you can invest.
  • Pressure to act and pay urgently.
  • Vague or unclear information about the investment product.

To check whether a person or business is authorised by the FSCA to operate as a financial services provider -- and to verify their FSP number -- the public can use any of the following options:

  • Call the FSCA's toll-free line: 0800 110 443
  • Search licensed financial institutions by category:
    https://www.fsca.co.za/Regulated%20Entities/Pages/List-Regulated-EntitiesPersons.aspx
  • Search authorised FSPs under the FAIS Act:
    https://www.fsca.co.za/Fais/Search_FSP.htm

Incident 1228: Alleged ChatGPT Misuse by Contractor Leads to Reported Data Exposure in New South Wales Resilient Homes Program

“Resilient Homes Program data breach”
nsw.gov.au, 2025-10-06

The NSW Reconstruction Authority (RA) is aware of a data breach involving personal information belonging to some people who applied for the Northern Rivers Resilient Homes Program (RHP).

The breach occurred when a former contractor of the RA uploaded data containing personal information to an unsecured AI tool which was not authorised by the department.

There is no evidence that any information has been made public; however, Cyber Security NSW will continue to monitor the internet and the dark web to see whether any of the information is accessible online.

We understand this news is concerning and we are deeply sorry for the distress it may cause for those who have engaged with the program.

We will be contacting people this week with updates to let them know what has happened and whether they have been impacted or not.

Since learning about the extent of this breach, we have engaged forensic analysts and are working closely with Cyber Security NSW to undertake an investigation to understand the scope and the risks arising from it.

We expect the forensic analysis to be completed within the coming days. This will give us a clearer understanding of the extent of the breach and the specific data involved.

We know people will want to know exactly what has been shared and we are doing all we can to get that information to them as soon as possible.

So far, there is no evidence that any of the uploaded data has been accessed by a third party.

What happened?

Between 12 and 15 March 2025, personal information held for the Resilient Homes Program (RHP) was uploaded to the AI platform ChatGPT by a former RA contractor.

Once we understood the full scope of the breach, we took immediate steps to contain any further risk. We engaged forensic analysts, began working closely with Cyber Security NSW and launched a detailed investigation to determine what was shared, what risks may exist, and who was affected.

The data involved was a Microsoft Excel spreadsheet containing 10 columns and over 12,000 rows of information. Every row is being carefully reviewed to understand what information may have been compromised.

This process has been complex and time-consuming and we acknowledge that it has taken time to notify people. Our focus has been on ensuring we had the right information to contact every impacted person accurately and completely.

We understand people will have questions about how this happened and why notification has taken time. To help answer those questions, we've initiated an independent review.

What we know

Based on early forensic analysis, up to 3,000 people may be potentially impacted.

At this stage, the information we know that has been disclosed includes:

  • Names and addresses
  • Email addresses
  • Phone numbers
  • Some personal and health information

What we are doing

Within a week, we will contact anyone impacted to confirm exactly what data was shared and offer personalised support.

We're working with Cyber Security NSW to monitor the internet and dark web for any signs that this information is accessible online. To date, there is no evidence that any uploaded data has been accessed or distributed by a third party.

The NSW Privacy Commissioner has been notified. We have reviewed and strengthened our internal systems and processes, and issued clear guidance to staff on the use of unauthorised AI platforms such as ChatGPT. Safeguards are now in place to prevent similar incidents in future.

What support is available?

To speak to someone on the phone about what has happened, please call the RHP call centre on 1800 844 085, Monday to Friday, 9am-5pm (excluding public holidays).

RA will provide compensation for any reasonable out of pocket expenses if any compromised identity documents need to be replaced.

If you have any concerns about protecting your identity, the NSW government agency ID Support NSW can help you prevent and recover from data breaches with expert advice, free resources and support. You can reach them via their website www.nsw.gov.au/id-support-nsw or call them on 1800 001 040, Monday to Friday, 9am-5pm (excluding public holidays). Interpreter services are available.

ID Support NSW can help by

  • providing advice on compromised identification documents and how to restore your identity security
  • guiding you on how to keep your personal identity information safe
  • sharing options for additional support and counselling services.

We will continue to share updates and provide support to those who have been impacted.

We understand the seriousness of this breach and are deeply sorry for the potential impact on people. We remain fully committed to protecting their privacy and restoring trust in the Resilient Homes Program and the RA.

Frequently asked questions

What happened?

Personal information provided during applications for the RHP was uploaded by a former contractor of the RA to the Artificial Intelligence (AI) platform, ChatGPT.

The data shared was contained in a Microsoft Excel spreadsheet with 10 columns and more than 12,000 rows of information.

When did the breach occur?

The upload took place between 12 and 15 March 2025.

How many people were affected?

Our early analysis indicates that up to 3,000 people may be affected. 

Was this a cyber-attack or hacking incident?

No.

This incident occurred when an RA contractor uploaded information from a Microsoft Excel spreadsheet to an unauthorised third-party AI platform, ChatGPT.

Our internal security systems remain secure and have not been compromised.

What is ChatGPT?

ChatGPT is an online AI tool developed by a company called OpenAI. It allows users to ask questions or upload information to help generate written content or ideas.

What personal information was included in the uploaded file?

We're working through the forensic analysis and expect that to be completed within the coming days. At this stage we can confirm the following information has been disclosed:

  • names and addresses
  • email addresses
  • phone numbers
  • some personal and health information

Was my financial or banking information uploaded?

We expect to have a complete understanding of the information uploaded within the coming days. As soon as we know, we will contact you with the types of information potentially exposed.

Was any government ID or sensitive information included?

We expect to have a complete understanding of the information uploaded within the coming days. As soon as we know, we will contact you with the types of information potentially exposed.

What have you done to fix the issue?

We are working with Cyber Security NSW to monitor the internet and dark web to see if any of this information is accessible online. The NSW Privacy Commissioner has also been notified.

We have reviewed and strengthened internal systems and processes and issued clear guidance to staff on the use of non-sanctioned AI platforms. Safeguards are now in place to prevent future uploads of personal information into ChatGPT and other AI platforms. 

Have you reported the breach?

Yes, in line with the Privacy and Personal Information Protection Act 1998, the breach was reported to the NSW Privacy Commissioner.

What steps have been taken to ensure this doesn't happen again?

We've conducted a full cyber security review and engaged technical and legal specialists. The RA has also implemented controls to block the upload of personal information into AI tools.

We will continue to update mandatory cyber security training and provide regular communication to ensure every employee is aware of their data obligations.

The RA will continue to review and implement any additional measures that may be needed to better protect the data that we hold.

We have also initiated an independent review of how this breach was identified and managed and will share those findings once it is completed.

What does the breach mean for me?

We believe the risk of misuse is low; however, we recommend staying alert for any suspicious emails or messages that ask for your personal details.

What can I do to protect myself?

  • be cautious of emails that look unusual or unexpected
  • avoid clicking on links or opening attachments from unknown senders
  • contact us if you receive any communication that references your participation in the RHP that is not from the RA.

Will I be contacted directly?

Everyone who registered for the Resilient Homes Program will receive an email.

The email will be sent by the NSW Reconstruction Authority with the subject line: RHP Data Breach. The messages will begin going out in the coming days.

Please check your spam or junk folder if you don't see it in your inbox.

What support are you offering?

To speak to someone on the phone about what has happened, please call the RHP call centre on 1800 844 085, Monday to Friday, 9am-5pm (excluding public holidays).

RA will provide compensation for any reasonable out of pocket expenses if any compromised identity documents need to be replaced.

If you have any concerns about protecting your identity, ID Support NSW can provide advice and assistance via their website www.nsw.gov.au/id-support-nsw or call them on 1800 001 040, Monday to Friday, 9am-5pm (excluding public holidays). Interpreter services are available.

Where can I find more information?

This website will include the most up-to-date information. You can also contact the RA on 1800 844 085.


Incident 1223: Purportedly AI-Generated Deepfake Ads on Facebook Reportedly Impersonate Trump, Musk, Ocasio-Cortez, Warren, Sanders, and Leavitt to Promote Fraudulent Rebates

“Spam and Scams Proliferate in Facebook’s Political Ads”
nytimes.com, 2025-10-05

An ad that appeared in thousands of Facebook feeds this summer featured an altered video of the White House press secretary, Karoline Leavitt, at a regular press briefing. In it, she appeared to say Americans could claim a $5,000 relief check on an official government site. An arrow that then appeared led instead to an advertiser called Get Covered Today.

Similar ads showed fabricated videos of Senators Bernie Sanders of Vermont and Elizabeth Warren of Massachusetts promising similar rebates that did not exist. "This is not a gimmick," the impersonation of Ms. Warren says.

In fact, it was.

Even so, the company behind the ads, along with others like it, was among the top political advertisers on Facebook, according to an analysis by the Tech Transparency Project, a nonprofit focused on holding large technology companies accountable.

The ads are a lucrative part of Facebook's advertising revenue that, the project's researchers and others say, has led the company to turn a blind eye to a flood of low-quality or deceptive content, spam and in some cases outright fraud on the platform.

"Meta is very aware of these types of scams," said Katie A. Paul, director of the Tech Transparency Project. "They just didn't care."

In a report released on Wednesday, the project identified 63 advertisers that have by several measures employed deceptive or fraudulent practices. They amount to roughly one in five of the platform's top 300 spenders in the category of political or social advertising.

Collectively they bought nearly 150,000 ads, spending almost $49 million over the past seven years, according to data in Facebook's ad library.

Ads from all of the 63 advertisers have previously been removed for violating Facebook's policies, meaning their deceptive practices were not unknown. Meta has suspended some of them, but more than half were able to continue posting new ads as recently as this week. The analysis suggests that even when Facebook takes action, it has profited from advertisers that try to swindle its users.

"The only thing worse than these deepfake A.I. videos being used to scam Americans is the fact that Meta makes tens of millions of dollars off those scam ads," Senator Warren said in a statement about the impersonation of her.

She added, "We need serious guardrails in place to protect consumers online."

Facebook, which is owned by Meta, prohibits advertising that uses "identified deceptive or misleading practices, including scams to take money from people or access personal information." It has explicit rules against impersonations, and it ultimately removed the video of Ms. Leavitt, though not all the others.

In a statement, the company said it enforced its rules vigorously and would "invest in building new technical defenses" against what it called an industrywide issue. "Scammers are relentless," the statement said, "and constantly evolve their tactics to try to evade detection."

Facebook has long had to wrestle with criticism of the political ads it solicits on the platform. It imposed restrictions on them after Russia used them in an attempt to sow voter discontent during the 2016 presidential election and banned them entirely for a time after President Trump fought to overturn the 2020 election.

With government officials around the world reporting a sharp rise in online fraud, Meta is facing new pressure.

Last week, Singapore's government gave Meta until the end of the month to crack down on scam ads and other posts after a sharp rise in content impersonating officials there. The government threatened a fine starting at $770,000 and rising daily if Meta did not comply.

In the United States, the company argued in court last year that it "does not owe a duty to users" to address fraudulent content, but that legal argument appears to be falling short.

A federal court in California refused last week to dismiss a lawsuit that accused Facebook of negligence and breach of contract for abetting fraud by advertisers.

"While Facebook disclaims responsibility for third-party conduct, it maintains responsibility for its own promise to 'take appropriate action' to combat scam advertisements and to do so in good faith," Judge Jeffrey S. White of U.S. District Court for the Northern District of California wrote in his ruling.

Meta keeps political ads in its library for seven years as a measure of transparency after the Russian influence operation in 2016. Ads in other categories are not preserved after they appear, making it harder for researchers to study the extent of scams among those.

Facebook has specific requirements for political advertisers, requiring them, for example, to be registered companies in the United States. The process of submitting and approving ads that appear on the platform is largely automated, allowing dishonest actors to slip through, experts say.

Some of the advertisers studied by the Tech Transparency Project included contact information in the Philippines, Bangladesh, Vietnam or Britain, which should presumably have disqualified them.

"You can be anybody," said Iesha White, director of intelligence for Check My Ads, an advocacy group that monitors the digital ad industry and was not involved in the Tech Transparency Project's analysis. "There are no rigorous checks."

Meta, she said, effectively crowdsources its enforcement, waiting for users or researchers to report fraud. "It should be identified before the ads ever run."

Many of the ads appear to target older and conservative users. They often respond quickly to politically charged events.

On the day of the memorial service for Charlie Kirk on Sept. 21, an ad appeared asking if people could forgive his assassin, as his wife, Erika, said she had. Click to vote now, it said, promising a red "We Are Charlie Kirk" hat for those who did --- free, except for 99 cents to help cover the shipping costs.


Though the hat was ostensibly free, it required a more costly monthly membership for a clothing and other merchandise supplier calling itself End the Wokeness. Entering a credit card to pay for the nominal shipping meant automatically signing up for the subscription.

The Federal Trade Commission has warned consumers about websites that sign them up for subscriptions they do not want.

The company that operates as End the Wokeness, Sculpin Media, has listed page administrators in the Philippines, as well as the United States. It describes itself as a clothing store and media company and operates out of a small office next to a spice dealer and a car detailer in a low-slung brick warehouse near the former Fresh Kills landfill in Staten Island.

A man who identified himself as a co-owner of the company appeared surprised that a reporter had found the office. He acknowledged placing the ads but declined to speak on the record. He said the staff worried about potential threats of political violence.

Inside the office were boxes of red baseball caps with the slogan MAGA 2028. End the Wokeness's website is devoted to sales of Trump- and MAGA-themed knickknacks. Among the latest is a "Patriot Pumpkin Carving Kit," a stencil that features Mr. Trump's mug shot. The company's own page on Facebook includes numerous posts from users complaining about its subscription model or asking for refunds.

End the Wokeness, one of the advertisers whose content has been pulled from Facebook for violating the company's policies, also sells Trump- and MAGA-themed knickknacks.

In August, the Federal Trade Commission warned that reported scams targeting older Americans by impersonating government officials or businesses had increased fourfold between 2020 and last year.

Frauds on social media platforms and other websites now far exceed those using text messages or phone calls. The spread of commercially available tools using artificial intelligence has clearly contributed to the surge in deceptive ads, making them easier to produce and spread.

"This is part of a much larger issue: A.I. is advancing rapidly, but the laws and protections we have in place for everyday Americans are dangerously outdated," Mr. Sanders said in a statement in response to an inquiry about the deepfake of him.

The ad featuring Ms. Leavitt was posted by one of four advertisers linked to a company calling itself RFY News Group. No one responded to a message left at a number listed in the ad library as a contact.

Neither Ms. Leavitt nor the White House responded to a request for comment about the deceptive videos.

In August, Meta suspended the accounts of two of the four advertisers, though they had been running ads for weeks by then. The other two appeared to have been removed this week, after inquiries about the Tech Transparency Project's research; the project has tracked the RFY News Group for more than a year.

Researchers say Meta could do more to limit ads like these and to be more transparent about its enforcement of its own rules. Instead, it has rolled back its efforts to moderate political content and reduced its teams working on platform safety.

"The pendulum has swung toward more short-term revenue considerations versus the long-term health of the platform and the ecosystem," said Rob Leathern, a former Meta employee who helped create the ad library and is now head of Hawkview Labs, a company that helps start-ups working in online trust and safety. "Perhaps at some point it has to swing the other way."

Brazilian scammers, raking in millions, used Gisele Bundchen deepfakes on Instagram ads

Incident 1224: Purportedly AI-Generated Deepfake Ads on Instagram Impersonate Gisele Bündchen and Other Celebrities in Brazilian Fraud Scheme

“Brazilian scammers, raking in millions, used Gisele Bundchen deepfakes on Instagram ads”
reuters.com, 2025-10-05

A group of suspected scammers in Brazil using Instagram ads featuring deepfakes of supermodel Gisele Bundchen and other celebrities appears to have made millions of dollars through online fraud, according to police investigators.

Authorities arrested four suspects tied to the scheme this week and froze assets across five states in an investigation that found over 20 million reais ($3.9 million) in suspicious funds identified by federal anti-money laundering agency COAF.

The operation marks one of the first attempts in Brazil to counter the growing use of artificial intelligence tools to manipulate images and videos of celebrities for online scams.

Brazil's Supreme Court ruled in June that social media platforms can be held liable for criminal ads posted by users if they fail to act swiftly to remove the content, even without a court order.

Instagram owner Meta (META.O) said that its policies forbid "ads that deceptively use public figures to try to scam people," and that it removes such ads "when detected." The company said in a statement that it has "specialized systems to detect celeb-bait, invests heavily in trained review teams, shares tips on avoiding scams and offers tools to report potential violations."

A spokesperson for Bundchen's team advised consumers to exercise caution regarding unusual discounts in advertisements featuring celebrities, verify the authenticity of offers through official brand or celebrity channels, and report potential scams to authorities.

The probe announced this week by police investigators in the southern state of Rio Grande do Sul began in August 2024 after a victim reported being misled by an Instagram ad showing an altered video of Bundchen promoting a skincare product.

Another ad used the supermodel's likeness to promise a giveaway of suitcases, with buyers paying shipping fees for items that never arrived.

"We identified that the criminal group carried out a series of other scams, involving deepfakes of other celebrities and false betting platforms," Eibert Moreira Neto, head of Rio Grande do Sul's cybercrime unit, told Reuters.

Investigators are pursuing charges of money laundering and online fraud. They noted that most victims lost small amounts, usually under 100 reais ($19), and did not report the crimes.

"That created a perverse situation in which the criminals enjoyed a kind of 'statistical immunity.' They knew most people would not report them, so they operated at scale without fear," Isadora Galian, from the cybercrime unit, said in a statement.

About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.
