AI Incident Database

Report 4647

Associated Incidents

Incident 927 · 26 Reports
Fraudsters Allegedly Use AI-Generated Voice of Italian Defense Minister Guido Crosetto to Scam Business Leaders

Fake Crosetto: scams like this can hit anyone, and here's how to defend yourself
repubblica.it · 2025

The fake "Crosetto" scam was a professional job, but similar schemes on a smaller scale can target a much wider audience: anyone who handles bank transfers at a company, for example, not just executives. It has already happened. And at a simpler level, scams in which someone impersonates a relative, a friend, or a celebrity also hit ordinary people, increasingly often in recent years.

The most sophisticated techniques exploit artificial intelligence

Scammers can clone a person's voice with just a few seconds of recording and use it to deceive friends, family or colleagues.

This year in Treviso, an 84-year-old woman was contacted by someone using an AI-generated reproduction of her daughter's voice. During the call, the fake daughter, in tears, said she had been in a car accident and asked for money. Convinced the call was genuine, the elderly woman handed €30,000 to a scammer. In 2023, scammers cloned the voice of Jennifer DeStefano's 15-year-old daughter to simulate a kidnapping: they called the mother, played the panicked daughter's voice, and demanded a ransom. Fortunately, the scam was discovered in time.

In July 2024, a scam used the cloned voice of the CEO of Ferrari.

Sometimes, AI isn't even needed to deceive. This month, Sandra Bullock reported the existence of fake Facebook profiles impersonating her sister, Gesine. The scammers contacted people claiming to be in direct contact with Sandra and managed to extort large sums from victims. Despite reports to Meta and legal action, the fake accounts continued to proliferate. A case reported by Marco Camisani Calzolari, technology correspondent for Striscia la Notizia, shows how scammers often use fake celebrity profiles to start private conversations with victims: in this example, a fake account in the name of entrepreneur Tommaso Mazzanti messaged users privately, trying to convince them they were entitled to a nonexistent prize. Both celebrities and ordinary people are targets.

How to defend yourself?

So, what can you do? Let's start with the bad news.

Unfortunately, there is currently no infallible app capable of recognizing an AI-cloned voice in real time, but there are minimal defense tools: apps that flag calls from suspicious numbers, such as Hiya AI Phone and Truecaller. These work only if the scammer has already used that number for similar calls. A future solution could come from telephone operators if, as the Agcom authority has long requested, they install network filters to block calls with spoofed caller IDs, which are often used to make scams more convincing. For example, scammers may call from an African mobile number while making it appear Italian, even registered to a specific institution or bank.

Common sense wins

A few common-sense measures are the best defense today, not only against scam calls but also against similar techniques that use other channels, such as social media chats.

Ask personal questions: this is how a Ferrari manager unmasked a deepfake of CEO Benedetto Vigna's voice. The hackers had cloned the voice, but they could not answer a question about a private detail of his life. If we receive a suspicious call from someone we know, we can ask for something that only that person would know.

Or we call them back immediately on a number we already had.

In the United States, the problem is becoming so serious that many families have started using a "safe word", a secret word shared only among close relatives. If someone calls claiming to be a family member in trouble, ask for the agreed word before acting.
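The safe-word check described above can be sketched in code. The snippet below is a toy illustration only, not a real telephony integration: the `SAFE_WORD` value and the `verify_caller` helper are assumptions made for the example. It uses Python's constant-time comparison so that the check itself does not leak how close a guess was.

```python
import hmac

# Hypothetical shared secret, agreed in person among family members.
SAFE_WORD = "blue-heron-42"

def verify_caller(spoken_word: str) -> bool:
    """Return True only if the caller supplies the agreed safe word.

    hmac.compare_digest runs in time independent of where the strings
    differ, so a caller cannot probe the word character by character.
    Input is normalized (trimmed, lowercased) before comparison.
    """
    return hmac.compare_digest(spoken_word.strip().lower(),
                               SAFE_WORD.lower())

# A cloned voice alone does not pass the check:
print(verify_caller("please send money now"))  # False
print(verify_caller("Blue-Heron-42"))          # True
```

The point the article makes survives the translation into code: the cloned voice provides no knowledge of the shared secret, so the impersonation fails at the first challenge.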
