AI Incident Database

Report 4953

Associated Incidents

Incident 983 · 15 Reports
Scammers Reportedly Used AI Voice Clone and YouTube Footage to Impersonate WPP CEO in Unsuccessful Scam Attempt

WPP boss targeted by deepfake scammers using voice clone
ft.com · 2024

Mark Read, chief executive of WPP, has been the target of a deepfake scam in which criminals used a voice clone and public YouTube footage to set up a video meeting with his executives.

In an email to colleagues seen by the Financial Times, Read said that the scammers used a publicly available photo to set up a fake WhatsApp account under his name. This was used to arrange a Microsoft Teams call between one of his agency heads, another senior executive and the scammers.

Once on the Teams meeting, a voice clone and YouTube footage of the other executive were used, he said, while scammers impersonated Read off-camera using the chat function.

He said: "The pretext was that the individual targeted was being asked to set up a new business with the ultimate aim of extracting personal details and money. Fortunately, the attackers were not successful."

Read warned his colleagues that WPP had "seen increasing sophistication in the cyber attacks on our colleagues, and those targeted at senior leaders in particular."

The sophisticated cyber attack — which appears to use advanced AI voice software — shows the vulnerability of high-profile individuals whose details are easily available online.

Read said the attack, which was first reported in the Guardian, showed "how these techniques are specifically designed to target individuals at a level that is far more tailored and psychological than the scams defrauding the vulnerable or general public".

Companies are having to invest heavily in security software to help prevent such attacks, although many attacks are aimed directly at employees. Banks and other financial institutions are on high alert for any attempts to withdraw money using AI impersonations.

Political party leaders are also concerned about the use of deepfakes in the forthcoming elections to impersonate members and influence voters.

This week, it was reported that a deepfake video of Scottish National party leader John Swinney speaking in the Holyrood parliament was posted on social media.

A picture of singer Katy Perry at the Met Gala this week wearing a flamboyant dress was revealed to be faked using AI.

WPP itself is investing heavily in AI technology, which will give it the ability to reproduce images of its clients’ products at a mass scale for consumers — for example a can of a soft drink in thousands of different locations and with different people.

A WPP spokesperson said in a statement: "Thanks to the vigilance of our people, including the executive concerned, the incident was prevented."

