AI Incident Database

Incident 1100: Reported Audio Deepfake Impersonating CEO Karim Toubba Targets LastPass Employee via WhatsApp

Description: A LastPass employee was reportedly targeted by an audio deepfake impersonating CEO Karim Toubba via WhatsApp. The message used voice-cloning AI in a social engineering attempt to create urgency and bypass security protocols. The employee recognized red flags, reported the incident, and no breach occurred. LastPass shared the incident publicly to raise awareness about deepfake-driven fraud campaigns targeting executives and companies.
Editor Notes: This incident reportedly occurred on April 10, 2024. It was ingested into the AI Incident Database on June 13, 2025.


Entities

Alleged: Unknown deepfake technology developer and Unknown voice cloning technology developer developed an AI system deployed by Unknown scammers and Unknown scammer impersonating Karim Toubba, which harmed Karim Toubba, LastPass employee, Employees of LastPass, and General public.
Alleged implicated AI systems: Unknown deepfake technology, Unknown voice cloning technology, and WhatsApp.

Incident Stats

Incident ID: 1100
Report Count: 2
Incident Date: 2024-04-10
Editors: Daniel Atherton

Incident Reports

Attempted Audio Deepfake Call Targets LastPass Employee
blog.lastpass.com · 2024

For the last several years, the cyber threat intelligence community has been concerned about the larger proliferation of "deepfake" technology and its potential use by fraudsters against companies and/or individuals. "Deepfakes" use generat…

LastPass Dodges Deepfake Scam: CEO Impersonation Attempt Thwarted
hackread.com · 2024

Password management giant LastPass narrowly avoided a potential security breach after a company employee was targeted by a deepfake scam. The incident, detailed in a blog post by LastPass, involved an audio deepfake impersonating CEO Karim …

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

A Chinese Tech Worker at Zhihu Fired Allegedly via a Resignation Risk Prediction Algorithm (Feb 2022 · 4 reports)
The DAO Hack (Jun 2016 · 24 reports)
Reported AI-Cloned Voice Used to Deceive Hong Kong Bank Manager in Purported $35 Million Fraud Scheme (Jan 2020 · 6 reports)
