AI Incident Database

Incident 999: Attackers Reportedly Deployed Simulated AI Support Chatbot to Trick Instagram Business Users into Adding Malicious 2FA Login

Description: A phishing campaign has reportedly been impersonating Meta support using a fake chatbot interface to hijack Instagram Business accounts. Victims received emails claiming ad violations and were directed to a fraudulent site mimicking Meta's support. There, a simulated chatbot instructed users to add the attacker’s Authenticator app as a secure login method, enabling account takeover. It remains unclear whether the chatbot used AI or was human-operated via a bot-like interface. See editor's note.
Editor Notes: The reporting leaves some ambiguity as to whether the alleged chatbot interface is an actual automated chatbot or a human operator working behind a bot-like interface. In the former case, this would constitute a more clear-cut AI incident. In the latter, it may represent a newer category of AI incident—one in which attackers exploit emerging user expectations around automated systems by mimicking the interaction patterns of AI-driven support tools.
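The takeover step described above works because TOTP-based "authenticator" logins rest on a shared secret: whoever holds the secret can generate valid login codes at any time. A minimal RFC 6238 sketch (standard-library Python; illustrative only, not code from the campaign) shows why enrolling the attacker's authenticator app is equivalent to handing over future login codes:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at an offset from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The victim's account and the attacker's app share the same secret,
# so both sides derive identical codes for any time window:
secret = "JBSWY3DPEHPK3PXP"  # example secret, not from the incident
assert totp(secret, t=1_000_000) == totp(secret, t=1_000_000)
```

Because the code depends only on the secret and the clock, no further interaction with the victim is needed once the malicious authenticator is registered as a trusted login method.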


Entities

View all entities
Alleged: Unknown generative AI developers developed an AI system deployed by scammers impersonating Meta support, scammers, phishers, and fraudsters, which harmed Meta users, Instagram users, and Instagram Business users.
Alleged implicated AI systems: Unknown chatbot interface and Instagram

Incident Stats

Incident ID: 999
Report Count: 2
Incident Date: 2025-03-12
Editors: Daniel Atherton

Incident Reports


Clickbait to Catastrophe: How a Fake Meta Email Leads to Password Plunder
cofense.com · 2025

Social media is widely used in marketing, helping businesses to generate ads that attract potential customers. But what if you received an email stating, "YOUR ADS ARE TEMPORARILY SUSPENDED"? What steps would you take? The urgency of the em…

New Phishing Scam Uses Fake Instagram Chatbot to Hijack Accounts
hackread.com · 2025

A new phishing campaign has been tricking users into giving out access to their Meta Business accounts especially Instagram. The scam, detected by the Cofense Phishing Defense Center, uses fake chat support, detailed instructions, and attem…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

Nest Smoke Alarm Erroneously Stops Alarming
Jan 2014 · 6 reports

Biased Sentiment Analysis
Oct 2017 · 7 reports

Game AI System Produces Imbalanced Game
Jun 2016 · 11 reports

2024 - AI Incident Database