AI Incident Database

Incident 1037: Microsoft Reportedly Blocks 1.6 Million Bot Signup Attempts Per Hour Amid Global AI-Driven Fraud Surge

Description: Between April 2024 and April 2025, Microsoft reportedly blocked 1.6 million bot signups per hour and disrupted $4 billion in fraud attempts linked to AI-enhanced scams. The company's Cyber Signals report details how generative AI is being used to fabricate realistic e-commerce sites, job offers, customer service bots, and phishing lures. Fraud actors now automate mass deception campaigns, using fake reviews, deepfakes, and cloned brand domains, at unprecedented scale and speed.


Entities

Alleged: Various generative AI developers, various deepfake technology developers, and various voice cloning technology developers developed an AI system deployed by scammers, fraudsters, and cyber criminal networks, which harmed the general public, consumers, enterprises, Microsoft, and financial institutions.
Alleged implicated AI systems: generative AI systems, AI-powered chatbots, AI-enhanced phishing frameworks, Microsoft Quick Assist, e-commerce and job listing platforms, and fraud detection and domain protection tools.

Incident Stats

Incident ID
1037
Report Count
5
Incident Date
2025-04-16
Editors
Daniel Atherton

Incident Reports


Cyber Signals Issue 9 | AI-powered deception: Emerging fraud threats and countermeasures
microsoft.com · 2025

Microsoft maintains a continuous effort to protect its platforms and customers from fraud and abuse. From blocking imposters on Microsoft Azure and adding anti-scam features to Microsoft Edge, to fighting tech support fraud with new feature…

AI is making online shopping scams harder to spot
cbsnews.com · 2025

Online scams are nothing new, but artificial intelligence is making it easier than ever to dupe people.

What used to take days now takes a scammer only minutes to create.

A new report from Microsoft highlights the scale of the problem. The …

As scammers latch on to artificial intelligence, Microsoft says it blocks 1.6 million bots every hour
abc.net.au · 2025

Scammers have latched on to AI as the newest tool in their arsenal, generating fake photos, voice clones, phishing emails and fake websites "at an increasingly rapid rate".

Artificial intelligence has made it "easier and cheaper" for sca…

Microsoft blocks $4 billion in fraud as AI scams rise globally
cnbctv18.com · 2025

In the past year, Microsoft says it blocked more than $4 billion in fraud attempts, as cybercriminals increasingly turn to AI-powered tools to run sophisticated scams.

From impersonating tech support staff to creating convincing fake e-comm…

Microsoft Delivers Stark Warning About AI-Enabled Online Scams
extremetech.com · 2025

Microsoft's latest Cyber Signals report shows how cybercriminals are using AI to make scams more believable and harder to spot. The company says that between April 2024 and April 2025, it stopped $4 billion worth of fraud attempts, blocked …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

OpenAI’s GPT-3 Reported as Unviable in Medical Tasks by Healthcare Firm
Oct 2020 · 1 report

Microsoft’s Algorithm Allegedly Selected Photo of the Wrong Mixed-Race Person Featured in a News Story
Jun 2020 · 12 reports

Facebook's Automated Tools Failed to Adequately Remove Hate Speech, Violence, and Incitement
Mar 2021 · 1 report

