AI Incident Database

Report 5597

Associated Incidents

Incident 1158 · 3 Reports
Alleged Malicious Wiping Command Found in Amazon Q AI Assistant

Amazon AI Breach: A Harsh Wake-Up Call for AI Regulation
citizen.org · 2025

WASHINGTON, D.C. – Amazon's generative AI coding assistant, Amazon Q, was compromised by a hacker who injected malicious code into the tool's GitHub repository. The code instructed the AI to wipe users' systems and cloud resources. Amazon unknowingly included the exploit in a public release downloaded nearly a million times. The hacker claims they were granted admin access with ease and used the incident to expose Amazon's lax security practices. The breach highlights growing risks as hackers increasingly target AI tools to gain access to sensitive systems.
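The attack described above reportedly worked by slipping destructive shell and cloud-CLI commands into an artifact that shipped with the assistant. As a rough illustration of one mitigation, the sketch below scans release text for wiper-style commands before publication. The patterns, function name, and sample input are illustrative assumptions, not Amazon's actual tooling or the exact injected payload.

```python
import re

# Illustrative patterns for destructive commands of the kind reportedly
# injected: recursive filesystem deletion and cloud-resource teardown.
# These are assumptions for the sketch, not a complete or vetted list.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\s+[/~]",                   # recursive wipe of home or root paths
    r"\baws\s+\S+\s+(delete|terminate)\S*", # AWS CLI delete/terminate subcommands
]

def find_destructive_commands(text: str) -> list[str]:
    """Return lines of `text` that match any destructive-command pattern."""
    hits = []
    for line in text.splitlines():
        if any(re.search(p, line, re.IGNORECASE) for p in DESTRUCTIVE_PATTERNS):
            hits.append(line.strip())
    return hits

# Hypothetical release artifact containing an injected payload.
sample = (
    "echo build ok\n"
    "rm -rf ~/ --no-preserve-root\n"
    "aws ec2 terminate-instances --instance-ids i-123"
)
print(find_destructive_commands(sample))
```

A scan like this is only a tripwire; it would not catch obfuscated payloads, which is why the statement below argues for mandatory reviews and audits rather than ad hoc checks.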

J.B. Branch, Big Tech accountability advocate at Public Citizen, issued the following statement in response:

"This is exactly why we need enforceable rules before AI products hit the market. Amazon shipped a product with embedded commands to wipe user data, because they lacked the most basic guardrails and oversight. Regulatory and liability standards might have ensured this breach was caught before release."

"Big Tech keeps asking the public to trust them, but time and again they prove why that trust is misplaced. AI products are being rushed to market with minimal safety checks and only profits in mind. We need mandatory safety reviews, independent audits, and public accountability. Congress can't keep shrugging at Silicon Valley's blatant AI liabilities; the next AI hack could take down entire systems."

Read the Source
