AI Incident Database

Incident 898: Alleged LLMjacking Targets AI Cloud Services with Stolen Credentials

Description: Attackers reportedly exploited stolen cloud credentials, obtained through a vulnerable Laravel system (CVE-2021-3129), to abuse AI cloud services, including Anthropic’s Claude and AWS Bedrock, in a scheme referred to as “LLMjacking.” The attackers are said to have monetized access through reverse proxies, reportedly inflating victim costs to as much as $100,000 per day. They also allegedly bypassed sanctions, enabled additional LLMs, and evolved their techniques to evade detection and logging.
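Because the harm described above is metered LLM usage billed to the credential owner, a cost-signal monitor is one natural defender-side illustration. The sketch below is not from the incident reports; it assumes the CloudWatch metrics that Amazon Bedrock publishes under the AWS/Bedrock namespace (Invocations), plus an illustrative region and hourly threshold.

```python
# Hedged monitoring sketch: flag hour-long spikes in Amazon Bedrock invocation
# volume, the cost signal described in this incident. AWS/Bedrock and its
# Invocations metric are real CloudWatch names; the region and threshold below
# are illustrative assumptions, not values from the reports.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region
THRESHOLD_PER_HOUR = 5_000  # assumed baseline; tune to your normal traffic

end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

# Bedrock publishes Invocations per dimension set (e.g., ModelId), so list the
# available dimension combinations and query each one.
metrics = cloudwatch.list_metrics(Namespace="AWS/Bedrock", MetricName="Invocations")

for metric in metrics["Metrics"]:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Bedrock",
        MetricName="Invocations",
        Dimensions=metric["Dimensions"],
        StartTime=start,
        EndTime=end,
        Period=3600,  # one-hour buckets
        Statistics=["Sum"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        if point["Sum"] > THRESHOLD_PER_HOUR:
            print(f"ALERT: {int(point['Sum'])} invocations at "
                  f"{point['Timestamp']} for {metric['Dimensions']}")
```

A burst like the 61,000 calls in three hours described in the editor notes would clear almost any plausible threshold; the harder problem is baselining legitimate traffic so quieter abuse still stands out.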
Editor Notes: Incident 898 presents an editorial challenge in synthesizing events from multiple reports, pointing to the evolution of LLMjacking trends over time. The following is a reconstruction of the key events outlined in Sysdig's investigative reports:

(1) 05/06/2024: Sysdig's Alessandro Brucato publishes the initial LLMjacking report. Attackers reportedly exploited stolen cloud credentials obtained via a Laravel vulnerability (CVE-2021-3129) to access cloud-hosted LLMs such as Anthropic's Claude. Monetization allegedly occurred via reverse proxies, potentially costing victims up to $46,000 per day.

(2) 07/11/2024: A significant spike in LLMjacking activity is reportedly observed, with over 61,000 AWS Bedrock API calls logged in a three-hour window, allegedly generating significant costs for victims.

(3) 07/24/2024: A second surge in activity reportedly occurs, with 15,000 additional API calls detected. Attackers are alleged to have escalated their abuse of the APIs and developed new scripts to automate LLM interactions.

(4) 09/18/2024: Sysdig publishes a second report detailing evolving attacker tactics, including allegedly enabling foundation models via API calls (e.g., PutFoundationModelEntitlement) and tampering with logging configurations (e.g., DeleteModelInvocationLoggingConfiguration) to evade detection. Motives reportedly expanded to include bypassing sanctions, enabling access from restricted regions, and role-playing use cases.

(5) Ongoing: Sysdig and other researchers continue to observe alleged LLMjacking incidents, reportedly involving other LLMs such as Claude 3 Opus and OpenAI systems. Victim costs have allegedly risen to over $100,000 per day, reportedly fueling a black market for stolen credentials.
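The two Bedrock control-plane actions named in item (4) double as concrete CloudTrail detection signals. In the sketch below, the event names come from the Sysdig reporting and lookup_events with an EventName attribute is a standard boto3 CloudTrail call; the region and lookback window are assumptions.

```python
# Hedged detection sketch: search CloudTrail for the two Bedrock API actions
# the Sysdig reports associate with LLMjacking. The event names are from the
# reports; the region and 7-day lookback are illustrative assumptions.
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # assumed region

SUSPECT_EVENTS = [
    "PutFoundationModelEntitlement",              # allegedly used to enable models
    "DeleteModelInvocationLoggingConfiguration",  # allegedly used to evade logging
]

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

paginator = cloudtrail.get_paginator("lookup_events")
for event_name in SUSPECT_EVENTS:
    pages = paginator.paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
        EndTime=end,
    )
    for page in pages:
        for event in page["Events"]:
            # Each hit identifies the calling principal and time for triage.
            print(f"{event_name}: {event.get('Username', 'unknown')} "
                  f"at {event['EventTime']}")
```

Neither action is typically frequent in routine operation, so even a single hit from an unfamiliar principal is likely worth investigating.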

Entities

View all entities
Alleged: OAI Reverse Proxy Tool Creators and LLMjacking Reverse Proxy Tool Creators developed an AI system deployed by LLMjacking Attackers Exploiting Laravel and Entities engaging in Russian sanctions evasion, which harmed Laravel users, Laravel CVE-2021-3129 users, Cloud LLM users, and Cloud LLM service providers.
Alleged implicated AI systems: OpenRouter services, OpenAI models, Mistral-hosted models, MakerSuite tools, GCP Vertex AI models, ElevenLabs services, Azure-hosted LLMs, AWS Bedrock-hosted models, Anthropic Claude (v2/v3), and AI21 Labs models

Incident Stats

Incident ID: 898
Report Count: 2
Incident Date: 2024-05-06
Editors: Daniel Atherton

Incident Reports

LLMjacking: Stolen Cloud Credentials Used in New AI Attack
sysdig.com · 2024

The Sysdig Threat Research Team (TRT) recently observed a new attack that leveraged stolen cloud credentials in order to target ten cloud-hosted large language model (LLM) services, known as LLMjacking. The credentials were obtained from a …

The Growing Dangers of LLMjacking: Evolving Tactics and Evading Sanctions
sysdig.com · 2024

Following the Sysdig Threat Research Team's (TRT) discovery of LLMjacking --- the illicit use of an LLM through compromised credentials --- the number of attackers and their methods have proliferated. While there has been an uptick in attac…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

The DAO Hack
Jun 2016 · 24 reports

Tesla Autopilot’s Lane Recognition Allegedly Vulnerable to Adversarial Attacks
Mar 2019 · 1 report

Game AI System Produces Imbalanced Game
Jun 2016 · 11 reports