Description: Attackers reportedly exploited stolen cloud credentials obtained through a vulnerable Laravel system (CVE-2021-3129) to allegedly abuse AI cloud services, including Anthropic's Claude and AWS Bedrock, in a scheme referred to as "LLMjacking." The attackers are said to have monetized access through reverse proxies, reportedly inflating victim costs to as much as $100,000 per day. Additionally, they allegedly bypassed sanctions, enabled additional LLMs via entitlement APIs, and evolved their techniques to evade detection and logging.
Editor Notes: Incident 898 presents an editorial challenge in synthesizing events from multiple reports, pointing to the evolution of LLMjacking trends over time. The following is a reconstruction of key incidents outlined in Sysdig's investigative reports:
(1) 05/06/2024: Initial publication of the LLMjacking report by Sysdig's Alessandro Brucato. Attackers reportedly exploited stolen cloud credentials obtained via a Laravel vulnerability (CVE-2021-3129) to access cloud-hosted LLMs like Anthropic Claude. Monetization allegedly occurred via reverse proxies, potentially costing victims up to $46,000 per day.
(2) 07/11/2024: Significant spike in LLMjacking activity reportedly observed, with over 61,000 AWS Bedrock API calls logged in a three-hour window, allegedly generating significant costs to victims.
(3) 07/24/2024: A second surge in activity reportedly occurred, with 15,000 additional API calls detected. Attackers are alleged to have escalated the abuse of APIs and developed new scripts to automate LLM interactions.
(4) 09/18/2024: Sysdig's second report detailed evolving attacker tactics, including allegedly enabling LLMs via API calls (e.g., PutFoundationModelEntitlement) and tampering with logging configurations (e.g., DeleteModelInvocationLoggingConfiguration) to evade detection. Motives reportedly expanded to include bypassing sanctions, enabling access in restricted regions, and role-playing use cases.
(5) Ongoing: Sysdig and other researchers continue to observe alleged LLMjacking incidents, reportedly involving other LLMs like Claude 3 Opus and OpenAI systems. Victim costs have allegedly risen to over $100,000 per day of LLM usage, which is reportedly fueling a black market for stolen credentials.
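The tactics above reportedly surface as specific API event names in AWS CloudTrail logs. As an illustrative sketch only (not Sysdig's tooling): the two Bedrock event names below come from the reports, while the helper name `flag_llmjacking_events` and the sample records are hypothetical, using the standard CloudTrail record fields.

```python
# Sketch: filter parsed CloudTrail records for event names that the Sysdig
# reports associate with LLMjacking (enabling models, disabling invocation
# logging). Records follow the standard CloudTrail JSON schema.

SUSPICIOUS_EVENTS = {
    "PutFoundationModelEntitlement",              # enabling a foundation model
    "DeleteModelInvocationLoggingConfiguration",  # disabling invocation logging
}

def flag_llmjacking_events(records):
    """Return records whose eventName matches a known LLMjacking tactic."""
    return [r for r in records if r.get("eventName") in SUSPICIOUS_EVENTS]

# Hypothetical sample records (fields abbreviated for illustration)
records = [
    {"eventName": "InvokeModel",
     "eventSource": "bedrock.amazonaws.com"},
    {"eventName": "DeleteModelInvocationLoggingConfiguration",
     "eventSource": "bedrock.amazonaws.com"},
]

print([r["eventName"] for r in flag_llmjacking_events(records)])
# → ['DeleteModelInvocationLoggingConfiguration']
```

In practice such a filter would run over events fetched from CloudTrail; the point here is only that the tactics described in the reports are visible as discrete, nameable API actions.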
Entities
Alleged: OAI Reverse Proxy Tool Creators and LLMjacking Reverse Proxy Tool Creators developed an AI system deployed by LLMjacking Attackers Exploiting Laravel and Entities engaging in Russian sanctions evasion, which harmed Laravel users, Laravel CVE-2021-3129 users, Cloud LLM users, and Cloud LLM service providers.
Alleged implicated AI systems: OpenRouter services, OpenAI models, Mistral-hosted models, MakerSuite tools, GCP Vertex AI models, ElevenLabs services, Azure-hosted LLMs, AWS Bedrock-hosted models, Anthropic Claude (v2/v3), and AI21 Labs models
Incident Stats
ID
898
Report Count
2
Incident Date
2024-05-06
Editors
Daniel Atherton
Incident Reports
Report Timeline
sysdig.com · 2024
- View the original report at its source
- View the report on the Internet Archive
The Sysdig Threat Research Team (TRT) recently observed a new attack that leveraged stolen cloud credentials to target ten cloud-hosted large language model (LLM) services, known as…
sysdig.com · 2024
- View the original report at its source
- View the report on the Internet Archive
Following the Sysdig Threat Research Team's (TRT) discovery of LLMjacking --- the illicit use of an LLM through compromised credentials --- the number of attackers and their methods has proliferated. Although there has been a…
Variants
A "Variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list incident variations under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting as evidence external to the incident database. Learn more from the research paper.
Similar Incidents
The DAO (organization)
· 24 reports
6 goof-ups that show AI is still in its diapers
· 11 reports