OpenAI models
Incidents implicated systems
Incident 939 · 6 Reports
AI-Powered Chinese Surveillance Campaign 'Peer Review' Used for Real-Time Monitoring of Anti-State Speech on Western Social Media
2025-02-21
OpenAI reportedly uncovered evidence of a Chinese state-linked AI-powered surveillance campaign, dubbed "Peer Review," designed to monitor and report anti-state speech on Western social media in real time. The system, believed to be built on Meta’s open-source Llama model, was detected when a developer allegedly used OpenAI’s technology to debug its code. OpenAI also reportedly identified disinformation efforts targeting Chinese dissidents and spreading propaganda in Latin America.
Incident 997 · 3 Reports
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
2023-02-28
Court records reveal that Meta employees allegedly discussed pirating books to train LLaMA 3, citing cost and speed concerns with licensing. Internal messages suggest Meta accessed LibGen, a repository of over 7.5 million pirated books, with apparent approval from Mark Zuckerberg. Employees allegedly took steps to obscure the dataset’s origins. OpenAI has also been implicated in using LibGen.
Incident 898 · 2 Reports
Alleged LLMjacking Targets AI Cloud Services with Stolen Credentials
2024-05-06
Attackers reportedly exploited stolen cloud credentials obtained through a vulnerable Laravel system (CVE-2021-3129) to abuse AI cloud services, including Anthropic’s Claude and AWS Bedrock, in a scheme referred to as “LLMjacking.” The attackers are said to have monetized access through reverse proxies, reportedly inflating victims' costs to as much as $100,000 per day. They also allegedly bypassed sanctions restrictions, enabled additional models within compromised accounts, and evolved their techniques to evade detection and logging.