DeepSeek
Incidents involved as developer and deployer
Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
2025-06-25
On June 25, 2025, the American Security Project produced a report outlining how several major U.S. LLMs, including ChatGPT, Microsoft Copilot, Google Gemini, and Grok, sometimes generated responses aligned with Chinese Communist Party propaganda or censorship when prompted in English and Simplified Chinese on sensitive topics. The study also found similar patterns in the Chinese-developed DeepSeek-R1 model.
Incidents implicated systems
Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
2025-06-25
Incident 956 · 1 Report
Alleged Inclusion of 12,000 Live API Keys in LLM Training Data Reportedly Poses Security Risks
2025-02-28
A dataset used to train large language models allegedly contained 12,000 live API keys and authentication credentials. Some of these were reportedly still active and allowed unauthorized access. Truffle Security found these secrets in a December 2024 Common Crawl archive, which spans 250 billion web pages. The affected credentials could have been exploited for unauthorized data access, service disruptions, financial fraud, and a variety of other malicious uses.
Incident 1026 · 1 Report
Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session
2025-04-12
Substack user @interruptingtea reports that, during a non-adversarial venting session involving suicidal ideation, multiple large language models (Claude, GPT, and DeepSeek) responded in ways that allegedly normalized or endorsed suicide as a viable option. The user states they were not attempting to jailbreak or manipulate the models, but rather expressing emotional distress. DeepSeek reportedly reversed its safety stance mid-conversation.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
Microsoft
Incidents involved as developer and deployer
- Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
- Incident 956 · 1 Report
Alleged Inclusion of 12,000 Live API Keys in LLM Training Data Reportedly Poses Security Risks