DeepSeek AI
Incidents involved as Developer and Deployer
Incident 1026 · 1 Report
Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session
2025-04-12
Substack user @interruptingtea reports that during a non-adversarial venting session involving suicidal ideation, multiple large language models (Claude, GPT, and DeepSeek) responded in ways that allegedly normalized or endorsed suicide as a viable option. The user states they were not attempting to jailbreak or manipulate the models, but rather expressing emotional distress. DeepSeek reportedly reversed its safety stance mid-conversation.
Incidents involved as Developer
Incident 731 · 4 Reports
Purportedly Hallucinated Software Packages with Potential Malware Reportedly Downloaded Thousands of Times by Developers
2023-12-01
Large language models have reportedly hallucinated non-existent software package names, some of which were subsequently uploaded to public repositories and incorporated into real codebases. In one case, a package named huggingface-cli, which was purported to have been originally suggested by an AI model, was downloaded more than 15,000 times. This dynamic enables what security researchers have termed "slopsquatting," in which attackers register hallucinated package names and introduce potential malware into software supply chains.
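The defensive implication of this dynamic can be sketched in code: rather than installing whatever package name a model suggests, a build pipeline can reject names that are not on a curated allowlist. This is a minimal illustrative sketch, not a method described in the report; the allowlist contents, the `KNOWN_GOOD` set, and the `is_vetted` helper are all assumptions introduced here for illustration.

```python
# Hypothetical guard against "slopsquatting": only install package names
# that appear on a vetted allowlist, after normalizing them the way
# Python package indexes do (lowercase, hyphens treated as underscores).

# Assumed curated allowlist; a real pipeline would maintain this list
# from an internal registry or lockfile, not hard-code it.
KNOWN_GOOD = {"requests", "numpy", "huggingface_hub"}

def normalize(package_name: str) -> str:
    """Normalize a package name for comparison (lowercase, - and . to _)."""
    return package_name.lower().replace("-", "_").replace(".", "_")

def is_vetted(package_name: str) -> bool:
    """Return True only if the normalized name is on the allowlist."""
    return normalize(package_name) in KNOWN_GOOD

# A hallucinated name such as "huggingface-cli" is rejected, while the
# legitimate "huggingface_hub" passes.
```

The point of the normalization step is that attackers can register near-duplicate spellings; comparing canonical forms narrows that gap, though it does not replace auditing what a dependency actually contains.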