Perplexity AI
Incidents Harmed By
Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
2024-04-29
The "Dan" ("Do Anything Now") AI boyfriend is a trend on TikTok in which users appear to regularly manipulate ChatGPT to adopt boyfriend personas, breaching content policies. ChatGPT 3.5 is reported to regularly produce explicitly sexual content, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations, and although they exhibited more resistance to breaches, some prompts were reported to break its guidelines.
Incidents involved as Deployer
Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
2024-04-29
The "Dan" ("Do Anything Now") AI boyfriend is a trend on TikTok in which users appear to regularly manipulate ChatGPT to adopt boyfriend personas, breaching content policies. ChatGPT 3.5 is reported to regularly produce explicitly sexual content, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations, and although they exhibited more resistance to breaches, some prompts were reported to break its guidelines.
Incidents implicated systems
Incident 1071 · 1 Report
Student Reportedly Files Complaint Over Professor's Undisclosed Use of Generative AI at Northeastern University
2025-05-14
A student at Northeastern University reportedly filed a complaint after discovering that a professor had used generative AI tools, including ChatGPT, to produce course materials despite university policies discouraging undisclosed AI use. The student alleged hypocrisy, citing a syllabus that barred unauthorized AI use by students. The professor acknowledged having used AI-generated materials and expressed regret over not reviewing them more carefully. No tuition refund was reportedly granted.
Incident 1205 · 1 Report
Multiple Generative AI Systems Reportedly Amplify False Information During Charlie Kirk Assassination Coverage
2025-09-11
Multiple AI systems allegedly spread false claims in the aftermath of Charlie Kirk's assassination at Utah Valley University. Perplexity and Grok chatbots reportedly stated Kirk was alive, mischaracterized authentic video as satire, and wrongly identified Utah Democrat Michael Mallinson as the suspect. A Google AI Overview allegedly claimed Kirk was on Ukraine's Myrotvorets "enemies" list, a reported falsehood that echoed pro-Kremlin narratives.
Related Entities
Other entities involved in the same incidents. For example, if this entity is an incident's developer and another entity is its deployer, the two are marked as related entities.
ChatGPT
Incidents involved as Deployer
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
Incidents implicated systems
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
OpenAI
Incidents Harmed By
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios