Incident Reports
We are committed to preventing misuse of our Claude models by adversarial actors while maintaining their utility for legitimate users. While our safety measures successfully prevent many harmful outputs, threat actors continue to explore me…
LLMs and generative AI systems are being rapidly deployed across industries, and their scale creates fresh opportunities for threat actors.
Recently, a threat report from Anthropic discussed malicious uses of the AI model Claude. While the re…
Artificial intelligence (AI) company Anthropic has revealed that unknown threat actors leveraged its Claude chatbot for an "influence-as-a-service" operation to engage with authentic accounts across Facebook and X.
The sophisticated activit…
Anthropic has revealed a concerning example of how AI tools can be weaponized to scale influence operations, according to a report by The Hacker News. The company found that its Claude chatbot was used to create and manage a network of poli…
Claude AI, developed by Anthropic, has been exploited by malicious actors in a range of adversarial operations, most notably a financially motivated "influence-as-a-service" campaign.
This operation leveraged Claude's advanced language capa…