Entities
Incident Stats
Incident Reports
Reports Timeline
We are committed to preventing misuse of our Claude models by adversarial actors while maintaining their utility for legitimate users. While our safety measures successfully prevent many harmful outputs, threat actors continue to explore me…
Artificial intelligence (AI) company Anthropic has revealed that unknown threat actors leveraged its Claude chatbot for an "influence-as-a-service" operation to engage with authentic accounts across Facebook and X.
The sophisticated activit…
Anthropic has revealed a concerning example of how AI tools can be weaponized to scale influence operations, according to a report by The Hacker News. The company found that its Claude chatbot was used to create and manage a network of poli…
Variants
Similar Incidents