Unknown large language models
Incidents implicated systems
Incident 1118 (41 Reports)
Ongoing Purported AI-Assisted Identity Fraud Enables Unauthorized Access to Western Companies by North Korean IT Workers
2021-01-01
North Korean operatives have reportedly used AI-generated identities to secure remote jobs or impersonate employers in order to infiltrate companies. These tactics allegedly support sanctions evasion through wage theft, credential exfiltration, and malware deployment. Workers reportedly use fake resumes, VPNs, and face-altering tools; some deploy malware like OtterCookie after embedding, while others lure targets via spoofed job interviews. AI systems are reportedly used to generate fake resumes, alter profile photos, and assist in real-time responses during video interviews.
Incident 701 (36 Reports)
American Asylum Seeker John Mark Dougan in Russia Reportedly Spreads Disinformation via AI Tools and Fake News Network
2024-05-29
John Mark Dougan, a former Florida sheriff's deputy granted asylum in Russia, has been implicated in spreading disinformation. Utilizing AI tools like OpenAI's ChatGPT and DALL-E 3, Dougan created over 160 fake news sites, disseminating false narratives to millions worldwide. His actions align with Russian disinformation strategies targeting Western democracies. See also Incident 734.
Incident 1145 (26 Reports)
MyPillow Defense Lawyers in Coomer v. Lindell Reportedly Sanctioned for Filing Court Document Allegedly Containing AI-Generated Legal Citations
2025-02-25
In February 2025, lawyers Christopher I. Kachouroff and Jennifer T. DeMaster, representing Mike Lindell, reportedly used generative AI to draft a court brief that contained nearly 30 defective or fabricated citations. The error-filled filing violated federal court rules requiring factual and legal accuracy. The judge fined each lawyer $3,000, citing either improper use of AI or gross carelessness as the cause of the misleading legal content.
Incident 960 (11 Reports)
Plaintiffs' Lawyers Admit AI Generated Erroneous Case Citations in Federal Court Filing Against Walmart
2025-02-06
Lawyers Rudwin Ayala, T. Michael Morgan (Morgan & Morgan), and Taly Goody (Goody Law Group) were fined a total of $5,000 after their Wyoming federal lawsuit filing against Walmart cited fake cases "hallucinated" by AI. Judge Kelly Rankin sanctioned them, removing Ayala from the case and noting attorneys must verify AI sources. The filing, flagged by Walmart’s legal team, led to its withdrawal and an internal review.
Related Entities
Other entities related to the same incidents. For example, if this entity is the developer implicated in an incident and another entity is the deployer, the latter is listed here as a related entity.
Unknown large language model developers
Incidents involved as Developer
- Incident 1118 (41 Reports)
Ongoing Purported AI-Assisted Identity Fraud Enables Unauthorized Access to Western Companies by North Korean IT Workers
- Incident 1145 (26 Reports)
MyPillow Defense Lawyers in Coomer v. Lindell Reportedly Sanctioned for Filing Court Document Allegedly Containing AI-Generated Legal Citations
Epistemic integrity
Incidents Harmed By
- Incident 1118 (41 Reports)
Ongoing Purported AI-Assisted Identity Fraud Enables Unauthorized Access to Western Companies by North Korean IT Workers
- Incident 1145 (26 Reports)
MyPillow Defense Lawyers in Coomer v. Lindell Reportedly Sanctioned for Filing Court Document Allegedly Containing AI-Generated Legal Citations