Unknown large language models
Incidents implicated systems
Incident 1118 · 41 Reports
Ongoing Purported AI-Assisted Identity Fraud Enables Unauthorized Access to Western Companies by North Korean IT Workers
2021-01-01
North Korean operatives have reportedly used AI-generated identities to secure remote jobs or impersonate employers in order to infiltrate companies. These tactics allegedly support sanctions evasion through wage theft, credential exfiltration, and malware deployment. Workers reportedly use fake resumes, VPNs, and face-altering tools; some deploy malware like OtterCookie after embedding, while others lure targets via spoofed job interviews. AI systems are reportedly used to generate fake resumes, alter profile photos, and assist in real-time responses during video interviews.
Incident 701 · 36 Reports
American Asylum Seeker John Mark Dougan in Russia Reportedly Spreads Disinformation via AI Tools and Fake News Network
2024-05-29
John Mark Dougan, a former Florida sheriff's deputy granted asylum in Russia, has been implicated in spreading disinformation. Using AI tools such as OpenAI's ChatGPT and DALL-E 3, Dougan reportedly created over 160 fake news sites that disseminated false narratives to millions of readers worldwide. His activities align with Russian disinformation strategies targeting Western democracies. See also Incident 734.
Incident 1145 · 26 Reports
MyPillow Defense Lawyers in Coomer v. Lindell Reportedly Sanctioned for Filing Court Document Allegedly Containing AI-Generated Legal Citations
2025-02-25
In February 2025, lawyers Christopher I. Kachouroff and Jennifer T. DeMaster, representing Mike Lindell, reportedly used generative AI to draft a court brief that contained nearly 30 defective or fabricated citations. The error-filled filing violated federal court rules requiring factual and legal accuracy. The judge fined both lawyers $3,000 each, citing either the improper use of AI or gross carelessness as the cause of the misleading legal content.
Incident 960 · 11 Reports
Plaintiffs' Lawyers Admit AI Generated Erroneous Case Citations in Federal Court Filing Against Walmart
2025-02-06
Lawyers Rudwin Ayala, T. Michael Morgan (Morgan & Morgan), and Taly Goody (Goody Law Group) were fined a total of $5,000 after their federal lawsuit filing against Walmart in Wyoming cited fake cases "hallucinated" by AI. Judge Kelly Rankin sanctioned them, removed Ayala from the case, and noted that attorneys must verify their AI-assisted sources. The filing, flagged by Walmart's legal team, was withdrawn and prompted an internal review.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but its deployer is another entity, they are marked as related entities.
Springer Nature
Incidents involved as Deployer
- Incident 1308 · 1 Report
Springer Nature Book 'Mastering Machine Learning: From Basics to Advanced' Reportedly Published With Numerous Purportedly Nonexistent or Incorrect Citations
- Incident 1309 · 1 Report
Springer Nature Book 'Social, Ethical and Legal Aspects of Generative AI: Tools, Techniques and Systems' Reportedly Published With Numerous Purportedly Fabricated or Unverifiable Citations
students
Incidents Harmed By
- Incident 1308 · 1 Report
Springer Nature Book 'Mastering Machine Learning: From Basics to Advanced' Reportedly Published With Numerous Purportedly Nonexistent or Incorrect Citations
- Incident 1309 · 1 Report
Springer Nature Book 'Social, Ethical and Legal Aspects of Generative AI: Tools, Techniques and Systems' Reportedly Published With Numerous Purportedly Fabricated or Unverifiable Citations
Academic researchers
Incidents Harmed By
- Incident 1308 · 1 Report
Springer Nature Book 'Mastering Machine Learning: From Basics to Advanced' Reportedly Published With Numerous Purportedly Nonexistent or Incorrect Citations
- Incident 1309 · 1 Report
Springer Nature Book 'Social, Ethical and Legal Aspects of Generative AI: Tools, Techniques and Systems' Reportedly Published With Numerous Purportedly Fabricated or Unverifiable Citations