Unknown large language models
Incidents implicated systems
Incident 1118 · 41 Reports
Ongoing Purported AI-Assisted Identity Fraud Enables Unauthorized Access to Western Companies by North Korean IT Workers
2021-01-01
North Korean operatives have reportedly used AI-generated identities to secure remote jobs or impersonate employers in order to infiltrate companies. These tactics allegedly support sanctions evasion through wage theft, credential exfiltration, and malware deployment. Workers reportedly use fake resumes, VPNs, and face-altering tools; some deploy malware like OtterCookie after embedding, while others lure targets via spoofed job interviews. AI systems are reportedly used to generate fake resumes, alter profile photos, and assist in real-time responses during video interviews.
Incident 701 · 36 Reports
American Asylum Seeker John Mark Dougan in Russia Reportedly Spreads Disinformation via AI Tools and Fake News Network
2024-05-29
John Mark Dougan, a former Florida sheriff's deputy granted asylum in Russia, has been implicated in spreading disinformation. Utilizing AI tools like OpenAI's ChatGPT and DALL-E 3, Dougan created over 160 fake news sites, disseminating false narratives to millions worldwide. His actions align with Russian disinformation strategies targeting Western democracies. See also Incident 734.
Incident 1145 · 26 Reports
MyPillow Defense Lawyers in Coomer v. Lindell Reportedly Sanctioned for Filing Court Document Allegedly Containing AI-Generated Legal Citations
2025-02-25
In February 2025, lawyers Christopher I. Kachouroff and Jennifer T. DeMaster, representing Mike Lindell, reportedly used generative AI to draft a court brief that contained nearly 30 defective or fabricated citations. The error-filled filing violated federal court rules requiring factual and legal accuracy. The judge fined both lawyers $3,000 each, citing either the improper use of AI or gross carelessness as the cause of the misleading legal content.
Incident 960 · 11 Reports
Plaintiffs' Lawyers Admit AI Generated Erroneous Case Citations in Federal Court Filing Against Walmart
2025-02-06
Lawyers Rudwin Ayala, T. Michael Morgan (Morgan & Morgan), and Taly Goody (Goody Law Group) were fined a total of $5,000 after their Wyoming federal lawsuit filing against Walmart cited fake cases "hallucinated" by AI. Judge Kelly Rankin sanctioned them, removing Ayala from the case and noting attorneys must verify AI sources. The filing, flagged by Walmart’s legal team, led to its withdrawal and an internal review.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
Unknown large language model developers
Incidents involved as Developer
- Incident 1118 · 41 Reports
Ongoing Purported AI-Assisted Identity Fraud Enables Unauthorized Access to Western Companies by North Korean IT Workers
- Incident 1145 · 26 Reports
MyPillow Defense Lawyers in Coomer v. Lindell Reportedly Sanctioned for Filing Court Document Allegedly Containing AI-Generated Legal Citations
Epistemic integrity
Affected by Incidents
- Incident 1118 · 41 Reports
Ongoing Purported AI-Assisted Identity Fraud Enables Unauthorized Access to Western Companies by North Korean IT Workers
- Incident 1145 · 26 Reports
MyPillow Defense Lawyers in Coomer v. Lindell Reportedly Sanctioned for Filing Court Document Allegedly Containing AI-Generated Legal Citations
Springer Nature
Incidents involved as Deployer
- Incident 1308 · 1 Report
Springer Nature Book 'Mastering Machine Learning: From Basics to Advanced' Reportedly Published With Numerous Purportedly Nonexistent or Incorrect Citations
- Incident 1309 · 1 Report
Springer Nature Book 'Social, Ethical and Legal Aspects of Generative AI: Tools, Techniques and Systems' Reportedly Published With Numerous Purportedly Fabricated or Unverifiable Citations
Students
Affected by Incidents
- Incident 1308 · 1 Report
Springer Nature Book 'Mastering Machine Learning: From Basics to Advanced' Reportedly Published With Numerous Purportedly Nonexistent or Incorrect Citations
- Incident 1309 · 1 Report
Springer Nature Book 'Social, Ethical and Legal Aspects of Generative AI: Tools, Techniques and Systems' Reportedly Published With Numerous Purportedly Fabricated or Unverifiable Citations
Academic researchers
Affected by Incidents
- Incident 1308 · 1 Report
Springer Nature Book 'Mastering Machine Learning: From Basics to Advanced' Reportedly Published With Numerous Purportedly Nonexistent or Incorrect Citations
- Incident 1309 · 1 Report
Springer Nature Book 'Social, Ethical and Legal Aspects of Generative AI: Tools, Techniques and Systems' Reportedly Published With Numerous Purportedly Fabricated or Unverifiable Citations