Incidents implicating system: Unknown large language model
Incident 938 · 6 Reports
AI-Assisted Impersonation of Martin Henderson in Romance Scam Leads to Reported NZ$375,000 Fraud
2025-02-01
An alleged online romance scam exploiting AI-assisted impersonation defrauded a woman, "Lea," of NZ$375,000 over two years. The scammer used AI-generated voice messages and deceptive text communication to pose as New Zealand actor Martin Henderson, cultivating a relationship that began in early 2023. The victim, believing the relationship was genuine, reportedly relocated to New Zealand in December 2024, only discovering the fraud in February 2025.
Incident 1104 · 3 Reports
AI Chatbot Allegedly Used to Research Explosive Materials in Palm Springs Fertility Clinic Bombing
2025-05-17
An unnamed AI chatbot was reportedly used by Guy Edward Bartkus and Daniel Park, the perpetrators of the 2025 Palm Springs fertility clinic bombing, to research explosive materials and optimize fuel mixtures. Records show the chatbot responded to queries related to ammonium nitrate fuel oil (ANFO) composition. The bombing resulted in one death, four injuries, and significant structural damage.
Incident 1137 · 3 Reports
Reported False Legal Citations in South African Case Mavundla v. MEC Generated by Purported AI Tool
2024-09-20
In Mavundla v. MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal, the law firm Surendra Singh and Associates, representing South African politician Philani Godfrey Mavundla, reportedly submitted false legal citations, apparently generated by AI, in filings at the Pietermaritzburg High Court. The court reportedly found that many of the cited cases did not exist in any databases, and the firm's legal team and article clerk allegedly failed to verify the references.
Incident 1073 · 2 Reports
$31,000 Sanction in Lacey v. State Farm Tied to Purportedly Undisclosed Use of LLMs and Erroneous Citations
2025-04-15
In Lacey v. State Farm, two law firms were sanctioned $31,000 after submitting a legal brief containing reportedly erroneous citations generated using AI tools. The court reportedly found that the lawyers failed to disclose their use of AI, neglected to verify its output, and refiled a revised brief containing additional inaccuracies. Judge Michael Wilner deemed the conduct reckless and issued sanctions for what he described as "improper" and "misleading" legal filings.