Associated Incidents

- Federal judge orders Mike Lindell’s attorneys to pay $3,000 each for filing AI-generated motion containing nearly 30 defective citations and nonexistent case references
- Lead attorney admitted to running draft motion through AI without proper verification, delegating fact-checking to co-counsel who failed to validate citations
- Court ruling highlights critical need for legal profession to establish AI competency standards and proper prompt engineering protocols for law firms
A federal judge has delivered a stark warning to the legal profession about the perils of artificial intelligence misuse, sanctioning two attorneys representing MyPillow CEO Mike Lindell with $3,000 fines each for filing a court document riddled with AI-generated errors, including citations to cases that simply do not exist.
U.S. District Judge Nina Y. Wang ruled Monday that Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed a motion containing nearly 30 fake or defective citations in Lindell’s defamation case. The sanctions underscore a growing crisis in legal practice as attorneys increasingly turn to AI tools without adequate safeguards or understanding of their limitations.
The flawed filing emerged from a contentious defamation lawsuit brought by Eric Coomer, former director of security at Dominion Voting Systems, against Lindell and his companies. Coomer successfully argued that Lindell defamed him by spreading rumors that he engaged in election rigging, and a jury awarded him $2.3 million in damages in June 2025, far less than the $62.7 million he sought but still a significant victory.
The case centered on Lindell’s role in promoting claims that Coomer manipulated voting systems to favor Joe Biden in the 2020 election. According to Coomer, these allegations led to death threats, forcing him into hiding and ultimately costing him his career in election security.
The legal disaster began when Kachouroff filed what appeared to be a standard opposition motion on February 25, 2025. However, when Judge Wang questioned the numerous citation errors during a pretrial conference, a troubling pattern emerged.
Under direct questioning from the court, Kachouroff made a damaging admission: “Not initially. Initially, I did an outline for myself, and I drafted a motion, and then we ran it through AI.” When Wang pressed further about verification, asking if he “double checked the citations once it was run through artificial intelligence,” Kachouroff’s response was devastating: “Your Honor, I personally did not check it. I am responsible for it not being checked.”
Even more concerning, Kachouroff claimed the error-laden document was filed by accident – a “draft” version. Yet Judge Wang found that the supposedly “final” version he intended to file contained additional substantive errors, suggesting a complete breakdown in quality control procedures.
In the legal world, an AI “hallucination” refers to a situation where a language model fabricates facts, citations, or quotes that appear real but are entirely false. Contrary to public misconception, these hallucinations are not occasional bugs but an inherent byproduct of how generative models work: they produce statistically plausible text without any mechanism for verifying whether it is true.
Attorneys using AI must understand that AI-generated content should be treated as a rough draft at best, not as a final, court-ready product. The lack of basic verification on the part of Lindell’s counsel demonstrates not only a failure of due diligence but a failure to understand what AI is and isn’t.
What They Should Have Done Instead

Had the attorneys understood the core principles of prompt engineering and AI limitations, they could have used their original draft to command AI assistance more precisely. For example:
Sample Prompt Template for Safe Legal Drafting with AI:
“Using the motion draft provided below, refine the language to be more persuasive and legally sound, but do not fabricate case law. Cite only verified, real legal authorities and include accurate, verbatim quotations from the cited cases. Clearly mark any sources or citations that should be manually verified before submission.”
CRITICAL INSTRUCTIONS:

- Preserve all original citations exactly as provided
- Flag any legal assertions that may need additional support
- Do not fabricate any legal authorities, case names, or quotations
- Maintain original legal arguments while improving presentation
Attorneys should then cross-check each citation and quote for accuracy. This process preserves the strengths of AI (clarity, structure, persuasion) without inviting the risk of unverified or fictional content.
This approach would have allowed the attorneys to leverage AI’s strengths in organization and prose enhancement while avoiding the fatal trap of fabricated citations.
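The human verification step described above can be partially supported by tooling. As a minimal sketch (the regex pattern, function name, and sample draft are all illustrative assumptions, not part of the case record), the snippet below pulls citation-shaped strings out of an AI-assisted draft and turns them into a manual-verification checklist. Note what it deliberately does not do: it cannot confirm that a cited case exists; it only ensures no citation escapes a human reviewer.

```python
import re

# Illustrative (not exhaustive) pattern for common federal reporter citations,
# e.g. "Smith v. Jones, 123 F.3d 456 (9th Cir. 1999)". Real Bluebook formats
# vary widely; this flags candidates for HUMAN verification only -- it cannot
# tell a real case from a hallucinated one.
CITATION_PATTERN = re.compile(
    r"[A-Z][\w.'-]*(?:\s+[A-Z][\w.'-]*)*\s+v\.\s+"      # party names
    r"[A-Z][\w.'-]*(?:\s+[A-Z][\w.'-]*)*,"              # opposing party
    r"\s*\d+\s+(?:U\.S\.|F\.(?:2d|3d|4th)?)\s+\d+"      # volume, reporter, page
)

def extract_citation_checklist(draft):
    """Return each citation-shaped string found in the draft, deduplicated
    in order of first appearance, for manual lookup in Westlaw/Lexis."""
    seen = []
    for match in CITATION_PATTERN.finditer(draft):
        cite = match.group(0)
        if cite not in seen:
            seen.append(cite)
    return seen

if __name__ == "__main__":
    sample_draft = (
        "As held in Smith v. Jones, 123 F.3d 456 (9th Cir. 1999), ... "
        "see also Doe v. Roe, 500 U.S. 100 (1991)."
    )
    for i, cite in enumerate(extract_citation_checklist(sample_draft), 1):
        print(f"[ ] {i}. Verify before filing: {cite}")
```

A script like this only narrows the gap; the attorney must still open each flagged authority and confirm the holding and quotations verbatim, which is precisely the step Lindell's counsel skipped.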
The Delegation Misstep: A Fatal Error

Kachouroff’s delegation of citation-checking duties to DeMaster without ensuring it was completed properly highlights a second critical error: the failure of legal leadership and accountability. In a profession where ethical standards and accuracy are paramount, delegating AI-generated content without oversight is a dereliction of duty.
Judge Wang noted that “this Court derives no joy from sanctioning attorneys who appear before it,” describing the $3,000 fines as “the least severe sanction adequate to deter and punish defense counsel in this instance.” For the legal community, however, the implications extend far beyond these individual sanctions.
The legal profession urgently needs mandatory AI competency training and certification programs. Current bar associations and continuing legal education requirements have failed to keep pace with technological advancement, leaving practitioners vulnerable to exactly the type of professional negligence demonstrated in this case.
The Broader Impact
This case represents more than just two attorneys facing sanctions – it’s a wake-up call for the entire legal profession. As AI tools become increasingly sophisticated and accessible, the temptation to use them without proper safeguards will only grow.
The consequences of inadequate AI oversight extend beyond professional embarrassment. Client representation suffers, court resources are wasted, and public confidence in legal institutions erodes when attorneys fail to maintain basic professional standards.
The Lindell case offers several critical lessons for legal practitioners:
- AI is a tool, not a replacement for professional judgment and verification
- Delegation without oversight is professional negligence, regardless of the tool involved
- Citation verification must be performed by humans, not AI systems
- Proper training is essential before implementing AI in legal practice
For the legal profession to successfully integrate AI tools, practitioners must understand both their capabilities and limitations. This requires moving beyond simple trial-and-error approaches to structured, professional implementation protocols.
The future of legal practice will undoubtedly include AI assistance, but only for those who take the time to understand how to use these tools responsibly. The alternative, as Kachouroff and DeMaster learned, is professional sanctions and reputational damage that could have been easily avoided.