AI Hallucination Sanctions: 14 Cases and What They Teach
I've been tracking every court sanction involving AI-generated hallucinations since 2023. Here are the 14 cases that tell you everything you need to know about where the line is.
Two years ago, a New York lawyer named Steven Schwartz became the most famous cautionary tale in legal technology. He submitted a brief stuffed with case citations that ChatGPT had fabricated. The judge was not amused. Neither was the profession.
Since then, the number of documented hallucination incidents in court filings has crossed 300 worldwide. Courts are no longer surprised. They are angry. And the sanctions are getting sharper.
I have been cataloguing these cases because the pattern is more instructive than any vendor whitepaper. Here are the 14 that matter most, and what each one actually teaches.
1. Mata v. Avianca (S.D.N.Y., June 2023)
The case that started it all. Steven Schwartz submitted six fabricated case citations generated by ChatGPT. When the court asked him to verify them, he went back to ChatGPT and asked it to confirm they were real. It did. Judge Castel imposed a $5,000 sanction and found "subjective bad faith."
Lesson: Asking the tool that hallucinated to verify its own hallucinations is not verification. This sounds obvious, but it keeps happening.
2. Park v. Kim (E.D.N.Y., December 2023)
Attorney Jae Lee used ChatGPT for a motion to dismiss and filed a brief with fabricated citations. Unlike Schwartz, Lee cooperated immediately, admitted the error, and was sanctioned $999 — just under the threshold that would trigger automatic reporting to the state bar.
Lesson: Speed and honesty matter. The $999 versus $5,000 difference between this case and Mata came down to candor.
3. Kramer v. Nuru (N.D. Cal., January 2024)
Attorneys submitted a brief with hallucinated case citations, then claimed they did not know AI had been used in its preparation. The court was skeptical. Sanctions followed, along with a requirement to disclose AI use in future filings.
Lesson: "I didn't know AI was involved" is not a defense. If you supervise the work, you are responsible for the work product.
4. Massachusetts Sanctions (February 2024)
Judge Brian Davis sanctioned attorneys for submitting fictitious AI-generated citations. His order called out "the tendency of some attorneys and law firms to utilize AI in the preparation of motions, pleadings, memoranda, and other court papers, then blindly file their resulting work product in court without first checking."
Lesson: Judges are developing standard language for these sanctions. It is becoming boilerplate. That is not a good sign for repeat offenders.
5. Bednar Sanctions (Utah, 2024)
Attorney Richard Bednar submitted a brief containing fabricated citations generated by ChatGPT. The court ordered him to pay opposing counsel's attorney fees, refund his client's fees, and donate $1,000 to a legal nonprofit. A triple penalty.
Lesson: The sanction toolkit is expanding. Courts are getting creative with remedies that hit reputation, wallet, and professional standing simultaneously.
6. Ex parte Allen (Texas, 2024)
A Texas attorney used AI to generate a habeas corpus petition containing fabricated case law. The court not only denied the petition but referred the attorney for potential disciplinary action.
Lesson: In criminal proceedings, the consequences of hallucinated citations escalate dramatically. Here, a client's liberty was on the line.
7. Zachariah Crabill (Colorado, 2024)
Attorney Crabill disclosed upfront that he had used AI but failed to verify the citations. The court sanctioned him anyway. Disclosure without verification is not sufficient.
Lesson: AI disclosure requirements are proliferating across jurisdictions. But disclosure is a floor, not a ceiling. You still have to check the work.
8. Rahmani v. SunPower (N.D. Cal., 2024)
The AI-generated brief contained not only fabricated citations but fabricated quotations attributed to real cases. The court imposed sanctions and required CLE courses on AI competence.
Lesson: Fabricated quotes from real cases are arguably worse than fabricated case names. They demonstrate a deeper failure of verification because the case exists but the quote does not.
9. Johnson v. Dunn (N.D. Ala., July 2025)
This is the case that changed the trajectory. The court explicitly declared that "monetary sanctions are proving ineffective at deterring false, AI-generated statements of law in legal pleadings." The judge signaled willingness to pursue bar referrals and more severe consequences.
Lesson: The escalation ladder is real. Courts that started with warnings moved to fines. Now they are contemplating suspensions and disbarment referrals.
10. Patel v. Frontier Airlines (D. Colo., 2024)
An attorney submitted a response containing AI-generated case citations that did not exist. The court noted that the attorney had not even performed basic verification steps, such as checking whether the cited cases appeared in Westlaw or Lexis.
Lesson: The verification standard is not exotic. Courts expect you to run citations through a legal database. That is the bare minimum.
11. Benjamin v. Yelp (C.D. Cal., 2024)
Plaintiff's counsel submitted a brief with fabricated citations. When confronted, counsel withdrew the brief and filed a corrected version. The court still imposed sanctions but mitigated them based on the prompt remedial action.
Lesson: Remediation helps but does not eliminate consequences. Courts distinguish between lawyers who fix problems quickly and those who dig in.
12. Wade v. Beto (S.D. Tex., 2024)
A pro se litigant submitted AI-generated filings with hallucinated citations. The court was more lenient with the individual but used the case to issue a general warning about AI reliance in court filings.
Lesson: Even pro se litigants are not immune. And courts are using these cases to establish general principles that will apply to attorneys.
13. The UK Pattern (Multiple Cases, 2024-2025)
UK courts have dealt with several instances of AI hallucinations in submissions. The Solicitors Regulation Authority has issued guidance noting that solicitors remain responsible for the accuracy of all filings regardless of the tools used in preparation.
Lesson: This is not an American phenomenon. Jurisdictions with different legal traditions are reaching identical conclusions: the lawyer is responsible.
14. Repeat Offender Cases (2025)
By mid-2025, courts began encountering attorneys who had been previously warned about AI verification and were caught again. These cases are drawing the harshest responses, with referrals to disciplinary bodies and suggestions of suspension.
Lesson: Courts have finite patience. The grace period for "I didn't understand the technology" is over.
The Pattern Across All 14
Step back from the individual facts of these cases, and three variables determine outcome severity.
Variable one: candor. Lawyers who admitted AI use immediately and cooperated with the court received lighter sanctions. Lawyers who denied it or tried to cover it up got hammered.
Variable two: remediation speed. Filing a corrected brief within days is different from fighting about it for weeks. Courts reward lawyers who treat hallucinations as errors to fix rather than accusations to defend against.
Variable three: systemic response. Did the lawyer implement AI policies after the incident? Did the firm develop verification procedures? Courts consistently note whether remedial steps suggest genuine reform or just damage control.
What This Means for Your Practice
I am a technologist. I build with AI tools every day. I think they are genuinely useful. But useful and reliable are different things.
Every one of these 14 cases was preventable. Not with better AI — with basic verification. Run the citation through a legal database. Read the case. Confirm the quote exists. This takes minutes per citation. The alternative takes months of litigation, thousands in fines, and damage that no amount of CLE credits can repair.
The court in Johnson v. Dunn said monetary sanctions are not working. That means the next phase involves professional licenses. If you are using AI for anything that will be filed, submitted, or relied upon by a client, you need a verification protocol. Not eventually. Now.
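Since "verification protocol" can sound abstract, here is a minimal sketch of what one could look like in code. It is illustrative, not a production tool: the regex covers only a handful of common federal reporter formats, the lookup_in_legal_database function is a placeholder you would wire to whatever service you actually use (Westlaw, Lexis, or the free CourtListener), and its fail-closed default means every citation stays flagged until a human confirms it. The workflow is the point: extract every citation, check each against an authoritative source, and do not file until nothing is flagged.

```python
import re
from dataclasses import dataclass

# Matches a few common federal reporter formats, e.g. "925 F.3d 1339"
# or "598 F. Supp. 3d 123". Real citation parsing is much harder; the
# open-source eyecite library exists for exactly this. Illustrative only.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                        # volume
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)\s+"
    r"\d{1,5}\b"                                           # first page
)

@dataclass
class CitationCheck:
    citation: str
    confirmed: bool

def lookup_in_legal_database(citation: str) -> bool:
    """Placeholder: replace with a real query against Westlaw, Lexis,
    or CourtListener. Defaulting to False means nothing passes until
    a human (or a real lookup) confirms it: fail closed, not open."""
    return False

def verify_brief(text: str) -> list[CitationCheck]:
    """Extract every citation in a draft and flag the unconfirmed ones."""
    return [
        CitationCheck(c, lookup_in_legal_database(c))
        for c in sorted(set(CITATION_PATTERN.findall(text)))
    ]

if __name__ == "__main__":
    # The citation below is one of the fabrications from Mata v. Avianca.
    draft = ("As the Eleventh Circuit held in Varghese v. China Southern "
             "Airlines, 925 F.3d 1339 (11th Cir. 2019), the limitations "
             "period is tolled ...")
    for check in verify_brief(draft):
        status = "confirmed" if check.confirmed else "UNVERIFIED: do not file"
        print(f"{check.citation}: {status}")
```

Even this toy version would have caught the Mata citations, because the fail-closed default forces someone to look each case up before anything gets filed.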
The AI is not going to stop hallucinating. The question is whether you are going to stop trusting it blindly.
Key Takeaways
- 300+ documented hallucination incidents in court filings worldwide since 2023, with new cases weekly
- Sanction severity correlates with three factors: candor, remediation speed, and systemic response
- Courts are escalating from fines to bar referrals — Johnson v. Dunn explicitly declared monetary sanctions ineffective
- Disclosure of AI use is necessary but not sufficient — verification remains mandatory
- Every one of these cases was preventable with basic citation checking against a legal database

