Sullivan & Cromwell, OpenAI's Counsel, Files a Brief Full of AI Hallucinations
Liga reads the apology letter and traces the competence failure. Alex frames why this is no longer a junior-associate story.
On 18 April 2026, partner Andrew Dietderich, co-chair of the restructuring practice at Sullivan & Cromwell, sent a three-page letter to Chief Judge Martin Glenn of the US Bankruptcy Court for the Southern District of New York. The letter apologised for "inaccurate citations and other errors" in an emergency motion filed nine days earlier, "some of which" Mr Dietderich acknowledged were "artificial intelligence ('AI') 'hallucinations'." Attached to the letter was a Schedule A — dozens of corrections across multiple documents, replacing fabricated case citations, fixing misquoted Chapter 15 precedents, and removing authorities that did not exist.
The motion had been filed on 9 April in the Chapter 15 proceedings of Prince Global Holdings Ltd. The errors were not caught by Sullivan & Cromwell. They were caught by Boies Schiller Flexner — opposing counsel.
This is the firm that advises OpenAI on the responsible deployment of artificial intelligence.
Why this case is different
The legal profession has been collecting AI hallucination cases for almost three years. Mata v. Avianca opened the file in 2023 with two New York lawyers and six fabricated cases. Damien Charlotin's database now tracks more than 1,000 incidents worldwide. Most of these stories follow the same arc: a solo practitioner or a small firm uses ChatGPT for legal research, fails to verify, and ends up sanctioned.
Sullivan & Cromwell is not that story.
Sullivan & Cromwell is one of the most prestigious law firms in the United States. It is consistently ranked among the top firms globally for M&A, restructuring, and complex litigation. It generates roughly $2 billion in revenue per year. Its partners are among the highest-paid in the legal profession. And, by its own description, it advises OpenAI, the most consequential AI company in the world, on the safe and ethical deployment of artificial intelligence.
If a firm with that profile cannot prevent AI hallucinations from reaching a federal bankruptcy docket, the question of whether your organisation can prevent them is no longer rhetorical.
What Mr Dietderich's letter actually says
The apology letter is a careful document, and it is worth reading what is actually in it.
First, the firm acknowledges the errors are real. The Schedule A runs to multiple pages and corrects errors across the emergency motion, the supporting declaration, and ancillary papers. Many corrections are substantive — replacing case citations, fixing misdescribed holdings, removing references to authorities that do not appear to exist.
Second, the firm acknowledges that internal protocols were not followed. Sullivan & Cromwell has, like most firms of its size, an AI use policy. Junior associates working on the brief used AI tools. The protocol called for those AI-assisted citations to be verified against primary sources before filing. They were not.
Third, the firm explains why the protocol failed: the motion was an emergency. The usual review cycle was compressed. Citations that would normally have been checked twice were checked once or, in some cases, not at all.
Fourth, the firm commits to "evaluating whether further enhancements to its internal training and review processes are warranted."
That last sentence is the one that should concern every general counsel reading this case. The firm already had training. The firm already had review processes. The protocol existed. None of it stopped the hallucinations from reaching the docket.
The structural lesson
Every major AI hallucination sanction case reveals the same pattern at the moment of failure. A lawyer is operating under time pressure. They use AI to accelerate work. The output sounds plausible. They lack the trained reflex to ask whether the citation in front of them is real. They file. The citation does not exist.
In the Sullivan & Cromwell case, the time pressure was an emergency motion. In Mata v. Avianca, it was a missed deadline. In the Ayinde v. Haringey case in the UK, it was a litigation schedule that had compressed faster than the lawyer's review capacity. The structural condition is the same in each: AI accelerates the upstream work, but human review is the only thing that catches the downstream errors, and review is the part that gets compressed when timelines compress.
A policy does not stop this. A policy is a sentence. It has no force in the moment when a junior associate at 2 a.m. is trying to finish an emergency motion before a 9 a.m. court deadline. What stops the hallucination at that moment is not the policy. It is whether the associate has the trained competence to recognise an AI-generated citation that does not feel quite right, the understanding of how AI fabrication works at a mechanical level, and the workflow muscle memory to verify even when the clock is shorter than the verification step deserves.
That is competence, in the precise sense. It is what the EU AI Act's Article 4 requires organisations to ensure in their staff. It is what the Twin Ladder Standard's Pillar 5 (Evidence & Documentation) and Pillar 7 (AI Decision Boundaries) are designed to assess. It is what a written policy fundamentally cannot deliver.
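Part of that competence can also be made structural. As a minimal sketch, and emphatically not a description of Sullivan & Cromwell's actual tooling or any specific product, the fragment below shows what a fail-closed pre-filing citation gate can look like: every citation extracted from a draft must be positively confirmed against a primary source before the document ships. The regex and the lookup stub are illustrative assumptions; a production gate would use a purpose-built extractor (such as the open-source eyecite library) and a query against a verified reporter database.

```python
"""Fail-closed pre-filing citation gate (illustrative sketch).

Assumptions: the citation regex and the lookup stub are placeholders,
not any real firm's workflow. The design point is fail-closed
verification: an unverified citation blocks the filing instead of
passing silently.
"""
import re
import sys

# Matches common US reporter citations, e.g. "598 U.S. 594" or "22 F.4th 1059".
# A production gate would use a dedicated extractor such as eyecite.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\.\s?(?:2d|3d|4th)|B\.R\.)\s+\d{1,4}\b"
)


def verified_in_primary_source(citation: str) -> bool:
    """Stub for a primary-source lookup (e.g. a reporter database query).

    Returning False by default is deliberate: until a citation is
    positively confirmed, it is treated as unverified and blocks filing.
    """
    return False  # replace with a real lookup against a verified database


def unverified_citations(draft_text: str) -> list[str]:
    """Return every extracted citation that could not be confirmed."""
    return sorted(
        c for c in set(CITATION_RE.findall(draft_text))
        if not verified_in_primary_source(c)
    )


if __name__ == "__main__":
    draft = open(sys.argv[1], encoding="utf-8").read()
    blocked = unverified_citations(draft)
    if blocked:
        print("FILING BLOCKED: unverified citations:")
        for c in blocked:
            print(f"  {c}")
        sys.exit(1)  # nonzero exit: the draft does not ship
    print("All extracted citations verified against primary sources.")
```

The design choice worth noting is the default. The lookup returns False until it is wired to a real source, so a citation that cannot be confirmed blocks the filing rather than slipping through when review time is compressed. Tooling like this does not replace the trained reflex; it buys the reflex time to operate at 2 a.m.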
What Sullivan & Cromwell got right, and what they got wrong
The firm's response to the discovery is, technically, exemplary. Mr Dietderich wrote the apology letter within days. He attached a detailed schedule of corrections. He did not blame the associates. He did not minimise the errors. He acknowledged that internal protocols were not followed and committed to a review of training and processes. From a regulatory and bar-association perspective, this is what a firm of S&C's calibre is expected to do once a problem of this kind is identified.
What the firm got wrong is the part before the apology letter. The competence layer that should have caught the hallucinations before they were filed simply was not strong enough. The juniors did not catch them. The supervising partner did not catch them. The compressed review timeline did not catch them. Boies Schiller Flexner caught them. That is a competence failure, distributed across roles and seniority levels, and it matches the structural pattern we have now seen in over 1,000 cases.
Why this is no longer a junior-associate story
For three years, the dominant industry response to AI hallucination cases has been: "this is a training problem for junior lawyers." The Sullivan & Cromwell case ends that narrative. The associates at S&C who used AI for citation work are among the most credentialed junior lawyers anywhere. The partners who supervised them are among the most experienced restructuring lawyers in the world. The firm that employs them advises OpenAI on AI deployment. If the competence gap reaches into S&C's restructuring practice, it reaches into every major firm, every general counsel's office, and every regulated organisation deploying AI in critical workflows.
This is not a junior-lawyer training problem. It is a structural competence problem at every level of professional practice where AI now sits in the workflow. The Sullivan & Cromwell letter, read carefully, says exactly this — though not in those words.
What general counsel should take from this
Three things follow directly from the Prince Global Holdings filings.
One. A written AI policy is necessary but not sufficient. Sullivan & Cromwell had a policy. It did not prevent the hallucinations. The competence layer underneath the policy is what matters, and competence requires assessment, training, and measurable evidence — not just a document on the intranet.
Two. Compressed timelines are the highest-risk environment for AI use, not the lowest. Most firms allow more AI assistance under time pressure, on the implicit theory that AI saves time. The Sullivan & Cromwell case shows that compressed timelines are precisely when AI hallucinations are most likely to reach the final document, because the verification step is the first thing that gets cut. AI use under time pressure should be governed more tightly, not less.
Three. Verification cannot be delegated downward without verification of the verifier. The juniors using AI need to be competent in AI verification. The partners reviewing the work need to be competent in catching what the juniors missed. The firm reviewing its own protocols needs to be competent in evaluating whether the training is actually creating the trained reflex, or whether it is producing certificates without skill. Each of these is a separate competence assessment, and none of them is satisfied by reading a policy.
The wider point
The EU AI Act made AI competence a legal requirement for organisations deploying AI in the EU: Article 4's AI literacy obligation has applied since 2 February 2025, ahead of the Act's broader obligations arriving on 2 August 2026. The US has no equivalent statute. But the Mata v. Avianca sanctions, the Ayinde and Al-Haroun cases in the UK, and now the Sullivan & Cromwell letter in New York are all teaching the same lesson regardless of jurisdiction: when the human competence layer fails, AI deployment failures show up in court records, in regulatory filings, and in apology letters to federal judges. The legal exposure is real, the reputational exposure is real, and the cost of remediation is paid in time, money, and standing in the profession.
If Sullivan & Cromwell can have this experience while advising OpenAI on the safe deployment of AI, the question of whether your organisation has the competence layer to prevent it is no longer optional reading. It is the work itself.
— Liga
Alex's note: The Sullivan & Cromwell case is the moment the AI hallucination conversation crosses a threshold. For three years we have been talking about junior lawyers and small firms. We are now talking about the partners at one of the most sophisticated firms in the world, supervising work for a firm that advises OpenAI. If your AI competence programme is built on the assumption that "this happens to other people," the assumption needs to be retired. Competence is not a junior-level skill. It is a firm-wide operating capability, and it is now a regulated one.
Sources
- Andrew Dietderich's letter to Chief Judge Martin Glenn (PDF) — primary source. The actual three-page apology letter and Schedule A of corrections, dated 18 April 2026. Filed in In re Prince Global Holdings Limited and Paul Pretlove, Bankr. S.D.N.Y. Case No. 1:26-bk-10769. Hosted via David Lat / Original Jurisdiction.
- Sullivan & Cromwell Apologizes to Judge for AI Hallucinations — Bloomberg Law, April 2026. Primary reporting on the apology letter from Andrew Dietderich to Chief Judge Martin Glenn.
- Top Law Firm Apologizes to Bankruptcy Judge for AI Hallucination — Bloomberg, 21 April 2026. Coverage of the Prince Global Holdings filing context and S&C's role advising OpenAI.
- An AI Screw-Up By... Sullivan & Cromwell? — David Lat, Original Jurisdiction, April 2026. Detailed legal-industry analysis of the failure points; hosts the original letter PDF.
- Sullivan & Cromwell Files Emergency 'Please Don't Sanction Us For All These AI Hallucinations' Letter — Above the Law, April 2026. Industry commentary noting the irony of S&C's OpenAI advisory role.
- Another 'hallucinated' court filing highlights the difference between Silicon Valley and the rest of the world — CNN Business, 23 April 2026. Wider commentary on the disconnect between AI hype and AI deployment reality.
- Sullivan & Cromwell apologises after AI hallucinations appear in court document — Legal Cheek, April 2026. Junior-lawyer angle and review-process compression context.
- Sullivan & Cromwell apologizes to US bankruptcy judge for AI-generated errors in Prince Group case — Canadian Lawyer, April 2026. Detailed account of the Schedule A corrections and the Boies Schiller Flexner discovery.
- AI Hallucination Sanctions: 14 Cases and What They Teach — Twin Ladder, October 2025. Internal companion piece on the broader sanctions pattern.
- Damien Charlotin Hallucination Database — ongoing global database tracking 1,000+ documented AI hallucination incidents in court proceedings.
