TwinLadder Weekly

Issue #21 | December 2025


Editor's Note

A year ago, I started this newsletter with a conviction that legal AI needed honest analysis, not hype. I was not sure anyone would read it. It turns out that practitioners are hungry for someone to tell them what is actually happening -- without the breathlessness, without the vendor pitches, without the implication that they are falling behind if they have not adopted the latest tool.

Looking back at 2025, I was wrong about some things. I underestimated adoption speed -- law firms integrating generative AI went from roughly 26% to 79%. I overestimated how quickly pricing would democratise. I did not predict that documented hallucination cases would more than quintuple, from 120 to 660+, though in retrospect the math was obvious: more users means more mistakes.

What I got right, I think, is the central insight: this technology is genuinely important, and the profession is genuinely unprepared for it. Both things remain true as we close the year.

This issue is personal. It is the one I have been wanting to write since January. And I write it from a European desk, where the regulatory conversation is a year ahead of the American one -- even if the technology conversation lags behind.


What Actually Changed in 2025

[HIGH CONFIDENCE]

The headline framing is that AI moved from experiment to infrastructure. LawSites said exactly that, and the ABA Task Force confirmed it. But that framing, while accurate, obscures the unevenness of what happened. Let me break it into what genuinely changed, what did not, and what should have.

What changed. Harvey went from $3B to $8B valuation, raising $760M in a single year. That is not hype -- it is 50 Am Law 100 firms paying enterprise prices, KKR and Bayer and Comcast signing on, $100M ARR reached in three years. The enterprise legal AI market exists. That is settled.

2025 Legal AI: The Numbers
Firm AI adoption: 26% to 79% (3x increase)
Harvey valuation: $3B to $8B ($760M raised)
Documented hallucination cases: 120 to 660+ (5.5x increase)
States issuing AI ethics guidance: 30+
Legal aid AI adoption rate: 74% (nearly 2x the profession average)
Am Law 100 firms with AI governance boards: 80%
Law schools offering AI courses: 55%
Stanford hallucination rate (purpose-built legal tools): 17-58%

The regulatory architecture emerged. Over 30 states issued AI ethics guidance. The ABA released Formal Opinion 512 and its Task Force's final report. New York mandated AI competency CLE credits. Pennsylvania required AI disclosure in court submissions. The UK's SRA approved the first AI-only law firm. Courts developed accountability mechanisms -- Butler Snow was sanctioned despite being a large, sophisticated firm. In Johnson v. Dunn, the court declared monetary sanctions "proving ineffective" against AI misuse.

And the EU moved furthest of all. Article 4 of the EU AI Act -- mandatory AI literacy for all staff deploying or operating AI systems -- took effect on February 2, 2025. Not August 2026, when the full Act becomes enforceable. February 2025. That deadline passed while most of the profession was still debating whether to adopt AI at all. It is, by some distance, the most consequential regulatory development of 2025, and it received less attention than Harvey's funding rounds.

The market consolidated. Clio acquired vLex for $1 billion. Lawhive bought Woodstock Legal. Norm Ai launched Norm Law with $50M from Blackstone. AI-native firms went from concept to operating reality, targeting both consumer and institutional clients.

And legal aid organisations moved fastest of all -- 74% adoption rate, nearly double the profession average, with 100+ documented use cases. The organisations with the fewest resources found the most value. That tells you something important about where the technology's real potential lies.

What did not change. The mid-market gap. Harvey at $1,200/seat/month. Garfield at two pounds per document. Nothing in between economically serves the 10-to-200-lawyer firm. Clio's vLex acquisition points toward integrated, affordable solutions, but they are not here yet.

Hallucination rates. Stanford found 17-58% hallucination rates even in purpose-built tools. Every citation still requires verification. Every document still requires review. The productivity gains from AI are partially offset by verification requirements. No breakthrough in reliable autonomous legal research arrived in 2025, and none is projected for 2026.

The competence gap. 79% of firms use AI. Far fewer understand it. The ABA's Year 2 Report said it directly: adoption has surpassed understanding. Most practitioners are using tools they cannot evaluate, producing outputs they cannot reliably verify, in a regulatory environment they have not fully mapped. That is the profession's core problem entering 2026.

[MODERATE CONFIDENCE]

What should have changed. I expected more progress on interoperability. Every vendor still builds a walled garden. I expected the European regulatory conversation to be more advanced -- the EU AI Act becomes fully enforceable in August 2026, and most firms I speak with have not begun compliance planning beyond Article 4. I expected more honest public conversation about what AI cannot do. Instead, the marketing has intensified. Every vendor is "agentic" now. Most are not.

The trust paradox defined 2025: adoption grew 200%, failures grew 450%. That is not contradictory. It is what happens when experimental tools become operational infrastructure. More usage, more opportunities for both success and failure. The difference now is that accountability mechanisms exist. Courts sanction. Regulators guide. Firms govern. 80% of Am Law 100 established AI governance boards. That is the trust turning point -- not that AI became trustworthy, but that the profession began building the structures to manage the trust deficit.

I believe 2025 was the year legal AI proved it is real and proved it is unfinished. Both at the same time. The firms that understand that duality -- taking AI seriously without taking vendor claims at face value -- are the ones best positioned for what comes next.


The Competence Question

It is December. You are a mid-market managing partner reviewing your firm's year. One senior associate has become your de facto AI person -- she uses Claude for research and drafting, has experimented with CoCounsel, and informally trains colleagues who ask. There is no policy, no governance, no systematic training. It works because she is careful and competent.

Now imagine she leaves. What remains? Not her knowledge. Not her judgment about when AI outputs need extra scrutiny. Not her informal quality control. Your firm's AI capability walks out the door with her.

The question for year-end is whether your firm's AI competence is institutional or individual. If it depends on one person's initiative, it is fragile. Governance, training, and documented procedures are not bureaucracy. They are the difference between capability and dependency. And as the ABA has now said, AI is infrastructure. Infrastructure cannot depend on a single person's goodwill.

Under Article 4, that fragility is also a compliance risk. The obligation is organisational, not individual. A firm cannot demonstrate "sufficient AI literacy" by pointing to one competent associate. It must demonstrate systematic capability across all staff who deploy or operate AI systems. If your Article 4 compliance walks out the door with one person, it was never compliance at all.


What To Do

  1. Conduct a year-end AI audit. Inventory every AI tool in use at your firm. Who uses what, for which workflows, with what verification procedures. You may be surprised by what you find.

  2. Write an AI policy before January. It does not need to be long. One page covering approved tools, verification requirements, confidentiality protections, and training expectations creates a baseline. You can refine it later.

  3. Budget for AI competence in 2026. CLE requirements are expanding. Tool costs are rising. Training takes time. Allocate specific resources rather than treating AI capability as something that happens organically.

  4. Define your firm's AI position. Not every firm needs Harvey. Not every firm needs to be AI-native. But every firm needs a coherent answer to: "What is your approach to AI?" If you cannot answer that question, make answering it your first priority for January.

  5. Start your Article 4 gap assessment now. If your firm operates in the EU or serves European clients, Article 4 compliance is not a 2026 task -- the obligation is already in force. Document which staff deploy AI systems, what training they have received, and what gaps remain. Build the compliance record before the enforcement mechanism catches up with the obligation. The firms that can demonstrate they began this work in 2025 will be in a fundamentally stronger position than those who waited.



One Question

If 2025 was the year legal AI proved it is real, was it also the year we proved we are ready for it -- or just the year we discovered how much readiness still costs?


TwinLadder Weekly | Issue #21 | December 2025

Helping lawyers build AI capability through honest education.