Market Analysis

AI Adoption in the Am Law 100: What Half Saying Yes and Half Saying No Tells Us

Harvey has penetrated fifty of the hundred largest US law firms. The more interesting question is why the other fifty have not signed up.

8 September 2025 · Alekss Blumentāls, Founder and Managing Director · 11 min read
By September 2025, Harvey had reached fifty percent penetration of the Am Law 100. Half of the largest, most sophisticated, best-resourced law firms in the United States were using the most visible legal AI product on the market.

That is a remarkable achievement for a company that is three years old. It is also, to my mind, a remarkably useful data point for understanding where legal AI actually stands. Because the adoption story is not just about the fifty firms that said yes. It is equally about the fifty that have not.

What Adoption Actually Looks Like

Let me start with what we know about the firms that adopted. Harvey serves more than five hundred enterprise customers across fifty-four countries. Named Am Law 100 clients include Latham & Watkins, Willkie Farr & Gallagher, and Duane Morris. CMS expanded its deployment to more than seven thousand lawyers globally.

The growth curve tells its own story: from ten million in annual recurring revenue at the end of 2023 to a hundred million by August 2025, with external estimates suggesting roughly 195 million by year's end.

But here is what the headline numbers do not reveal. Adoption within a firm and deployment across a firm are very different things. A firm "using Harvey" might mean a thousand lawyers with active accounts. It might also mean a twenty-person innovation team running a pilot. The penetration figure tells us about breadth of adoption but not depth.

From my conversations with firm leaders, the pattern is consistent: adoption concentrates in specific practice areas and specific tasks. The firm uses Harvey, but not every lawyer in the firm uses Harvey, and those who do tend to use it for a defined set of activities.

Where AI Concentrates

The practice-area pattern is revealing and, once you see the reasons behind it, quite logical.

Contract review and due diligence lead adoption by a significant margin. These tasks involve large document volumes, structured extraction criteria, and verifiable outputs. When the AI extracts the change-of-control provisions from three hundred contracts, you can check its work against the source documents. The verification path is clear, and the efficiency gain is measurable.

Legal research and memo drafting rank second. The models are genuinely good at surveying case law, identifying relevant authority, and producing structured analysis. The catch is the hallucination risk, but firms with proper verification workflows are seeing meaningful time savings.

Regulatory compliance research has quietly become a strong use case. Multi-jurisdictional mapping, obligation tracking, regulatory change monitoring. These combine the breadth that AI handles well with the structure that makes verification feasible.

At the other end of the spectrum, complex litigation strategy, client counselling, and nuanced negotiation show the lowest adoption. These tasks require the kind of judgment, relationship awareness, and strategic intuition that AI cannot replicate, and whose outputs less experienced professionals cannot reliably verify.

Why Half Said No

This is where the story gets interesting. Thomson Reuters research found that "demonstrated accuracy" remains the single biggest barrier to AI investment. Ninety-one percent of professionals said computers should be held to higher standards than humans. Forty-one percent required one hundred percent accuracy before they would use AI without human review.

That last number stopped me. Forty-one percent of legal professionals will not use AI unless it is perfect. And current AI tools are nowhere near perfect. Stanford research showed error rates of 17 to 33 percent for major platforms. The 660 documented hallucination cases from 2025 make the accuracy concern viscerally concrete.

But accuracy is not the only barrier. Several others matter.

The total cost is higher than the subscription. Enterprise AI deployment requires platform licensing, implementation, training, governance development, verification workflow overhead, and ongoing administration. Not every firm concludes the investment delivers sufficient return, particularly at lower volumes.

Data privacy and confidentiality concerns are real. Lawyers have strict confidentiality obligations. Forty-one percent of respondents in Embroker's 2024 survey cited data privacy concerns. Larger firms have addressed this through enterprise agreements with robust confidentiality protections. Smaller firms often lack the leverage to negotiate such terms or the resources to evaluate data security adequacy.

The billable hour creates structural tension. I want to be direct about this because it is often discussed euphemistically. If AI enables completing in one hour work that previously took five, time-based invoices shrink by eighty percent. For firms whose economic model depends on accumulating hours, efficiency-enabling technology is not straightforwardly beneficial. The firms that have adopted most aggressively tend to be the ones already moving toward value-based pricing.

The sanctions cases created genuine caution. Mata v. Avianca. Morgan & Morgan. Ko v. Li. These cases put names and consequences on the risk of AI misuse. Some firms decided to wait for clearer regulatory guidance rather than risk being test cases for enforcement.

The Size Gap

There is a significant disparity in adoption by firm size. Thirty-nine percent of firms with more than fifty lawyers have adopted AI tools, compared to twenty percent at smaller firms. This is not surprising, but it is important.

Larger firms have dedicated technology teams, substantial budgets, negotiating leverage with vendors, and the internal capacity to build governance frameworks. They can absorb the overhead costs of AI deployment across a larger revenue base. Smaller firms face proportionally higher costs for proportionally lower volume.

This gap concerns me because it risks creating a two-tier profession: firms with AI-augmented capabilities and firms without. If the efficiency gains are real, and in contract review and research they clearly are, the competitive advantage compounds over time.

The Expectations-Reality Gap

Bloomberg Law's 2025 State of Practice Survey found something that deserves more attention than it received. Law firm lawyers reported smaller-than-expected changes from AI in every workload and operational category. Every category.

This is not evidence that AI does not work. It is evidence that the marketing exceeded the reality. The firms that set moderate expectations and methodically deployed AI in defined use cases are generally satisfied. The firms that expected transformation are generally disappointed.

MIT economist Mert Demirer captured this well: "I will expect some impact on the legal profession's labour market, but not major... the law's low risk tolerance, plus the current capabilities of AI, are going to make that case less automatable at this point."

The tools work. They improve efficiency in specific tasks. They do not, at present, transform how law is practised. Understanding that distinction is essential for making sensible adoption decisions.

What I Tell Firms That Ask

When managing partners ask me about AI adoption, and they ask frequently, I offer the same guidance regardless of firm size.

Start with a specific problem, not a technology. Identify the two or three tasks that consume the most time, involve the most tedium, and produce the most verifiable output. Deploy AI there first. Measure results honestly.

Budget for the full cost, not just the licence. The subscription is perhaps a third of the true cost. Implementation, training, governance, and verification workflow development consume the rest. If you only budget for the licence, you will be disappointed.

Build verification before you build adoption. Every AI workflow needs a verification step that is defined before deployment, not added after the first mistake. The firms that got this right from the start are the ones without sanctions stories.

Do not wait for perfection. The forty-one percent who require one hundred percent accuracy before using AI will be waiting a very long time. The productive middle ground is deploying AI where the efficiency gain justifies the verification overhead, and investing in the human skills to do that verification well.

Watch the Am Law 100 experiment. These firms are running the world's largest experiment in legal AI deployment. Learn from what they discover about use cases, governance, training, and verification. The cost of being a thoughtful fast follower is almost always lower than the cost of being an early adopter.

The Year Ahead

The fifty percent line will likely move toward sixty or seventy percent in 2026 as AI capabilities improve and governance frameworks mature. The more important metric, one nobody publishes, is the depth of adoption within firms. How many lawyers at each firm are actually using AI regularly? For what tasks? With what verification? With what results?

Those questions matter more than the headline penetration number. And answering them honestly is the difference between organisations that build genuine AI competence and organisations that collect certificates.


Alex Blumentals is the founder of Twin Ladder. He helps organisations understand not just whether to adopt AI, but how to adopt it in ways that build lasting competence rather than superficial compliance.