TwinLadder

Market Analysis

Legal AI Valuations: An Analysis of Market-Size Assumptions

A critical look at the market assumptions underlying legal AI company valuations.

November 10, 2025 · Alex Blumentals, Founder & CEO · 14 min read

2026 Legal AI Outlook: Evidence-Based Projections

The regulatory clock is running. The technology is ready. The question is whether your organisation has the competence to use it responsibly.


I have spent three decades watching organisations adopt new technologies. Most of those adoptions followed a pattern: a wave of enthusiasm, a trough of disillusionment, and eventually, quiet operational integration. AI in legal practice is different. This time, the regulator arrived before most organisations finished their pilots. And that changes everything about how you should think about 2026.

What follows is not a list of predictions. It is a reading of the evidence -- regulatory calendars, published benchmarks, pricing data, court rulings -- assembled to help you make decisions with real information rather than conference-circuit optimism.

1. Europe Sets the Pace

The most consequential development of 2025 was not a product launch or a funding round. It was a date: 2 February 2025, when Article 4 of the EU AI Act entered into application. From that day forward, every provider and deployer of AI systems in the European Union has been obligated to ensure that their staff have "a sufficient level of AI literacy."

The EU AI Office has clarified that it will not impose rigid requirements -- it considers "a certain degree of flexibility" necessary given how fast the technology moves. But make no mistake: from 2 August 2025, providers and deployers face civil liability if inadequately trained staff cause harm using AI systems. Full supervision and enforcement rules apply from 3 August 2026.

Article 4 is not optional. It is European law, and it applies now.

Meanwhile, individual member states are moving faster than Brussels. Italy became the first EU member state to adopt comprehensive national AI legislation with Law No. 132/2025, effective October 2025. Its Article 13 is particularly pointed for legal professionals: AI may only be used for "support and auxiliary tasks," with all central intellectual work reserved for humans, and practitioners must inform clients of AI use in "clear, simple, and exhaustive" language. Italy is not waiting for the EU to define what compliance looks like. It is defining it.

In Germany, the Munich Regional Court's ruling in GEMA v. OpenAI (November 2025) established that memorising copyrighted works during LLM training constitutes copyright infringement, and that the text-and-data-mining exception does not cover it. The court ordered OpenAI to cease storing unlicensed German lyrics on infrastructure in Germany. OpenAI will appeal, but the precedent is set.

And then there is the Baltic gap. Latvia, Estonia, and Lithuania -- all EU member states, all subject to Article 4 -- have no dedicated AI competence training providers for professional services. Lithuania leads the region with 21% AI adoption according to Microsoft's AI Diffusion Report, but adoption without structured training is a liability under Article 4, not an asset. Latvia's MILA association was only established in 2024. The infrastructure gap is real.

2. The Competence Question

Here is what keeps me up at night. It is not whether firms will adopt AI. They will. It is whether anyone will still know how to check the AI's work in five years.

I call this the competence paradox. AI is automating precisely the tasks through which junior professionals learn their craft: legal research, first-draft contract review, due diligence document sorting. Remove those tasks, and you remove the apprenticeship. But the AI still needs human oversight -- someone who learned to do it manually, who can spot when the machine gets it wrong. If juniors never build that expertise, and seniors retire, who verifies the output?

This is not theoretical. It is already happening. A Thomson Reuters survey found that 26% of legal organisations are now actively using generative AI, up from 14% in 2024. Document review (77%), legal research (74%), and document summarisation (74%) are the top use cases -- precisely the tasks where juniors traditionally built competence.

Competence debt accumulates silently. By the time you notice it, the people who could have trained the next generation have moved on.

This is why I am sceptical of the one-day-workshop model of AI training. A webinar does not build competence. Neither does a lunch-and-learn or a "prompt engineering masterclass." Real competence requires structured, sustained engagement over weeks, not hours. It requires practice with verification workflows, not just exposure to capabilities. Article 4 demands "sufficient" AI literacy. Sufficient for what? For using the tools without causing harm. That takes more than an afternoon.

3. Operational Integration

The pilot phase is ending. The question now is whether AI moves from "interesting experiment" to "how we actually work."

Thomson Reuters data shows that firms with an AI strategy are 3.9 times more likely to see benefits compared to firms with no plans, and nearly twice as likely to experience revenue growth compared to firms adopting AI without strategic direction. The difference is not the technology. It is the organisational work around it.

The real gains in 2026 will come from reducing what I call "glue work" -- the administrative overhead that connects one professional task to the next. Meeting summaries that feed into action items. Contract review outputs that populate clause libraries. Research results that integrate with drafting workflows. None of this is glamorous. All of it matters.

The agentic AI trend accelerates this. Bloomberg Law reports that agentic AI is the hurdle law firms must clear in 2026. LexisNexis launched Protege in August 2025, an agentic assistant that autonomously completes tasks and reviews its own work. Corporate legal adoption of generative AI more than doubled in a single year, from 23% in 2024 to 52% in 2025.

But here is the governance problem: autonomous agents that act, not just suggest, require a level of oversight infrastructure that most organisations have not built. You cannot supervise what you do not understand. And there is that competence question again.

4. Token Costs Falling

The economic argument against AI adoption is collapsing. The cost of running frontier models has fallen by orders of magnitude in three years, and the trend is accelerating.

| Model | Release | Input (per 1M tokens) | Output (per 1M tokens) | Source |
|---|---|---|---|---|
| GPT-3.5 Turbo | Mar 2023 | $1.50 | $2.00 | OpenAI |
| GPT-4 (8K) | Mar 2023 | $30.00 | $60.00 | OpenAI |
| GPT-4 Turbo | Nov 2023 | $10.00 | $30.00 | OpenAI |
| GPT-4o | May 2024 | $2.50 | $10.00 | OpenAI |
| GPT-4o mini | Jul 2024 | $0.15 | $0.60 | OpenAI |
| Claude 2 | Jul 2023 | $8.00 | $24.00 | Anthropic |
| Claude 3 Opus | Mar 2024 | $15.00 | $75.00 | Anthropic |
| Claude 3 Haiku | Mar 2024 | $0.25 | $1.25 | Anthropic |
| Claude 3.5 Sonnet | Jun 2024 | $3.00 | $15.00 | Anthropic |
| Claude Opus 4.5 | Late 2025 | $5.00 | $25.00 | Anthropic |

Look at the trajectory. GPT-4's output cost dropped from $60 per million tokens in March 2023 to $10 with GPT-4o in May 2024 -- an 83% reduction in fourteen months. GPT-4o mini then cut that to $0.60, a 99% reduction from the original GPT-4. Anthropic's flagship went from $75 output (Claude 3 Opus) to $25 (Opus 4.5) -- a 67% cut while delivering substantially more capability.
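To see what these per-token rates mean for a single piece of work, here is a minimal cost sketch. The prices are taken from the table above; the token counts (roughly 30K input for a 50-page contract, 2K output for a summary) are illustrative assumptions, not measurements.

```python
# Rough cost of one document-review task across model generations.
# Prices ($ per 1M tokens) are from the table above; token counts
# below are illustrative assumptions.

PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "GPT-4 (Mar 2023)": (30.00, 60.00),
    "GPT-4o (May 2024)": (2.50, 10.00),
    "GPT-4o mini (Jul 2024)": (0.15, 0.60),
}

def task_cost(model, input_tokens=30_000, output_tokens=2_000):
    """Dollar cost of one task: tokens times per-million-token rates."""
    inp_rate, out_rate = PRICES[model]
    return (input_tokens * inp_rate + output_tokens * out_rate) / 1_000_000

for model in PRICES:
    print(f"{model}: ${task_cost(model):.4f} per review")
```

Under these assumptions the same review falls from about $1.02 on the original GPT-4 to well under a cent on GPT-4o mini, which is the collapse in the economic argument described above.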

The implication is plain: cost is no longer a valid reason to delay adoption. The question has shifted from "can we afford to use AI?" to "can we afford the competence gap if we don't?"

5. Model Capabilities Climbing

While costs fall, capabilities climb. The pace of improvement is worth tracking precisely because it changes which tasks AI can handle reliably.

| Model | Year | Parameters | Context Window | Key Milestone |
|---|---|---|---|---|
| GPT-2 | 2019 | 1.5B | 1K tokens | First coherent long-text generation |
| GPT-3 | 2020 | 175B | 2K tokens | Zero-shot and few-shot learning |
| GPT-3.5 Turbo | 2022 | ~20B* | 4K tokens | ChatGPT launch, conversation at scale |
| GPT-4 | Mar 2023 | ~1.76T* | 8K / 32K tokens | Multimodal input, bar exam passage |
| Claude 2 | Jul 2023 | Undisclosed | 100K tokens | First 100K context window |
| GPT-4 Turbo | Nov 2023 | ~1.76T* | 128K tokens | 128K context, cheaper than GPT-4 |
| Claude 3 Opus | Mar 2024 | Undisclosed | 200K tokens | Vision capabilities, 200K context |
| GPT-4o | May 2024 | Undisclosed | 128K tokens | Multimodal native, 50% cheaper |
| Claude 3.5 Sonnet | Jun 2024 | Undisclosed | 200K tokens | Near-Opus quality at Sonnet price |
| GPT-5 | Aug 2025 | Undisclosed | 400K tokens | 400K context window |
| Claude Opus 4.5 | Late 2025 | Undisclosed | 200K tokens | 67% cheaper than predecessor |

* Estimated; OpenAI has not disclosed official figures for most models.

The context window evolution alone transforms legal use cases. At 1K tokens (2019), you could barely fit a page. At 200K tokens (2024), you can process an entire contract suite. At 400K tokens (2025), you can ingest a transaction room. Each expansion opens categories of work that were previously impractical.
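The page arithmetic behind those claims can be checked with two common rules of thumb: roughly 4 characters per token, and roughly 3,000 characters per page of legal text. Both figures are approximations, not vendor specifications.

```python
# Back-of-envelope estimate of how many pages fit in a context window.
# CHARS_PER_TOKEN and CHARS_PER_PAGE are rough rules of thumb, not
# exact figures; real tokenisation varies by model and language.

CHARS_PER_TOKEN = 4
CHARS_PER_PAGE = 3_000

def pages_that_fit(context_tokens):
    """Approximate page capacity of a given context window."""
    return context_tokens * CHARS_PER_TOKEN // CHARS_PER_PAGE

for label, window in [("GPT-2 (1K)", 1_000),
                      ("Claude 3 Opus (200K)", 200_000),
                      ("GPT-5 (400K)", 400_000)]:
    print(f"{label}: ~{pages_that_fit(window)} pages")
```

Under these assumptions, 1K tokens holds about one page, 200K tokens over 250 pages, and 400K tokens over 500 pages, consistent with the progression from "barely a page" to "a transaction room".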

The technology is no longer the bottleneck. Organisational readiness is.

6. Hallucination Rates

Every capability discussion must be weighed against reliability. Here is what the published evidence shows.

| Study / Benchmark | Year | Finding | Source |
|---|---|---|---|
| Vectara Hallucination Leaderboard (v1) | 2023 | GPT-4: 3.0% hallucination rate on summarisation | Vectara |
| Stanford Legal RAG Study | 2024 | Lexis+ AI: 17% hallucination rate; Westlaw AI: 33% | Stanford HAI |
| Stanford Legal RAG (revised) | 2024 | Westlaw hallucinates at roughly double the rate of LexisNexis | LawNext |
| Vectara Leaderboard (v2) | 2025 | GPT-4 Turbo: 0.9%; GPT-4o mini: 1.7%; Gemini 1.5 Pro: 1.1% | Vectara/HuggingFace |
| Vectara (latest) | Apr 2025 | Gemini-2.0-Flash: 0.7%; three models below 1% | Vectara Blog |

Two things stand out. First, general-purpose hallucination rates on summarisation benchmarks have dropped below 1% for the best models. That is genuine progress. Second, legal-specific hallucination rates remain substantially higher. The Stanford study published in the Journal of Empirical Legal Studies found that even with RAG (retrieval-augmented generation), dedicated legal tools hallucinate between 17% and 33% of the time on legal queries.

That gap -- sub-1% on general tasks, 17-33% on legal tasks -- is the most important number in this article. It tells you that domain-specific verification is not optional. It tells you that the "hallucination-free" marketing claims from legal AI vendors are, to put it charitably, premature. And it tells you that the human verifier -- the person who actually knows the law -- remains indispensable.
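To make the gap concrete, here is a sketch of what those published rates imply over a working month. The rates come from the benchmarks above; the query volume of 500 per month is a hypothetical figure for illustration only.

```python
# Expected number of hallucinated answers per month at the published
# rates. The rates are from the benchmarks cited above; the monthly
# query volume is a hypothetical assumption for illustration.

def expected_hallucinations(queries, rate):
    """Expected count of flawed answers given a query volume and rate."""
    return queries * rate

MONTHLY_QUERIES = 500  # hypothetical mid-size practice

for tool, rate in [("best general model (~1%)", 0.01),
                   ("Lexis+ AI (17%)", 0.17),
                   ("Westlaw AI (33%)", 0.33)]:
    count = expected_hallucinations(MONTHLY_QUERIES, rate)
    print(f"{tool}: ~{count:.0f} flawed answers per month")
```

At that volume, the difference between a 1% rate and a 33% rate is the difference between a handful of errors and well over a hundred, each one needing a qualified human to catch it.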

Which brings us back to competence.

7. What Is Happening in the US and UK

Europe is not alone. The US and UK are building their own regulatory frameworks, though with characteristically different approaches.

In the US, the patchwork is growing fast. Colorado's SB24-205, signed in May 2024, is arguably the first comprehensive state-level AI regulation, targeting algorithmic discrimination in high-risk systems including legal services. Its effective date was pushed to June 2026. The Illinois AI disclosure statute requires employers to disclose AI use in employment decisions starting January 2026. Texas signed the Responsible AI Governance Act (TRAIGA) in June 2025, creating an AI Advisory Council and a regulatory sandbox, effective January 2026.

On the professional ethics side, ABA Formal Opinion 512 (July 2024) established that lawyers need not become AI experts, but must have "a reasonable understanding of the capabilities and limitations of AI tools they use." Independent verification is mandatory. Confidentiality duties extend to AI tool data practices. And supervisors must ensure all staff -- attorneys and non-attorneys alike -- are trained on AI risks and benefits.

In the UK, the Solicitors Regulation Authority approved Garfield.Law in May 2025 -- the first purely AI-driven firm authorised to provide regulated legal services in England and Wales. The firm handles small claims debt recovery at costs starting from two pounds for a letter. Named regulated solicitors remain accountable. It is a carefully bounded experiment, not a free-for-all.

The UK Bar Council updated its AI guidance in November 2025, urging barristers to "make the effort to understand these systems" while warning about hallucinations, bias in training data, and confidentiality risks. The ultimate responsibility for all legal work remains with the barrister. No tool changes that.

8. Consolidation

The legal AI market is consolidating rapidly, and the numbers tell the story.

Harvey raised $760 million across three rounds in 2025 alone: $300 million Series D at $3 billion (February), $300 million Series E at $5 billion (June), and $160 million Series F at $8 billion (December). Sacra estimates Harvey reached $195 million ARR by year-end 2025, up from $50 million at the end of 2024. That is a 41x revenue multiple at the $8 billion valuation -- extraordinary by any standard, and a sign of how much capital is betting on winner-take-most dynamics.
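The multiple quoted above is simple arithmetic on the cited figures, and worth checking, since revenue multiples are the standard yardstick for these valuations:

```python
# Sanity check on the revenue multiple: valuation divided by annual
# recurring revenue, using the figures cited in the text ($8B valuation,
# $195M ARR per Sacra's estimate).

def revenue_multiple(valuation_usd, arr_usd):
    return valuation_usd / arr_usd

print(round(revenue_multiple(8_000_000_000, 195_000_000), 1))  # ~41x
```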

Thomson Reuters acquired Casetext for $650 million in 2023, integrating CoCounsel into its platform. By early 2026, CoCounsel had reached one million users, driving an 11% share price jump. The message is clear: incumbents with distribution are moving aggressively to absorb AI capability, and they have the client base to deploy it at scale.

For mid-market and smaller firms, the consolidation trend means fewer independent options and greater dependence on platform vendors. The time to evaluate your technology stack and vendor dependencies is now, before the market narrows further.

9. What to Do Now

If you have read this far, you want practical guidance. Here is what the evidence supports.

Start with Article 4. If you operate in the EU, you are already subject to the AI literacy obligation. Map which of your staff interact with AI systems. Assess their current understanding. Document what training you provide and why you consider it sufficient. The enforcement date is August 2026, but the obligation is live now. Civil liability exposure began in August 2025.

Build competence, not just awareness. A compliance checkbox is not competence. Real training means structured programmes over weeks, with verification workflows, domain-specific exercises, and measurable outcomes. If your people cannot explain when to trust AI output and when to verify it -- and demonstrate how they verify -- your training is insufficient.

Budget for governance and training as infrastructure. Thomson Reuters found that strategic AI adopters are 3.9x more likely to see benefits. Strategy means governance structures, usage policies, training programmes, and supervision frameworks. It means treating AI competence the way you treat continuing professional development: not as an event, but as an ongoing obligation.

Monitor hallucination rates in your specific domain. General benchmarks are encouraging; legal-specific ones are not. Until the gap between sub-1% general hallucination rates and 17-33% legal hallucination rates closes substantially, every AI-assisted legal output requires human verification by someone who knows the law. Build that verification capacity before you need it.

Evaluate your vendor exposure. With Harvey at $8 billion and Thomson Reuters absorbing Casetext, the market is consolidating around a few major players. Understand your contractual commitments, data portability options, and what happens if your vendor is acquired or changes pricing.

10. Closing

I have spent enough years in organisational change to know that the firms and departments that thrive in transitions are not the ones with the best technology. They are the ones with the best people -- people who understand the technology deeply enough to use it wisely, and who have the institutional support to keep learning as it evolves.

The evidence presented here paints a clear picture: the technology is capable and getting cheaper, the regulation is real and getting stricter, the market is consolidating and getting more expensive, and the competence gap is growing and getting harder to close.

2026 will not be the year AI replaces lawyers. It will be the year we discover which organisations invested in competence and which ones only invested in tools. The regulatory clock does not care about your implementation timeline. The technology does not wait for your training budget. And competence debt, once accumulated, compounds faster than anyone expects.

The firms that start building competence infrastructure now -- real, sustained, measurable competence -- will have an advantage that no amount of late spending can replicate. Not because they chose the right tool, but because their people learned to think alongside the machines rather than defer to them.

That is the real projection for 2026. Everything else is commentary.