TWINLADDER

AI-Native Legal Services

SRA Approves Garfield.Law: What the Regulatory Template Actually Says

Eight months of regulatory review produced a specific set of constraints. If you read them carefully, they tell you exactly what regulators think AI can and cannot be trusted to do.

May 20, 2025 · Līga Pauliņa, Co-Founder and Director of the TwinLadder Academy · 14 min read

On May 5, 2025, the Solicitors Regulation Authority authorized Garfield.Law Ltd — the first purely AI-based firm authorized to provide regulated legal services in England and Wales. The media covered the headline. I want to talk about the engineering constraints, because those are what matter.

The SRA did not just approve an AI law firm. They drew a line. On one side: what AI can handle under professional supervision. On the other: what it absolutely cannot touch. That line is worth studying.

What Garfield Actually Does

Strip away the marketing and here is the system: an AI-powered litigation assistant that handles debt recovery claims up to ten thousand pounds in UK small claims court.

Costs start at two pounds for a polite chaser letter and seven pounds fifty for a letter before action. Full small claims representation runs a hundred pounds plus VAT. Compare that to the Channel 4 Dispatches experiment, where traditional legal services quoted 1,080 pounds for the same work. A senior solicitor judging the blind comparison deemed Garfield's court documents "acceptable in a court of law."

The founding team is interesting: Philip Young, a former Baker McKenzie associate, and Daniel Long, a quantum physicist. That combination — legal domain expertise plus computational thinking — is exactly what you need to build reliable AI legal systems. I say this as someone who has spent years at the intersection of both.

The Eight Months Tell the Story

The SRA's approval process was not perfunctory. Eight months of review. That timeline reveals both the seriousness of the undertaking and the absence of existing frameworks to evaluate it.

The review covered quality control mechanisms, client protection at critical decision points, confidentiality and data security, conflict of interest management, professional accountability, and consumer protection insurance.

Eight months to answer a question that, in principle, is simple: can this system deliver competent legal services? The length tells you the answer was not obvious, which means the conditions attached to the approval are not cosmetic.

The Constraints Are the Blueprint

Here is where I want you to pay close attention, because these constraints reveal what the regulator learned during that eight-month review.

No case law proposals. The system will not propose relevant case law. Full stop.

Think about that from an engineering perspective. The SRA identified legal research — specifically case law retrieval and citation — as a high-risk area for LLM errors and banned the AI from doing it. This directly mirrors the Stanford research showing 17-33% hallucination rates on leading legal AI research platforms.

This is not a minor constraint. It removes the single most dangerous category of AI hallucination from the system's outputs. The SRA essentially said: we know LLMs fabricate citations. So this system will not cite anything.
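The engineering analogue of that constraint is a hard output filter, not a prompt instruction. Here is a minimal sketch of the idea — hypothetical, not Garfield's actual implementation — that blocks any draft containing something that looks like a case citation before it can reach a client:

```python
import re

# Rough patterns for England & Wales case citations, e.g.
# "[2023] EWCA Civ 152" or "Smith v Jones [2020] UKSC 1".
# Illustrative only: real citation formats are far more varied.
CITATION_PATTERNS = [
    re.compile(r"\[\d{4}\]\s+(UKSC|UKHL|EWCA|EWHC)\b"),
    re.compile(r"\b\w+\s+v\.?\s+\w+\s+\[\d{4}\]"),
]

def contains_citation(draft: str) -> bool:
    """Return True if the draft appears to cite case law."""
    return any(p.search(draft) for p in CITATION_PATTERNS)

def gate_output(draft: str) -> str:
    # Hard refusal: never emit text that looks like a case citation,
    # mirroring the SRA condition that the system proposes no case law.
    if contains_citation(draft):
        raise ValueError("Draft contains case-law citation; blocked by policy")
    return draft
```

The point of a filter like this is that it fails closed: even if the underlying model hallucinates a citation, the output layer refuses to ship it.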

Mandatory client approval gates. The system only proceeds with actions the client has explicitly approved. No autonomous decision-making on consequential matters. The AI proposes; the human disposes.
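As an engineering pattern, the approval gate is a human-in-the-loop checkpoint: the system may draft, but it may not act. A hypothetical sketch of the state machine, under my own naming (none of this is from Garfield):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    PROPOSED = auto()
    APPROVED = auto()
    EXECUTED = auto()

@dataclass
class Action:
    description: str          # e.g. "Send letter before action to debtor"
    status: Status = Status.PROPOSED

class ApprovalGate:
    """Every consequential action waits for explicit client approval."""

    def __init__(self) -> None:
        self.queue: list[Action] = []

    def propose(self, description: str) -> Action:
        # The AI may only ever add items to the proposal queue.
        action = Action(description)
        self.queue.append(action)
        return action

    def approve(self, action: Action) -> None:
        # Only the client (a human) calls this; the AI never does.
        action.status = Status.APPROVED

    def execute(self, action: Action) -> None:
        if action.status is not Status.APPROVED:
            raise PermissionError("Client approval required before execution")
        action.status = Status.EXECUTED
```

The design choice worth noting: execution checks state at the last possible moment, so no code path lets a proposed action run without passing through a human decision.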

Solicitor accountability for all outputs. Registered solicitors oversee all work products. The AI is a tool. Professional responsibility remains with licensed practitioners, exactly as it does when a paralegal or junior associate does the work.

Equivalent supervision standards. The supervision processes must be equivalent to traditional practice. No reduced requirements because AI is involved. This is critical — it means the SRA views AI as functionally similar to a junior staff member, not as a distinct category of service provider.

Why Lord Justice Birss Matters

Judicial endorsement came before SRA approval. At the Civil Justice Council's annual conference in November 2024, Lord Justice Birss — deputy head of civil justice — described Garfield as "absolutely at the core of what we can do for access to justice."

The Justice Select Committee called it "ground-breaking."

This matters because judicial attitudes shape how AI-generated filings are received in court. If the deputy head of civil justice publicly endorses the model, lower court judges are unlikely to view AI-assisted filings with suspicion, at least within the small claims scope that Garfield operates in.

The Access-to-Justice Economics

SRA Chief Executive Paul Philip framed the approval explicitly: "With so many people and small businesses struggling to access legal services, we cannot afford to pull up the drawbridge on innovations that could have big public benefits."

The numbers support this framing. The Ministry of Justice reported over 1.73 million county court claims in 2024. Small claims take nearly 50 weeks to reach trial on average. Most of these claims proceed without professional assistance because the economics of traditional representation do not work for sub-ten-thousand-pound disputes.

A business owed five thousand pounds cannot economically spend a thousand or more on recovery. The seven-pound-fifty letter before action changes that calculation entirely. This is not a marginal improvement. It is a structural change in who can afford legal services.
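The break-even arithmetic is stark. Using the figures above, plus a recovery probability I am inventing purely for illustration:

```python
debt = 5_000             # amount owed, GBP
traditional_fee = 1_080  # quote from the Channel 4 Dispatches comparison
garfield_letter = 7.50   # letter before action, GBP

# A fee is only rational if expected recovery exceeds it.
# Suppose (hypothetically) a letter before action succeeds 30% of the time:
p_recovery = 0.30
expected_value = p_recovery * debt  # 1500.0 GBP

print(expected_value > traditional_fee)  # True, but the margin is thin
print(expected_value > garfield_letter)  # True by a factor of 200
```

At traditional prices the decision is a close call that most small businesses will decline; at £7.50 it is not a decision at all.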

What This Does Not Mean

I want to be precise about limitations, because the enthusiasm sometimes outpaces the reality.

This is not full autonomy. Solicitor oversight is required. Client approval gates every significant action.

This is not complex litigation. The small claims court focus is deliberate. This model has not been tested or approved for high-stakes matters, contested facts, or novel legal questions.

This is not precedent for unregulated AI. The approval covers a regulated firm with licensed solicitors. It does not validate AI legal chatbots operating outside professional supervision.

And the case law prohibition is a significant limitation. The system cannot do legal research in any meaningful sense. It handles procedural workflow — which is where the value lies for small claims — but it is not replacing lawyers in the way some headlines suggest.

The Template for Other Jurisdictions

For anyone building or evaluating legal AI systems, the SRA approval creates a regulatory template with five elements:

  1. Constrain AI from high-risk tasks (case law proposals, novel legal analysis)
  2. Require human approval at critical decision points
  3. Maintain professional accountability for all outputs
  4. Implement supervision equivalent to traditional practice
  5. Carry adequate professional indemnity insurance
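For anyone evaluating a legal AI system against this template, the five elements reduce to a checklist of hard gates. A hypothetical encoding (the field names are mine, not the SRA's):

```python
from dataclasses import dataclass

@dataclass
class RegulatoryChecklist:
    """The five elements of the SRA template, as boolean gates."""
    high_risk_tasks_constrained: bool   # e.g. no case law proposals
    human_approval_gates: bool          # client sign-off at decision points
    professional_accountability: bool   # solicitors own every output
    equivalent_supervision: bool        # same standard as traditional practice
    indemnity_insurance: bool           # adequate consumer protection cover

    def compliant(self) -> bool:
        # All five must hold; the template allows no partial credit.
        return all(vars(self).values())
```

The structural point is the `all()`: the template is conjunctive. A system that nails four elements and skips one is not 80% compliant; it is non-compliant.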

US state bars will be watching outcomes closely. If Garfield demonstrates competent service delivery with acceptable error rates, similar applications will follow in other common law jurisdictions. If it fails, the SRA approval becomes a cautionary tale that regulators elsewhere will cite for years.

What I Am Watching

Three metrics will determine whether this model has legs.

Client outcomes. Not just satisfaction surveys — actual resolution rates compared to traditional representation. Do clients recover their debts? At what rate? In what timeframe?

Error rates. How often does the system produce incorrect filings, miss deadlines, or make procedural errors? The SRA will be monitoring this closely, and so should everyone else.

Scope expansion. Will Garfield (or competitors) expand to employment disputes, consumer contracts, landlord-tenant matters? The regulatory template would need to be renegotiated for each practice area, but the precedent exists.

The experiment is now running. The data will tell us more than the opinions.


Key Takeaways

  • SRA's eight-month review produced specific constraints: no AI case law proposals, mandatory client approval gates, solicitor accountability for all outputs
  • The case law prohibition directly addresses LLM hallucination risk — the regulator identified citation generation as too unreliable for deployment
  • Pricing starts at two pounds, creating access to legal services for disputes that traditional economics cannot serve
  • Lord Justice Birss's endorsement signals judicial receptivity to AI-assisted filings within the small claims scope
  • The regulatory template — constrained tasks, human gates, professional accountability — will likely be studied by other jurisdictions