TwinLadder


UK Bar Council AI Guidance Update: Key Changes for Barristers

The Bar Council's revised guidance adds specific provisions on training data, client consent, and disclosure in court. We highlight the three changes that require immediate attention and compare with SRA requirements.

UK Bar Council
Barrister Obligations
Compliance
Disclosure
November 7, 2025 | 18 min read


TwinLadder Weekly

Issue #19 | November 2025


Editor's Note

I read the Bar Council's updated AI guidance on the day it was published, November 25th. My first reaction was that it was overdue. My second was that it is better than most practitioners will give it credit for.

The timing matters. In the months before publication, courts sanctioned lawyers for AI hallucinations at accelerating rates. The MyPillow case made headlines. Ko v. Li set Canadian precedent. Lord Justice Birss warned from the bench. Barbara Mills KC, the Chair, said plainly that "recent cases have emphasized the dangers of the misuse by lawyers of artificial intelligence."

This guidance is the Bar Council documenting that barristers have been told. That is not a neutral act. It creates a baseline of professional expectation that future disciplinary proceedings will reference.

What strikes me from a European vantage point -- sitting in Riga, watching regulators across the continent grapple with the same questions -- is that the UK's approach is both more elegant and more honest than most. The Bar Council did not invent new rules. It connected AI to existing professional duties. The EU AI Act, by contrast, layers new obligations on top of existing ones. Both paths have merit. But the Bar Council's simplicity will age better.


The Bar Council's Updated AI Guidance: What Actually Changed

[HIGH CONFIDENCE]

The November 2025 guidance is an evolution of the January 2024 original, not a rewrite. But the evolution matters in three critical ways.

First, the tool coverage expanded. The original referenced ChatGPT and Bard. The update explicitly covers Harvey, Microsoft Copilot, Gemini, and Perplexity, along with legal-specific LLM tools. Barristers can no longer claim ignorance about purpose-built platforms. The guidance makes clear that all LLM-based tools carry similar hallucination risks regardless of marketing.

Second, the guidance now cites the Stanford study showing 17%+ hallucination rates even in purpose-built legal AI, and integrates recent case law on AI misuse. The Bar Council is not relying on anecdote. It is anchoring expectations in evidence and precedent.

Third, the data handling emphasis sharpened. The update requires understanding how each specific tool handles inputs, reviewing terms and conditions against Core Duty 6 (confidentiality) and rC15.5, and considering cyber risk. "I did not read the terms of service" is not a defence against professional conduct violations.

The most important paragraph: "LLMs are not a substitute for human legal expertise, critical judgment or diligent verification. The ultimate responsibility for all legal work remains with the barrister." That is not guidance. That is a warning.

UK Bar Council Approach vs. EU AI Act Article 4 Approach

  Model:        principles-based, connects AI to existing Core Duties  |  prescriptive, new obligation of "sufficient AI literacy"
  Scope:        barristers using AI tools  |  all staff deploying or operating AI systems
  Status:       guidance issued November 2025  |  mandatory since February 2, 2025
  Enforcement:  existing disciplinary mechanisms  |  national market surveillance authorities
  Training:     no specific training requirements defined  |  documented training and competency assessments required

What I find noteworthy -- and this reflects a European sensibility that often goes unremarked -- is that the Bar Council took a principles-based approach, connecting AI use to existing Core Duties rather than creating new rules. This stands in contrast to the patchwork of American state bar opinions (30+ and counting) and individual court disclosure requirements. The UK approach trusts existing professional frameworks to govern new tools. The American approach multiplies rules. The EU AI Act sits between: prescriptive requirements, but built on a risk-based framework that rewards existing governance.

For practitioners outside England and Wales, the lessons transfer. The core duties implicated -- integrity, client interests, independence, competence, confidentiality -- exist in every jurisdiction I have practised in. Whether your bar has issued specific guidance or not, the obligations already apply. This guidance just makes the application explicit.

The guidance connects to existing Core Duties in a way that deserves specific attention: CD1 (duty to the court) means do not submit unverified work. CD2 (client's best interests) means AI errors harm clients. CD3 (honesty and integrity) means do not present machine output as verified research. CD4 (independence) means do not outsource judgment to algorithms. CD6 (confidentiality) means do not input privileged information without safeguards. CD7 (competence) means understand a tool's limitations before using it. None of these are new obligations. All of them now have explicit AI applications documented by the Bar Council.

[MODERATE CONFIDENCE]

One note on the SRA's authorisation of Garfield.Law in May 2025: the UK is doing something genuinely interesting by simultaneously tightening practitioner obligations and authorising AI-native firms. That is not contradiction. It is the recognition that AI in legal services requires both accountability for practitioners and regulated pathways for innovation. Other jurisdictions would benefit from studying this dual approach.

For European practitioners, the Bar Council guidance matters for a specific reason beyond its direct jurisdiction. Article 4 of the EU AI Act, which took effect February 2, 2025, requires "sufficient AI literacy" -- but the regulation is deliberately vague on what sufficiency means. The Bar Council's mapping of AI use to Core Duties offers one model for what substantive AI competence looks like in legal practice. National bar associations across Europe, from the Latvian Sworn Advocates Council to the German Federal Bar, will need to define their own standards. The UK guidance provides a reference point. Not a template -- the regulatory context differs too much -- but a demonstration that principles-based professional regulation can address AI governance without regulatory overload.


The Competence Question

You are a junior barrister instructed on a commercial matter with 48 hours before the hearing. You use an AI tool to accelerate your research. It produces a well-structured skeleton argument with twelve case citations.

You have time to verify five citations thoroughly. The remaining seven look correct -- right court, plausible dates, consistent with the legal principles you know. But you have not confirmed them against BAILII or Westlaw. Do you include them?

The guidance is clear: verification is mandatory. But it does not reconcile mandatory verification with the practical reality of compressed timelines. One workable path is disclosure: tell instructing solicitors which propositions have been fully verified and which still require confirmation. Transparency about limitations is itself a form of professional competence. But it requires the confidence to say "I am not certain about this" in a profession that rewards certainty.

The harder question the guidance raises, but does not answer: if verification of every AI output is mandatory, and verification takes significant time, has AI actually saved the profession any time? Or has it merely shifted the work from research to verification?

I spoke with a commercial barrister in London who put it well: "The AI gives me a first draft in ten minutes that would have taken three hours. But verifying that draft takes two hours. I have saved one hour and introduced a new category of risk that did not exist before." That is an honest assessment. The net productivity gain exists, but it is smaller than the marketing suggests, and it comes with obligations the marketing never mentions.

That is worth sitting with.


What To Do

  1. Review your tool's terms of service this week. Not next month. The guidance specifically requires understanding how each tool handles your inputs. If you use ChatGPT, Claude, Harvey, or Copilot, read the data provisions. Document the date you reviewed them.

  2. Create a verification protocol. Before your next AI-assisted piece of work, write down your verification steps. Every citation checked against primary sources. Every quotation confirmed verbatim. Every holding verified. Make it a checklist you can attach to the file.

  3. Consider disclosure language. If your chambers does not have a position on AI disclosure to clients or courts, raise it at the next meeting. Courts in multiple jurisdictions now expect or require it.

  4. Check your insurance. Confirm with your malpractice or professional indemnity insurer that AI-assisted work is covered. Do not assume. Ask in writing and keep the response.

  5. Map the Bar Council guidance against Article 4. If you serve European clients or operate within EU jurisdiction, the Bar Council's Core Duty framework does not satisfy Article 4's documentation requirements on its own. Identify the gaps now -- documented training records and competency assessments are Article 4 obligations that the Bar Council guidance does not address. Begin building that compliance record before February 2026.




One Question

If the Bar Council says you must verify every AI output but does not say how long that should take, whose billable hours absorb the verification burden -- the barrister's or the client's?


TwinLadder Weekly | Issue #19 | November 2025

Helping lawyers build AI capability through honest education.

Included Workflow

UK Bar Council AI Compliance Checklist

Compliance checklist for UK Bar Council AI guidance. Covers competence (Core Duty 7), confidentiality, client communication, supervision, billing transparency, and accuracy verification.
