TwinLadder Weekly
Issue #22 | December 2025
Editor's Note
When the American Bar Association calls something a "pivotal moment," that language goes through committees. It is reviewed by people who understand that official ABA statements shape professional expectations nationwide. So when the Task Force on Law and Artificial Intelligence used that phrase in its 56-page Year 2 Report, released December 15, it was deliberate.
Former ABA President William R. Bay's framing was equally measured: "AI has become key to reshaping the way we practice, serve our clients, and safeguard the rule of law." Not "will become." Has become.
I have read the full report. For European practitioners, the ABA's conclusions are largely unsurprising -- we have been living with regulatory frameworks for AI (GDPR, the phased obligations of the EU AI Act) for longer than American lawyers have been thinking about them. But the report matters because the ABA is the profession's most influential voice, and what it says filters into bar guidance, CLE requirements, and judicial expectations across every US jurisdiction. When the ABA says AI is infrastructure, it becomes infrastructure.
What I notice, reading from Riga, is what the ABA does not say. There is no mention of Article 4 of the EU AI Act, which has been mandatory since February 2025. There is no discussion of cross-border AI governance coordination. The report is thoroughly American in scope. That is not a criticism -- the ABA serves its membership. But for transatlantic practitioners, the absence of any European regulatory awareness in the profession's definitive document is itself a data point worth noting.
The ABA's Core Finding -- And Why It Matters Beyond America
[HIGH CONFIDENCE]
The report's central conclusion, captured by LawSites: "The majority of legal professionals now use AI but do not fully appreciate the practical and ethical challenges that arise when using AI."
Read that again. Adoption has surpassed understanding. This is the profession's core problem for 2026, and it is not limited to the United States.
The report examines six domains: the rule of law, courts, legal education, access to justice, risks and challenges, and bar ethics rules. I want to focus on the three findings that matter most for mid-market practitioners.
First: AI competence is now professional competence. The report does not mandate AI use. But it establishes that understanding AI is part of the duty of competence -- whether you use it or not. This matters because bar associations follow the ABA. Expect CLE requirements to expand (New York already mandates AI credits). Expect malpractice implications for AI-related failures. Expect insurance carriers to start asking questions.
| ABA Task Force: Key Findings | European Parallel |
|---|---|
| "Adoption has surpassed understanding" | Article 4 mandates "sufficient AI literacy" -- same gap |
| AI competence = professional competence | EU AI Act: literacy obligation already enforceable |
| 30+ state bar opinions issued | Fragmented national guidance across EU member states |
| 55% of law schools offer AI courses | European legal education varies widely by jurisdiction |
| 74% legal aid adoption | EU access-to-justice programmes slower to adopt |
| Responsibility transferred to Center for Innovation | EU AI Office coordinates implementation guidance |
If you practise across jurisdictions, including transatlantically, you face a multiplicative compliance challenge. The ABA framework, 30+ state bar opinions, EU AI Act requirements, and national bar standards each add obligations. The report acknowledges this fragmentation without solving it.
Second: access to justice is where AI delivers genuine progress. 100+ documented AI use cases in legal aid settings. 74% of legal aid organisations using AI -- nearly double the profession average. Self-represented litigants receiving AI-assisted guidance. This is the report's most optimistic section, and it deserves to be. The organisations with the least resources moved fastest. AI's access-to-justice potential is not theoretical. It is being measured.
Third: the report warns about what we are not thinking about. The Task Force cautions against becoming "so focused on short-term implementation challenges that it neglects the longer-term implications of increasingly powerful AI systems." Several contributors warn that sudden advances toward human-level AI could leave legal institutions unprepared. This is the section most readers will skip. It is the section that matters most.
[MODERATE CONFIDENCE]
The report is descriptive more than prescriptive. It documents where the profession stands without mandating specific tools or approaches. That is both its strength and its limitation. For practitioners, the takeaway is clear: understand the landscape, develop governance, build competence. Specific tool choices remain your decision. But deciding not to decide is no longer a defensible position.
One thing the report does not address: affordability. Harvey at $1,200/seat/month. Enterprise tools requiring significant investment. The mid-market gap persists. The ABA calls AI infrastructure but says nothing about who can afford the infrastructure. That silence is loud.
I also note what the report does not say about international coordination. The EU AI Act (full enforcement from August 2026), the UK's principles-based approach through the Bar Council and SRA, and the ABA's own framework are developing independently. For transatlantic practitioners, the result is multiplying compliance obligations without harmonised standards. The ABA acknowledges AI as infrastructure for the American profession. It does not address the reality that AI governance is becoming a cross-border coordination challenge. For firms that serve multinational clients, this gap in the report matters more than most of what it includes.
The contrast with the European approach is worth dwelling on. The EU AI Act does not merely recommend AI literacy. Article 4 makes it mandatory, enforceable, and applicable to all staff deploying or operating AI systems -- not just lawyers, but paralegals, legal operations staff, and anyone interacting with AI tools in a professional capacity. The ABA says AI competence is professional competence. The EU made that a legal obligation ten months before the ABA's report was published. American practitioners reading the Task Force report as a call to action should understand that their European counterparts are already living under the obligation the ABA is still describing.
The Competence Question
Your firm has eleven lawyers. Three use AI regularly -- one pays for a ChatGPT subscription out of pocket, one uses the free tier of Claude, and a third has a Harvey account funded by a client who insists on it. The other eight have never used any AI tool for legal work. None of them has completed any AI-focused training in the past year.
The ABA just said that understanding AI is part of professional competence. Your state bar will likely echo that position within months.
Those eight lawyers are not incompetent. They are experienced, capable professionals who have been doing excellent work without AI tools. But the professional standard is shifting under them. The question is not whether they need to use AI. It is whether they can demonstrate understanding of what AI does, what it gets wrong, and when it might be appropriate or inappropriate for their matters. Can they answer a client who asks "do you use AI?" Can they evaluate AI-assisted work product from opposing counsel? Can they advise a client on AI governance?
Competence is not adoption. Competence is understanding. And the clock on that understanding just started running faster.
What To Do
- Read the report. Or at least the executive summary. This is now the baseline professional expectation in the US, and its influence will cross borders. The full report and the ABA's summary are freely available.
- Map your jurisdiction's requirements. Check PAXTON's state bar guidance tracker for US jurisdictions. For European practitioners, review your national bar's position alongside the EU AI Act timeline -- and remember that Article 4 is already enforceable, not a future obligation.
- Plan training for every lawyer in your firm. Not just the enthusiasts. The report's finding that adoption has surpassed understanding means the profession is using tools it does not comprehend. Training is not optional professional development. It is competence maintenance.
- Track the ABA Center for Innovation. The Task Force has handed off ongoing work to the Center for Innovation. Implementation guidance will follow. Stay connected.
- Bridge the ABA framework with Article 4 requirements. If your practice has any European dimension -- clients, counterparties, data flows -- the ABA's framework alone is insufficient. Article 4 requires documented training, competency assessments, and ongoing education for all staff who deploy or operate AI systems. Map the ABA's competence standard against Article 4's literacy obligation and identify where additional documentation is needed. The firms that build a unified governance framework covering both will spend less time on compliance than those managing parallel, disconnected obligations.
Quick Reads
- The 56-page ABA Task Force Year 2 Report declares AI has reached a "pivotal moment" and describes it as professional infrastructure -- though it says nothing about cross-border coordination with European regulatory frameworks.
- 55% of law schools offer AI courses; 83% provide hands-on experiences -- law school graduates will arrive AI-literate, widening the gap with existing practitioners.
- The report is the Task Force's final assessment; responsibility transfers to the ABA Center for Innovation for ongoing implementation.
- Above the Law's analysis calls it "the next phase of legal AI" -- from experimentation to institutional integration.
One Question
If the ABA says understanding AI is now professional competence, and the EU says AI literacy is now a legal obligation, how long before a client argues that a lawyer who understands neither framework failed in their duty of care?
Helping lawyers build AI capability through honest education.
