ABA Task Force Report: AI as Legal Infrastructure
The ABA has declared that AI is no longer optional. The profession's response will determine whether that is good news or bad.
In December 2025, the American Bar Association's Task Force on Law and Artificial Intelligence released its Year 2 Report. I have now read the 56-page document three times, and my conclusion is this: the ABA is telling the profession something it does not want to hear.
AI is not a trend. It is not a competitive advantage for early adopters. It is infrastructure — as fundamental to legal practice as research databases, word processors, and email. And the profession's understanding of this infrastructure is dangerously behind its adoption.
The Central Finding
Former ABA President William R. Bay's framing is unambiguous: "AI is no longer an abstract concept. AI has become key to reshaping the way we practice, serve our clients, and safeguard the rule of law."
This is not hype from a technology vendor. This is the official position of the largest professional bar association in the world. The Task Force has moved past asking whether AI belongs in legal practice and is now asking whether the profession can govern it competently.
The answer, based on the report's findings, is: not yet.
The Competence Gap
The report identifies what I consider the most important finding: the majority of legal professionals now use AI tools but do not fully understand the practical and ethical challenges that arise from that use.
Read that again. Most lawyers using AI do not adequately understand it.
This is not a failure of individual lawyers. It is a systemic failure of professional education and training. The tools arrived faster than the profession's capacity to develop competent users. And the gap is widening, not closing, because tool capabilities advance faster than training programmes can update.
From a training design perspective, this finding confirms what I see every day: attendance at an AI awareness session does not produce competence. The profession needs structured, sustained training that addresses not just what AI can do, but how it fails, why it fails, and what professional obligations attach to its use.
Access to Justice: Real Progress
The report's most encouraging section documents access to justice improvements. The Task Force found more than 100 documented AI use cases in legal aid settings, with increased productivity and direct delivery of understandable legal information to self-represented litigants.
This matters beyond the immediate impact. It demonstrates that AI can serve the profession's highest aspiration — access to justice for all — while simultaneously highlighting the competence requirements. Legal aid organisations are deploying AI with measurable results. But those results depend on staff who understand how to use the tools safely, who verify outputs, and who maintain quality standards.
The access-to-justice applications are a case study in what happens when AI is deployed with appropriate governance. The sanctions cases — lawyers disciplined by courts for filing AI-fabricated citations — are a case study in what happens without it.
Legal Education is Moving
The numbers on legal education are striking: 55% of law schools now offer AI-focused courses, and 83% provide hands-on AI experiences through clinics or labs. Case Western Reserve University requires all first-year students to obtain legal AI certification.
This is progress. But it highlights a growing gap between what new lawyers learn and what practising lawyers understand. Law school graduates entering the profession in 2026 will have more structured AI education than many partners at major firms.
That inversion has implications for supervision. How does a partner who does not understand AI supervise an associate who uses it? Model Rules 5.1 and 5.3 require supervisory competence, and AI is rapidly becoming a domain where junior lawyers may know more than their supervisors.
The training imperative applies to senior lawyers as much as — perhaps more than — junior ones.
Current Usage: Simple Tasks Only
The Task Force found that legal professionals are accomplishing relatively simple tasks with AI: summarisation, document review, drafting brief documents, issuing client alerts. More complex legal work involving confidential client information remains largely outside AI workflows.
This finding suggests the profession is underutilising AI while simultaneously being under-prepared for the uses it has adopted. Simple tasks carry lower risk but still require verification. Complex tasks, where AI could add the most value, are avoided because lawyers lack confidence in their ability to use the tools safely.
This is a comfort problem as much as a competence problem. Lawyers who understand AI's capabilities and limitations — who have genuine literacy — are better positioned to use it for complex tasks with appropriate safeguards. Lawyers with superficial understanding default to simple uses or avoidance.
What Practitioners Should Take Away
The ABA is not telling lawyers to slow down. It is telling them to catch up. Here is what that means practically.
If you are using AI without structured training in its limitations and failure modes, you are exposed. The Task Force's finding about the comprehension gap applies to you. Invest in understanding, not just usage.
If you are supervising lawyers who use AI, your obligation includes understanding the technology yourself. Supervisory liability does not require direct involvement — it requires adequate oversight. Oversight requires understanding.
If your firm does not have an AI governance framework, the ABA has just told you it needs one. The infrastructure framing implies that firms without AI governance today are as exposed as firms without data security policies were a decade ago.
If you are involved in training or professional development, the current model is insufficient. The gap between adoption and understanding requires a fundamentally different approach to AI education — one that prioritises competence over awareness.
The Institutional Question
The Task Force concluded its formal work and handed responsibility to the ABA Center for Innovation. This institutional transition matters. Task forces have focused mandates and defined timelines. Centres have broader portfolios and competing priorities.
Whether the momentum built by the Task Force is maintained will depend on whether the Center for Innovation treats AI governance as a priority or subsumes it into a broader portfolio. The legal profession should pay attention to this handoff and demand continued focus.
The Bottom Line
The Year 2 Report is a clear-eyed assessment from the profession's most authoritative body. AI is infrastructure. Adoption has outpaced understanding. And the profession needs to close the gap before the gap closes the profession's credibility.
For those of us who work in AI training and competence development, this report is both validation and challenge. The problem we have been describing is now officially recognised. The solution we are working toward — genuine, sustained, structured AI competence — is now officially needed.
The question is whether the profession will respond with the urgency the situation requires, or whether the Year 2 Report will become another document that was widely cited and narrowly acted upon.

