Why Comfort Matters More Than Code: A Better Model for Legal AI Training
The legal profession has an AI training problem. It is not a shortage of courses. It is that the courses teach the wrong thing.
Every generation of technology produces the same well-intentioned mistake. Experts in how the technology works design training for people who need to use it. The training emphasises architecture and theory. The learners nod, retain almost nothing, and return to their desks no more capable than before.
With AI in legal practice, this pattern has reached its most consequential form. Firms invest in programmes explaining transformer models to lawyers whose actual questions are far simpler: Can I rely on this? Does using it breach my duties? How do I check the output?
The Fundamental Disconnect
AI tools for legal practice are built by engineers whose instincts about training are shaped by their own expertise. They default to explaining what they find interesting: how models process language, why hallucinations occur at the architectural level, what retrieval-augmented generation adds. Reasonable if the audience is engineers. Counterproductive when it is practising lawyers.
Technical training implicitly tells lawyers they must think like engineers before they can act like lawyers. It positions technological comprehension as a prerequisite for professional use. This is factually wrong.
A senior partner with three decades of litigation experience already possesses the most important skills for responsible AI use. She evaluates arguments for a living. She assesses source reliability. She verifies factual claims against primary materials. These are not transferable skills that happen to apply to AI. They are the core competencies that responsible AI use demands.
Yet technical training has convinced this lawyer -- and thousands like her -- that without understanding gradient descent, she cannot responsibly touch these tools.
What the Data Actually Shows
According to Thomson Reuters' 2025 Future of Professionals survey, AI use among lawyers reached 80 percent in some segments in 2025, up from 22 percent a year earlier. But among those who use AI, only a quarter report strong understanding. Nearly 60 percent are "somewhat familiar" -- exposed but not competent.
The dividing line between effective adopters and non-adopters is not technical knowledge. It is comfort and confidence. Lawyers who feel assured they can verify outputs and maintain professional standards use AI productively. Those who acquired technical knowledge but lack practical confidence often remain non-adopters despite their investment.
Comfort is not a soft concept. It is the measurable precondition for adoption.
Five Ways Technical Training Fails
1. It answers questions nobody is asking. A lawyer wants to know whether the citations are real and what verification satisfies her duties. Explaining attention mechanisms does not help.
2. It creates false prerequisites. By positioning technical comprehension as the gateway, it deters the practitioners whose domain expertise would make them the most effective AI users.
3. It scales poorly and ages fast. Technically focused curricula require rare dual expertise, and a programme built around one model generation misleads when the next arrives. Workflow competencies remain valid regardless.
4. Retention collapses within weeks. Theoretical knowledge disconnected from practice has among the lowest retention rates of any instructional approach. What participants retain is not content but emotional residue: a vague sense that AI is complex and intimidating.
5. The confidence gap persists despite knowledge gains. Understanding how large language models generate text does not give a lawyer confidence to use one in practice. Confidence comes from guided practice and professional judgment -- elements technical training omits entirely.
Workflow-Based Learning: Starting Where Lawyers Stand
The alternative is not to dumb anything down. It is to start from a different premise: "What does the lawyer already know that applies to AI, and what specific new skills close the remaining gap?"
The experienced litigator does not need to understand neural networks. She needs to learn how AI-generated research differs from traditional research, what verification satisfies her jurisdiction, and when inputting case details creates a confidentiality risk. Professional questions with professional answers -- connecting to existing competence and building the comfort that predicts adoption.
What the Regulation Actually Requires
Article 4 of the EU AI Act requires "a sufficient level of AI literacy" among staff, calibrated by "technical knowledge, experience, education and training and the context the AI systems are to be used in."
That demands contextual, role-appropriate awareness -- not technical comprehension. A litigator needs different literacy than a transactional lawyer. The calibration language cuts directly against a one-size-fits-all technical curriculum.
Workflow-based training is not merely better pedagogy. It is what the law demands.
The Competence-Confidence Loop
Professional development research describes a virtuous cycle: competence builds confidence, confidence enables practice, practice deepens competence. Technical AI training breaks this loop at the first link. Theoretical knowledge that does not translate to felt competence means confidence never develops and practice never begins.
Workflow-based training initiates the loop immediately. A lawyer who learns to verify AI-generated citations in her first session has a capability she can use that afternoon. She uses it. It works. Confidence grows. She tries something harder. The loop accelerates.
This is the difference between a training programme that changes behaviour and one that merely ticks a completion checkbox in the LMS.
Where This Leads
The profession does not need more AI courses. It needs better ones -- designed around how experienced professionals actually learn, built on evidence about what predicts adoption, and aligned with what the regulation requires.
This is why Twin Ladder's training methodology starts with your workflow, not with neural networks. Read more about our four-phase approach.

