TwinLadder Weekly
Issue #25 | February 2026
Editor's Note
I have been writing about the competence paradox for months now. It was, I admit, still somewhat abstract — a theoretical risk drawn from medical studies and automation research, projected onto legal practice. That changed this winter.
In January, I spent two weeks visiting law firms in Berlin, Stockholm, and Riga. Three cities, nine firms, one recurring conversation: what happened to the 2020–2023 cohort? The associates who trained during COVID lockdowns, who learned research on screens rather than in libraries, who never sat beside a senior partner while she marked up a contract in red pen. These lawyers are now three to five years qualified. They are the people firms expect to supervise AI output. And at every firm I visited, partners were quietly discovering that the supervision is thinner than it looks.
This is no longer theoretical. The competence paradox has arrived, and not from the direction anyone expected. It is not that AI is deskilling experienced lawyers. It is that an entire cohort never acquired the skills in the first place — and AI is now automating the work that would have taught them.
The COVID Cohort Meets the AI Revolution
When Junior Lawyers Never Learned What AI Is Replacing
Alex Blumentals, with legal analysis by Liga Paulina
At every firm I visited this winter — in Berlin, Stockholm, and Riga — partners described the same pattern. Associates who qualified between 2020 and 2023 consistently require more supervision on complex research tasks than pre-pandemic cohorts at equivalent experience levels. Not one partner I spoke to quantified it with survey data. None needed to. The observation was universal enough to be self-evident.
This has not yet been framed as a crisis by any bar association. But read it alongside what we know about AI adoption, and the implications become acute.
The Training Gap No One Measured
I called Liga before my Stockholm visit to get her read on the regulatory dimension. She did not mince words. "We trained a generation of lawyers through Zoom screens and PDF bundles," she said. "They passed their exams. They are technically qualified. But qualification is not the same as competence, and we are only now discovering how wide that gap is."
The gap manifests in specific, observable ways. A partner at a mid-sized Stockholm firm — who asked not to be named because the firm is recruiting — described it to me: "I ask a fourth-year associate to verify an AI-generated contract analysis. She reads the output. She confirms it looks reasonable. But she has never drafted that analysis herself from primary sources. She is checking plausibility, not accuracy. Those are fundamentally different skills."
This is precisely the mechanism Lisanne Bainbridge described in her 1983 paper "Ironies of Automation" — but with a twist. Bainbridge wrote about operators who had skills and lost them through disuse. The COVID cohort is different. Many never fully acquired the skills that AI is now absorbing.
| Training Era | Research Method | Supervision Model | AI Readiness |
|---|---|---|---|
| Pre-2019 cohort | Physical libraries + databases | In-person, shoulder-to-shoulder | Can verify AI output from experience |
| 2020–2023 cohort | Remote databases + limited mentoring | Virtual, asynchronous, fragmented | Checks plausibility, not accuracy |
| Post-2023 cohort | AI-first research from day one | AI output review as primary task | May never develop independent verification |
The Article 4 Dimension
Liga and I were reviewing the enforcement data when she zeroed in on the language of the regulation itself. "Article 4 of the EU AI Act requires a 'sufficient level of AI literacy' — but sufficient for what?" she said, pulling up the text. "The regulation deliberately leaves this open. If your firm's AI literacy programme teaches associates how to prompt an LLM but not how to verify its output against primary sources, you have taught the tool, not the competence. That is not what Article 4 intends."
She is right, and the enforcement trajectory matters. The European Commission's AI Office published guidance on Article 4 interpretation in May 2025, updated in November 2025. [cite:ec-article4] It connects AI literacy to the ability to interpret AI system output "in suitable ways" appropriate to the user's role — language that implies a baseline of domain knowledge that cannot itself be AI-dependent.
For firms in Latvia, where we are based, the Competition Council (Konkurences padome) has signalled it will treat AI literacy obligations as part of broader professional competence requirements. Similar frameworks are emerging in Estonia through the Data Protection Inspectorate and in Lithuania through the Communications Regulatory Authority.
What the Numbers Show
The academic evidence is accumulating, even if bar associations have not yet measured it systematically.
| Source | Finding | Date |
|---|---|---|
| Bastani et al. (PNAS) | Students with unrestricted AI scored 48% higher on assisted tasks but 17% worse on independent assessments [cite:bastani] | July 2025 |
| Budzyn et al. (Lancet Gastroenterology & Hepatology) | Endoscopists' adenoma detection rate dropped from 28.4% to 22.4% (a 21% relative decline) after ~3 months of AI-assisted colonoscopy [cite:lancet-endo] | October 2025 |
| Partner interviews (Berlin, Stockholm, Riga) | Consistent reports of COVID-era associates requiring more supervision on complex research | January 2026 |
The Bastani finding is particularly striking. Students who used unrestricted AI during practice performed dramatically better on those practice problems — 48% higher than the control group. But when the AI was removed for the exam, they scored 17% worse than students who never had AI access. The assistance did not accelerate learning. It replaced it.
The Compounding Problem
Here is where competence debt becomes dangerous. The COVID cohort's training deficit is not static. It compounds.
When a firm deploys AI research tools, the associates who most need practice doing manual research are the ones most likely to lean on AI. They are not lazy. They are rational. If you were never confident in your manual research skills, why would you choose the slower, harder method when an AI tool delivers faster results?
The partner in Stockholm saw this directly: "Our 2021 qualifiers use the AI tools more heavily than our 2018 qualifiers. The 2018 cohort uses AI to accelerate work they already know how to do. The 2021 cohort uses AI to do work they are not sure they could do without it. Same tool, completely different dependency profile."
This is the competence paradox made concrete. AI is most useful to the people who least need it (experienced lawyers accelerating known tasks) and most dangerous for the people who most rely on it (less experienced lawyers substituting it for skills they never built).
What Thoughtful Firms Are Doing
Not every firm is sleepwalking into this. Three approaches I observed in my visits deserve attention:
Mannheimer Swartling (Stockholm) has introduced quarterly "analogue days" — full working days where associates complete research and drafting assignments using only primary legal databases, with no AI assistance. The assignments are reviewed and scored. Associates who consistently perform below benchmarks receive targeted training. It is not popular. It is effective.
COBALT (Riga) requires all associates under five years' qualification to maintain a "verification journal" — a monthly log documenting three instances where they independently verified AI output against primary sources, including the methodology used and any discrepancies found. The journal feeds into annual reviews.
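For firms wanting to formalise something similar, a journal entry is just a small structured record. Below is a minimal sketch of what one entry might capture; the `VerificationEntry` class and its field names are illustrative assumptions on my part, not COBALT's actual format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VerificationEntry:
    """One logged instance of independently checking AI output.

    Illustrative sketch only; field names are assumptions,
    not COBALT's actual journal format.
    """
    entry_date: date
    ai_tool: str                # which AI system produced the output
    task: str                   # e.g. "limitation period analysis"
    primary_sources: list[str]  # statutes, case law, databases consulted
    methodology: str            # how the output was independently re-derived
    discrepancies: list[str] = field(default_factory=list)  # errors found

# A hypothetical entry; the sources are placeholders, not real citations.
entry = VerificationEntry(
    entry_date=date(2026, 1, 15),
    ai_tool="internal research assistant",
    task="limitation period analysis",
    primary_sources=["<statute section>", "<appellate decision>"],
    methodology="re-ran the research from primary sources, then compared",
    discrepancies=["AI cited a repealed provision"],
)
```

The structure matters more than the tooling: the methodology and discrepancies fields are what prove verification actually happened, which is presumably why the journal feeds into annual reviews.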
Noerr (Berlin) has restructured its associate training programme to front-load intensive manual research work in the first 18 months, deliberately delaying AI tool access for new associates. The firm's managing partner described the logic: "We want them to build the muscles before we give them the machine. Otherwise they never know what the machine is doing."
These are not perfect solutions. They are expensive, time-consuming, and create friction. But they represent firms that understand the problem and are investing in competence rather than optimising only for throughput.
The Competence Question
You are a compliance officer at a mid-sized European manufacturer. Your company deployed AI tools across legal, HR, and finance eighteen months ago. Productivity metrics are excellent. Your Article 4 documentation shows that all AI-facing staff completed a certified AI literacy course.
Now a regulator asks: "Can your staff identify when the AI system produces an incorrect output in their domain?" Your marketing team was trained to use the AI content generator. Were they trained to fact-check its regulatory claims? Your HR team uses AI for contract drafting. Can they spot a non-compliant clause the AI missed?
The competence question this month: does your AI literacy programme teach people to use the tools, or to think independently of them?
What To Do
- Audit your COVID-era associates specifically. Do not assume that years of qualification equal years of competence. Identify which associates completed training primarily remotely and assess their independent research capabilities — without AI tools — against firm benchmarks.
- Implement verification journals or equivalent tracking. Whether formal or informal, create a record of associates independently checking AI output against primary sources. This serves both as training and as Article 4 compliance documentation.
- Review your AI tool deployment by seniority. Map which associates use AI tools most heavily and cross-reference with their independent practice assessments (see the sketch after this list). If your least experienced associates are your heaviest AI users, you have a dependency problem, not an efficiency gain.
- Build "analogue" competence exercises into quarterly schedules. Reserve time for AI-free research and drafting. Assess the results. Use them to identify training gaps before they manifest as supervision failures.
Quick Reads
- European Commission AI Office Article 4 Q&A — what "sufficient AI literacy" means in practice.
- Bastani et al. in PNAS — 48% better with AI, 17% worse without it; the clearest evidence of the competence paradox.
- Budzyn et al. in The Lancet Gastroenterology & Hepatology — 21% relative decline in endoscopists' adenoma detection rate after three months of AI assistance.
One Question
If you removed AI access from your associates for one week, which ones could still do their jobs to the standard your clients expect — and what does that tell you about what you have built?
TwinLadder Weekly | Issue #25 | February 2026
Helping professionals build AI capability through honest education.
Included Workflow
Technology Profile Assessment
Map your organisation's AI technology landscape. Identify tools, departments, use cases, and governance structures to build a complete technology profile for Article 4 compliance.
