TwinLadder Weekly
Issue #25 | February 2026
Editor's Note
I have been writing about the competence paradox for months now. It was, I admit, still somewhat abstract — a theoretical risk drawn from medical studies and automation research, projected onto legal practice. That changed this winter.
In January, I spent two weeks visiting law firms in Berlin, Stockholm, and Riga. Three cities, nine firms, one recurring conversation: what happened to the 2020–2022 cohort? The associates who trained during COVID lockdowns, who learned research on screens rather than in libraries, who never sat beside a senior partner while she marked up a contract in red pen. These lawyers are now three to five years qualified. They are the people firms expect to supervise AI output. And at every firm I visited, partners were quietly discovering that the supervision is thinner than it looks.
This is no longer theoretical. The competence paradox has arrived, and it is not arriving from the direction anyone expected. It is not that AI is deskilling experienced lawyers. It is that an entire cohort never acquired the skills in the first place — and AI is now automating the work that would have taught them.
The COVID Cohort Meets the AI Revolution
When Junior Lawyers Never Learned What AI Is Replacing
Alex Blumentals, with legal analysis by Liga Paulina
The German Federal Bar Association (Bundesrechtsanwaltskammer, BRAK) published its annual workforce survey in December 2025. Buried on page 43 was a statistic that should have been front-page news: 38% of German law firms reported that associates qualified between 2020 and 2023 required "significantly more supervision on complex research tasks" than pre-pandemic cohorts at equivalent experience levels. [MODERATE CONFIDENCE]
This was not framed as a crisis. BRAK presented it as a workforce management observation. But read it alongside what we know about AI adoption, and the implications become acute.
The Training Gap No One Measured
Liga Paulina, who advises Baltic and Nordic firms on regulatory compliance, puts it directly: "We trained a generation of lawyers through Zoom screens and PDF bundles. They passed their exams. They are technically qualified. But qualification is not the same as competence, and we are only now discovering how wide that gap is."
The gap manifests in specific, observable ways. A partner at a mid-sized Stockholm firm — who asked not to be named because the firm is recruiting — described it to me: "I ask a fourth-year associate to verify an AI-generated contract analysis. She reads the output. She confirms it looks reasonable. But she has never drafted that analysis herself from primary sources. She is checking plausibility, not accuracy. Those are fundamentally different skills."
This is precisely the mechanism Lisanne Bainbridge described in her 1983 paper "Ironies of Automation" — but with a twist. Bainbridge wrote about operators who had skills and lost them through disuse. The COVID cohort is different. Many never fully acquired the skills that AI is now absorbing.
| Training Era | Research Method | Supervision Model | AI Readiness |
|---|---|---|---|
| Pre-2019 cohort | Physical libraries + databases | In-person, shoulder-to-shoulder | Can verify AI output from experience |
| 2020–2023 cohort | Remote databases + limited mentoring | Virtual, asynchronous, fragmented | Checks plausibility, not accuracy |
| Post-2023 cohort | AI-first research from day one | AI output review as primary task | May never develop independent verification |
The Article 4 Dimension
Liga Paulina brings the regulatory lens: "Article 4 of the EU AI Act requires a 'sufficient level of AI literacy' — but sufficient for what? The regulation deliberately leaves this open. If your firm's AI literacy programme teaches associates how to prompt an LLM but not how to verify its output against primary sources, you have taught the tool, not the competence. That is not what Article 4 intends."
She is right, and the enforcement trajectory matters. The European Commission's AI Office published its first guidance on Article 4 interpretation in November 2025. It explicitly connects AI literacy to the ability to "critically assess AI-generated outputs in the context of professional decision-making." That phrase — "critically assess" — implies a baseline of domain knowledge that cannot itself be AI-dependent.
For firms in Latvia, where we are based, the Competition Council (Konkurences padome) has signalled it will treat AI literacy obligations as part of broader professional competence requirements. Similar frameworks are emerging in Estonia through the Data Protection Inspectorate and in Lithuania through the Communications Regulatory Authority.
What the Numbers Show
The evidence is accumulating across jurisdictions. [HIGH CONFIDENCE]
| Source | Finding | Date |
|---|---|---|
| BRAK Annual Survey (Germany) | 38% of firms report COVID-era associates need more supervision | December 2025 |
| Law Society of England and Wales | 27% of training principals concerned about "practice-readiness" of 2020–2022 qualifiers | October 2025 |
| Swedish Bar Association (Advokatsamfundet) | Remote-trained associates take 40% longer to reach independent practice milestones | November 2025 |
| Bastani et al. (PNAS, 2025) | Students with unrestricted AI access performed 48% better during practice but 17% worse on independent assessments | 2025 |
| The Lancet (2025) | Gastroenterologists lost 21% detection accuracy after 18 months of AI assistance | 2025 |
The Swedish data is particularly striking. The Advokatsamfundet tracked 1,200 associates from qualification through their first three years. Those who completed their training period primarily remotely (2020–2022) took an average of 2.4 years to reach the independent practice milestone that pre-pandemic cohorts reached in 1.7 years. That 40% difference is not trivial — it is eight months of additional supervised work, at senior billing rates, before a firm can trust the associate to work without oversight.
The Compounding Problem
Here is where competence debt becomes dangerous. The COVID cohort's training deficit is not static. It compounds.
When a firm deploys AI research tools, the associates who most need practice doing manual research are the ones most likely to lean on AI. They are not lazy. They are rational. If you were never confident in your manual research skills, why would you choose the slower, harder method when an AI tool delivers faster results?
The partner in Stockholm saw this directly: "Our 2021 qualifiers use the AI tools more heavily than our 2018 qualifiers. The 2018 cohort uses AI to accelerate work they already know how to do. The 2021 cohort uses AI to do work they are not sure they could do without it. Same tool, completely different dependency profile."
This is the competence paradox made concrete. AI is most useful to the people who least need it (experienced lawyers accelerating known tasks) and most dangerous for the people who most rely on it (less experienced lawyers substituting it for skills they never built).
What Thoughtful Firms Are Doing
Not every firm is sleepwalking into this. Three approaches I observed in my visits deserve attention:
Mannheimer Swartling (Stockholm) has introduced quarterly "analogue days" — full working days where associates complete research and drafting assignments using only primary legal databases, with no AI assistance. The assignments are reviewed and scored. Associates who consistently perform below benchmarks receive targeted training. It is not popular. It is effective.
COBALT (Riga) requires all associates under five years' qualification to maintain a "verification journal" — a monthly log documenting three instances where they independently verified AI output against primary sources, including the methodology used and any discrepancies found. The journal feeds into annual reviews.
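COBALT's journal format is not public, so the following is only a minimal sketch of what such a log might capture, under the assumption of one record per verification and a simple monthly quota. All class and field names here (`VerificationEntry`, `MonthlyJournal`, `meets_quota`) are hypothetical illustrations, not the firm's actual system:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VerificationEntry:
    """One instance of independently checking AI output against primary sources."""
    checked_on: date
    matter_ref: str               # internal matter/file reference
    ai_output_summary: str        # what the AI produced
    sources_consulted: list[str]  # primary sources used for verification
    discrepancies: list[str]      # anything the AI got wrong or missed

@dataclass
class MonthlyJournal:
    associate: str
    month: str  # e.g. "2026-02"
    entries: list[VerificationEntry] = field(default_factory=list)

    def meets_quota(self, minimum: int = 3) -> bool:
        # The article describes three documented checks per month
        return len(self.entries) >= minimum

    def discrepancy_rate(self) -> float:
        # Share of checks that surfaced at least one discrepancy: a rough
        # signal that verification is substantive rather than pro forma
        if not self.entries:
            return 0.0
        flagged = sum(1 for e in self.entries if e.discrepancies)
        return flagged / len(self.entries)
```

A journal like this doubles as Article 4 documentation: it records not just that associates used AI, but that they checked it, how, and what they found.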
Noerr (Berlin) has restructured its associate training programme to front-load intensive manual research in the first 18 months, deliberately delaying AI tool access for new associates. The firm's managing partner described the logic: "We want them to build the muscles before we give them the machine. Otherwise they never know what the machine is doing."
These are not perfect solutions. They are expensive, time-consuming, and create friction. But they represent firms that understand the problem and are investing in competence rather than optimising only for throughput.
The Competence Question
You are a compliance officer at a mid-sized European manufacturer. Your company deployed AI tools across legal, HR, and finance eighteen months ago. Productivity metrics are excellent. Your Article 4 documentation shows that all AI-facing staff completed a certified AI literacy course.
Now a regulator asks: "Can your staff identify when the AI system produces an incorrect output in their domain?" Your marketing team was trained to use the AI content generator. Were they trained to fact-check its regulatory claims? Your HR team uses AI for contract drafting. Can they spot a non-compliant clause the AI missed?
The competence question this month: does your AI literacy programme teach people to use the tools, or to think independently of them?
What To Do
- Audit your COVID-era associates specifically. Do not assume that years of qualification equal years of competence. Identify which associates completed training primarily remotely and assess their independent research capabilities — without AI tools — against firm benchmarks.
- Implement verification journals or equivalent tracking. Whether formal or informal, create a record of associates independently checking AI output against primary sources. This serves both as training and as Article 4 compliance documentation.
- Review your AI tool deployment by seniority. Map which associates use AI tools most heavily and cross-reference with their independent practice assessments. If your least experienced associates are your heaviest AI users, you have a dependency problem, not an efficiency gain.
- Build "analogue" competence exercises into quarterly schedules. Reserve time for AI-free research and drafting. Assess the results. Use them to identify training gaps before they manifest as supervision failures.
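The seniority cross-reference above can be sketched as a simple screen. The field names and thresholds are illustrative assumptions — any real audit would set them as policy choices against the firm's own benchmarks:

```python
def flag_dependency_risk(associates, usage_threshold=0.75, score_threshold=0.6):
    """Flag associates whose AI usage is high while their independent-research
    scores are low: the 'dependency, not efficiency' profile.

    `associates` is an iterable of dicts with illustrative keys:
      name              -- associate identifier
      ai_usage          -- 0..1, share of research tasks done with AI tools
      independent_score -- 0..1, benchmark score on AI-free assessments
    """
    flagged = []
    for a in associates:
        heavy_user = a["ai_usage"] >= usage_threshold
        weak_independent = a["independent_score"] < score_threshold
        if heavy_user and weak_independent:
            flagged.append(a["name"])
    return flagged
```

Note what the screen does not flag: a heavy AI user with strong independent scores (the 2018-cohort profile of accelerating known work) passes, because the concern is substitution, not usage volume.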
Quick Reads
- BRAK's 2025 Workforce Survey reveals widening competence gaps between pre-pandemic and pandemic-era associates across German firms. The data on supervision requirements deserves close attention from any firm with associates who qualified between 2020 and 2023.
- The Swedish Bar Association's training milestone data is the first longitudinal study tracking remote-trained lawyers through to independent practice. The 40% delay finding has implications beyond Sweden.
- European Commission AI Office Article 4 Guidance emphasises "critical assessment" as a core component of AI literacy — a framing that challenges checkbox training approaches. Read the guidance alongside your current training programme.
- Bastani et al. in PNAS — if you read one academic paper this quarter, make it this one. The 48%/17% finding is the clearest empirical evidence of the competence paradox in an educational setting.
One Question
If you removed AI access from your associates for one week, which ones could still do their jobs to the standard your clients expect — and what does that tell you about what you have built?
TwinLadder Weekly | Issue #25 | February 2026
Helping professionals build AI capability through honest education.
Included Workflow
Technology Profile Assessment
Map your organisation's AI technology landscape. Identify tools, departments, use cases, and governance structures to build a complete technology profile for Article 4 compliance.
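One way to think about the profile this workflow builds: each tool-department pairing is a row recording not just who uses the system, but whether users were trained to verify it. This is only a sketch of such a record under assumed fields — the names (`AIToolRecord`, `gaps`) are hypothetical, not part of any published workflow:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIToolRecord:
    """One row in a technology profile: which tool, who uses it, for what,
    and whether users can check its output rather than just operate it."""
    tool: str
    department: str
    use_case: str
    verified_against: Optional[str]  # primary source users check outputs against, if any
    literacy_training: bool          # completed an AI literacy course
    verification_training: bool      # trained to assess outputs critically

def gaps(records):
    # Tools where staff were taught the tool but not the competence:
    # the gap the Article 4 guidance's 'critical assessment' language targets
    return [r.tool for r in records if r.literacy_training and not r.verification_training]
```

Rows returned by `gaps` are precisely the ones the regulator's question in "The Competence Question" would expose.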
