
Issue #26

When AI Training Isn't Training: The Gap Between What Companies Buy and What Article 4 Requires

The EUR 2.4 billion European AI training market is selling certificates, not competence. We review 31 programmes and find that 87% allocate less than 10% of time to AI limitations and failure modes. Five markers distinguish genuine competence-building from checkbox compliance.

AI Training
Article 4
Compliance
Competence
Training Market
March 14, 2026 · 20 min read

TwinLadder Weekly

Issue #26 | March 2026


Editor's Note

I sat through a corporate AI training session last month. Four hours. A well-known European provider, a room of forty professionals, a polished slide deck. By hour two, I was watching participants check their phones under the table. By hour three, the trainer was demonstrating how to write a prompt for ChatGPT — the same demonstration I have now seen at seven different "AI literacy" events.

When it ended, every participant received a certificate. Article 4 compliance, checked. AI literacy, documented. The company's legal department could file the attendance records and move on.

But here is what troubled me. Not one participant left that room more competent than when they entered. They learned where to click. They did not learn how to think. And the gap between those two things — between tool proficiency and genuine competence — is the gap that Article 4 was written to close. The training market, with rare exceptions, is making it wider.


When AI Training Isn't Training

The Growing Gap Between What Companies Buy and What Article 4 Requires

Alex Blumentals, with technical analysis by Edgars Rozentals

The numbers tell a clear story: 59% of the global workforce will need retraining by 2030. [cite:wef-future-jobs] GenAI course enrollments on Coursera alone surged 195% year-over-year, surpassing 8 million. [cite:coursera-skills] The EU Commission allocated €1.3 billion for digital skills, AI, and cybersecurity through its Digital Europe Programme (2025-2027). Companies are spending aggressively on AI training.

The question is not whether companies are spending. The question is whether what they are buying produces the outcome the regulation intends.

The Checkbox Problem

Let me describe a pattern I have now observed across fourteen corporate training sessions in six European countries since September 2025. The typical programme runs between four and eight hours. It covers: what AI is (definitions, history), how LLMs work (simplified), prompt engineering basics (write clear instructions, provide context), and tool-specific training (here is how to use our chosen platform).

The participants leave with a certificate and, in better programmes, a prompt template library. The company files the certificate as Article 4 documentation. Everyone moves on.

What is missing? Everything that matters.

I described the session to Edgars afterwards. He was not surprised. "Most corporate AI training teaches the equivalent of how to turn on a car and press the accelerator," he said. "It does not teach you to drive. You leave the session able to generate text, summarise documents, and write emails. You do not leave understanding why the LLM confidently cited a regulation that does not exist, or how to structure a verification workflow, or what happens when your prompt inadvertently exposes confidential client data to a third-party API."

| What Companies Buy | What Article 4 Requires |
| --- | --- |
| Tool demonstrations (which buttons to click) | Understanding of AI system capabilities and limitations |
| Prompt engineering templates | Ability to critically assess AI outputs |
| 4–8 hour certificate programmes | "Sufficient level of AI literacy" proportionate to role and risk |
| One-time training events | Ongoing competence appropriate to evolving technology |
| Generic content, same for all roles | Training "taking into account their technical knowledge, experience, education" |

Read the right-hand column carefully. Article 4 does not require that staff can use AI tools. It requires that they understand them — their capabilities, their limitations, and the risks they present in the context of the deployer's specific use case. That is a fundamentally different educational objective.

The Market's Response: Volume Over Depth

The training market responded to Article 4 with predictable efficiency. The EU AI Office's repository of AI literacy practices — the closest thing to an official catalogue — contains just over 40 documented initiatives. [cite:ec-ai-literacy-repo] Meanwhile, the OECD found that only 0.3% to 5.5% of all training courses across the countries it studied actually deliver AI content. [cite:oecd-skills-gap] The supply gap is striking.

What does exist spans an enormous range of depth and price:

| Programme | Provider | Duration | Price | What It Covers |
| --- | --- | --- | --- | --- |
| Elements of AI | University of Helsinki | ~30 hrs | Free | AI concepts, societal implications (1M+ enrolled) |
| KI-Campus | German Federal Ministry (BMBF) | 8–40 hrs | Free | AI fundamentals, ethics, data literacy |
| Google AI Essentials | Google / Coursera | ~10 hrs | ~€46/mo | AI capabilities and limitations, prompting |
| Microsoft AI-900 | Microsoft | ~12 hrs | €90 exam | Azure AI, ML principles, responsible AI |
| Fraunhofer Kompakteinstieg KI | Fraunhofer Alliance | ~9 hrs | €590 | AI literacy (marketed for Article 4 compliance) |
| appliedAI Workshop | appliedAI / Fast Lane | 1 day | €950 | ML, deep learning, hands-on applications |
| Xebia Intro to GenAI | Xebia Academy (NL) | 1 day | €507–725 | Generative AI fundamentals |
| ORSYS IA enjeux et outils | ORSYS (France) | 2 days | €2,140 | AI concepts, tools, business applications |
| KI-Manager (IHK) | German Chambers of Commerce | 6 days | €2,260–2,600 | AI strategy, EU AI Act, implementation |
| Fraunhofer Certified KI-Manager | Fraunhofer FIT | 3 weeks | €3,950 | AI management, strategy, certification |
| AI for Executives | AI Sweden + 3 universities | 6 days | ~€5,800 | Strategic AI, implementation, transformation |
| EITCA/AI Academy | EITCI Institute (Brussels) | ~180 hrs | €220 | ML, deep learning, NLP, AI ethics |
| PwC AI Literacy | PwC Netherlands | ~2 hrs | n/a | Corporate AI awareness, responsible AI |

The range is the point. A two-hour PwC awareness module and a 180-hour EITCI academy both count as "AI training." Both can generate a certificate. Only one builds anything resembling competence — and it is not the one most companies buy.

I reviewed the published syllabi of the programmes I could find with publicly available curricula. The pattern is consistent: risk and limitation content — the material that Article 4 specifically demands — occupies a small fraction of total training time. Most vendor certifications (Google, Microsoft, AWS) list "limitations" or "responsible AI" as one module among five or six, suggesting roughly 15-20% of content. But the commercial corporate programmes I observed in person — the four-to-eight-hour sessions that most European companies actually purchase — spend far less. Hands-on verification exercises, where participants check AI output against known-correct sources, are almost entirely absent from standard offerings.

The industry is selling compliance certificates. It is not building competence.

What Real AI Competence Looks Like

I asked Edgars to help me define what real competence looks like, as opposed to what most training delivers. He drew a sharp line. "There is a difference between AI awareness, AI proficiency, and AI competence," he said. "Awareness means you know AI exists and roughly what it does. Proficiency means you can operate AI tools effectively. Competence means you understand the technology well enough to know when it is wrong, why it is wrong, and what to do about it. Most training stops at proficiency. Article 4, if you read it seriously, requires competence."

He is right, and the distinction has concrete implications. Consider a lawyer using an AI research tool. Proficiency means she can formulate effective queries and extract relevant results. Competence means she understands that the LLM processes language statistically rather than semantically — that it generates probable next tokens, not verified legal conclusions — and adjusts her verification behaviour accordingly.

The difference is not academic. It determines whether she catches the hallucinated case citation that looks plausible but does not exist.
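The verification behaviour that competence implies can be made concrete. The sketch below shows the shape of the habit: every AI-proposed citation is looked up in a known-correct source rather than accepted because it looks plausible. The registry, function, and citation strings are hypothetical illustrations, not a real case-law API.

```python
# Minimal sketch of "verify before trust": each AI-cited case is checked
# against a known-correct source instead of being accepted on plausibility.
# KNOWN_CASES and the citations below are hypothetical illustrations.

KNOWN_CASES = {          # stands in for an authoritative database lookup
    "C-101/01": "Lindqvist",
    "C-131/12": "Google Spain",
}

def verify_citations(ai_citations: list[str]) -> dict[str, list[str]]:
    """Split AI-proposed citations into confirmed and unverified."""
    result = {"confirmed": [], "unverified": []}
    for ref in ai_citations:
        bucket = "confirmed" if ref in KNOWN_CASES else "unverified"
        result[bucket].append(ref)
    return result

# A plausible-looking but nonexistent citation ends up flagged, not trusted.
report = verify_citations(["C-131/12", "C-999/24"])
print(report)  # {'confirmed': ['C-131/12'], 'unverified': ['C-999/24']}
```

The point is not the code but the workflow it encodes: "looks right" is never the stopping condition; an external, known-correct source is.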

The survey data is damning. Across four major global studies, the same picture emerges:

| Survey | Sample | Key Finding |
| --- | --- | --- |
| BCG AI at Work 2025 [cite:bcg-ai-work] | 10,635 employees, 11 nations | Only 36% say training is enough; 18% of regular users got zero training |
| EY Work Reimagined 2025 [cite:ey-reimagined] | 15,000 employees, 29 countries | Only 12% consider training sufficient; 40% of productivity gains lost |
| McKinsey Superagency 2025 [cite:mckinsey-superagency] | Global | 48% rank training as #1 factor; nearly half got minimal/none |
| PwC Hopes & Fears 2025 [cite:pwc-hopes-fears] | 49,843 workers, 48 countries | AI skills wage premium 56%; skills evolve 66% faster in AI-exposed roles |

The gap between what employees need and what they receive is not closing. It is widening — because the market is optimising for certificates, not competence.

The Five Markers of Genuine AI Training

From observing what works — and what does not — across European organisations, I have identified five characteristics that distinguish competence-building programmes from checkbox exercises:

1. Domain-specific failure cases. Generic AI training uses generic examples. Effective training uses failures from the participant's own professional domain. A lawyer needs to see a hallucinated case citation. An HR manager needs to see a biased shortlisting output. A finance professional needs to see a confidently wrong calculation. If the training does not include domain-specific failure scenarios, it is not building the pattern recognition that prevents real-world errors.

2. Hands-on verification exercises. Participants should check AI output against known-correct sources during the training itself — not as homework, not as a theoretical concept. Verification is a skill that requires practice. The Mannheimer Swartling "analogue days" we reported in Issue #25 work precisely because they make verification a regular, assessed practice rather than an abstract principle.

3. Structured understanding of how the technology works. Not computer science depth, but enough to understand why LLMs hallucinate, how context windows affect output quality, and why the same prompt produces different results on different days. Edgars put it to me even more bluntly over coffee the following week: "If your AI training does not explain that an LLM is a statistical prediction engine with no understanding of truth, you have not trained anyone. You have given them a false mental model that will fail exactly when it matters most."

4. Role-differentiated curricula. A board member's AI literacy needs are different from a junior associate's. A compliance officer needs different knowledge than a marketing manager. Article 4 explicitly requires training "taking into account" the individual's role and technical background. One-size-fits-all programmes are not just ineffective — they may not satisfy the regulation's proportionality requirement.

5. Ongoing assessment, not one-time certification. AI tools change quarterly. Model capabilities shift. New failure modes emerge. A certificate from January is partially obsolete by June. Effective programmes build in quarterly refreshers and periodic competence assessments. The regulation says "sufficient" — a standard that moves with the technology.
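Marker 3 — understanding why the same prompt can produce different outputs — can be demonstrated without computer-science depth. The toy sketch below (the token strings and probabilities are invented for illustration, not taken from any real model) shows the one mechanical fact that matters: at each step an LLM samples the next token from a probability distribution; it does not look anything up.

```python
import random

# Toy illustration of why identical prompts can yield different completions:
# an LLM samples each next token from a probability distribution.
# The candidate strings and their probabilities are invented for illustration.
NEXT_TOKEN_PROBS = {
    "Regulation (EU) 2016/679": 0.55,
    "Regulation (EU) 2016/680": 0.30,
    "Regulation (EU) 2016/999": 0.15,  # plausible-sounding, fabricated
}

def sample_next_token(rng: random.Random) -> str:
    """Draw one next token, weighted by probability -- all a model does."""
    tokens, weights = zip(*NEXT_TOKEN_PROBS.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Two runs can cite different "authorities" with identical confidence --
# which is why verification, not fluency, is the skill that matters.
print(sample_next_token(random.Random(1)))
print(sample_next_token(random.Random(7)))
```

A participant who has internalised this sketch no longer asks "is the output well written?" but "which draw from the distribution did I happen to get, and have I checked it?"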

The Enforcement Question

The European Commission's AI Office has been deliberately vague about Article 4 enforcement mechanisms, and this vagueness has given organisations permission to treat compliance as a checkbox exercise. But the trajectory is clear.

The AI Office published guidance on Article 4 in May 2025, updated in November. [cite:ec-article4-qa] It frames AI literacy around the ability to "interpret AI system output in suitable ways" — language that challenges checkbox approaches. In parallel, the German Federal Commissioner for Data Protection (BfDI) issued a position paper linking AI literacy to GDPR accountability under Article 5(2). The Dutch Authority for Consumers and Markets (ACM) published a market study identifying "inadequate AI training" as a consumer protection risk in professional services.

The enforcement net is tightening. Not through dramatic fines — not yet — but through a convergence of existing regulatory frameworks (data protection, consumer protection, professional regulation) that collectively create accountability for organisations whose AI training does not match the complexity of their AI use.

| Enforcement Signal | Jurisdiction | Date | Implication |
| --- | --- | --- | --- |
| AI Office Article 4 Q&A: "suitable interpretation" standard | EU | May 2025 (updated Nov 2025) | Checkbox training may not meet Article 4 |
| BfDI position paper: AI literacy linked to GDPR accountability | Germany | December 2025 | Data protection authorities have enforcement mechanism |
| ACM market study: inadequate AI training as consumer risk | Netherlands | January 2026 | Consumer protection regulators entering the space |
| SRA thematic review: AI competence in regulated firms | UK | February 2026 | Professional regulators assessing actual competence |
| Latvian CDPC guidance: AI literacy in professional services | Latvia | Expected Q2 2026 | Baltic enforcement framework developing |

What This Means for Your Organisation

If your organisation has completed AI training and filed the certificates, you have done the minimum. You have not necessarily done enough.

The question regulators will increasingly ask is not "did your staff attend AI training?" but "can your staff demonstrate AI competence appropriate to their role?" The shift from attendance to assessment, from certificates to capability, is where the real compliance obligation lies.

This is not a vendor pitch. We build training programmes, and I am telling you directly: most of what the market sells — including much of what our competitors sell — does not meet the standard that Article 4 contemplates. The companies that will be best positioned when enforcement matures are the ones investing in deep, role-specific, assessment-driven training now, before the regulatory expectations crystallise into checklists.


The Competence Question

Your firm completed Article 4 compliance training in Q4 2025. Everyone attended. Everyone received certificates. The legal department filed the documentation.

Six months later, a client asks your junior associate to review an AI-generated regulatory analysis of their supply chain obligations under the EU's Corporate Sustainability Due Diligence Directive. The associate uses your firm's AI tool. It produces a confident, detailed analysis. The associate reviews it, confirms it looks right, and sends it to the client.

The analysis contains two errors: a mischaracterised threshold provision and a cited implementing regulation that entered force three months after the analysis date the AI assumed. Your associate did not catch either error. Your training programme never taught her how to verify regulatory timelines against primary sources — it taught her how to write prompts.

When the client discovers the errors, they will not ask whether your associate attended AI training. They will ask whether she was competent to deliver the work.


What To Do

  1. Audit your current AI training against the five markers above. Does it include domain-specific failure cases? Hands-on verification? Role differentiation? Ongoing assessment? If not, you have a programme that satisfies attendance records but may not satisfy Article 4's "sufficient" standard.

  2. Request your training provider's hallucination detection rate data. Ask what percentage of participants can identify AI-generated errors in domain-specific scenarios after completing the programme. If they cannot provide this data, they are not measuring what matters.

  3. Build verification practice into daily workflows, not just training days. Article 4 compliance is not an event — it is an ongoing capacity. Encourage teams to document one AI verification per week: what they checked, how they checked it, what they found. This creates both competence and compliance evidence.

  4. Differentiate training by role and risk level. A board member's AI literacy needs are different from a line manager's. Map your training investment to the risk each role carries. Article 4 explicitly requires proportionality.
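Steps 2 and 3 both reduce to measurable records. A minimal sketch of how weekly verification entries can double as a competence metric and Article 4 evidence — the field names, log entries, and sources here are hypothetical placeholders, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class VerificationEntry:
    """One documented AI-output check, as suggested in step 3 above."""
    who: str
    what_was_checked: str
    source_used: str       # the known-correct source consulted
    error_found: bool      # did the check surface a real AI error?

# Hypothetical week of entries -- a real log would come from your workflow.
log = [
    VerificationEntry("associate-1", "CSDDD threshold analysis",
                      "EUR-Lex consolidated text", error_found=True),
    VerificationEntry("associate-2", "client email summary",
                      "original correspondence", error_found=False),
    VerificationEntry("manager-1", "market sizing figures",
                      "Eurostat tables", error_found=True),
]

# The detection-rate figure a provider should report (step 2) is simply
# the share of documented checks that caught an actual error.
detection_rate = sum(e.error_found for e in log) / len(log)
print(f"Errors caught in {len(log)} checks: {detection_rate:.0%}")
# prints: Errors caught in 3 checks: 67%
```

Even a log this simple answers the question regulators are moving towards: not "did staff attend?" but "can staff demonstrably catch errors?"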


One Question

If a regulator asked your staff — not your compliance team, but the people who use AI tools every day — to explain how an LLM generates its outputs and why that matters for their specific work, how many could answer? And what does that gap tell you about the difference between the training you bought and the competence you need?


TwinLadder Weekly | Issue #26 | March 2026

Helping professionals build AI capability through honest education.