
AI Strategy

The Competence Framework Gap: Why HR Cannot Outsource AI Training

Generic vendor training creates the illusion of compliance without building real capability. HR must own the competence framework for AI -- because nobody else understands how the organisation actually learns.

March 4, 2026 · Alex Blumentals, Founder & CEO · 9 min read

Vendor training teaches people to click buttons. Competence frameworks teach organisations to think. HR owns the second -- and nobody else can.


I have watched this pattern unfold in every major technology transition I have guided organisations through over the past twenty years. A new capability arrives -- ERP systems in the 2000s, cloud platforms in the 2010s, AI tools now. The organisation procures it. The vendor provides training. People learn the interface. And then, slowly, it becomes clear that knowing the interface and understanding the capability are two entirely different things.

With AI, this gap is wider and more consequential than anything I have seen before. And the function best positioned to close it -- HR -- is being systematically sidelined from the conversation.

The Outsourcing Reflex

When Article 4 of the EU AI Act made AI literacy a legal obligation in February 2025, most organisations reached for the obvious solution: buy training. Vendors were eager to provide it. Microsoft offers AI fluency modules. Google has its AI Essentials certificate. OpenAI has a growing library of enterprise training materials. Legal AI vendors like Harvey and Luminance bundle training with their platforms.

The procurement logic is tidy: we have a compliance requirement, vendors offer a product that addresses it, we buy the product, we are compliant. Neat. And almost entirely wrong.

The problem is not that vendor training is bad. Some of it is genuinely well-designed. The problem is that vendor training answers the wrong question. It teaches people how to use a specific tool. It does not teach organisations how to think about AI competence -- what good looks like, how to measure it, where the risks concentrate, and how capability needs to evolve as systems and regulations change.

That second question -- the competence framework question -- is what Article 4 actually demands when it says organisations must ensure "a sufficient level of AI literacy ... taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in."

Context. Experience. Role. Responsibility. These are not things a vendor can assess for your organisation. They are things HR lives and breathes every day.

What a Competence Framework Actually Is

I should be precise about terms, because they get muddled in practice.

A training programme delivers knowledge and skills. It has a curriculum, a duration, and an assessment. It is a bounded intervention.

A competence framework maps what the organisation needs its people to know and do, at different levels of responsibility, across different functions, evolving over time. It determines which training is needed, for whom, at what depth, and how progress is measured. It is not an event -- it is a structural capability.

When an organisation buys vendor training without a competence framework, it is purchasing answers before defining the questions. The result is what I call checkbox compliance: documented training activity that does not correspond to genuine organisational capability.

I have watched this play out specifically with AI. A European financial services firm I know -- I will not name them, but they are a household name -- rolled out Microsoft Copilot across 4,000 employees in 2024. Microsoft's training programme was thorough. Completion rates were above 85%. The firm's compliance team declared Article 4 obligations met.

Six months later, an internal audit found that fewer than 20% of trained employees were using Copilot in ways that aligned with the firm's risk policies. Many had reverted to previous workflows. Some were using Copilot for tasks it was not approved for. The training had created familiarity with the tool but not judgment about its appropriate use. Nobody had built the framework to define what "appropriate use" meant in each business function.

The Competence Paradox in HR's Own House

Here is where the challenge becomes recursive, and genuinely uncomfortable.

The same AI that HR is being asked to govern is simultaneously automating the entry-level analytical work where HR professionals build their own expertise. Junior HR analysts who once manually reviewed workforce data, identified patterns, and developed recommendations now receive those outputs from AI dashboards. They learn to interpret AI-generated insights without ever having built the foundational understanding of what makes those insights reliable or misleading.

The World Economic Forum's Future of Jobs Report 2025 found that analytical thinking and AI/big data skills are among the fastest-growing requirements for HR roles, but that most organisations report significant gaps in these areas among their current HR workforce. The people being asked to build AI competence frameworks for the organisation may not yet have sufficient AI competence themselves.

This is not a criticism. It is a structural reality. And it is precisely why the competence framework cannot be outsourced -- because addressing it requires understanding the organisation's specific context, not generic AI knowledge.

Why Generic Training Fails the Article 4 Test

Article 4's standard is "sufficient level of AI literacy ... taking into account ... the context the AI systems are to be used in." This phrase -- "taking into account the context" -- is doing enormous regulatory work.

A recruitment officer using Eightfold AI to screen 500 applications for a senior role needs different literacy than a learning coordinator using an AI platform to recommend training modules. A compensation analyst using predictive models for salary benchmarking needs different understanding than an HR business partner using generative AI to draft employee communications.

Generic training flattens these distinctions. Everyone gets the same modules, the same assessments, the same completion certificate. Article 4's contextual standard requires the opposite: a differentiated approach that maps literacy requirements to specific roles, responsibilities, and AI system interactions.

Only HR has the organisational knowledge to build this mapping. Only HR understands which roles interact with which systems, what decisions those interactions inform, and what the consequences of errors look like in each context.

The Five Layers of a Functional AI Competence Framework

Working with organisations across several sectors through transitions like this, I have identified five layers that a functional AI competence framework needs. Each layer is something HR is uniquely positioned to define.

Layer 1: Role-AI Interaction Mapping. Which roles use which AI systems? At what level of autonomy? With what decision authority? This is an extension of job architecture work that HR already does. It requires HR's knowledge of role design, reporting structures, and decision rights.
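To make this concrete, here is a minimal sketch of what a role-AI interaction register could look like as a data structure. Everything in it -- the field names, the roles, the systems, the autonomy levels -- is an illustrative assumption, not a prescribed schema.

```python
# A minimal sketch of a role-AI interaction register.
# All roles, systems, and field names are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    ASSISTIVE = "AI suggests, human decides"
    SUPERVISED = "AI decides, human reviews before action"
    AUTONOMOUS = "AI acts, human audits afterwards"

@dataclass
class RoleAIInteraction:
    role: str                   # from the existing job architecture
    ai_system: str              # the deployed tool or platform
    autonomy: Autonomy          # how much the system does unaided
    decision_authority: str     # who owns the final decision
    consequence_of_error: str   # what failure looks like in context

register = [
    RoleAIInteraction(
        role="Recruiter",
        ai_system="CV screening platform",
        autonomy=Autonomy.SUPERVISED,
        decision_authority="Hiring manager",
        consequence_of_error="Qualified candidates rejected; bias exposure",
    ),
    RoleAIInteraction(
        role="Payroll administrator",
        ai_system="Anomaly detection",
        autonomy=Autonomy.ASSISTIVE,
        decision_authority="Payroll lead",
        consequence_of_error="Pay-run errors missed or wrongly escalated",
    ),
]
```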

Layer 2: Contextual Literacy Standards. What does "sufficient" mean for each role-system combination? For a recruiter using AI screening, it means understanding bias vectors, output interpretation, and override protocols. For a payroll administrator using AI for anomaly detection, it means understanding false positive rates and escalation procedures. These standards cannot be set by a vendor who does not know your organisation.
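Continuing the sketch, the standards layer might be expressed as a lookup from role-system pairs to the competencies each pair requires. The competency names below are illustrative assumptions, not a taxonomy.

```python
# Illustrative contextual standards: what "sufficient" means for each
# role-system combination. All competency names are assumptions.
literacy_standards = {
    ("Recruiter", "CV screening platform"): {
        "bias_vectors", "output_interpretation", "override_protocol",
    },
    ("Payroll administrator", "Anomaly detection"): {
        "false_positive_rates", "escalation_procedure",
    },
}
```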

Layer 3: Assessment and Gap Analysis. How do you know whether someone meets the standard? This is assessment design -- a core HR competency. It requires situation-specific evaluations, not generic multiple-choice quizzes. Can the recruiter explain why the system flagged a candidate? Can the analyst identify when the attrition model's predictions diverge from reality?
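Once the standards exist, a gap analysis falls out as a simple comparison between what the standard requires and what a person has demonstrated in assessment. The self-contained sketch below uses invented names and data purely for illustration.

```python
# Sketch of a gap analysis: required competencies minus those a person
# has demonstrated in a scenario-based assessment. Data is invented.
literacy_standards = {
    ("Recruiter", "CV screening platform"):
        {"bias_vectors", "output_interpretation", "override_protocol"},
}

demonstrated = {
    # competencies evidenced in situation-specific evaluation
    "A. Ozola": {"output_interpretation"},
}

def competence_gaps(person: str, role_system: tuple[str, str]) -> set[str]:
    required = literacy_standards[role_system]
    return required - demonstrated.get(person, set())

gaps = competence_gaps("A. Ozola", ("Recruiter", "CV screening platform"))
print(sorted(gaps))  # ['bias_vectors', 'override_protocol']
```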

Layer 4: Development Pathways. How do people move from current state to target state? This is learning design applied to AI literacy. It might include vendor training for tool-specific skills, but it also includes scenario-based exercises, cross-functional workshops, and on-the-job learning integrated into workflows. Research from the CIPD consistently shows that 70% of professional capability is built through work-integrated learning, not formal courses.

Layer 5: Evolution and Maintenance. AI systems change. Regulations evolve. Organisational context shifts. The competence framework must be a living system, not a one-time exercise. This is the area where outsourced solutions fail most completely -- a vendor has no mechanism to track how your organisation's AI deployment evolves and adjust literacy requirements accordingly.
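Even the maintenance layer can be stated mechanically: every role-system mapping carries a last-reviewed date, and anything unreviewed past a chosen interval gets flagged. The dates and the six-month interval in this sketch are illustrative assumptions, not a recommendation.

```python
# Sketch of framework maintenance: flag role-system mappings whose
# literacy standard is overdue for review. Dates and the six-month
# interval are illustrative assumptions.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=180)

last_reviewed = {
    ("Recruiter", "CV screening platform"): date(2025, 3, 1),
    ("Payroll administrator", "Anomaly detection"): date(2025, 11, 15),
}

def stale_mappings(today: date) -> list[tuple[str, str]]:
    return [pair for pair, reviewed in last_reviewed.items()
            if today - reviewed > REVIEW_INTERVAL]

print(stale_mappings(date(2026, 3, 4)))
# [('Recruiter', 'CV screening platform')]
```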

The Uncomfortable Truth About HR's Readiness

I am going to be direct, because I think this matters.

Many HR functions are not currently equipped to build AI competence frameworks. A Gartner survey from 2024 found that only 26% of HR leaders felt "confident" in their ability to manage AI-related workforce transformation. A Josh Bersin Academy study found that AI strategy capability was the largest skill gap among HR professionals globally.

This is not a reason to outsource. It is a reason to invest. If the function responsible for organisational learning cannot learn, the problem is structural -- and no amount of vendor procurement will solve it.

The path forward starts with HR building its own AI literacy. Not generic awareness, but deep understanding of how AI systems interact with people decisions, where bias and risk concentrate, and what genuine competence looks like in practice. Then -- and only then -- HR can build the frameworks that Article 4 demands and the organisation actually needs.

What I Tell Organisations

When I work with organisations on this, I say three things.

First: compliance is the floor, not the ceiling. Article 4 sets a minimum standard. The organisations that treat it as a checkbox will be compliant and incompetent. The organisations that use it as a catalyst for genuine capability building will outperform.

Second: HR must lead, not follow. If your AI literacy programme is being designed by IT, procurement, or external consultants without deep HR involvement, it will be technically sound and organisationally empty. HR brings the knowledge of how people learn, how roles work, and how capability translates into performance.

Third: start with the framework, not the training. Before you buy a single course, map your AI systems to roles, define what sufficient means for each combination, and assess where you stand. The training decisions follow from the framework. When you reverse the order, you get vendor training that satisfies no one and changes nothing.

The competence gap is real. But the solution is not more training -- it is better architecture. And that architecture lives in HR.


For the regulatory foundation of this argument, see our phrase-by-phrase analysis of Article 4. For the broader competence paradox thesis, read The Competence Paradox: When AI Eliminates the Jobs Where You Learn.