
AI Strategy

AI Literacy Is Not a Compliance Burden. It Is the Only Defence Against Jobless Growth.

The debate about Article 4 has been framed as regulation versus competitiveness. That framing is wrong. The real question is whether the productivity gains from AI will reach workers or only shareholders. AI literacy is the mechanism that determines the answer.

April 3, 2026 · TwinLadder Research Team, Editorial Desk · 8 min read


There is a debate happening in Brussels right now about whether to weaken Article 4 of the EU AI Act -- the provision that requires companies to ensure their staff have sufficient AI literacy. The Digital Omnibus proposes converting this from a binding obligation into a softer encouragement. The argument is competitiveness. European companies, the reasoning goes, should not bear regulatory burden that American and Chinese competitors do not face.

I have been watching organisations navigate technology transitions for two decades. I have seen this argument before. It is always framed the same way: regulation as burden, deregulation as liberation.

The framing is wrong. The real question is not whether AI literacy obligations slow companies down. It is whether the productivity gains from AI will reach workers -- or only shareholders. AI literacy is the mechanism that determines the answer.


The data on what happens without it

EY surveyed 15,000 employees and 1,500 employers across 29 countries in 2025. Their finding: companies are missing out on up to 40% of AI productivity gains because of gaps in talent strategy. Only 12% of workers said they were receiving sufficient AI training to unlock productivity benefits.

Read that again. Not 12% receiving excellent training. 12% receiving sufficient training.

PwC studied global workforce data and found a 56% wage premium for AI-skilled workers. The people who understand AI tools earn dramatically more. The people who do not are already falling behind -- not in ten years, now.

Korn Ferry surveyed 1,674 global talent leaders. 37% plan to replace entry-level roles with AI. At the same time, 73% identify critical thinking as their number-one recruiting priority. Only 11% said their executives were well-prepared to lead through AI transition.

These numbers describe a bifurcation. A split. People with AI skills move up. People without them get automated. The question is what determines which side a worker falls on. Is it their employer's investment in their development? Their access to training? Their luck in choosing the right company?

Or is it something structural -- a legal requirement that every employer must invest in the literacy of the people who use AI on their behalf?


The rehiring signal

Something remarkable is happening that has not received enough attention.

Gartner predicts that half of companies that cut customer service staff due to AI will rehire by 2027. Their survey of 321 customer service leaders found that only 20% had actually reduced staffing. Those that did are now confronting what Gartner describes as "the limits of AI and rising customer expectations."

Forrester reached a similar conclusion: 55% of employers who laid off workers in the name of AI now report regretting the decision. Half of AI-attributed layoffs will be reversed -- but often with workers rehired at lower salaries, or offshore.

Harvard Business Review published research in January 2026 showing that 60% of organisations had reduced headcount in anticipation of AI's future impact -- not in response to current performance. As Davenport and Srinivasan wrote, attributing layoffs to AI "conveys a more positive message to investors."

Oxford Economics called AI-attributed layoffs a "corporate fiction" masking routine business cycle adjustments.

The pattern is unambiguous. Companies are cutting staff in the name of AI before they understand what AI can actually do. Then they are discovering that AI without competent humans does not work. Then they are rehiring -- often at worse terms for the workers.

This is not a technology story. This is a power story. And AI literacy is the lever that determines who holds the power.


The competence paradox

I have written about this before, but it bears repeating in this context.

Consider what happens when you deploy AI across a professional services firm. The junior associates who used to spend their first three years learning to research case law are now using AI to generate first drafts. The procurement analysts who learned supplier evaluation by processing hundreds of RFPs manually are now reviewing AI-summarised shortlists. In each case, the output looks the same -- or better. But the learning that used to happen inside those tasks has disappeared.

This is the competence paradox. AI automates the entry-level work where professionals traditionally built their expertise. It simultaneously automates senior-level tasks that require that expertise to verify. The result is a growing gap between what AI produces and the organisation's ability to evaluate whether the output is correct.

Without AI literacy -- real literacy, not a thirty-minute webinar -- organisations are building an invisible dependency. They rely on AI outputs they cannot verify, overseen by people who do not understand the system's limitations.

This is how you get jobless growth. Not because AI eliminates all jobs. But because AI concentrates the value in a small number of AI-literate workers while deskilling everyone else. The productivity gains accrue to the organisation. The capability losses accrue to the individual.


Singapore understood this

Singapore's prime minister has said there will be no jobless growth from AI. The country is building AI innovation centres across the nation -- investing in people, not just tools.

The EU took a different path. It put the obligation in law. Article 4 says: if you deploy AI, you must ensure your people understand it. Not as an aspiration. As a requirement.

The intent is the same: make sure AI makes people more capable, not more disposable.

The Digital Omnibus proposes weakening this. Not eliminating it -- the high-risk system training requirements remain fully binding. But softening the general obligation from "you must" to "your government should encourage you."


Why this matters beyond compliance

Here is the argument that does not get made in Brussels.

AI literacy is not a regulatory burden. It is an investment multiplier. EY's own data shows that companies with AI-skilled workforces capture up to 40% more productivity from the same AI tools. PwC shows that AI-literate workers command 56% higher wages. These are not costs. They are returns.

When you train an employee to understand AI -- not just to click buttons, but to evaluate outputs, understand limitations, recognise when the system is wrong -- you are not complying with a regulation. You are building the capability that makes your AI investment productive.

The companies that treat Article 4 as a compliance burden will deploy AI, see mediocre results, blame the technology, and eventually discover that the problem was always the humans. The companies that treat it as an investment will deploy AI, see compounding returns, build institutional competence, and wonder why their competitors are struggling.

The regulation just makes the investment non-optional. Remove the regulation and the investment becomes voluntary. In my experience, voluntary workforce investments get cut in the first downturn.


The inequality dimension

There is a harder truth here.

Korn Ferry's data shows that 37% of companies plan to replace entry-level roles with AI. Josh Bersin's research identifies AI strategy capability as the largest skill gap among HR professionals globally. The WEF's Future of Jobs Report flags AI skills as among the fastest-growing requirements -- alongside the largest gaps.

Who gets trained in AI? The employees whose companies invest in them. Which companies invest? The ones that are required to, or the ones that already understand the value.

Without a structural obligation, AI literacy follows existing inequality fault lines. Large companies with training budgets invest. Small companies do not. Senior staff get training. Junior staff get automated. Knowledge workers in wealthy countries adapt. Service workers everywhere absorb the displacement.

Article 4 is a blunt instrument. I will not pretend otherwise. A single article in a 113-article regulation cannot solve structural inequality. But it does one critical thing: it makes every employer responsible for the AI competence of every person who uses AI on their behalf. Not just the senior staff. Not just the technical teams. Everyone who is affected.

That is not a burden. That is a floor.


What I would tell Brussels

If I were in the room where the Omnibus is being debated, I would say this:

The question is not whether Article 4 creates cost. It does. Training costs money. Documentation costs time. Compliance requires effort.

The question is what the alternative costs.

The alternative is a workforce that uses AI tools it does not understand, overseen by managers who cannot evaluate the outputs, in organisations that have outsourced their competence to vendors. The alternative is the Gartner scenario: mass layoffs followed by mass rehiring at lower wages. The alternative is a productivity gap that benefits shareholders while deskilling workers. The alternative is the competence paradox playing out at continental scale.

AI does not add a layer on top of existing work. It rewrites the layer. GDPR required organisations to handle data differently. Article 4 requires organisations to ensure their people can work differently. You can soften the obligation. You cannot soften the reality.

Compliance is the floor. Competence is the mission.

The organisations that will thrive are not the ones that lobbied to lower the floor. They are the ones that recognised the floor for what it is: the starting point of a much larger transformation.


For a structured assessment of where your organisation stands on AI competence, take the Twin Ladder Assessment. For the regulatory analysis of the Digital Omnibus and Article 4, read The Digital Omnibus and Article 4: What Actually Changes.