
The Literacy Paradox: How Article 4 May Prevent the Competence It Demands

March 9, 2026

Article 4 of the EU AI Act contains a structural contradiction: it demands AI literacy while the regulation's own letter may prohibit the most effective method of achieving it — AI-assisted learning itself. A critical analysis of the infinite regress, the competence degradation paradox, and the HR blind spot at the heart of Europe's literacy obligation.


The Regulation That May Defeat Itself

I have spent the past year inside the text of Article 4. Line by line, phrase by phrase, mapping every operative element to the practical reality of organisations trying to comply. The Twin Ladder framework exists because of that work. And in the course of doing it, I found something that the regulation's drafters almost certainly did not intend.

Article 4 of the EU AI Act contains a structural contradiction. It demands AI literacy while its own letter may prohibit the most effective method of achieving it.

This is not a pedantic legal technicality. It is a design flaw that, if left unaddressed, will produce exactly the hollow compliance theatre that turned GDPR into a cookie banner exercise. The organisations most committed to building genuine AI competence may find themselves in a worse regulatory position than those that settle for a PowerPoint and a checkbox.

Let me be precise about why.


1. What Article 4 Actually Says

The full text of Article 4 is a single sentence:

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.

Seven operative elements. Mandatory from 2 February 2025. Penalties for non-compliance will be determined by Member State national law, with the AI Act's general penalty framework providing for fines of up to EUR 15 million or 3% of worldwide annual turnover for operator obligation violations. No size exemption, no sector exemption, no de minimis threshold.

The European Commission's Q&A from May 2025 clarified that simply relying on AI systems' instructions for use or asking staff to read them "might be ineffective and insufficient." One-size-fits-all training fails the requirement to account for "technical knowledge, experience, education and training." The obligation extends to contractors, temporary workers, and outsourced service providers.

This is serious legislation. The intent behind it -- Recital 20 speaks of equipping people "with the necessary notions to make informed decisions regarding AI systems" -- is exactly right. People who use AI must understand what they are using. No argument from me on the goal.

The problem is what happens when you try to achieve that goal at scale.


2. The Intent Versus the Letter

Article 4 was written with a particular mental model: a technology company builds an AI system, deploys it, and the people who interact with it need to understand what it does. Provider educates deployer, deployer educates staff. A reasonable chain of obligation.

But the market has moved far beyond that model. AI is no longer a discrete system you deploy. It is embedded in the fabric of organisational workflows. Your email client uses it. Your CRM uses it. Your document management system uses it. Your HR platform uses it for screening, scheduling, scoring, and performance tracking. Your legal research tools use it. Your finance team's forecasting tools use it.

The Commission acknowledged this breadth -- the Q&A confirms the obligation applies to all AI systems, not just high-risk ones. But the regulation's architecture still assumes a deployment model where there is a clear boundary between "the AI system" and "the people who use it."

That boundary has dissolved.

When a law firm deploys an AI-powered research tool, it is deploying an AI system on its lawyers. Article 4 requires the firm to ensure those lawyers have sufficient AI literacy. Fair enough. But how does the firm build that literacy? The most effective method -- the method that accounts for individual technical knowledge, experience, and training context, as Article 4 itself demands -- is an AI-powered adaptive learning system. A system that assesses each person's baseline, delivers personalised training, tests understanding through scenario-based exercises, and evolves as both the learner and the technology develop.

Here is the problem: that learning system is itself an AI system deployed on staff. Which means it triggers its own Article 4 obligations. The staff must now be trained on the training AI. And the system used to train them on the training AI? If it is AI-powered -- and increasingly, what isn't? -- it triggers yet another layer of obligation.

This is not a hypothetical. It is a logical structure embedded in the regulation's text. Article 4 applies to all AI systems. It does not exempt AI systems used for compliance purposes. It does not create a carve-out for AI used in training. The obligation is universal.

The result is an infinite regress. Each layer of AI-assisted compliance generates a new compliance obligation. There is no termination condition in the regulation.
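The structure is easy to formalise. The sketch below is a toy model of the regress, not a legal opinion; it merely encodes the two premises -- every AI system deployed on staff triggers the obligation, and the most effective literacy measure is itself an AI system:

```python
# Toy model of the Article 4 regress. An illustration of the argument,
# not a claim about how any authority would apply the text.

def article_4_obligations(system: str, depth: int = 0) -> None:
    """Every AI system deployed on staff triggers a literacy obligation."""
    print("  " * depth + f"Obligation: ensure literacy for '{system}'")
    # Article 4 exempts no category of AI system, so the training
    # system deployed to satisfy the obligation re-enters the rule.
    training_system = f"AI tutor for '{system}'"
    article_4_obligations(training_system, depth + 1)

# article_4_obligations("AI research tool")
# -> RecursionError. The regress stops only when Python's stack gives
#    out, because the regulation supplies no termination condition.
```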


3. The Competence Degradation Paradox

The infinite regress would be merely annoying if the alternative -- non-AI training -- were adequate. It is not.

Research on AI-assisted work consistently reveals a dual nature. When professionals use AI tools as automation -- auto-complete, auto-decide, auto-generate -- their independent capability degrades over time. The junior lawyer who uses AI to draft research memos from day one never develops the ability to identify relevant case law independently. The procurement analyst who reviews AI-summarised shortlists never learns to read the underlying RFPs. The HR specialist who relies on AI screening never develops the judgment to spot a strong candidate in an unconventional CV.

I have watched this pattern across industries for twenty years, long before the current AI wave. At EDS in 2004, we documented how codification into standard categories -- mainframe, midrange, desktop -- systematically drove out the innovative thinkers who worked across boundaries. At Philip Morris, $5 billion a year disappeared into maintaining rigid systems while transformation capacity dropped to zero. The pattern is structural, not technological: when you optimise for efficiency by codifying knowledge, you lose the peripheral learning that happens in the gaps.

AI accelerates this pattern by an order of magnitude.

But here is what the regulation misses entirely: the same AI tools, deployed differently, can enhance rather than degrade competence. An AI system that explains its reasoning, presents alternatives, tests the user's understanding, and adapts to the user's skill level is not automation. It is education. The distinction is not in the technology. It is in the deployment mode.

Consider two uses of the same large language model in a law firm:

Mode A -- Automation: The lawyer types "draft a motion to dismiss" and submits the AI's output with minimal review. Over time, the lawyer's drafting skill atrophies. Competence degrades.

Mode B -- Learning: The AI presents three possible approaches to the motion, asks the lawyer to evaluate which is strongest for this jurisdiction, explains the reasoning behind each option, and flags areas where the lawyer's analysis differs from established precedent. The lawyer's judgment sharpens with every interaction. Competence builds.

Same technology. Same model. Same deployment context. Opposite effects on human competence.
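The divergence can be caricatured in a few lines. The rates below are invented; only the direction of the effect is the point:

```python
# Deliberately crude model of the two deployment modes. The numbers
# are arbitrary assumptions; what matters is the sign of the effect.

def competence_after(mode: str, interactions: int, skill: float = 50.0) -> float:
    """Toy trajectory on a 0-100 scale: automation erodes skill, learning builds it."""
    for _ in range(interactions):
        if mode == "automation":  # accept AI output with minimal review
            skill -= 0.5
        elif mode == "learning":  # evaluate alternatives, receive feedback
            skill += 0.5
    return max(0.0, min(100.0, skill))

print(competence_after("automation", 100))  # 0.0   -- atrophy
print(competence_after("learning", 100))    # 100.0 -- growth
```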

Article 4 makes no distinction between these two modes. None. The regulation speaks of "AI systems" as a category. It does not distinguish between AI-as-automation and AI-as-learning-instrument. An organisation deploying Mode B -- the mode that actually achieves Article 4's stated goal of building literacy -- faces the same compliance burden as one deploying Mode A.

This is the competence degradation paradox in regulatory form. An organisation that bans AI to "protect competence" fails the literacy requirement. An organisation that deploys AI for learning may violate the letter while fulfilling the spirit.


4. The Self-Learning System Problem

For Article 4 compliance to be meaningful rather than performative, it must evolve as the technology evolves. This is not optional. It is a structural requirement.

A training programme designed in March 2025 teaches staff about GPT-4-class capabilities. By March 2026, the technology has moved to multimodal agents that can browse the web, execute code, analyse documents, and take actions autonomously. The 2025 programme is not merely outdated -- it is actively misleading. Staff trained on 2025 capabilities will misjudge the risks and limitations of 2026 systems. The "sufficient level of AI literacy" that Article 4 demands is a moving target, and it moves faster than any curriculum committee can follow.

The only way to maintain genuinely sufficient literacy is through continuous, adaptive learning. Not annual refresher courses. Not quarterly webinars. Continuous adaptation that tracks what the technology can do, what the organisation is deploying, and what each individual needs to know.

The most effective system for delivering continuous, adaptive, personalised learning is -- inevitably -- an AI system.

And here the paradox closes its loop. The compliance system must be AI-powered to be effective. But deploying an AI-powered compliance system triggers the compliance obligation for that system. And because the compliance system itself must evolve continuously, there is no stable point at which the obligation is "satisfied." The organisation is perpetually deploying new AI capabilities on its staff, perpetually generating new Article 4 obligations.

The regulation contains no provision for this. There is no concept of a "self-evolving compliance system" anywhere in the AI Act. There is no mechanism for recognising that the compliance tool and the compliance obligation are the same thing. The drafters assumed a static world -- deploy a system, train the staff, document the effort, move on. The technology has made that assumption untenable.


5. The Bureaucratic Trap

I am not naive about how regulation works in practice. Enforcement bodies go by the text, not the spirit. Market surveillance authorities in each Member State will assess compliance based on what Article 4 says, not what it was meant to achieve.

Consider what happens when an enforcement authority examines an organisation that uses AI to teach AI literacy. The authority will ask:

Awareness: Do staff know that the training system is AI-powered? Are they aware of its capabilities and limitations? Can they articulate how it generates learning content and assessments?

Policy: Is there a documented policy governing the use of the AI training system? Does it define acceptable and prohibited uses of the training AI? Is it enforced?

Training: Have staff been trained on how the AI training system works? Do they understand its data processing? Can they identify when the training system's output might be unreliable?

Evidence: Is the AI training system's effectiveness documented? Are there records showing that it actually improves literacy? Is there a proportionality assessment justifying the use of AI for training?

Each of these questions is legitimate under Article 4's text. Each creates compliance overhead. And each applies on top of the compliance work for the actual AI systems the organisation is trying to train its staff about.

The cumulative effect is a bureaucratic trap. The organisation that takes the most effective approach to building genuine AI literacy -- an adaptive, AI-powered learning system -- bears the heaviest compliance burden. The organisation that settles for a static PowerPoint presentation and a sign-off sheet faces minimal scrutiny, because a PowerPoint is not an AI system and triggers no Article 4 obligation of its own.

The regulation inadvertently creates a perverse incentive: the less effective your training method, the lighter your compliance obligation.


6. The HR Blind Spot

The paradox becomes acute in human resources, where AI deployment is already deep and largely unexamined under Article 4.

AI in hiring: screening CVs, parsing applications, scheduling interviews, conducting video assessments, scoring candidates on communication and competence proxies.

AI in training: personalised learning paths, competence assessment, skill gap analysis, performance prediction.

AI in performance management: productivity tracking, sentiment analysis, feedback generation, promotion recommendation.

AI in workforce planning: attrition prediction, talent mapping, succession planning.

These are not future scenarios. They are deployed reality in thousands of European organisations today. Every major HR platform -- Workday, SAP SuccessFactors, Oracle HCM, BambooHR -- has embedded AI capabilities that are active by default. Many HR professionals using these systems do not know which features are AI-powered and which are not.

Article 4 requires AI literacy for "persons or groups of persons on whom the AI systems are to be used." In the HR context, this includes job candidates, trainees, and employees being evaluated. A job applicant whose CV is screened by an AI system is a person on whom the AI system is being used. Article 4 arguably requires the deployer to consider that applicant's understanding of the AI system.

But there is no practical mechanism for this. You cannot require job applicants to complete an AI literacy course before applying. You cannot train every employee on every AI-powered HR tool that processes their data, especially when those tools change quarterly as vendors update their platforms.

And here is the deeper problem: the same HR department that must ensure AI literacy for its workforce is itself one of the heaviest users of AI, often without the internal literacy to understand its own tools. I have seen HR teams deploy AI-powered engagement surveys without understanding that the sentiment analysis is AI-driven. I have seen recruitment teams use AI screening tools configured by the vendor with bias parameters they cannot explain.

The regulation treats a customer service chatbot and a hiring algorithm with the same Article 4 obligation. There is no graduated requirement based on the stakes involved. A chatbot that recommends products and an algorithm that determines whether someone gets a job interview are subject to the same single-sentence obligation.

This is the HR blind spot: Article 4 creates an obligation without recognising that employment is one of the highest-stakes deployment contexts for AI, and without providing any specific mechanism for the unique challenges of AI in the employment relationship.


7. The Classification Trap: When Compliance Tools Become High-Risk Systems

The paradox deepens when you examine how the AI Act's own classification system treats AI in education and training.

Annex III, Area 3 designates four education-related AI use cases as high-risk: determining admission or access to education, evaluating learning outcomes, assessing the appropriate education level for an individual, and monitoring behaviour during tests. Annex III, Area 4 separately classifies employment AI as high-risk: recruitment, performance evaluation, task allocation, promotion, and termination decisions.

And here is where it becomes absurd. Article 5(1)(f) prohibits emotion recognition in education and workplace contexts, with a narrow carve-out only for medical or safety reasons. An AI training system that detects whether a learner is confused or disengaged -- arguably the most useful feature for adaptive learning -- is banned outright.

Now consider what happens when an organisation tries to comply with Article 4 using AI:

Scenario 1: AI competence assessment. An organisation deploys a tool that evaluates staff AI literacy levels and recommends training. This tool evaluates learning outcomes (Annex III, Area 3(b)) and may assess the appropriate education level for an individual (Area 3(c)). It is high-risk. It must undergo conformity assessment, implement human oversight, maintain logging, conduct a fundamental rights impact assessment, and inform affected individuals. The compliance tool for Article 4 triggers Chapter III obligations that are heavier than Article 4 itself.

Scenario 2: AI-powered training platform. An organisation deploys an adaptive AI tutor to build staff competence. If the tutor evaluates whether the learner has achieved competence -- which is the entire point -- it falls under Area 3(b). High-risk. If the tutor detects frustration and adjusts its approach using biometric signals, it is not merely high-risk but prohibited under Article 5(1)(f).

Scenario 3: HR using AI for workforce readiness. An HR department uses AI to assess which employees need AI literacy training and what level they should receive. This simultaneously triggers Area 3(c) (assessing appropriate education level) and Area 4(b) (performance evaluation). The HR team is now deploying a high-risk AI system that itself requires Article 4 compliance for the people operating it.

Article 6(3) offers four derogations -- narrow procedural tasks, improving prior human work, detecting patterns without replacing human assessment, and preparatory tasks. But there is a critical override: any system that performs profiling of natural persons is always high-risk, regardless of derogations. A competence assessment that scores individuals is profiling. A training recommendation engine that categorises learners by capability level is profiling. The derogation exits are locked.
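The decision logic, as I read Article 6(3), can be written out explicitly. The sketch below is a reading aid, not a compliance tool; the field names are mine:

```python
# Article 6(3) decision logic as described above. Field names are
# illustrative labels for the derogation conditions, not legal terms of art.

from dataclasses import dataclass

@dataclass
class Deployment:
    annex_iii_use_case: bool          # e.g. Area 3(b): evaluating learning outcomes
    narrow_procedural: bool = False
    improves_prior_human_work: bool = False
    detects_patterns_only: bool = False
    preparatory_task: bool = False
    profiles_natural_persons: bool = False

def is_high_risk(d: Deployment) -> bool:
    if not d.annex_iii_use_case:
        return False
    # The override: profiling of natural persons is always high-risk,
    # regardless of any derogation condition.
    if d.profiles_natural_persons:
        return True
    return not (d.narrow_procedural or d.improves_prior_human_work
                or d.detects_patterns_only or d.preparatory_task)

# A competence assessment that scores individuals profiles them:
tool = Deployment(annex_iii_use_case=True, preparatory_task=True,
                  profiles_natural_persons=True)
print(is_high_risk(tool))  # True -- the derogation exit is locked
```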

The regulation has, in effect, classified the most effective Article 4 compliance tools as high-risk systems requiring their own compliance infrastructure. This is not a theoretical concern. It is the architecture of the law.

For HR departments, the implications are especially severe. The major platforms named earlier embed AI across recruitment, performance management, learning, and workforce planning, and each of these capabilities may independently qualify as high-risk under Area 3 or Area 4. The Chapter III obligations for deployers (Article 26) include: using the system according to provider instructions, assigning competent human oversight with authority to intervene, ensuring input data quality, monitoring and reporting suspected risks, retaining system logs for at least six months, informing workers and their representatives before deployment, informing affected individuals when the system assists in decisions about them, conducting a data protection impact assessment, and cooperating with regulatory authorities.

An HR department deploying four AI-enabled workflows -- screening, performance analytics, learning recommendations, and workforce planning -- potentially faces four separate sets of Chapter III obligations, four conformity documentation requests to vendors, four human oversight assignments, four sets of logs, and four notification obligations. Each of these workflows also requires Article 4 literacy for the HR professionals operating them. And the AI system used to build that literacy may itself be a fifth high-risk deployment.
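The arithmetic is worth making explicit. A back-of-envelope count, with obligation names paraphrasing Article 26:

```python
# Back-of-envelope count of the compliance multiplication described
# above. Obligation names paraphrase Article 26; the list is indicative.

deployer_obligations = [
    "use per provider instructions", "assign competent human oversight",
    "ensure input data quality", "monitor and report suspected risks",
    "retain logs for at least six months", "inform workers beforehand",
    "inform affected individuals", "conduct impact assessment",
    "cooperate with authorities",
]

hr_workflows = ["screening", "performance analytics",
                "learning recommendations", "workforce planning"]

# Each workflow is independently high-risk, so obligations do not merge.
total = len(hr_workflows) * len(deployer_obligations)
print(total)  # 36 obligation instances -- before counting the AI
              # literacy system itself as a possible fifth deployment
```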

This is the classification trap: the regulation's own risk taxonomy turns compliance acceleration into compliance multiplication.


8. What Is Missing

The structural problems in Article 4 stem from five specific absences:

No mechanism for regulatory learning. The AI Act is a static text. It cannot update itself as capabilities evolve. The Commission can issue guidance and Q&As, but these are soft law. The regulation's compliance architecture assumes stable technology, which is the one thing AI is not.

No distinction between AI-as-automation and AI-as-learning-tool. The same model powering a customer service bot and a professional development platform triggers identical obligations. This fails to recognise that the risk profile, the competence impact, and the appropriate governance model are fundamentally different between these deployment modes.

No safe harbour for AI used in compliance and training contexts. Organisations using AI to achieve Article 4 compliance should not face the same burden as organisations deploying AI in high-stakes decision-making. The absence of any exemption or lighter obligation for compliance-oriented AI deployment creates the infinite regress described above.

No graduated obligation based on deployment stakes. Article 4 is a single obligation for all AI systems. A text autocomplete in an email client and an algorithm determining credit eligibility carry the same literacy requirement. The proportionality principle ("to their best extent") provides some flexibility in intensity, but not in structure. The same enforcement questions apply. The same evidence is expected.

No provision for the paradox that the best compliance tool is an AI system. The regulation assumes compliance tools are separate from the systems being regulated. When the compliance tool is the regulated technology, the regulatory architecture produces a loop with no exit condition.


9. A Path Forward

I want to be precise about what I am not saying. I am not saying Article 4 is wrong. The spirit -- that people who use and are affected by AI systems must understand them -- is the single most important provision in the entire AI Act. It is the foundation on which meaningful oversight, human agency, and democratic accountability rest. Without literacy, every other safeguard in the regulation is theatre.

What I am saying is that the letter is too blunt.

Three structural changes would resolve the paradox while preserving -- and strengthening -- the intent:

First: Create a "learning deployment" category. AI systems deployed primarily for competence development, training, and compliance should carry lighter obligations than AI systems deployed for decision-making or automation. The distinction is not about the technology. It is about the deployment purpose. A learning deployment enhances human capability. An automation deployment may diminish it. The regulation should recognise this difference.

This does not mean exemption. A learning AI system should still require transparency (staff must know it is AI), basic governance (someone is accountable for its quality), and effectiveness documentation (evidence that it works). But the compliance burden should be proportionate to the risk, and the risk profile of a training tool is categorically different from the risk profile of a hiring algorithm.
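Put in code-shaped terms, the proposal is simply that obligations key off deployment purpose rather than technology. The tiering below is purely illustrative of the proposal; it is not anything in the current Act:

```python
# Illustrative tiering for the proposed "learning deployment" category.
# These tiers are this article's proposal, not provisions of the AI Act.

OBLIGATIONS_BY_PURPOSE = {
    "learning":   ["transparency (staff know it is AI)",
                   "named accountability for quality",
                   "effectiveness documentation"],
    "automation": ["transparency (staff know it is AI)",
                   "named accountability for quality",
                   "effectiveness documentation",
                   "full Article 4 literacy programme",
                   "Chapter III duties where Annex III applies"],
}

def obligations(purpose: str) -> list[str]:
    """Same model, same vendor -- the tier follows deployment purpose."""
    return OBLIGATIONS_BY_PURPOSE[purpose]

print(obligations("learning"))  # the lighter tier: governed, not exempt
```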

Second: Mandate continuous compliance rather than point-in-time assessment. Article 4's architecture -- "take measures to ensure" -- implicitly assumes a compliance moment: you take the measures, you ensure the literacy, you document the evidence. But literacy against a moving technological frontier cannot be ensured at a point in time. It must be maintained continuously.

The regulation should explicitly recognise that AI literacy is a process, not a state. Compliance should be assessed based on whether an organisation has active, evolving competence-building mechanisms, not whether it conducted training on a particular date. Annual checkbox audits are worse than useless for AI literacy -- they create a false sense of compliance while competence erodes between audit cycles.

Third: Measure competence outcomes, not compliance inputs. The current framework, by necessity, will be enforced based on inputs: Did you take measures? Did you document them? Did you differentiate by role? These are process questions. They tell you nothing about whether people actually understand the AI systems they use.

This is where the Twin Ladder framework offers something the regulation currently lacks. By measuring observable competence indicators -- can this person identify when AI output needs verification? Can they articulate the limitations of the AI systems in their workflow? Can they make informed decisions about when to rely on AI and when to override it? -- the assessment shifts from "did the organisation try?" to "did it work?"

An outcome-based compliance model dissolves the paradox. It does not matter whether the organisation used AI or PowerPoint or one-on-one mentoring to build literacy. What matters is whether the people can demonstrate the competence. The method becomes irrelevant. The result becomes everything.
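A minimal sketch of what an outcome-based check might record. The indicators echo the questions above; they are illustrative, not the Twin Ladder rubric itself:

```python
# Outcome-based assessment in miniature. Indicators are illustrative;
# the point is that the training method never enters the evaluation.

observed = {
    "flags AI output that needs verification": True,
    "articulates limitations of the AI tools in their workflow": True,
    "decides when to rely on AI and when to override it": False,
}

def demonstrates_competence(indicators: dict[str, bool],
                            threshold: float = 1.0) -> bool:
    """Pass/fail on observed behaviour, regardless of how it was built
    (AI tutor, PowerPoint, or one-on-one mentoring)."""
    return sum(indicators.values()) / len(indicators) >= threshold

print(demonstrates_competence(observed))  # False: one indicator unmet,
# however much training effort was documented
```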


10. The Wider Stakes

The literacy paradox in Article 4 is not an isolated drafting issue. It is a symptom of a deeper challenge that will define the next decade of technology regulation: how do you regulate a technology that is itself the most effective tool for compliance with the regulation?

This challenge will recur. When the AI Act's high-risk system provisions take full effect in 2027-2028, organisations will need AI-powered tools to monitor AI system performance, detect bias, and maintain conformity assessments. The monitoring tools will themselves be AI systems subject to the Act's requirements. The same recursive structure will emerge.

The EU has a window -- between now and August 2026, when market surveillance authorities begin enforcement -- to address this. The Commission's delegated acts, the AI Office's guidance, and CEN/CENELEC's harmonised standards could each provide a path through the paradox. But only if the drafters recognise that the paradox exists.

I wrote this analysis because I believe Article 4 is the most important provision in the AI Act -- more important than the prohibited practices in Article 5, more important than the high-risk system requirements in Chapter III. If people do not understand AI, no amount of technical regulation will protect them. And if the regulation prevents people from accessing the most effective tools for building that understanding, it defeats its own purpose.

The question for European regulators is straightforward: Do you want compliance, or do you want competence? Because right now, the text of Article 4 may force organisations to choose between them.

And that is a choice the regulation should never require.


Sources

  1. Regulation (EU) 2024/1689 -- EU Artificial Intelligence Act, Article 4 (AI Literacy). Full text of the literacy obligation, entered into application 2 February 2025. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

  2. European Commission: AI Literacy -- Questions & Answers (May 2025). Clarified that directing staff to user manuals is "generally not considered sufficient" and that the obligation applies with no size threshold. https://digital-strategy.ec.europa.eu/en/faqs/ai-literacy-questions-answers

  3. Recital 20, Regulation (EU) 2024/1689. Legislative intent: AI literacy should equip providers, deployers and affected persons "with the necessary notions to make informed decisions regarding AI systems."

  4. European Commission: Living Repository of AI Literacy Practices (2025). Collection of AI literacy initiatives from AI Pact pledgers and public sector bodies. https://digital-strategy.ec.europa.eu/en/library/living-repository-foster-learning-and-exchange-ai-literacy

  5. Article 99(4), Regulation (EU) 2024/1689. Penalty framework for general obligation violations: up to EUR 15 million or 3% of worldwide annual turnover.

  6. Recital 21, Regulation (EU) 2024/1689. Proportionality in AI literacy measures, recognising differences between providers and deployers.

  7. Travers Smith: The EU AI Act's AI Literacy Requirement -- Key Considerations. Analysis of the scope and practical implications of Article 4. https://www.traverssmith.com/knowledge/knowledge-container/the-eu-ai-acts-ai-literacy-requirement-key-considerations/

  8. Ropes & Gray: Five Takeaways from the EU Commission's AI Literacy Q&As. Commentary on the Commission's May 2025 guidance. https://www.ropesgray.com/en/insights/viewpoints/102kbn5/five-takeaways-from-the-eu-commissions-ai-literacy-qas

  9. Regulation (EU) 2024/1689, Annex III, Areas 3 and 4. High-risk AI system classification for education/vocational training and employment/workforce management. https://artificialintelligenceact.eu/annex/3/

  10. Regulation (EU) 2024/1689, Article 5(1)(f). Prohibition of emotion recognition AI systems in education and workplace contexts. https://artificialintelligenceact.eu/article/5/

  11. Regulation (EU) 2024/1689, Article 6(3). Derogation conditions for Annex III systems, including the critical override that profiling of natural persons always remains high-risk. https://artificialintelligenceact.eu/article/6/

  12. Regulation (EU) 2024/1689, Article 26. Obligations for deployers of high-risk AI systems: human oversight, data quality, monitoring, logging, notification, and fundamental rights impact assessment. https://artificialintelligenceact.eu/article/26/

  13. DLA Piper: Latest Wave of Obligations Under the EU AI Act (August 2025). Overview of enforcement timeline and market surveillance authority activation. https://www.dlapiper.com/en-us/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect

  14. AI Act Service Desk -- Article 4: AI Literacy. Operational guidance from the European AI Office. https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-4

  15. CEN/CENELEC: AI Standardisation for the EU AI Act. Status of harmonised standards development, including delays and acceleration measures. https://www.cencenelec.eu/news-events/news/2025/brief-news/2025-10-23-ai-standardization/

  16. Twin Ladder Assessment Maturity Model v1.0. Six-pillar rubric for AI literacy and competence assessment, CC BY-SA 4.0. https://twinladder.ai/en/research/assessment-maturity-model

  17. "Comfort Over Code: A Workflow-Based Framework for AI Literacy in Professional Practice" -- TwinLadder Research, March 2026. https://twinladder.ai/en/research/twin-ladder-methodology

  18. Digital Omnibus Proposal (November 2025). Pushed high-risk system application dates to December 2027 (Annex III) and August 2028 (Annex I), partly due to standards delays.

  19. Future of Life Institute: Article 4 Analysis and Commentary. Independent analysis of the AI literacy obligation. https://artificialintelligenceact.eu/article/4/