TwinLadder Weekly
Issue #30 | March 2026
Editor's Note
I was asked three times last week -- twice by in-house counsel and once by a head of compliance -- whether their GDPR programme "covers" the AI Act. Each time I gave the same answer: no. Each time I got the same look of quiet alarm.
The confusion is understandable. Both are EU regulations. Both involve data. Both carry substantial fines. But treating the AI Act as an extension of GDPR is like treating aviation safety rules as an extension of road traffic law because both involve vehicles. The subject matter overlaps. The obligations do not.
Article 4 of the EU AI Act has been legally binding since 2 February 2025 -- over thirteen months now. [cite:eu-ai-act-article4] Most organisations I work with still have not grasped what it actually requires. They have updated a privacy notice, perhaps added an AI clause to their data processing agreements, and moved on. That is a GDPR reflex applied to an AI competence problem. And it leaves the real obligation -- building genuine AI literacy across every function that touches AI -- entirely unaddressed.
GDPR Thinking Will Not Save You From the AI Act
A regulation, not a directive -- and that matters more than you think
Let me start with something I assumed every compliance professional understood, until I discovered many do not. The EU AI Act is a regulation. [cite:eu-ai-act-regulation] In EU law, that is the strongest form of legislative instrument. It is directly applicable in all 27 member states from the moment it takes effect. No transposition required. No national implementing legislation needed. No waiting for your government to decide how to apply it.
This is exactly how GDPR works -- also a regulation, not a directive. But the parallel ends at the legal form. The substance of what the AI Act demands is fundamentally different from what GDPR demands. And the organisations that assume otherwise are building compliance programmes on the wrong foundation.
The AI Act entered into force on 1 August 2024. Article 4 (AI literacy) and Article 5 (prohibited practices) became applicable on 2 February 2025. The European Commission published its Guidelines on Prohibited AI Practices in February 2025, clarifying the scope of what is banned outright. [cite:ec-prohibited-practices-guidelines] General-purpose AI model obligations kicked in on 2 August 2025. And the full high-risk system requirements arrive on 2 August 2026 -- five months from now.
The enforcement machinery is still being assembled. National market surveillance authorities must be designated by August 2026. No public enforcement actions have been taken yet. But if you are reading this and thinking "we have time," I would ask: time to do what? The literacy obligation is already live. The prohibited practices are already enforceable. The question is not whether enforcement will come. It is whether you will be ready when it does.
The fines, for context: up to 35 million euros or 7% of global annual turnover -- whichever is higher -- for prohibited practices. Up to 15 million euros or 3% for other violations. Up to 7.5 million euros or 1% for providing incorrect information to authorities.
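If the arithmetic of "X million euros or Y% of turnover, whichever is higher" feels abstract, here is a minimal sketch of how the cap works. The tier figures come from the Act's penalty provisions; the function and names are illustrative, not anything the regulation prescribes.

```python
# Fine caps under the AI Act: the higher of a fixed amount and a
# percentage of global annual turnover, per violation tier.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_violations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine_cap(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum applicable fine for a violation tier."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# A company with 2 billion euros of turnover faces a cap of 140 million
# euros for a prohibited practice -- 7% of turnover, not the 35m floor.
print(max_fine_cap("prohibited_practices", 2_000_000_000))  # 140000000.0
```

The point of "whichever is higher": for any large organisation, the percentage dominates, and the headline fixed amounts understate the real exposure.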
The four distinctions your compliance team is missing
Here is where GDPR instincts lead you astray. I have identified four fundamental differences between what GDPR requires and what the AI Act requires. Your compliance team is probably applying Column A thinking to a Column B problem.
| | GDPR Approach | AI Act Approach |
|---|---|---|
| What it protects | Personal data | People affected by AI systems -- whether personal data is involved or not |
| Core mechanism | Control data flows: consent, minimisation, purpose limitation, rights of access | Control system competence: literacy, transparency, human oversight, risk management |
| Who must act | Data controllers and processors | AI providers and deployers -- with distinct, non-delegable obligations for each |
| What compliance looks like | Policies, notices, DPIAs, records of processing, DPO appointment | Training programmes, competence evidence, technical documentation, ongoing monitoring, human oversight protocols |
Read that table carefully. GDPR is fundamentally about controlling what happens to data. The AI Act is fundamentally about controlling what happens to people when AI systems make or influence decisions about them. Data protection is necessary but not sufficient. You can have a perfectly GDPR-compliant AI deployment that violates the AI Act in three different ways.
The most dangerous confusion is the third row. Under GDPR, you can largely delegate compliance to a processor through contractual terms -- your data processing agreement. Under the AI Act, deployer obligations are non-delegable. Your vendor's compliance does not discharge yours. Article 4 requires you to ensure AI literacy among your staff. Article 26 requires you to assign human oversight to people with the necessary competence and authority. No contract with a vendor satisfies these obligations on your behalf.
The timeline reality check
Where are we now, and what is coming?
Already binding (since 2 February 2025):
- Article 4: AI literacy for all staff and persons operating AI on your behalf
- Article 5: Prohibited practices (social scoring, manipulative AI, certain biometric uses)
Binding since 2 August 2025:
- General-purpose AI model obligations (transparency, documentation, copyright compliance)
- The AI Office has been in an "informal collaboration" phase with major GPAI providers since August 2025
Coming 2 August 2026 (5 months away):
- Full high-risk system requirements (conformity assessments, risk management, data governance, human oversight)
- National market surveillance authorities must be operational
- Registration requirements for high-risk systems in the EU database
If your compliance programme treats August 2026 as the start date, you have already missed over a year of binding obligations. Article 4 does not arrive in August. It arrived in February 2025.
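For teams that want to track this programmatically, here is a minimal sketch of the application dates as a simple lookup. The dates are the ones above; the obligation summaries are my shorthand, not legal definitions.

```python
from datetime import date

# AI Act application dates discussed above, as a simple lookup.
APPLICATION_DATES = [
    (date(2025, 2, 2), "Art. 4 AI literacy; Art. 5 prohibited practices"),
    (date(2025, 8, 2), "General-purpose AI model obligations"),
    (date(2026, 8, 2), "High-risk system requirements; EU database registration"),
]

def binding_obligations(as_of: date) -> list[str]:
    """Return the obligations whose application date has already passed."""
    return [label for applies_from, label in APPLICATION_DATES
            if as_of >= applies_from]

# As of March 2026, two of the three tranches are already binding.
for obligation in binding_obligations(date(2026, 3, 1)):
    print(obligation)
```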
The Competence Question
Here is the scenario that keeps me up at night. Not the dramatic one -- not the rogue AI system that causes obvious harm. The quiet one.
Amazon built an AI-powered hiring tool that was used internally for years before anyone discovered it was systematically penalising female candidates. [cite:amazon-hiring-tool] The system had been trained on ten years of resumes, which were predominantly from men. It learned that male candidates were preferable. It downgraded resumes containing the word "women's" -- as in "women's chess club captain." It penalised graduates of two all-women's colleges.
Nobody noticed for years. Not because the people using the system were negligent, but because they lacked the competence to recognise what the system was doing. They saw outputs -- candidate rankings -- and assumed the outputs were reliable because the system was sophisticated. They did not understand how training data shapes model behaviour. They did not know to ask whether the historical data encoded the very biases the tool was supposed to eliminate. They were not AI-literate in the way that Article 4 now demands.
This is why I argue -- repeatedly, and with increasing frustration -- that AI competence cannot be centralised. You cannot appoint a Chief AI Officer, give them a budget, and declare the problem solved. The Amazon hiring tool was not used by AI specialists. It was used by HR recruiters. The people who needed to understand the system's limitations were the people making hiring decisions with it -- not the people who built it.
Article 4 recognises this. It does not say "appoint someone who understands AI." It says ensure AI literacy of "staff and other persons dealing with the operation and use of AI systems." [cite:eu-ai-act-article4] That is everyone who touches an AI system in their work. The recruiter using an AI screening tool. The marketing manager using a content generator. The lawyer using a contract review platform. The finance analyst using a forecasting model. Each of them needs to understand, at a level appropriate to their role, what the system can do, what it cannot do, and where their professional judgement must override its output.
BCG's 2025 survey found that only 36% of employees consider their AI training sufficient, and that 18% of regular AI users received no training at all. [cite:bcg-ai-work-2025] That is the baseline you are working with. Not a competence gap -- a competence chasm.
GDPR compliance never required this. You could centralise GDPR knowledge in a data protection officer and a legal team. They wrote the policies. They reviewed the processing activities. They handled the data subject requests. Most employees needed to know the basics -- do not email spreadsheets of personal data, report breaches promptly -- but the deep expertise sat in one function.
The AI Act does not permit this model. AI literacy is distributed by design. Every function that uses AI must have competent people using it. That is not a policy requirement -- it is a capability requirement. You cannot satisfy it with a document. You satisfy it with evidence that your people understand the systems they operate.
This is where TwinLadder's framework comes in. Our six-pillar competence model -- Deployment Competence, Policy and Data Protection, Training, Tools, Evidence, Governance -- was built specifically for this distributed obligation. You assess competence at the functional level, not the organisational level, because that is where the obligation sits. A legal department using AI contract review needs different competence than a marketing team using AI content generation. The framework accommodates both, with role-appropriate depth.
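To make "functional level, not organisational level" concrete, here is one way such an assessment could be represented. The six pillar names are from our framework; the scoring scale, record fields, and threshold are assumptions for illustration, not the framework's actual mechanics.

```python
from dataclasses import dataclass, field
from enum import Enum

class Pillar(Enum):
    DEPLOYMENT_COMPETENCE = "Deployment Competence"
    POLICY_AND_DATA_PROTECTION = "Policy and Data Protection"
    TRAINING = "Training"
    TOOLS = "Tools"
    EVIDENCE = "Evidence"
    GOVERNANCE = "Governance"

@dataclass
class FunctionAssessment:
    """Competence assessed per business function, not per organisation."""
    function: str                 # e.g. "Legal" or "Marketing"
    ai_systems: list[str]         # the systems this function operates
    scores: dict[Pillar, int] = field(default_factory=dict)  # 0-5, assumed scale

    def gaps(self, threshold: int = 3) -> list[Pillar]:
        """Pillars scoring below the threshold -- where remediation sits."""
        return [p for p in Pillar if self.scores.get(p, 0) < threshold]

legal = FunctionAssessment(
    function="Legal",
    ai_systems=["AI contract review platform"],
    scores={Pillar.TRAINING: 2, Pillar.EVIDENCE: 1, Pillar.GOVERNANCE: 4},
)
print([p.value for p in legal.gaps()])  # every pillar except Governance
```

Note the design choice: the unit of assessment is the function and its systems, because that is where the Article 4 obligation actually sits.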
The organisations that treat the AI Act like GDPR will build a centralised compliance programme and wonder why it fails. The organisations that understand the difference will build distributed competence -- and that is the only thing that actually satisfies Article 4.
What To Do
- Audit your current AI Act response. If it lives inside your GDPR programme, extract it. The AI Act requires separate governance, separate training, and separate evidence. A GDPR compliance programme cannot satisfy Article 4 any more than a fire safety plan can satisfy building structural requirements.
- Map every AI system in use across your organisation. Not just the ones IT procured. The ones marketing adopted through a SaaS subscription. The ones individual employees signed up for with a personal email. The ones embedded in existing enterprise software that your vendor quietly updated with "AI-powered features." You cannot build literacy around systems you do not know exist.
- Identify who in each function is operating AI systems. Article 4 applies to "staff and other persons dealing with the operation and use of AI systems." That means you need a list of names and roles, not departments. The HR manager who reviews AI-ranked candidates. The analyst who uses AI-generated forecasts. The lawyer who relies on AI contract summaries.
- Assess current AI literacy against the Article 4 standard. Can these people explain what their AI system does, how it can fail, and when to override it? If the answer is no -- and for most organisations, it will be no -- you have a measurable gap and five months to close it before high-risk requirements arrive.
- Stop treating vendor training as compliance evidence. Your vendor's product onboarding is not AI literacy training. It teaches people how to use the interface. Article 4 requires people to understand the system's capabilities and limitations. Ask whether your vendor's training covers failure modes, known biases, and when human judgement should prevail. If it does not, build supplementary training or find a provider who addresses the gap.
- Start collecting evidence now. When enforcement begins, regulators will ask for documentation. Training records, competence assessments, oversight protocols, system inventories. The organisations that started building this evidence in 2025 will have a substantial advantage over those scrambling in 2027. A minimal sketch of what such records might look like follows this list.
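Here is that sketch of the evidence trail the steps above describe -- a system inventory, named operators, and dated literacy assessments. The field names and the three-question baseline are my illustration of the Article 4 standard discussed earlier, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystem:
    """One entry in the organisation-wide AI system inventory."""
    name: str
    owner_function: str      # who operates it, e.g. "HR"
    procurement_route: str   # "IT procured", "SaaS subscription", "embedded feature"

@dataclass
class LiteracyRecord:
    """A named person's assessed literacy for one system."""
    person: str
    role: str
    system: str
    assessed_on: date
    can_explain_function: bool    # what the system does
    can_explain_failures: bool    # how it can fail
    knows_when_to_override: bool  # when judgement must prevail

    @property
    def meets_baseline(self) -> bool:
        return (self.can_explain_function
                and self.can_explain_failures
                and self.knows_when_to_override)

inventory = [AISystem("AI screening tool", "HR", "SaaS subscription")]
records = [
    LiteracyRecord("J. Smith", "Recruiter", "AI screening tool",
                   date(2026, 3, 1), True, False, True),
]
gaps = [r for r in records if not r.meets_baseline]
print(f"{len(gaps)} operator(s) below the assessed baseline")
```

Even a spreadsheet with these columns beats a policy document: it is evidence of competence, which is what the obligation actually demands.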
Quick Reads
- EU AI Act Full Text -- the primary source. Read Articles 4, 5, and 26 before reading anyone's interpretation.
- European Commission Guidelines on Prohibited AI Practices -- published February 2025, clarifies what Article 5 bans and how to assess borderline cases.
- BCG, AI at Work 2025 -- 36% of employees say AI training is sufficient. 18% received none. The competence gap in numbers.
- Bird & Bird, EU AI Act Implementation Timeline -- practical tracker of which obligations apply when, updated regularly.
- Future of Life Institute, EU AI Act Compliance Checker -- searchable database of the regulation with article-by-article analysis.
One Question
If a regulator asked your organisation today to demonstrate that the staff using AI systems have "sufficient AI literacy" as required by Article 4 -- an obligation that has been binding for thirteen months -- what evidence would you produce?
TwinLadder Weekly | Issue #30 | March 2026
Helping European professionals build AI competence through honest education.
