TWINLADDER

EU AI Act

GDPR vs the AI Act: Compliance Versus Competence

People keep asking whether the EU AI Act applies to them -- as if it were optional. It is an EU Regulation, the strongest form of European law, and Article 4 has been binding since February 2025. But the real problem is not awareness. It is that organisations are preparing for a GDPR-shaped obligation when what they face is fundamentally different.

21 March 2026 · Alekss Blumentāls, Founder and Managing Director · 10 min read

People keep asking me the same question. In boardrooms, at conferences, in emails that start with "Quick one" and never are. The question is always some version of: "Does this actually apply to us?" The answer is always the same. Yes. If you deploy AI in any form, you are within scope. And you have been since February 2025.


The Question Everyone Asks

I was in Riga last month talking to the compliance lead of a mid-sized financial services firm. She had read about the EU AI Act. Her team had flagged it. But when I asked what they had done about Article 4, she looked at me as though I had asked about a regulation that had not been passed yet.

It has been passed. It is in force. Article 4 has been legally binding since 2 February 2025. [cite:eu-ai-act-article4]

I hear this confusion everywhere. Public organisations assume it only applies to the private sector. Private companies assume it only applies to "AI companies." Both are wrong. If your organisation uses AI systems -- including general-purpose tools like ChatGPT, Copilot, or any of the hundreds of generative AI products now embedded in enterprise software -- you are a deployer under the regulation. The obligation to ensure sufficient AI literacy among your staff is already active.

This is not a future problem. It is a present legal requirement that most organisations are ignoring because they have not understood what they are dealing with.


What a Regulation Actually Means

Here is something that gets lost in the noise. The EU AI Act is a Regulation. Not a Directive. That distinction matters enormously.

Article 288 of the Treaty on the Functioning of the European Union defines the instruments of EU law; the two that matter here are Directives and Regulations. A Directive sets objectives that member states must achieve, but leaves each country to write its own national law to get there. This is why GDPR's predecessor -- the 1995 Data Protection Directive -- was implemented differently in every member state, creating a patchwork that took two decades to untangle. [cite:tfeu-article-288]

A Regulation is different. It is binding in its entirety and directly applicable in all 27 member states from the moment it takes effect. No transposition. No national interpretation. No waiting for your parliament to pass implementing legislation. It is the strongest form of EU law available.

The EU AI Act is a Regulation. [cite:eu-ai-act-regulation]

This means there is no ambiguity about whether your country has "adopted" it. If you operate within the EU -- or if your AI systems affect people within the EU -- the obligations apply to you directly. The same text, the same requirements, the same enforcement framework, from Lisbon to Tallinn.

When people ask me "has this been transposed in our country?", the answer is that the question does not apply. Transposition is for Directives. This is a Regulation. It is already your law.


GDPR -- The Guardrail Model

To understand why the AI Act is different, you need to understand what GDPR actually changed inside organisations. The answer, structurally, is: less than most people think.

Consider a bakery. Before GDPR, the baker made bread, served customers, and kept a list of regular orders. After GDPR, the baker still made bread, still served customers, still kept the list -- but now had to get consent for the list, document why they kept it, and respond to requests to delete entries. The work did not change. A set of guardrails was placed around the work.

This is the GDPR model. It is a guardrail regulation. It protects an asset -- personal data -- by wrapping existing processes in procedural safeguards. The processes themselves remain intact.

Four characteristics define this model.

Compliance is procedural. You can comply with GDPR by following procedures: document your data flows, obtain consent where required, appoint a Data Protection Officer, maintain records of processing activities. The procedures can be written into checklists. They can be audited against defined criteria. They can be delegated to a compliance team.

The systems are deterministic. The databases and processing systems that GDPR regulates behave predictably. You put data in, it stays where you put it. You query it, you get the same answer every time. If something goes wrong -- a data breach, an unauthorised transfer -- you can trace the failure to a specific event, a specific system, a specific moment in time.

The scope is bounded. GDPR applies to personal data processing. Not every business process. Not every employee. You can map your data flows, identify the touchpoints, and focus your compliance effort on those specific areas. A company with 500 employees might need 20 of them to understand GDPR deeply.

The failure mode is discrete. When GDPR compliance fails, it fails as an event. A breach happens. Data is transferred without a legal basis. A subject access request goes unanswered. These are identifiable incidents with clear timestamps and defined response procedures.

This model is well understood. Organisations have spent eight years building the muscle to handle it. The mistake is assuming the AI Act works the same way.


The AI Act -- The Process Transformation

Now consider the same bakery, except the baker has been replaced by a machine that invents new recipes, adjusts ingredients based on customer sentiment data it infers from facial expressions, and sometimes substitutes flour with something it considers equivalent but has not told anyone about.

That is what AI does to a business process. It does not add a guardrail to the process. It becomes the process.

When a law firm deploys an AI research tool, the AI is not a layer around legal research. It is doing the research. When an HR department uses an AI screening tool, the AI is not protecting candidate data. It is making the hiring decisions -- or the decisions that shape the decisions. When a financial services firm uses AI for credit scoring, the AI is not an accessory to the assessment. It is the assessment.

This is the fundamental shift that most compliance teams have not grasped. GDPR regulates how you handle an asset within your existing processes. The AI Act regulates processes that are themselves being transformed by the technology.

The characteristics are the opposite of GDPR in every dimension that matters.

Compliance requires competence, not procedures. You cannot comply with Article 4 by following a checklist. The regulation requires that your people have a "sufficient level of AI literacy" -- meaning they understand how the AI works, what its limitations are, and how to exercise appropriate judgment when using it. This cannot be checked off. It must be built, maintained, and demonstrated on an ongoing basis. [cite:ec-ai-literacy-qa]

The systems are non-deterministic. AI systems -- particularly large language models and machine learning systems -- do not behave predictably in the way databases do. The same input can produce different outputs. The system can be confident and wrong. It can degrade silently over time as the data it was trained on becomes stale or the world changes. There is no equivalent of "check the database log" because the system's reasoning is not fully transparent even to its creators.

The scope is comprehensive. AI does not stay in one department. It spreads across every function that touches information -- which is every function. Legal uses it for research. Marketing uses it for content. HR uses it for screening. Finance uses it for forecasting. Operations uses it for planning. Unlike GDPR, where you could identify the 20 people who needed deep training, Article 4 applies to everyone who interacts with AI systems. In most modern organisations, that is approaching everyone.

The failure mode is silent and systemic. This is the most dangerous difference. When AI fails, it often does not announce itself. Amazon's hiring tool did not crash or throw an error. It systematically discriminated against women for years before anyone noticed. [cite:amazon-hiring-tool] The system was working perfectly by every metric the team was tracking. The failure was invisible because no one had the competence to ask the right questions about what the system was actually doing.

GDPR failures are fire alarms. AI failures are carbon monoxide.


Four Distinctions That Change Everything

Let me be precise about why this matters for how you plan your compliance.

First: the nature of compliance. GDPR compliance is procedural -- you build systems, document them, and follow them. AI Act compliance is competence-based -- your people must understand the technology well enough to use it responsibly and recognise when it is failing. Procedures can be bought. Competence must be built. This takes time, investment, and ongoing effort that cannot be outsourced to a vendor selling a four-hour workshop.

Second: system predictability. GDPR regulates deterministic systems where you can trace cause to effect. The AI Act regulates non-deterministic systems where the same input can produce different outputs, where confidence does not equal accuracy, and where the system's behaviour can shift without any visible change to its interface. Compliance with unpredictable systems requires a fundamentally different kind of organisational capability -- one built on judgment, not checklists.

Third: scope of impact. GDPR touched specific functions -- legal, IT, marketing, customer service. The AI Act touches every function that uses AI, which increasingly means every function. You cannot ring-fence this obligation. It must be embedded across the organisation, calibrated to each role's specific interaction with AI systems. A procurement officer using AI to evaluate suppliers needs different literacy than a lawyer using AI for contract review. Both need it. Neither can be skipped.

Fourth: failure mode. GDPR failures are events -- breaches, violations, complaints. You know when they happen and you have 72 hours to respond. AI failures are conditions -- silent degradation, systematic bias, accumulated errors that compound over months before anyone notices. By the time an AI failure becomes visible, the damage is often already structural. Detecting these failures requires the very competence that Article 4 demands.


The Timeline You Need to Know

The AI Act is not a single deadline. It is a phased programme spanning six years. Here is what has happened and what is coming.

1 August 2024 · Entry into force. The regulation became law. The clock started.

2 February 2025 · Articles 4 and 5 apply. AI literacy obligations and prohibited practices became enforceable. This deadline has already passed.

2 August 2025 · GPAI obligations apply. Providers of general-purpose AI models (OpenAI, Anthropic, Google, Meta) must meet transparency and documentation requirements.

2 August 2026 · Full applicability. High-risk AI system obligations, conformity assessments, and the full enforcement machinery take effect. National market surveillance authorities must be operational.

2 August 2027 · Extended high-risk deadline. High-risk AI systems tied to the regulated products listed in Annex I face their compliance deadline.

31 December 2030 · Legacy systems deadline. AI systems that are components of the large-scale EU IT systems listed in Annex X and already in service must be brought into compliance.

The critical point: we are already past the first enforcement date. Article 4 is not something your organisation needs to prepare for. It is something your organisation should already be complying with. Every month that passes without a structured AI literacy programme is a month of non-compliance with a binding EU Regulation.


What This Means for Your Organisation

If your compliance strategy for the AI Act looks like your GDPR strategy -- hire a specialist, draft a policy, run a training day, file the documentation -- you are solving the wrong problem.

GDPR compliance could be centralised. You could hire a Data Protection Officer, build a privacy team, and let the rest of the organisation get on with their work within the guardrails. That does not work for AI literacy. You cannot centralise competence. Every person who uses AI needs to understand it well enough to exercise judgment -- and that level of understanding varies by role, by function, and by the specific AI systems they interact with.

This cannot be solved with a one-size-fits-all e-learning module. It cannot be solved by sending a link to a vendor's product tutorial. It cannot be solved by adding "AI awareness" to next quarter's all-hands meeting.

It requires structured, role-specific competence building. It requires ongoing assessment -- not a one-time certificate, but continuous evaluation as the technology evolves. It requires organisational investment in the kind of deep, transferable understanding that allows your people to work with AI systems they have not seen before, evaluate outputs they cannot independently verify, and recognise failures that the system itself will not flag.

The question is not whether the AI Act applies to your organisation. It does. The question is whether you are treating it as a compliance exercise or as what it actually is: a mandate to build the competence your organisation needs to survive in a world where AI is not a tool you use, but a process you depend on.

If your answer to Article 4 is a checklist, you have not understood the question.


Sources

  1. EU AI Act (Regulation (EU) 2024/1689), full text of the regulation, Official Journal of the European Union, 2024.
  2. European Commission AI Literacy Q&A, guidance on Article 4 obligations and scope, 2025.
  3. EU AI Act implementation timeline, phased applicability dates and milestones.
  4. Treaty on the Functioning of the European Union, Article 288, legal basis for EU Regulations versus Directives.
  5. Reuters, "Amazon scraps secret AI recruiting tool that showed bias against women", case study of silent AI failure in hiring, 2018.

Related: Why AI Isn't Your Next GDPR -- It's Bigger | The Six-Year Countdown: Every EU AI Act Deadline | Why Non-Technical AI Training Fails