EU AI Act Article 14 — Human Oversight, in Detail
TwinLadder Research Brief · Source Summary · May 2026
Companion reference to The Authority Gap.
Why this article matters
Article 4 of the EU AI Act has dominated the European AI compliance conversation since it came into application on 2 February 2025. Article 4 is the literacy obligation; it is broad, comparatively simple to operationalise, and has produced an entire e-learning industry. Article 14 has received much less attention. It is also, by some distance, the article most likely to determine whether an organisation's AI deployments survive their first regulatory inspection.
This brief sets out what Article 14 actually says — in the words of the regulation, not in summary — and explains why the four sub-paragraphs of Article 14(4) are the operational core of the entire human-oversight regime. It is the article every Authority Gap diagnostic ultimately routes through.
What Article 14 covers
Article 14 of Regulation (EU) 2024/1689 — the EU Artificial Intelligence Act — applies to high-risk AI systems as defined in Article 6 and Annex III. High-risk systems include AI used in recruitment and employee management, education, critical infrastructure, law enforcement, the administration of justice, biometric identification and categorisation, and several other named domains. Many enterprise AI deployments in HR, finance (notably credit and insurance decisioning), and customer-facing recommendation fall in scope, as does any Annex III system that profiles natural persons, which Article 6(3) treats as high-risk in all cases.
Article 14(1) establishes the principle: high-risk AI systems shall be designed and developed "in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use."
The article then operationalises that principle through four sub-paragraphs in Article 14(4) that specify what the assigned human overseer must be able to do.
Article 14(4) — the four operational requirements
The natural persons assigned to oversight of a high-risk AI system shall be enabled, "as appropriate and proportionate," to do all of the following:
(a) Properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance.
This sub-paragraph requires that the overseer have substantive technical understanding of the system, not just access to it. "Capacities and limitations" includes what the system was designed to do, what it was not designed to do, and how its performance can degrade. The duty to "duly monitor" is continuing — it is not satisfied by reading a manual at deployment.
(b) Remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons.
This is the most significant clause in Article 14, and the one most often misread. It does not say "be careful." It says the overseer must remain aware of automation bias. The regulation is naming a cognitive failure mode that decades of human-factors research has documented, and assigning to the overseer a continuing duty to recognise and resist it. Compliance with (b) cannot be evidenced by training records alone; it requires evidence that the overseer has maintained the cognitive engagement that automation bias actively erodes.
(c) Correctly interpret the high-risk AI system's output, taking into account, for example, the interpretation tools and methods available.
The overseer must be able to read the system's output in context. For models that produce probabilistic scores, ranked lists, or natural-language outputs, this requires understanding what the output represents and what its uncertainty is. Outputs that the overseer cannot interpret cannot be the basis of a compliant decision.
(d) Decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system.
The overseer must have standing to override the system, not just notional authority. "In any particular situation" matters: this is a per-decision authority, not a one-time policy decision. The override authority must be exercisable in real time, against operational pressure, without further approval.
A fifth sub-paragraph, Article 14(4)(e), adds the intervention duty: the overseer must be able to "intervene in the operation of the high-risk AI system or interrupt the system through a 'stop' button or a similar procedure that allows the system to come to a halt in a safe state." This is the regulatory analogue of the stop-work authority described in the safety-critical-industries literature (see the Weick & Sutcliffe brief for context).
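What evidence of these duties should look like is not specified anywhere in the regulation. As one illustration only, a deployer could keep a per-decision oversight log along the lines sketched below; the structure and every field name are assumptions of this brief, not terms drawn from Article 14.

```python
# Illustrative sketch only: field names and structure are this brief's assumptions,
# not terms from Regulation (EU) 2024/1689.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class OversightRecord:
    """One entry in a per-decision oversight log for a high-risk AI system."""
    system_id: str                     # deployer's internal identifier for the system
    decision_id: str                   # the individual output under review
    overseer: str                      # natural person assigned oversight under Article 14
    reviewed_at: datetime
    # 14(4)(a): understanding and monitoring; anomalies or dysfunctions noted, if any
    anomalies_noted: list[str] = field(default_factory=list)
    # 14(4)(b): automation bias; did the overseer examine the output substantively
    # rather than simply accept it?
    independently_checked: bool = False
    # 14(4)(c): interpretation; which tools, methods, or context the overseer used
    interpretation_notes: Optional[str] = None
    # 14(4)(d) and (e): override, disregard, reversal, or interruption actually exercised
    action: str = "accepted"           # e.g. "accepted", "overridden", "disregarded", "halted"
    action_rationale: Optional[str] = None

def override_rate(records: list[OversightRecord]) -> float:
    """Share of reviewed decisions where the overseer did anything other than accept.
    A rate stuck at zero over a long period is one (imperfect) signal of automation bias."""
    if not records:
        return 0.0
    return sum(r.action != "accepted" for r in records) / len(records)
```

The point of the sketch is the shape rather than the fields: each of the duties in (a) to (e) leaves a trace that can be recorded per decision and inspected later, which is the kind of evidence a documentation-only programme does not produce.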
The biometric identification special case — Article 14(5)
For remote biometric identification systems (both real-time and post), Article 14(5) imposes a stricter regime: no action or decision may be taken by the deployer on the basis of an identification unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training, and authority. This four-eyes requirement is unique within Article 14 and reflects the elevated risk of biometric mis-identification.
For most enterprise AI deployments outside biometric systems, Article 14(4)'s four sub-paragraphs are the operational requirement.
What Article 14 does not say
Article 14 is silent on several questions that would meaningfully affect how it is applied. Three are worth flagging because they are the questions supervisory authorities will have to settle, and the answers will determine whether, and where, the Authority Gap becomes a regulatory liability.
- Article 14 does not specify who in the organisation must be the assigned overseer. It says "natural persons." Whether that person must be the formal owner of the system, the legal representative of the deployer, the operational manager closest to the system, or a designated AI oversight officer is left to the deployer to determine and document. Most organisations have not yet documented this.
- Article 14 does not specify how often the overseer must be in front of the system's outputs. The continuing-duty language in (a) and (b) implies regular engagement, but no minimum frequency, sampling rate, or coverage threshold is named. A regulator inspecting compliance will have to develop its own expectations.
- Article 14 does not directly address aggregation. When an AI tool aggregates decisions that previously sat across multiple roles, is each constituent decision now subject to Article 14 oversight, or only the aggregate output? Whether 14(4)(b) and (d) reach into the constituent decisions is currently undetermined in supervisory practice. The Authority Gap research piece argues that organisations should not assume either reading is settled.
The enforcement timeline
- 2 February 2025 — Articles 4 (literacy) and 5 (prohibited practices) come into application.
- 2 August 2025 — Governance and penalty provisions and the obligations for general-purpose AI models become applicable, and Member States must have designated their national supervisory authorities, giving the literacy and prohibited-practices provisions their enforcement machinery.
- 2 August 2026 — High-risk system obligations under Articles 8–17 (which include Article 14) become applicable to systems classified as high-risk under Annex III. This is the date by which compliance with Article 14(4) becomes a live regulatory exposure for any deployer of such a system.
- 2 August 2027 — Obligations for high-risk systems classified under Article 6(1) (safety components of products covered by Annex I) become applicable, and general-purpose AI models already on the market before 2 August 2025 must have been brought into compliance.
The August 2026 date is the one that matters for Article 14. Most organisations are running compliance programmes calibrated to it. Many are running them as documentation exercises rather than as substantive operating-model changes — and Article 14(4) requires substantive change.
How this connects to the Authority Gap
The Authority Gap framework identifies four moments at which formal and real authority over an AI-shaped decision can come apart: pre-visibility, drift, aggregation, reversion. Each of the four maps directly to Article 14(4):
- Pre-visibility authority ↔ Article 14(4)(a). The overseer must understand the system's capacities and limitations before it is deployed.
- Drift authority ↔ Article 14(4)(c). The overseer must correctly interpret the output, which requires being in front of it as conditions change.
- Aggregation authority ↔ Article 14(4)(b) and (d). Whether automation-bias awareness and override duty reach decisions the AI tool now aggregates across roles is the open supervisory question.
- Reversion authority ↔ Article 14(4)(d) and (4)(e). The override and stop-button duties require standing, not just notional authority.
A diagnostic conducted using the four-moments framework is, in effect, an Article 14(4) compliance audit conducted from the operating-model side rather than from the documentation side. Most organisations need both, and the documentation-only version of Article 14 compliance is the version most likely to fail in inspection.
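For teams turning the four-moments framework into a diagnostic instrument, the correspondence can be written down directly. The sketch below is illustrative only; the structure and the wording of the questions are this brief's paraphrases, not language from the regulation or a canonical form of the framework.

```python
# Illustrative sketch only: restates the mapping above; the diagnostic questions
# are this brief's paraphrases, not regulatory language.
AUTHORITY_GAP_TO_ART_14_4 = {
    "pre-visibility": (["14(4)(a)"],
        "Can the assigned overseer state the system's capacities and limitations "
        "before relying on its output?"),
    "drift": (["14(4)(c)"],
        "Is the overseer in front of the output often enough to interpret it "
        "correctly as conditions change?"),
    "aggregation": (["14(4)(b)", "14(4)(d)"],
        "Do automation-bias awareness and override authority reach the constituent "
        "decisions the tool now aggregates across roles?"),
    "reversion": (["14(4)(d)", "14(4)(e)"],
        "Can the overseer override or halt the system in real time, without "
        "further approval?"),
}

def implicated_provisions(moments_with_gaps: set[str]) -> list[str]:
    """Article 14(4) sub-paragraphs implicated by the moments at which a diagnostic
    found formal and real authority coming apart."""
    return sorted({sub
                   for moment in moments_with_gaps
                   for sub in AUTHORITY_GAP_TO_ART_14_4[moment][0]})
```

Calling implicated_provisions({"drift", "reversion"}), for example, returns 14(4)(c), (d) and (e) as the provisions a remediation plan would have to reach.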
Citation
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L series, 12 July 2024.
Article 14 — Human oversight: eur-lex.europa.eu
European Commission AI Office guidance: digital-strategy.ec.europa.eu
TwinLadder Research Briefs are short reference summaries of the foundational sources cited in our research pieces. They are not commentary; they are background reading. Companion to the Authority Gap launch series, May 2026.

