Executive Summary
AI in education and training is not prohibited under Article 5 (with one narrow exception for emotion recognition). It is, however, classified as high-risk under Annex III, Area 3. Employment-related AI is separately classified as high-risk under Annex III, Area 4. A limited derogation under Article 6(3) applies to Annex III systems that do not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making.
1. Is AI in Education/Training PROHIBITED Under Article 5?
No — with one specific exception.
Article 5 lists prohibited AI practices. Education/training AI is not generally prohibited. However, Article 5(1)(f) specifically prohibits emotion recognition in education and the workplace:
"AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons"
Recital 44 elaborates:
"AI systems identifying or inferring emotions or intentions of natural persons on the basis of their biometric data... the placing on the market, the putting into service, or the use of AI systems intended to be used to detect the emotional state of individuals in situations related to the workplace and education should be prohibited."
Exception to the prohibition: Systems placed on the market strictly for medical or safety reasons (e.g., therapeutic use) are exempt.
What this means in practice: An AI system that analyses facial expressions, voice tone, or other biometric signals to determine whether a student is engaged, stressed, bored, or frustrated during a training session is prohibited. This applies regardless of whether the context is formal education or workplace training.
2. Is AI in Education/Training HIGH-RISK Under Annex III?
Yes — Annex III, Area 3 explicitly covers education and vocational training.
Annex III, Area 3: Education and Vocational Training
Four specific use cases are listed as high-risk:
(a) "AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels"
(b) "AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels"
(c) "AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels"
(d) "AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels"
Annex III, Area 4: Employment, Workers Management and Access to Self-Employment
Two use cases are listed as high-risk:
(a) "AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates"
(b) "AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships"
3. Recital Explanations
Recital 56 (Education)
"AI systems used in education or vocational training, in particular for determining access or admission, for assigning persons to educational and vocational training institutions or programmes at all levels, for evaluating learning outcomes of persons, for assessing the appropriate level of education for an individual and materially influencing the level of education and training that individuals will receive or will be able to access or for monitoring and detecting prohibited behaviour of students during tests should be classified as high-risk AI systems, since they may determine the educational and professional course of a person's life and therefore may affect that person's ability to secure a livelihood."
Recital 57 (Employment)
"AI systems used in employment, workers management and access to self-employment, in particular for the recruitment and selection of persons, for making decisions affecting terms of the work-related relationship, promotion and termination of work-related contractual relationships, for allocating tasks on the basis of individual behaviour, personal traits or characteristics and for monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may have an appreciable impact on future career prospects, livelihoods of those persons and workers' rights."
4. Specific Use Case Analysis
Does the classification apply to:
| Use Case | Classification | Annex III Reference | Notes |
|---|---|---|---|
| AI systems used to assess students/trainees | HIGH-RISK | Area 3(b) | Evaluating learning outcomes is explicitly listed |
| AI systems used to determine admission | HIGH-RISK | Area 3(a) | Access/admission determination is explicitly listed |
| AI-powered tutoring systems | DEPENDS | Area 3(b)/(c) | High-risk if they steer learning or assess education level; may qualify for Art. 6(3) derogation if purely assistive |
| AI systems that monitor behaviour during training | HIGH-RISK / PROHIBITED | Area 3(d) / Art. 5(1)(f) | Monitoring prohibited behaviour during tests = high-risk; emotion recognition during training = prohibited |
| AI for competence assessment in professional contexts | HIGH-RISK | Area 3(b) or Area 4(b) | Falls under learning outcome evaluation (if training context) or performance monitoring (if employment context) |
| Digital twins used for skills development | DEPENDS | Potentially Area 3(b)/(c) | High-risk if used to evaluate competence or determine education level; may escape via Art. 6(3) if purely practice-oriented without assessment |
| AI for hiring/recruitment | HIGH-RISK | Area 4(a) | Explicitly listed |
| AI for performance evaluation | HIGH-RISK | Area 4(b) | Explicitly listed |
| AI for task allocation | HIGH-RISK | Area 4(b) | Explicitly listed if based on individual behaviour/traits |
5. The Article 6(3) Derogation — When High-Risk Does Not Apply
Under Article 6(3), an Annex III system is not considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This applies where the system:
(a) performs a narrow procedural task (e.g., converting document formats, organising files)
(b) is intended to improve the result of a previously completed human activity (e.g., grammar-checking a human-written assessment)
(c) detects decision-making patterns or deviations without replacing or influencing human assessment (e.g., analytics dashboard showing grading patterns)
(d) performs a preparatory task to an assessment (e.g., summarising student submissions before a human evaluator reviews them)
Critical override: An AI system that performs profiling of natural persons is ALWAYS high-risk, regardless of these derogations.
Practical implications for education/training AI:
- A spell-checker or grammar tool used in an educational platform: likely escapes via (a) or (b)
- An AI tutor that adapts content but does not evaluate or certify: may escape via (b) if it only supplements human instruction
- An AI grading system that assigns scores: high-risk under Area 3(b) — no derogation applies as it materially influences outcomes
- An AI admission filter that ranks applicants: high-risk under Area 3(a) — no derogation
- A competence assessment tool that certifies skill levels: high-risk under Area 3(b)/(c) — directly determines education access
- An analytics dashboard showing student engagement patterns without making decisions: may escape via (c)
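The screening logic above can be sketched as a small decision helper. This is an illustrative sketch only, not a compliance tool: the class name, field names, and boolean flags are assumptions introduced here, and the legal test itself requires a documented case-by-case assessment.

```python
from dataclasses import dataclass


@dataclass
class AnnexIIISystem:
    """Illustrative record of an Annex III system's Art. 6(3) screening facts."""
    name: str
    performs_profiling: bool             # Art. 6(3), second subparagraph override
    narrow_procedural_task: bool         # condition (a)
    improves_prior_human_activity: bool  # condition (b)
    detects_patterns_only: bool          # condition (c)
    preparatory_task_only: bool          # condition (d)


def is_high_risk(system: AnnexIIISystem) -> bool:
    """Return True if the Annex III system remains high-risk after Art. 6(3)."""
    if system.performs_profiling:
        return True  # profiling of natural persons is always high-risk
    derogation_applies = (
        system.narrow_procedural_task
        or system.improves_prior_human_activity
        or system.detects_patterns_only
        or system.preparatory_task_only
    )
    return not derogation_applies


# An AI grading system that assigns scores: no condition applies -> high-risk.
grader = AnnexIIISystem("essay grader", False, False, False, False, False)
print(is_high_risk(grader))  # prints True
```

Note how the profiling override is checked first: even a purely preparatory tool stays high-risk if it profiles natural persons.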
6. Formal Education vs. Workplace Training — Is There a Distinction?
The Act does not draw a sharp line between formal education and workplace training. The phrase used throughout is "educational and vocational training institutions at all levels." This language is deliberately broad:
- "At all levels" includes primary, secondary, tertiary, and professional/vocational training
- "Vocational training" encompasses workplace skills development programmes
- The recital rationale (Recital 56) focuses on impact on "educational and professional course of a person's life" — this applies equally to university degrees and professional certifications
However, there is a functional distinction:
| Context | Likely Classification | Why |
|---|---|---|
| University admission AI | High-risk (Area 3a) | Determines access to education |
| Corporate onboarding AI (informational) | Potentially not high-risk | May qualify under Art. 6(3)(a) if narrow/procedural |
| Professional certification AI | High-risk (Area 3b/c) | Evaluates learning outcomes, determines qualification level |
| Internal training recommendation AI | Depends | High-risk if it determines what training someone can access; not if purely suggestive |
| AI monitoring exam behaviour | High-risk (Area 3d) | Explicitly covered |
| AI monitoring workplace behaviour | High-risk (Area 4b) | Falls under employment monitoring |
Key principle: The classification depends on function (does it evaluate, determine access, or monitor?), not on setting (school vs. workplace).
7. Education vs. Employment — Separate Classifications
Education (Area 3) and Employment (Area 4) are separate categories in Annex III with distinct scopes:
| Aspect | Area 3: Education | Area 4: Employment |
|---|---|---|
| Scope | Educational and vocational training institutions | Work-related relationships |
| Key triggers | Admission, learning evaluation, education level assessment, test monitoring | Recruitment, promotion/termination, task allocation, performance monitoring |
| Rationale | Affects educational/professional life course | Affects career prospects, livelihoods, workers' rights |
| Overlap zone | Professional training programmes, certifications | Workplace skills assessment, competence evaluation |
Where they overlap: A competence assessment used to determine whether an employee qualifies for a promotion could fall under both Area 3 (assessing education level) and Area 4 (affecting terms of employment). In practice, the specific Annex III area matters less than the fact that the system is classified as high-risk under either.
8. Obligations for Deployers of High-Risk AI (Article 26)
Organisations deploying high-risk AI in education or training must:
- Follow provider instructions — Use the system in accordance with its intended purpose and usage instructions (Art. 26(1))
- Assign competent human oversight — Designate trained personnel with authority and support for oversight (Art. 26(2))
- Ensure input data quality — Input data must be relevant and representative for the intended purpose (Art. 26(4))
- Monitor and report — Monitor system operation and report suspected risks or serious incidents to providers/authorities (Art. 26(5))
- Retain logs — Keep automatically generated logs for at least 6 months (Art. 26(6))
- Inform workers — Employers must inform workers and their representatives before deploying high-risk AI (Art. 26(7))
- Inform affected individuals — When the system assists in decisions about natural persons, those persons must be told (Art. 26(11))
- Conduct DPIA — Conduct data protection impact assessments where required under GDPR (Art. 26(9))
- Cooperate with authorities — Cooperate with regulatory oversight (Art. 26(12))
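The duties above lend themselves to a machine-readable checklist for compliance tracking. The sketch below is an illustration of that idea, not an official mapping: the dictionary structure, function name, and one-line summaries are assumptions introduced here, and the summaries abridge the full statutory text.

```python
# Article 26 deployer duties as a checklist, keyed by paragraph reference.
ARTICLE_26_DUTIES = {
    "26(1)": "Use the system per the provider's instructions and intended purpose",
    "26(2)": "Assign competent, trained personnel for human oversight",
    "26(4)": "Ensure input data is relevant and sufficiently representative",
    "26(5)": "Monitor operation; report risks and serious incidents",
    "26(6)": "Retain automatically generated logs for at least six months",
    "26(7)": "Inform workers and their representatives before deployment",
    "26(9)": "Carry out a GDPR data protection impact assessment where required",
    "26(11)": "Inform natural persons subject to AI-assisted decisions",
    "26(12)": "Cooperate with competent authorities",
}


def open_items(completed: set[str]) -> list[str]:
    """Return paragraph references for duties not yet evidenced."""
    return [ref for ref in ARTICLE_26_DUTIES if ref not in completed]


# Example: a deployer that has only documented instructions and oversight.
print(open_items({"26(1)", "26(2)"}))
```

A deployer audit could mark each reference complete as evidence is collected, leaving `open_items` as the remaining gap list.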
9. Implications for Twin Ladder
Twin Ladder Assessment Tool
If the Twin Ladder assessment tool evaluates an organisation's AI competence level (Level 0-3) and this assessment:
- Determines what training programmes are recommended/accessible — could trigger Area 3(c) (assessing appropriate education level)
- Is used by employers to evaluate workforce readiness — could trigger Area 4(b) (performance monitoring/evaluation)
- Is purely informational/advisory without binding consequences — may qualify for Art. 6(3) derogation as preparatory task (d) or pattern detection (c)
Twin Ladder Training Platform
- AI tutoring features that adapt content: likely not high-risk if they do not evaluate or certify
- AI-powered competence assessments that issue certifications: high-risk under Area 3(b)/(c)
- AI proctoring during assessments: high-risk under Area 3(d); if using emotion recognition, prohibited
Strategic positioning
The Twin Ladder framework sits at the intersection of education (Area 3) and employment (Area 4). This is strategically relevant because:
- Organisations using the assessment tool may need to comply with high-risk AI requirements
- This creates demand for the exact compliance guidance Twin Ladder provides
- "Sell compliance, deliver competence" aligns perfectly — the regulatory framework validates the need for structured AI competence development
- The open standard approach (CC BY-SA 4.0) means the methodology itself is not an AI system subject to classification, but implementations of it may be
10. Timeline and Enforcement
- Article 5 prohibitions (including emotion recognition in education/workplace): Apply from 2 February 2025
- Annex III high-risk obligations (Areas 3 and 4): Apply from 2 August 2026
- Article 6(3) guidelines from the Commission: Due by 2 February 2026
Sources
- Regulation (EU) 2024/1689 — Official Journal of the European Union, EUR-Lex (CELEX:32024R1689)
- Article 5 — Prohibited AI Practices, EU AI Act
- Annex III — High-Risk AI Systems Referred to in Article 6(2), EU AI Act
- Article 6 — Classification Rules for High-Risk AI Systems, EU AI Act
- Article 26 — Obligations of Deployers of High-Risk AI Systems, EU AI Act
- Recitals 29, 30, 44, 53-57 — EU AI Act preamble

