
Twin Ladder Research

The Profiling Trap: Why HR Is the EU AI Act's Highest-Risk Function

March 9, 2026 | Regulatory analysis

HR departments deploy more high-risk AI systems than any other corporate function. The EU AI Act's profiling clause removes the escape hatch most organisations are counting on.


Every corporate function uses AI. Only one function has built its entire operational model around the precise activity the EU AI Act treats as highest-risk: profiling natural persons.

HR departments screen, rank, score, categorise, and predict human behaviour as their core business. They do it at hiring. They do it during employment. They do it at termination. Every stage involves evaluating individuals based on personal characteristics — the textbook definition of profiling under EU law.

Most compliance teams are aware that the AI Act classifies certain HR systems as high-risk under Annex III. What they have not internalised is the trap buried in Article 6(3): the profiling clause that closes every escape route the Act otherwise provides.

This research examines why HR is not merely one of several high-risk domains, but the corporate function with the greatest density of high-risk AI systems — and why the derogation that most organisations are counting on will not save them.


The Annex III categories that hit HR

The EU AI Act's Annex III lists eight categories of high-risk AI systems. Two of them target HR operations with surgical precision.

Category 3 — Education and vocational training covers four subcategories:

(a) AI systems intended to be used for determining access or admission or to assign natural persons to educational and vocational training institutions at all levels;

(b) AI systems intended to be used for evaluating learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;

(c) AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels;

(d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels.

Every corporate learning and development system that uses AI sits within these provisions. Adaptive learning platforms that adjust content based on learner performance fall under (b). Competence assessments that determine certification levels fall under (c). Proctoring software that monitors employee assessments falls under (d). The term "vocational training" is not limited to formal education — it covers corporate training programmes.

Category 4 — Employment, workers management and access to self-employment is broader still:

(a) AI systems intended to be used for the recruitment or selection of natural persons, in particular for placing targeted job advertisements, analysing and filtering job applications, and evaluating candidates;

(b) AI systems intended to be used for making decisions affecting terms of work-related relationships, promotion and termination of work-related contractual relationships, for allocating tasks based on individual behaviour or personal traits or characteristics and for monitoring and evaluating the performance and behaviour of persons in such relationships.

Read those provisions carefully. Category 4(a) covers the entire recruitment pipeline: targeted job advertising, application filtering, and candidate evaluation. Category 4(b) covers everything that happens after hiring: promotion decisions, termination decisions, task allocation, performance monitoring, and behavioural evaluation.

There is no common HR AI tool that falls outside these two categories. Applicant tracking systems that rank candidates: Category 4(a). Video interview platforms that score responses: Category 4(a). Performance management systems that rate employees: Category 4(b). Workforce analytics that predict attrition: Category 4(b). Scheduling algorithms that allocate shifts based on individual metrics: Category 4(b). Learning management systems that assess competence: Category 3(b) and (c).

This is not a matter of interpretation. The regulatory text is explicit.


The profiling clause: why there is no escape

Article 6(2) establishes that AI systems listed in Annex III are classified as high-risk. Article 6(3) then provides a set of derogations — conditions under which an Annex III system might escape that classification.

An AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm and meets at least one of these conditions: it performs a narrow procedural task; it is intended to improve the result of a previously completed human activity; it detects decision-making patterns or deviations from prior patterns without replacing or influencing the previously completed human assessment; or it performs a preparatory task to an assessment relevant to the use cases listed in Annex III.

Many compliance advisors have latched onto these derogations. The argument runs: our ATS merely assists recruiters, it does not make final decisions, therefore it performs a preparatory task and escapes high-risk classification.

Then comes the final sentence of Article 6(3):

Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.

That single sentence collapses the entire derogation framework for HR AI.

Profiling, as defined in Article 4(4) of the GDPR (which the AI Act incorporates by reference), means:

Any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.

Consider what HR AI systems do:

  • CV screening evaluates candidates based on personal data to predict work performance. That is profiling.
  • Interview scoring analyses candidate responses to evaluate personal aspects. That is profiling.
  • Performance management AI processes personal data to evaluate work performance. That is profiling.
  • Attrition prediction uses personal data to predict employee behaviour. That is profiling.
  • Task allocation algorithms assign work based on individual behaviour or characteristics. That is profiling.
  • Adaptive learning systems evaluate and categorise learners. That is profiling.
  • Workforce analytics score, rank, and segment employees. That is profiling.

The word in Article 6(3) is "always." Not "usually." Not "in most cases." Always. If the system performs profiling, it is high-risk, full stop. The derogations do not apply.

This is the trap. The very nature of HR AI — evaluating people based on their characteristics to make or inform employment decisions — is profiling by definition. The escape hatch that works for other Annex III categories does not work here.
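
The interaction between the derogations and the profiling override reduces to a short decision procedure. Here is a minimal sketch in Python; the field names and the example system are illustrative inventions of ours, since the Act naturally defines no schema:

```python
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    """Illustrative model of a system listed in Annex III (hypothetical schema)."""
    performs_profiling: bool             # GDPR Art. 4(4): evaluates personal aspects of a person
    significant_risk_of_harm: bool
    narrow_procedural_task: bool         # Art. 6(3)(a)
    improves_prior_human_activity: bool  # Art. 6(3)(b)
    detects_patterns_only: bool          # Art. 6(3)(c)
    preparatory_task_only: bool          # Art. 6(3)(d)

def is_high_risk(s: AnnexIIISystem) -> bool:
    # Final sentence of Art. 6(3): profiling forecloses every derogation.
    if s.performs_profiling:
        return True
    # Otherwise a derogation applies only if there is no significant risk of harm
    # and at least one of the four conditions is met.
    derogation = not s.significant_risk_of_harm and any([
        s.narrow_procedural_task,
        s.improves_prior_human_activity,
        s.detects_patterns_only,
        s.preparatory_task_only,
    ])
    return not derogation

# A CV-ranking ATS claims the "preparatory task" derogation, but it scores
# candidates on personal data, so it profiles. The claim is irrelevant.
ats = AnnexIIISystem(performs_profiling=True, significant_risk_of_harm=False,
                     narrow_procedural_task=False, improves_prior_human_activity=False,
                     detects_patterns_only=False, preparatory_task_only=True)
assert is_high_risk(ats)
```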


What HR departments are actually doing

The scale of HR AI deployment makes this a sector-wide exposure, not a niche compliance issue.

Recruitment AI is near-universal among large employers. A 2024 ResumeBuilder survey of 948 business leaders found that 82% of companies use AI to review resumes. The Society for Human Resource Management found that 65% of HR professionals reported their organisations use AI in talent acquisition. Among Fortune 500 companies, 93% of CHROs had begun integrating AI tools by late 2024. Enterprise AI adoption in recruitment reached 78%, representing 189% growth since 2022.

The vendor ecosystem is mature. HireVue processes millions of video interviews annually with AI-assisted evaluation. Pymetrics (now part of Harver) uses neuroscience-based games to assess candidates and match them to roles. Eightfold AI and Beamery provide talent intelligence platforms that use AI to source, screen, and manage candidates across the entire lifecycle. Workday and SAP SuccessFactors embed AI across recruitment, performance management, learning, and workforce planning — touching every Annex III category simultaneously.

Performance management is increasingly algorithmic. Continuous performance monitoring platforms track productivity metrics, communication patterns, and output quality. A February 2025 survey found that 74% of US employers use online tracking tools, including real-time screen tracking (59%) and web browsing logs (62%). In high-surveillance environments, 45% of workers report stress, compared with 28% in lower-monitoring settings. Call centres score agent performance in real time. Warehouse and logistics operations monitor worker efficiency through wearable devices and system tracking. Each of these evaluates individuals based on personal data to assess work performance. Each is profiling.

Workforce planning uses predictive analytics at scale. Attrition prediction models identify employees likely to leave. Scheduling algorithms allocate shifts based on individual performance data. Gig economy platforms — Uber, Deliveroo, Upwork — use algorithms to allocate work, set pricing, and evaluate workers. Human Rights Watch (2025) found gig workers are governed by algorithms that are "frequently opaque, making it difficult to understand how they are monitored, paid, evaluated, and fired." The ILO (2024) found gig workers in developing countries earn 30-50% less than full-time employees in the same positions. Category 4 explicitly covers "access to self-employment" — these platforms are in scope.

Learning and development has embraced adaptive AI. Corporate LMS platforms use AI to personalise learning paths, assess competence levels, and predict learning outcomes. Online proctoring systems monitor employee assessments for prohibited behaviour. Certification platforms use AI to evaluate whether learners have reached required competence thresholds. A 2024 AERA Open study found that predictive algorithms used by universities underestimate success potential for Black and Hispanic students while overestimating it for White and Asian students. E-proctoring systems fail more often for test-takers with darker skin tones and disadvantage disabled students.

Yet only 30% of HR professionals have adequate AI training, according to a 2025 HR.com survey. The gap between deployment and competence is the definition of systemic risk.

Every one of these systems performs profiling. Every one is always high-risk under Article 6(3). There are no exceptions.


The enforcement cases that show the direction

Regulators, courts, and public pressure have already drawn a clear trajectory. The AI Act accelerates it.

Amazon's internal recruiting tool (2018) became the canonical example of algorithmic bias in hiring. The system, trained on ten years of predominantly male CVs, learned to penalise applications containing the word "women's" and downgraded graduates of all-women's colleges. Amazon scrapped the tool. Under the AI Act, it would have been a high-risk system subject to mandatory bias testing under Article 10 before deployment — and the training data deficiencies that caused the bias would have been a compliance violation.

iTutorGroup/EEOC settlement (2023) demonstrated that automated screening can violate anti-discrimination law even without explicit discriminatory intent. The online tutoring company's software automatically rejected female applicants over 55 and male applicants over 60 — screening out over 200 applicants. An applicant discovered the discrimination by submitting two identical applications with different birth dates; only the younger date received an interview. The EEOC secured a $365,000 settlement plus five years of compliance monitoring. Under the AI Act, the age-based filtering would trigger both high-risk compliance failures and potentially prohibited practices under Article 5.

HireVue's retreat from facial analysis (2021) showed market pressure working in parallel with regulatory pressure. After the Electronic Privacy Information Center (EPIC) filed an FTC complaint alleging "unfair and deceptive trade practices," HireVue discontinued the visual analysis component of its video interviews. The company's own data showed that nonverbal signals contributed roughly 0.25% of predictive power in most cases. The EU AI Act would classify the original facial analysis system as high-risk under both Annex III Category 4(a) and the biometric provisions.

Mobley v. Workday (filed 2023, ongoing) is the first class action directly challenging an enterprise HR AI platform for algorithmic discrimination. Derek Mobley, an African American applicant over 40 with a disability, applied for 80+ jobs using Workday's screening tool and was rejected every time — one rejection came less than an hour after applying. In May 2025, the court granted conditional certification as a nationwide collective action under ADEA, potentially covering millions of job applicants over age 40. The EEOC submitted a supporting brief stating that algorithmic hiring tools can violate anti-discrimination laws even without explicit intent. The case establishes that AI service providers — not just employers — can be directly liable as "agents."

AI resume screening research (2024-2025) quantified what the cases suggested. University of Washington researchers (October 2024) tested three AI models on 554 real resumes and found LLMs favoured white-associated names 85% of the time and never favoured Black male names over white male names. A PNAS Nexus study (May 2025) tested GPT-4o, Gemini, Claude, and Llama across 361,000 fictitious resumes — all models systematically scored Black male candidates lower than white males with identical credentials. The Brookings Institution (August 2025) confirmed intersectional bias: a Black woman potentially faces discrimination that neither a white woman nor a Black man would encounter separately.

New York City Local Law 144 (effective July 2023) became the first US law mandating independent bias audits for automated employment decision tools. However, a December 2025 NY State Comptroller audit found that the enforcing agency identified only 1 instance of non-compliance among 32 companies, while independent review found at least 17 instances of potential non-compliance. The enforcement gap shows why the AI Act's more prescriptive approach — with mandatory conformity assessments, not just audits — may prove more effective.

The Dutch SyRI ruling (2020) struck down the Netherlands' System Risk Indication, which used algorithmic profiling to detect welfare fraud, on the grounds that it violated Article 8 of the European Convention on Human Rights — the right to private life. The ruling established that algorithmic profiling of individuals requires robust safeguards, transparency, and proportionality — principles now codified in the AI Act.


The penalty arithmetic

The financial exposure for non-compliant HR AI systems is cumulative across multiple regulatory frameworks.

AI Act penalties for high-risk non-compliance: up to €15 million or 3% of global annual turnover, whichever is higher. This covers failures in risk management (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency (Article 13), human oversight (Article 14), accuracy and robustness (Article 15), and the deployer obligations in Article 26.

AI Act penalties for prohibited practices: up to €35 million or 7% of global annual turnover. If an HR system crosses into prohibited territory — for example, emotion recognition in the workplace (prohibited under Article 5(1)(f) since February 2025) — the penalty tier escalates.

GDPR Article 22 overlay: automated decision-making that produces legal or similarly significant effects on individuals — which employment decisions unambiguously do — triggers up to €20 million or 4% of global annual turnover for non-compliance with automated decision-making protections.

National employment law: Member States retain their own employment discrimination frameworks, many with independent penalty regimes. Illinois (effective January 2026) makes it a civil rights violation to use AI that has discriminatory effect. California (effective October 2025) requires meaningful human oversight and four years of record retention. Colorado mandates impact assessments for high-risk AI systems.

These frameworks are not alternatives. They are cumulative. A single recruitment AI system that performs profiling without adequate safeguards could simultaneously violate the AI Act, the GDPR, and national employment discrimination law, while also triggering national data protection enforcement.

For a company with €1 billion in global turnover, the theoretical combined exposure from a single non-compliant recruitment AI system could reach €70 million or more — 3% (AI Act) + 4% (GDPR) = 7% of turnover from just two frameworks, before national penalties.
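
That arithmetic is worth making explicit. A minimal sketch of the statutory ceilings using the figures above; these are maximum exposures, not predicted fines:

```python
def max_fine(turnover_eur: float, pct_cap: float, fixed_cap_eur: float) -> float:
    """Statutory ceiling: the higher of the fixed amount and the turnover percentage."""
    return max(fixed_cap_eur, turnover_eur * pct_cap)

turnover = 1_000_000_000  # EUR 1bn global annual turnover

ai_act = max_fine(turnover, 0.03, 15_000_000)  # high-risk non-compliance: EUR 30m
gdpr   = max_fine(turnover, 0.04, 20_000_000)  # Art. 22 breach, Art. 83(5) tier: EUR 40m

print(f"Combined ceiling: EUR {(ai_act + gdpr) / 1e6:.0f}m")  # EUR 70m, before national penalties
```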


How to deploy HR AI safely under the Act

The AI Act does not prohibit HR AI. It regulates it. Organisations that comply can continue deploying AI across the employment lifecycle — with appropriate safeguards.

Human oversight is architecturally mandatory. Article 14 requires that high-risk AI systems be designed to allow effective human oversight during the period the system is in use. "Human-in-the-loop" means the human must have the competence, authority, and tools to understand the system's outputs, override them, and intervene when necessary. A recruiter who rubber-stamps every AI-generated candidate ranking is not providing human oversight.
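
In system terms, this means the AI output is a recommendation that a named human must actively adopt or override, with every override documented. A minimal sketch of that gate, using hypothetical types of our own rather than anything the Act prescribes:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    ai_score: float
    rationale: str  # the reviewer must be able to understand the output (Art. 13)

@dataclass
class ReviewDecision:
    rec: Recommendation
    reviewer_id: str   # a named, trained human, not a service account
    follow_ai: bool    # does the human adopt the AI's recommendation?
    override_reason: str = ""

def finalise(d: ReviewDecision) -> dict:
    # Art. 14: the human must be able to disregard or reverse the output,
    # and the record must show that power was actually exercised.
    if not d.follow_ai and not d.override_reason:
        raise ValueError("overrides must be documented (Art. 12 record-keeping)")
    return {"candidate": d.rec.candidate_id, "followed_ai": d.follow_ai,
            "reviewer": d.reviewer_id, "ai_score": d.rec.ai_score}
```

Logging both sides of each decision also lets a deployer measure its override rate; a rate near zero is evidence of rubber-stamping, not oversight.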

Bias auditing must be continuous, not annual. NYC's Local Law 144 requires annual bias audits. Treat this as a floor, not a ceiling. The AI Act's Article 9 risk management system requires ongoing monitoring, and Article 15 demands that high-risk systems maintain appropriate levels of accuracy and robustness throughout their lifecycle. Bias can emerge from data drift, population changes, and feedback loops. Annual testing cannot catch these dynamics.
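
One way to operationalise continuous monitoring is a disparate-impact ratio computed over a rolling window of decisions. The four-fifths threshold in the sketch below comes from US employment practice and is illustrative, not mandated by the Act:

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, was_selected) pairs from a rolling window."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += ok
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_alerts(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    # Flag any group whose selection rate falls below 80% of the best group's rate.
    return {g: r / best for g, r in rates.items() if r / best < threshold}

window = ([("A", True)] * 50 + [("A", False)] * 50 +
          [("B", True)] * 30 + [("B", False)] * 70)
print(disparate_impact_alerts(window))  # {'B': 0.6} -> investigate now, not at the annual audit
```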

Vendor due diligence is a deployer obligation. Article 26 requires deployers to use high-risk AI systems in accordance with the provider's instructions, ensure human oversight, monitor for risks, and maintain input data quality. Deployers must obtain from their vendors: EU declarations of conformity, technical documentation, training data information, and instructions for use. If your ATS vendor cannot provide conformity assessment documentation by August 2026, you have a compliance problem — not a vendor management problem.
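
The due-diligence step can be run as a plain checklist per system. A trivial sketch; the artefact labels are our own shorthand, though each maps to an obligation in the Act:

```python
# Artefacts a deployer should hold for each high-risk system before August 2026.
REQUIRED_VENDOR_ARTEFACTS = {
    "eu_declaration_of_conformity",  # Art. 47
    "ce_marking_evidence",           # Art. 48
    "technical_documentation",       # Art. 11 / Annex IV
    "instructions_for_use",          # Art. 13
    "training_data_summary",
}

def vendor_gaps(received: set[str]) -> set[str]:
    """Documents still missing from a given vendor."""
    return REQUIRED_VENDOR_ARTEFACTS - received

print(vendor_gaps({"instructions_for_use", "training_data_summary"}))
```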

Data governance requirements are substantive. Article 10 requires that training, validation, and testing datasets be relevant, sufficiently representative, and as free of errors as possible. Article 10(2)(f) explicitly requires examination of data for biases that could affect fundamental rights or lead to discrimination. For HR AI, this means understanding whether training data reflects historical biases, whether it is representative of the candidate or employee population, and whether data quality processes are documented and auditable.
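
A first-pass representativeness check can be as simple as comparing group shares in the training data against the population the system will actually score. A minimal sketch with invented numbers that mirror the Amazon failure mode described above:

```python
def representation_gaps(train_share: dict[str, float],
                        population_share: dict[str, float],
                        max_gap: float = 0.05) -> dict[str, float]:
    """Flag groups whose share of the training data diverges from the scored population."""
    return {
        g: round(train_share.get(g, 0.0) - population_share[g], 3)
        for g in population_share
        if abs(train_share.get(g, 0.0) - population_share[g]) > max_gap
    }

train = {"women": 0.28, "men": 0.72}  # e.g. ten years of historical hires
pop   = {"women": 0.47, "men": 0.53}  # the applicant population being screened
print(representation_gaps(train, pop))  # {'women': -0.19, 'men': 0.19}
```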

Fundamental rights impact assessments are required for deployers. Article 27 requires deployers of high-risk AI systems in certain contexts to conduct fundamental rights impact assessments before deployment. Even where Article 27 does not strictly apply, the GDPR's DPIA (Article 35) covers substantially similar ground for automated profiling systems. Conducting a combined fundamental rights and data protection impact assessment is best practice for any HR AI deployment.

AI literacy is the prerequisite for everything else. Article 4 requires providers and deployers to ensure that their staff have a sufficient level of AI literacy. For HR teams operating high-risk systems, this is not optional training — it is a regulatory prerequisite. An HR professional who cannot explain how their ATS ranks candidates, what data it uses, and what its limitations are cannot provide the human oversight that Article 14 requires. Competence is not a nice-to-have. It is a compliance dependency.


The strategic position

The AI Act's high-risk provisions for Annex III systems apply from 2 August 2026. For HR departments operating multiple AI systems — recruitment, performance management, learning, workforce planning — the compliance surface is larger than in any other corporate function.

The profiling clause in Article 6(3) ensures that there is no shortcut. Every HR AI system that evaluates, ranks, scores, categorises, or predicts anything about individual people is performing profiling. Every profiling system listed in Annex III is always high-risk. The derogations do not apply.

Organisations that treat this as a compliance burden will spend the coming months in reactive mode, scrambling to audit vendors, document systems, and train staff. Organisations that treat it as a strategic opportunity will build the infrastructure — risk management frameworks, bias monitoring pipelines, competence development programmes, vendor governance processes — that makes AI-powered HR both lawful and effective.

The competitive advantage belongs to the organisations that move now. Not because early compliance is virtuous, but because the alternative is operating blind with €15 million to €35 million exposure per system, across every HR AI tool in the portfolio, from the moment the Act's Annex III provisions take effect.

HR is not a peripheral target of the EU AI Act. It is the primary one. The profiling trap is not a loophole that clever lawyers will close. It is the deliberate design of a regulation that understands exactly what HR AI systems do — and classifies it accordingly.


Sources

  1. European Parliament and Council — Regulation (EU) 2024/1689 (EU AI Act), Annex III Categories 3 and 4, Articles 4, 5, 6, 9, 10, 14, 26, 27, 2024. eur-lex.europa.eu

  2. GDPR — Regulation (EU) 2016/679, Article 4(4) definition of profiling, Article 22 automated decision-making, Article 35 DPIA. eur-lex.europa.eu

  3. ResumeBuilder — Survey of 948 business leaders: 82% use AI to review resumes, 2024. resumebuilder.com

  4. SHRM — AI in the Workplace survey: 65% of HR professionals use AI in talent acquisition, 2024. shrm.org

  5. Reuters — "Amazon scraps secret AI recruiting tool that showed bias against women," Jeffrey Dastin, October 2018. reuters.com

  6. EEOC — "iTutorGroup to Pay $365,000 to Settle EEOC Age Discrimination Suit," August 2023. eeoc.gov

  7. HireVue — Discontinuation of facial analysis, January 2021; EPIC FTC complaint, 2019. epic.org

  8. Mobley v. Workday, Inc. — N.D. Cal., nationwide collective action certification May 2025. fisherphillips.com

  9. University of Washington — AI resume screeners favour white-associated names 85% of the time, October 2024. washington.edu

  10. PNAS Nexus — Testing GPT-4o, Gemini, Claude, Llama on 361,000 resumes; systematic racial bias, May 2025. brookings.edu

  11. HR.com — Only 30% of HR professionals have adequate AI training, 2025. hr.com

  12. NYC Local Law 144 — Automated Employment Decision Tools, effective July 2023; NY Comptroller enforcement audit December 2025. nyc.gov

  13. District Court of The Hague — SyRI judgment (ECLI:NL:RBDHA:2020:1878), February 2020. rechtspraak.nl

  14. High5Test — 74% of US employers use online tracking tools, February 2025. high5test.com