
EU AI Act

Your CV Screener Is a High-Risk AI System

If your organisation uses AI to filter job applications, you are deploying a high-risk AI system under the EU AI Act. The profiling clause in Article 6(3) removes every possible exemption.

March 9, 2026 · Alex Blumentals, Founder & CEO · 8 min read

By Alex Blumentals — Twin Ladder

Your applicant tracking system rejected 200 candidates before breakfast. A human saw none of them. Under the EU AI Act, your HR department just operated a high-risk AI system — and over 65% of large employers are doing exactly the same thing.

Most HR leaders have no idea. They think of their CV screener as a productivity tool, a filter, a time-saver. The EU AI Act thinks of it as a system that profiles natural persons and decides who gets a shot at earning a living. The Act is right.

The enforcement date is 2 August 2026. If your organisation uses AI anywhere in recruitment — screening, ranking, shortlisting, matching — this article is your wake-up call.


What the law actually says

The EU AI Act sorts AI systems into risk categories. The highest regulated category — high-risk — comes with a full compliance regime: conformity assessments, human oversight, bias monitoring, incident reporting, fundamental rights impact assessments.

Recruitment AI lands squarely in this category.

Annex III, Category 4(a) explicitly names:

"AI systems intended to be used for recruitment or selection of natural persons, in particular for placing targeted job advertisements, for analysing and filtering job applications, and for evaluating candidates."

There is no ambiguity here. The legislators did not bury recruitment in a footnote. They put it front and centre, with specific examples. Filtering job applications is named. Evaluating candidates is named. If your system does either, you are in scope.

This is not a matter of interpretation. It is a matter of reading.


The derogation trap

Here is where most compliance teams get it wrong.

Article 6(3) of the AI Act offers derogations — ways for systems listed in Annex III to escape the high-risk classification. A system might avoid the label if it only performs a narrow procedural task, if it improves the result of a prior human activity, if it detects decision-making patterns without replacing human assessment, or if it merely prepares data for an assessment that a human will conduct.

Four exits. HR teams and their vendors will try every single one.

"Our tool just ranks candidates — a human makes the final decision."

"It only highlights keywords — it does not decide anything."

"It is a decision-support tool, not a decision-making tool."

None of these arguments survive Article 6(3).

Article 6(3) states, in language that leaves no room for creative lawyering:

"An AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons."

Always. Not usually. Not in most cases. Always.

And CV screening is profiling. By definition. The system takes personal data — education history, work experience, skills, location, sometimes age, sometimes name — and uses it to rank, score, categorise, or filter people. That is textbook profiling under the GDPR definition that the AI Act incorporates.

The moment your system assigns a score, a rank, or a yes/no to a candidate based on their personal characteristics, every derogation in Article 6(3) evaporates.

Your recruitment AI is high-risk. Full stop.


What went wrong elsewhere

If the legal argument feels abstract, the case law is concrete.

Amazon, 2018. Amazon built an internal AI recruiting tool trained on ten years of historical resumes. The system learned that most successful hires had been men — because most hires in tech had been men — and began systematically penalising CVs that contained the word "women's" (as in "women's chess club") or that listed all-women's colleges. Amazon scrapped the tool. The damage was the lesson: AI trained on biased hiring data reproduces biased hiring decisions. This is not a bug. It is the default behaviour.

iTutorGroup, 2023. The US Equal Employment Opportunity Commission settled with iTutorGroup for $365,000 after finding that the company's automated recruitment software rejected applicants based on age. Women over 55 and men over 60 were filtered out. Not by a human decision. By a system that no one was watching closely enough. The EEOC called it what it was: algorithmic age discrimination.

Workday, ongoing. In Mobley v. Workday, a Black applicant over 40 with anxiety and depression alleged that Workday's AI screening tools systematically discriminated against him on the basis of race, age, and disability across over 100 job applications using the platform. In May 2025, the court granted conditional certification as a nationwide collective action — potentially covering millions of job applicants over age 40. The case established that AI vendors — not just employers — can be held liable as employment agents.

These are not hypothetical risks. These are things that already happened, under weaker regulatory regimes, before the AI Act existed. The Act exists precisely because these things happened.


What you must do before August 2026

Article 26 sets out the obligations for deployers of high-risk AI systems. In recruitment, "deployer" means you — the employer using the tool — not the vendor who built it.

Here is the checklist. It is not optional.

1. Inventory every AI system in your recruitment pipeline.

Map every tool that touches candidate data. Your ATS. Your CV parser. Your video interview analyser. Your chatbot that pre-screens. Your sourcing tool that matches candidates to roles. If it uses AI and it touches recruitment, it goes on the list.

Most organisations discover systems they did not know they had. A hiring manager signed up for a free trial. A recruiter installed a browser extension. An agency partner uses AI scoring. All of it counts.
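
To make the first step concrete, here is a minimal sketch of what one inventory entry might capture, in Python. The field names are illustrative assumptions, not terms the Act prescribes; the point is that every tool ends up with a documented purpose, owner, and profiling status.

```python
from dataclasses import dataclass

@dataclass
class RecruitmentAITool:
    """One inventory entry. Field names are illustrative, not prescribed by the Act."""
    name: str                       # e.g. "ATS ranking module", "CV parser"
    vendor: str
    purpose: str                    # screening, ranking, matching, video analysis...
    touches_candidate_data: bool
    performs_profiling: bool        # scores, ranks, categorises, or filters people
    internal_owner: str             # the person accountable for this tool
    conformity_docs_on_file: bool   # see step 2

inventory = [
    RecruitmentAITool(
        name="ATS ranking module", vendor="ExampleVendor",
        purpose="ranks inbound applications", touches_candidate_data=True,
        performs_profiling=True, internal_owner="Head of Talent",
        conformity_docs_on_file=False,
    ),
]

# Article 6(3): any Annex III recruitment system that profiles natural persons
# is high-risk, so this filter should return essentially the whole list.
high_risk = [tool for tool in inventory if tool.performs_profiling]
```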

2. Obtain conformity documentation from every vendor.

Under the AI Act, providers of high-risk AI systems must supply technical documentation, instructions for use, and declarations of conformity. As a deployer, you are entitled to this documentation and you need it to fulfil your own obligations.

Ask every vendor: Is your system registered in the EU database? Where is your conformity assessment? What data was it trained on? What bias testing have you conducted?

If a vendor cannot answer these questions, that is your answer.
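
As a sketch only, those four questions translate into a due-diligence record you can run across your vendor list. The structure and field names below are assumptions for illustration, not a format the Act defines.

```python
from dataclasses import dataclass

@dataclass
class VendorDossier:
    """Conformity artefacts to request from each provider. Fields are illustrative."""
    vendor: str
    registered_in_eu_database: bool
    conformity_assessment_ref: str | None   # reference to the assessment or declaration
    training_data_description: str | None
    bias_testing_report: str | None

def open_questions(dossier: VendorDossier) -> list[str]:
    """List the questions from this step that the vendor has not yet answered."""
    gaps = []
    if not dossier.registered_in_eu_database:
        gaps.append("registration in the EU database")
    if dossier.conformity_assessment_ref is None:
        gaps.append("conformity assessment")
    if dossier.training_data_description is None:
        gaps.append("training data description")
    if dossier.bias_testing_report is None:
        gaps.append("bias testing results")
    return gaps

# A vendor whose dossier returns all four gaps has given you your answer.
```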

3. Implement real human oversight.

Article 14 requires that high-risk AI systems be designed so they can be effectively overseen by humans. Article 26 requires deployers to actually do it.

This does not mean a recruiter glances at a shortlist and clicks "approve." That is rubber-stamping. The Act requires that the human overseeing the system can understand its outputs, can identify anomalies, can intervene, and can override or reverse the system's output.

A recruiter who processes 500 AI-ranked candidates per day and rejects the bottom 400 without review is not providing oversight. They are providing a signature.
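
One way to enforce this in practice is structural: AI rejections do not leave the pipeline until a named human has recorded a decision. The sketch below assumes a hypothetical screening record; all names are illustrative, not drawn from any real ATS.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ScreeningRecord:
    """One candidate's passage through the screener. Names are illustrative."""
    candidate_id: str
    ai_score: float
    ai_recommendation: str               # "advance" or "reject"
    human_reviewer: str | None = None
    human_decision: str | None = None    # may differ from the AI recommendation
    reviewed_at: datetime | None = None

def record_review(rec: ScreeningRecord, reviewer: str, decision: str) -> None:
    """A named human understands the output and can override it (Articles 14 and 26)."""
    rec.human_reviewer = reviewer
    rec.human_decision = decision
    rec.reviewed_at = datetime.now(timezone.utc)

def unreviewed_rejections(records: list[ScreeningRecord]) -> list[ScreeningRecord]:
    """AI rejections with no recorded human decision: these should block the
    pipeline, not flow silently into rejection emails."""
    return [r for r in records
            if r.ai_recommendation == "reject" and r.human_decision is None]
```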

4. Run bias audits.

Test your systems for discriminatory outcomes across protected characteristics: gender, age, ethnicity, disability. Not once. Continuously. Document the results. Act on what you find.

University of Washington research (October 2024) found that AI resume screeners favoured white-associated names 85% of the time. A PNAS Nexus study (May 2025) tested GPT-4o, Gemini, Claude, and Llama across 361,000 fictitious resumes — all models systematically scored Black male candidates lower than white males with identical credentials. If your screening tool rejects a disproportionate number of women, older applicants, or candidates with non-Western names, you have a problem that will not wait for a regulator to find it.
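
To illustrate what a recurring audit can actually compute, the sketch below measures selection rates per group and compares them using the selection-rate ratio behind the US "four-fifths rule" (a ratio below 0.8 is a conventional red flag from US employment guidance, not a threshold set by the AI Act). The data shape is an assumption.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, was_selected) pairs from one screening period."""
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, passed in outcomes:
        totals[group] += 1
        selected[group] += int(passed)
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate relative to the most-selected group's rate."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

rates = selection_rates([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
flagged = {g: ratio < 0.8 for g, ratio in impact_ratios(rates).items()}
# flagged == {"group_a": False, "group_b": True} -> investigate, document, repeat.
```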

5. Conduct a fundamental rights impact assessment.

Article 27 requires deployers of high-risk AI in certain contexts to conduct a fundamental rights impact assessment before putting the system into use. Even where Article 27 does not strictly apply, the GDPR's Data Protection Impact Assessment (Article 35) covers substantially similar ground.

The assessment must evaluate the specific risks to fundamental rights — non-discrimination, privacy, dignity — for the specific population your system will affect. Job applicants. Real people.

6. Notify candidates.

Candidates must be informed that AI is being used in recruitment decisions that affect them. Not buried in paragraph 47 of your privacy policy. Clearly. Before or at the point the AI system is used.


The Article 4 foundation

There is a requirement that underpins all of the above, and most organisations are ignoring it entirely.

Article 4 of the AI Act requires that all staff involved in the operation and use of AI systems have a sufficient level of AI literacy. Not awareness. Literacy.

Your HR team cannot provide meaningful oversight of a recruitment AI system if they do not understand what the system is doing. They cannot spot bias in outputs if they do not understand how bias enters training data. They cannot evaluate vendor documentation if they do not know what a conformity assessment is.

AI literacy is not a nice-to-have. It is the foundation that every other obligation rests on. Without it, your human oversight is theatre, your bias audits are checkbox exercises, and your fundamental rights impact assessment is a document that no one in your organisation can actually use.

Article 4 has been enforceable since 2 February 2025. If your HR team has not received structured AI literacy training, you are already behind.

The penalty for non-compliance with high-risk AI obligations is up to €15 million or 3% of global annual turnover, whichever is higher. Your CV screener is a high-risk AI system. The law says so. The case law shows why. The deadline is fixed.

The only question is whether your organisation will be ready.


Sources

  1. EU AI Act — Regulation (EU) 2024/1689, Annex III Category 4(a), Articles 6, 14, 26, 27, published in the Official Journal of the European Union, 2024. eur-lex.europa.eu

  2. Reuters — "Amazon scraps secret AI recruiting tool that showed bias against women," Jeffrey Dastin, October 2018. reuters.com

  3. EEOC — "iTutorGroup to Pay $365,000 to Settle EEOC Age Discrimination Suit," August 2023. eeoc.gov

  4. Mobley v. Workday, Inc. — N.D. Cal. Case No. 3:23-cv-00770. Conditional class certification granted May 2025 for AI hiring discrimination claims. fisherphillips.com

  5. University of Washington — "AI tools show biases in resume screening based on race and gender," October 2024. washington.edu

  6. PNAS Nexus — Study testing GPT-4o, Gemini, Claude, and Llama on 361,000 fictitious resumes; all models showed systematic racial bias, May 2025. brookings.edu

  7. GDPR — Regulation (EU) 2016/679, Article 4(4) definition of profiling, Article 22 automated decision-making, Article 35 DPIA. eur-lex.europa.eu