TwinLadder Weekly

Issue #27 | March 2026


Editor's Note

I spent Tuesday morning at an HR technology showcase in Amsterdam. Three floors of the Beurs van Berlage, two hundred vendors, and a phrase I heard from eleven separate booths: "AI-powered hiring, simplified."

At one stand, a sales representative demonstrated a candidate screening tool to a group of HR managers from a Dutch financial services firm. The system ingested four hundred CVs, produced a ranked shortlist of twenty in under ninety seconds, and displayed confidence scores beside each name. The HR managers nodded. One asked how the scoring worked. The representative smiled and said: "Our algorithm analyses over 200 data points to find the best match." She asked which data points. He pivoted to pricing.

If you have been to one of these events, you know the feeling. You are impressed and uneasy at the same time. The demonstrations get faster every year. The dashboards get prettier. Your questions get more specific. And the answers get more evasive. What struck me in Amsterdam was not that people like you lack curiosity about the tools you deploy. You are asking the right questions. You are just not getting answers you can act on. And that gap -- between deploying an AI system and understanding what it does -- is precisely the gap the EU AI Act was written to close.


HR Is Ground Zero for Article 4

Why Your People Function Faces the Toughest AI Competence Challenge in the Organisation

Alex Blumentals, with legal analysis by Liga Paulina

Every corporate function now uses AI. Marketing uses it for content. Finance uses it for forecasting. Legal uses it for research. But if you run a people function, your department likely deploys more high-risk AI systems than any other in the organisation. How many can you name?

Your teams screen, rank, score, categorise, and predict human behaviour as their core business. They do it at hiring. They do it during employment. They do it at termination. At every stage, they evaluate individuals based on personal characteristics -- the textbook definition of profiling under EU law.

The numbers describe the scale of what you are managing. 82% of companies now use AI to review resumes. [cite:resumebuilder-ai-hiring] Enterprise AI adoption in recruitment hit 78% in 2025, representing 189% growth since 2022. [cite:herohunt-adoption] The global AI recruitment market is projected to reach USD 1.12 billion by 2033. You are not experimenting with AI. Your function is saturated with it.

And here is the figure that should keep you up at night: only 30% of HR professionals report having adequate training for the AI tools they deploy. [cite:hr-com-training-gap]

82% -- Companies using AI for resume screening
78% -- Enterprise AI recruitment adoption (2025), up 189% since 2022
30% -- HR professionals with adequate AI training

Eighty-two percent adoption. Thirty percent competence. That is not a gap. That is a chasm. And it sits in your department.

The regulatory architecture that targets you

I sat down with Liga to walk through the regulatory architecture piece by piece. What she showed me was sobering. The EU AI Act's Annex III lists eight categories of high-risk AI systems. Two of them target HR operations with surgical precision. [cite:annex-iii]

Category 3 -- Education and vocational training covers AI systems used for determining access to training, evaluating learning outcomes, assessing education levels, and monitoring behaviour during assessments. Think about your own technology stack: every corporate learning and development platform that uses AI sits within these provisions. Adaptive learning systems that adjust content based on learner performance. Competence assessments that determine certification levels. Proctoring software that monitors employee tests.

Category 4 -- Employment, workers management and access to self-employment is broader still. It covers the entire employment lifecycle: targeted job advertising, CV screening, candidate evaluation, promotion decisions, termination decisions, task allocation based on individual behaviour, performance monitoring, and behavioural evaluation.

Read those provisions against your HR technology stack and see how many survive:

HR AI Application | Annex III Category | High-Risk?
CV screening and ranking (HireVue, Harver, Eightfold) | Category 4(a) | Yes
Video interview scoring | Category 4(a) | Yes
Performance management analytics | Category 4(b) | Yes
Attrition prediction models | Category 4(b) | Yes
Shift scheduling based on individual metrics | Category 4(b) | Yes
AI-powered learning platforms | Category 3(b) and (c) | Yes
Employee monitoring and sentiment analysis | Category 4(b) | Yes
Workforce planning and headcount forecasting | Category 4(b) | Yes

Count the tools in your stack that appear on this list. Now count the ones your team can actually explain. The difference between those two numbers is your exposure.

The profiling trap

As Liga and I continued working through the Annex III provisions, she flagged something she calls the "profiling trap" -- a provision buried in Article 6(3) that closes every escape route the Act otherwise provides. [cite:art-6-3-profiling]

The AI Act offers derogations for Annex III systems that pose limited risk: systems performing narrow procedural tasks, or preparatory tasks for human decisions. If your compliance advisors have told you that your applicant tracking system merely assists recruiters and therefore escapes high-risk classification, they have latched onto these derogations. It is an understandable reading. It is also wrong.

Then comes Article 6(3)'s final sentence: "An AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons."

Profiling, as defined in GDPR Article 4(4), means any automated processing of personal data to evaluate personal aspects -- including analysing or predicting work performance, reliability, behaviour, or interests. "Every HR AI system I have reviewed profiles by definition," Liga said, tapping the regulation. "CV screening evaluates candidates based on personal data to predict work performance. That is profiling. Performance analytics processes personal data to evaluate work behaviour. That is profiling. The derogation does not apply. It was never going to apply."

Ask yourself: does your ATS process personal data to predict who will perform well in a role? Then it profiles. Does your performance management tool evaluate individuals based on behavioural data? Then it profiles. The derogation your vendor promised you does not exist.
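
For the technically minded, the rule compresses into a few lines of logic. Here is a minimal sketch of the Article 6(3) decision rule as Liga walked me through it -- an illustration, not legal advice. The system flags are determinations you would make with counsel, not with code:

```python
from dataclasses import dataclass

@dataclass
class HRSystem:
    name: str
    annex_iii_match: bool    # listed in Annex III (e.g. Category 3 or 4)?
    narrow_procedural: bool  # claims the Art. 6(3) narrow-task derogation?
    performs_profiling: bool # evaluates personal aspects per GDPR Art. 4(4)?

def is_high_risk(system: HRSystem) -> bool:
    """Sketch of the Article 6(3) decision rule. Not legal advice."""
    if not system.annex_iii_match:
        return False
    # The profiling override: Annex III plus profiling is ALWAYS
    # high-risk, regardless of any derogation the vendor claims.
    if system.performs_profiling:
        return True
    # Only a non-profiling system can rely on the narrow-task derogation.
    return not system.narrow_procedural

# The vendor pitch: "it only assists recruiters" (a narrow procedural task).
ats = HRSystem("CV screening", annex_iii_match=True,
               narrow_procedural=True, performs_profiling=True)
print(is_high_risk(ats))  # True -- profiling closes the escape route
```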

The full enforcement date for high-risk system obligations is 2 August 2026 -- five months from now. Fines reach EUR 15 million or 3% of worldwide annual turnover, whichever is higher.

The bias evidence you cannot afford to ignore

The competence question is not theoretical. Try this: ask three recruiters on your team to explain how your AI shortlists candidates. The answers will tell you more about your Article 4 exposure than any audit. Because the research on what these tools actually do when no one is watching is now overwhelming.

Study | Sample | Finding
University of Washington (Oct 2024) [cite:uw-resume-bias] | 554 real resumes, 3 AI models | LLMs favoured white-associated names 85% of the time; never favoured Black male names
PNAS Nexus (May 2025) [cite:pnas-intersectional-bias] | ~361,000 fictitious resumes, 4 leading models | All models systematically scored Black male candidates lower than white males with identical credentials
HireVue facial analysis (removed Jan 2021) [cite:hirevue-facial] | Company's own data | Nonverbal data contributed only ~0.25% predictive power; removed after FTC complaint
iTutorGroup/EEOC (Aug 2023) [cite:itutorgroup-eeoc] | 200+ rejected applicants | AI automatically rejected women 55+ and men 60+; discovered when applicant submitted identical applications with different birth dates
Mobley v. Workday (2023-2025) [cite:mobley-workday] | 80+ applications, potential millions in class | Applicant rejected every time; one rejection came 55 minutes after applying at 12:55 AM

These are not edge cases. They are structural. The University of Washington study tested mainstream AI models on real resumes and found they favoured white-associated names in 85% of comparisons. The PNAS Nexus study tested four leading commercial models -- GPT-4o, Gemini, Claude, Llama -- and found the same pattern across 361,000 fictitious resumes. This is not a single-vendor problem. It is baked into the technology layer your tools are built on.
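
The methodology behind these studies is simple enough that your team can run a small version of it: hold the CV constant, vary only the name, and compare the scores. The sketch below assumes a hypothetical score_resume function wrapping whatever screening tool or model you deploy; nothing here is drawn from any vendor's API:

```python
import statistics

# Hypothetical wrapper around your screening tool's scoring interface --
# replace with however your vendor or model exposes a candidate score.
def score_resume(resume_text: str) -> float:
    raise NotImplementedError("wire this to your screening tool")

def name_swap_audit(resume_template: str,
                    name_groups: dict[str, list[str]]) -> dict[str, float]:
    """Score identical CVs that differ only in the candidate's name.

    resume_template must contain a {name} placeholder; name_groups maps
    a demographic label to names associated with that group.
    """
    results = {}
    for group, names in name_groups.items():
        scores = [score_resume(resume_template.format(name=n)) for n in names]
        results[group] = statistics.mean(scores)
    return results

# Usage: any systematic score gap between groups on an otherwise identical
# CV is evidence of exactly the bias the UW and PNAS Nexus studies measured.
# audit = name_swap_audit(cv_text, {"group_a": [...], "group_b": [...]})
```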

Now consider who in your organisation would catch this. Your CHRO signed off on the AI screening platform. But who trained the recruiters to understand what it actually does? The board asked for AI adoption metrics. They did not ask whether the people using those tools can explain how they work. Leaders are buying AI. Teams are using AI. The gap between those two decisions is where Article 4 risk lives.

And the legal consequences are materialising. Mobley v. Workday was granted conditional certification as a nationwide class action in May 2025, potentially covering millions of job applicants over 40. [cite:mobley-workday] The EEOC submitted a supporting brief establishing that AI service providers -- not just employers -- can be directly liable as agents. The iTutorGroup settlement, the first-ever EEOC case involving AI hiring discrimination, cost $365,000 plus five years of mandatory compliance monitoring. [cite:itutorgroup-eeoc]

Every one of these cases shares one characteristic: the humans overseeing the AI systems did not understand what the systems were doing. The competence gap was not incidental to the harm. It was the mechanism. When was the last time a recruiter on your team manually reviewed a candidate the AI rejected?

Europe is not waiting -- and neither should you

While US enforcement builds through litigation, Europe is constructing something more systematic. Here is what the regulatory landscape looks like in the jurisdictions that are likely to move first.

The Netherlands operates the most advanced algorithm transparency regime in Europe. The Dutch Algorithm Register, operational since 2022, requires government agencies to publicly disclose algorithmic decision-making systems, including employment-related algorithms. [cite:dutch-algorithm-register] More than 300 algorithms are now registered. The Dutch Authority for Consumers and Markets has published market studies identifying inadequate AI training as a consumer protection risk in professional services. If you operate in the Netherlands or have Dutch employees: the expectation of algorithmic transparency is already the norm, not a future obligation.

Germany has the strongest co-determination framework for AI in employment. Under the Works Constitution Act (Betriebsverfassungsgesetz), Works Councils (Betriebsrat) hold mandatory co-determination rights on technical devices designed to monitor or evaluate employee behaviour or performance. [cite:german-betriebsrat] Every AI system that scores, ranks, or monitors employees requires Works Council approval. If a Works Council member asked you tomorrow how your screening algorithm weights experience versus education, could you answer? German Works Councils are already asking the questions that Article 4 will eventually require across the EU.

France's data protection authority, the CNIL, published guidance on AI in recruitment in 2024, requiring transparency about AI use, human oversight of automated decisions, and data protection impact assessments for all AI-assisted screening tools. [cite:cnil-ai-recruitment] The CNIL framework effectively treats AI literacy as a precondition for lawful recruitment AI deployment -- a position that aligns with Article 4's mandate.

The Nordic equality bodies have begun examining AI hiring tools through anti-discrimination frameworks. The Swedish Equality Ombudsman (DO) opened an inquiry in late 2025 into algorithmic bias in recruitment platforms used by public-sector employers. Finland became the first Member State with full AI Act enforcement powers in December 2025.

Jurisdiction | Mechanism | Status | What it means for you
Netherlands | Algorithm Register + ACM market study | Operational | Your algorithmic transparency obligations are already here
Germany | Works Council co-determination (BetrVG s.87) | Established law | Every HR AI system needs Betriebsrat sign-off
France | CNIL AI recruitment guidance | Published 2024 | You need a DPIA before deploying AI screening
Sweden | Equality Ombudsman inquiry into algorithmic bias | Opened late 2025 | Anti-discrimination lens on your recruitment AI
Finland | Full AI Act enforcement authority | December 2025 | First Member State ready to enforce
EU-wide | AI Act Annex III, Category 4 | Enforceable August 2026 | Full high-risk obligations for all HR AI

The enforcement net is not coming. It is here. The question is whether you will be ready when regulators move from frameworks to inspections -- or whether you will be the case study someone else reads about.

The competence question you must answer

You already know the gap exists. The question is whether you are measuring it -- or hoping no one asks.

Strip away the regulatory architecture and there is one question you should be able to answer right now:

Can your recruiter explain why the AI shortlisted candidate A but not candidate B?

Not "the algorithm decided." Not "the confidence score was higher." Can they explain -- to the rejected candidate, to a Works Council, to a data protection authority, to a tribunal -- what data the system weighed, what factors drove the ranking, why the score came out the way it did, and what limitations apply to that output?

If they cannot, your AI system has no meaningful human oversight. Your compliance documentation is decorative. And your organisation is exposed -- not just to fines under the AI Act, but to discrimination claims, GDPR violations, and the reputational cost of deploying technology you cannot explain.
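
What would a defensible answer require? As a thought experiment, here is the information a screening decision would need to carry for a recruiter to explain it. This is an illustrative structure of our own, not any vendor's schema -- but if your tool cannot populate fields like these, no amount of training will make its users able to explain it:

```python
from dataclasses import dataclass

@dataclass
class ScreeningExplanation:
    """The minimum a recruiter needs to answer: why was this candidate rejected?"""
    inputs_used: list[str]                  # what data the system weighed
    factor_contributions: dict[str, float]  # what drove the score, and by how much
    score: float
    threshold: float
    known_limitations: list[str]            # e.g. "not validated for career gaps"

    def is_explainable(self) -> bool:
        # "Proprietary algorithm" leaves every one of these fields empty.
        return bool(self.inputs_used and self.factor_contributions
                    and self.known_limitations)
```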

The European Commission's Article 4 Q&A frames AI literacy around the ability to "interpret AI system output in suitable ways." [cite:ec-article4-qa] For your teams, "suitable" means understanding the system well enough to catch the biased shortlist, to question the attrition prediction, to override the performance score when the data inputs are wrong.

That is not what most AI training programmes teach. Most teach people where to click.


The Competence Question

A recruitment manager at a mid-size professional services firm in Frankfurt receives a complaint. A candidate -- male, 52, experienced -- was rejected at the CV screening stage for a senior analyst role. He contacts the firm directly. He applied through the careers portal, which uses an AI-powered screening tool. He wants to know why he was rejected. Under GDPR Article 15(1)(h), read with Article 22, he has a right to meaningful information about the logic involved.

If you have been in her position, you know what comes next. She opens the screening tool's dashboard. The candidate's profile shows a match score of 34 out of 100. The recommended threshold was 60. The system recommended rejection. A junior recruiter had confirmed the recommendation and moved to the next batch.

She looks for an explanation. The dashboard shows weighted factors: "experience relevance," "skills alignment," "cultural fit prediction." No further detail. She contacts the vendor. The vendor's support team explains that the model analyses over 150 features extracted from the CV and produces a composite score. They cannot disclose the specific weighting. Proprietary algorithm.

She now faces a candidate who wants an explanation, a Works Council that has flagged the tool for review, and a data protection officer asking whether a DPIA was conducted before deployment. She attended the firm's Article 4 compliance training in October 2025. It was a two-hour session covering what AI is and how to write prompts for the firm's document tools. Nobody mentioned that the recruitment screening system was an AI system subject to high-risk classification.

The training programme gave her a certificate. It did not give her the competence to answer any of the questions now on her desk.

How confident are you that your team would fare differently?


What To Do

  1. Map every AI system in your HR technology stack against Annex III. Not just the obvious ones -- the ATS, the video interview platform. Include the workforce analytics dashboard, the learning management system, the scheduling algorithm, the employee engagement survey tool. If it processes personal data to evaluate, rank, score, or predict, it is almost certainly a high-risk AI system under the profiling clause. Most HR leaders we speak with have identified two or three systems. The real number is usually eight to twelve. Do the count this week. Write the list down. That list is the beginning of your compliance programme.

  2. Test your team's explanation capability today, not next quarter. Pick your most-used AI screening tool. Select a recent candidate who was rejected. Ask the recruiter who processed the rejection to explain -- in plain language, without the dashboard open -- why the system recommended rejection and what limitations apply to that recommendation. If they cannot, you have not discovered a training need. You have measured your Article 4 exposure. Document it. That gap is what a regulator will find.

  3. Demand technical transparency from your HR AI vendors -- and treat silence as a signal. Request the conformity assessment documentation, the bias audit methodology, the data governance practices, and the human oversight design for each high-risk system you deploy. Under the AI Act's deployer obligations (Articles 26-27), you are required to keep audit logs and conduct fundamental rights impact assessments (a minimal log-record sketch follows this list). You cannot meet those obligations if your vendor will not explain how the system works. If a vendor refuses transparency, ask yourself: would you trust a financial auditor who would not show their methodology? Then why are you trusting this vendor with employment decisions that affect people's livelihoods?

  4. Build AI competence into HR professional development as ongoing capability, not a compliance checkbox. Article 4 requires a "sufficient level" of literacy that is proportionate to the role and the risk. For your teams deploying high-risk AI systems across the employment lifecycle, "sufficient" is a high bar. It means your people understanding bias mechanisms, verification practices, and the limits of probabilistic scoring -- not just knowing where to click. Quarterly competence assessments, not annual certificate renewals, are the appropriate cadence. The difference between a team that can explain its AI and a team that merely operates it is the difference between compliance and exposure.
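
On point 3: the logging duty is concrete enough to sketch. The record below illustrates the kind of fields an Article 26 audit trail could capture for each screening decision. The schema is ours, not the Act's and not any vendor's -- the regulation prescribes the obligation, not the format:

```python
import json
from datetime import datetime, timezone

def log_screening_decision(path: str, *, candidate_id: str, system: str,
                           score: float, threshold: float,
                           recommendation: str, reviewer: str,
                           human_override: bool, override_reason: str = ""):
    """Append one screening decision to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,  # pseudonymised, per GDPR data minimisation
        "system": system,
        "score": score,
        "threshold": threshold,
        "recommendation": recommendation,
        "reviewer": reviewer,          # the human who confirmed or overrode
        "human_override": human_override,
        "override_reason": override_reason,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# A log in which human_override is never True is itself a finding:
# it documents that "human oversight" never actually happened.
```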


Quick Reads


One Question

Your AI screening tool rejected a candidate this morning. A data protection authority calls -- not your legal team, not your vendor, but the recruiter who confirmed the rejection -- and asks them to explain how the system scored the candidate and why. Can they? If you are not certain of the answer, you have just identified the most important training gap in your organisation.


TwinLadder Weekly | Issue #27 | March 2026

Helping professionals build AI capability through honest education.