The TwinLadder Standard
From Compliance to Competence
Article 4 of the EU AI Act requires AI competence. Article 14 specifies five capabilities your oversight staff must have. The TwinLadder Standard measures both — and shows you where the gaps are.
What does the EU AI Act require of your people?
The EU AI Act is Europe’s landmark regulation for artificial intelligence. Most of the Act focuses on high-risk AI systems — medical devices, recruitment tools, credit scoring. But Article 4 is different. It applies to everyone.
What Article 4 actually says
- Every organisation that deploys or provides AI systems must ensure that its staff have a sufficient level of AI literacy.
- AI literacy means understanding what AI can and cannot do, being aware of the risks, and knowing how to use AI tools responsibly.
- This requirement is proportionate — it takes into account the technical knowledge, experience, and role of each person. A software engineer and a marketing manager need different levels of understanding.
- It has been enforceable since 2 February 2025. Penalties: up to €15 million or 3% of global annual turnover, whichever is higher.
Who does it cover?
- Not just regulated industries — every European organisation that uses AI in any capacity.
- Not just IT departments — every person who interacts with AI tools in their work.
- Not just high-risk applications — all AI use, from drafting emails with ChatGPT to screening CVs with automated software.
- If your team uses ChatGPT, Copilot, AI-powered recruitment tools, contract review software, or any product with AI embedded — Article 4 applies to your organisation.
Why it matters now
Article 4 is already enforceable, but most organisations have no way to measure whether they comply. There is no official EU checklist, no ISO standard for AI literacy, and no agreed definition of “sufficient.” That is the gap the TwinLadder Standard fills.
What solutions exist — and what is missing
Dozens of AI frameworks exist. Most of them solve a different problem.
The governance layer is well served
ISO 42001 covers AI management systems. The NIST AI Risk Management Framework addresses risk identification and mitigation. EU AI Act compliance checklists help organisations map their obligations. These frameworks are valuable — they tell you whether you have the right policies, processes, and documentation.
The competence layer is not
- Governance tells you whether you have an AI policy. It does not tell you whether your people understand it.
- You can pass an ISO 42001 audit and still have a workforce that cannot explain what a hallucination is.
- You can have a perfect AI acceptable use policy and still fail Article 4 if no one has been trained to follow it.
- Article 4 specifically requires competence — literacy, understanding, capability. Not just documentation.
How TwinLadder is different
We measure competence, not just governance
Our seven pillars map directly to what Article 4 requires: deployment competence, training, evidence of capability — not just policies on paper.
A clear compliance floor
Score 52 or above and you have a defensible position. Below that, you have measurable gaps to close. No ambiguity.
Open methodology, proprietary platform
The standard is published under CC BY-SA 4.0 — free to use, adapt, and redistribute. Think TCP/IP: the protocol is open, the services built on it are commercial. Anyone can adopt the methodology; TwinLadder provides the best implementation.
Risk-calibrated and evidence-gated
A pharmaceutical company and a design studio are held to standards proportionate to their risk. And you prove compliance with evidence, not self-declarations.
Seven pillars of AI competence
Each pillar answers a specific question about your organisation’s readiness. Together, they cover the competencies required by Articles 4 and 14 of the EU AI Act.
Maturity Levels: Deployment Competence
Exploring
0–25
Staff have heard of AI but cannot articulate capabilities or risks. No formal awareness activities.
Developing
26–50
Leadership aware of AI obligations. Most staff have vague understanding of AI but cannot name specific risks.
Implementing
51–75
Organisation-wide AI briefings completed. Staff can identify AI systems they use and describe key risks.
Optimising
76–100
Continuous awareness programme. Staff proactively identify emerging AI risks. Context-specific understanding for all roles.
Maturity Levels: Policy & Data Protection
Exploring
0–25
No AI use policy exists. Data protection in AI contexts not addressed.
Developing
26–50
AI use policy drafted but not enforced. Informal guidance on acceptable use. GDPR acknowledged but not integrated.
Implementing
51–75
Active AI use policy with defined acceptable and prohibited uses. GDPR compliance integrated into AI governance. DPIAs conducted for high-risk tools.
Optimising
76–100
Comprehensive, regularly reviewed AI policy. Privacy-by-design principles embedded. Cross-regulatory compliance framework (AI Act + GDPR) fully operational.
Maturity Levels: Training
Exploring
0–25
No structured AI training. Staff learn informally or not at all.
Developing
26–50
Generic AI training available. Self-directed learning. No role differentiation or completion tracking.
Implementing
51–75
Role-specific training programme delivered to all AI-interacting staff including contractors. Completion tracked. Refresh cycles in place.
Optimising
76–100
Personalised learning paths. Competence verified through scenario-based assessments. Continuous development culture. Third-party literacy verified contractually.
Maturity Levels: Tools
Exploring
0–25
AI tools used ad hoc. No inventory of AI systems. Shadow AI prevalent.
Developing
26–50
Partial AI inventory. Some tools assessed. Human review optional. Access controls informal.
Implementing
51–75
Complete AI systems inventory with risk classification. Human oversight for consequential decisions. Verification protocols in place.
Optimising
76–100
Automated AI systems monitoring. Continuous verification. Tool governance integrated into procurement. Shadow AI effectively eliminated.
Maturity Levels: Evidence
Exploring
0–25
No documentation of AI competence efforts. No audit trail.
Developing
26–50
Some records exist. Informal documentation. Could not survive regulatory audit.
Implementing
51–75
Centralised training records and evidence portfolio. Needs assessment documented. Can demonstrate effort under audit.
Optimising
76–100
Comprehensive evidence framework. Automated compliance dashboards. Proportionality reasoning documented. Benchmark-ready data.
Maturity Levels: Governance
Exploring
0–25
No AI governance structure. No designated responsible person.
Developing
26–50
Informal responsibility. No dedicated oversight. Ad-hoc reviews when issues arise.
Implementing
51–75
Named AI governance owner. Periodic review cycle. Ethics considerations documented. Incident response procedures exist.
Optimising
76–100
Board-level AI oversight. Cross-functional governance committee. Continuous regulatory monitoring. Proactive risk anticipation.
Maturity Levels: Authority Delegation
Exploring
0–25
No decision inventory. AI makes decisions with no defined boundaries or oversight.
Developing
26–50
Some awareness of AI decision boundaries. Ad-hoc escalation. No formal delegation framework.
Implementing
51–75
Decision inventory exists. Authority boundaries documented per system. Escalation paths and human override capability in place.
Optimising
76–100
Full authority delegation framework. Continuous monitoring for authority creep. Clear accountability chains. Regular boundary audits.
Four maturity levels
Every organisation starts somewhere. The four levels describe a progression from no formal AI awareness to embedded, continuously improving AI competence. The compliance floor — the minimum Article 4 expects — sits at the boundary between Developing and Implementing.
Exploring
Score: 0–25
No formal AI awareness programme. AI tools adopted ad hoc by individuals. No usage policy exists. Staff cannot articulate what AI tools they use or the risks involved. High likelihood of non-compliance.
Developing
Score: 26–50
Some awareness training delivered. An AI acceptable use policy drafted but not yet consistently enforced. A tool inventory started but incomplete. Governance gaps remain. Working toward compliance but not yet there.
Implementing
Score: 51–75
Structured training programme in place, tailored to roles. AI policy enforced organisation-wide. Evidence of competence documented and auditable. This is the compliance floor — the minimum standard Article 4 expects.
Optimising
Score: 76–100
Continuous improvement embedded. External benchmarking against industry peers. AI governance integrated into business processes. Competence treated as a competitive advantage, not just a compliance obligation.
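The score bands above translate directly into a simple lookup. The following Python sketch is illustrative only — the function name and error handling are ours, not part of the published standard:

```python
def maturity_level(score: int) -> str:
    """Map a 0-100 score to its TwinLadder maturity band."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 25:
        return "Exploring"
    if score <= 50:
        return "Developing"
    if score <= 75:
        return "Implementing"
    return "Optimising"
```

Note that the compliance floor of 52 falls just inside the Implementing band: `maturity_level(52)` returns `"Implementing"`, while a score of 50 or below is still Developing at best.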
Article 4 sets the floor. Competitive advantage requires going further.
© TwinLadder 2026 · CC-BY-SA 4.0
Article 4 — Mapped to the Seven Pillars
Every operative phrase in Article 4 maps to one or more of the seven pillars. This interactive mapping shows exactly how the standard translates regulatory language into measurable competence.
| Article 4 Element | Deployment Competence | Policy & Data Protection | Training | Tools | Evidence | Governance | Authority Delegation |
|---|---|---|---|---|---|---|---|
| “Providers and deployers of AI systems” | P | S | | | | | |
| “shall take measures” | S | P | | | | | |
| “to ensure, to their best extent” | S | P | S | | | | |
| “a sufficient level of AI literacy” | P | S | | | | | |
| “of their staff and other persons dealing with the operation and use of AI systems on their behalf” | S | P | S | | | | |
| “taking into account their technical knowledge, experience, education and training” | S | P | | | | | |
| “the context in which the AI systems are to be used, and considering the persons or groups of persons on whom the AI systems are to be used” | P | S | S | S | | | |
The compliance floor: score 52
Based on a line-by-line mapping of Article 4 to our seven pillars, the minimum score for a defensible compliance position is approximately 52 — the transition point from Developing to Implementing. Scoring 52 means you can demonstrate that your organisation has taken reasonable measures. Scoring below it means you have identifiable, measurable gaps.
- All seven pillars must score above zero — a single zero-score pillar means a fundamental gap in compliance
- Policy & Data Protection and Training carry the highest compliance weight — these are where regulators will look first
- The floor is not a ceiling — organisations scoring 52 are compliant but fragile. A single staff change or new tool deployment could drop you below the floor
- Enforcement is already active (since February 2025) with penalties up to €15M or 3% of global turnover
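The floor rules above can be sketched as a simple check. This is a hypothetical helper, not part of the published methodology: the pillar names come from the mapping table, but the unweighted average is our simplifying assumption (the standard weights Policy & Data Protection and Training more heavily):

```python
PILLARS = [
    "Deployment Competence", "Policy & Data Protection", "Training",
    "Tools", "Evidence", "Governance", "Authority Delegation",
]

COMPLIANCE_FLOOR = 52  # the Developing -> Implementing transition point

def meets_floor(scores: dict[str, int]) -> bool:
    """Return True only if every pillar scores above zero and the
    overall score reaches the compliance floor."""
    if set(scores) != set(PILLARS):
        raise ValueError("scores must cover all seven pillars")
    if any(s <= 0 for s in scores.values()):
        return False  # a single zero-score pillar is a fundamental gap
    # Illustrative assumption: equal-weight mean across the seven pillars.
    overall = sum(scores.values()) / len(scores)
    return overall >= COMPLIANCE_FLOOR
```

Under this sketch, an organisation scoring 60 on six pillars but 0 on Evidence fails outright, even though its average would clear the floor — matching the rule that no pillar may score zero.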
How to use the standard
The TwinLadder Standard supports a complete journey from measuring where you are to proving where you need to be.
Assess
Take the AI-powered conversational assessment to get your baseline scores across all seven pillars. The free Quick Scan takes 15 minutes. The Executive Report provides a detailed gap analysis with prioritised recommendations.
Start assessment
Learn
Enrol in TwinLadder Academy courses mapped to your weakest pillars. Foundation, Leadership, and Mastery pathways cover everything from Article 4 basics to cross-functional AI governance.
Explore courses
Certify
Reassess to measure your progress. Build an evidence portfolio that documents training completed, policies adopted, and competence achieved. Work toward TwinLadder Certified status.
View pricing
Open Methodology
The TwinLadder Standard is published under Creative Commons Attribution-ShareAlike 4.0 International. The methodology is open — anyone can use, adapt, and redistribute it with attribution. The platform, assessment tools, and certification programme are proprietary. The standard is free. The implementation is ours.
