TwinLadder Weekly
Issue #32 | April 2026
Header Image: /images/newsletter-32-seventh-pillar.jpg
Credit: Photo by RODNAE Productions on Pexels (free license, no attribution required)
Why We Added a Seventh Pillar
Editor's Note
We built the TwinLadder Standard with six pillars: Awareness, Policy, Training, Tools, Evidence, Governance. It felt complete. The framework covered what organisations need to know, what they need to document, how they need to train, what they deploy, what they prove, and how they oversee it all.
For six months, that was enough.
Then I started noticing a pattern in every assessment conversation, every LinkedIn thread worth reading, every real-world incident that made the news. The pattern was a question — the same question, asked differently each time, but always arriving at the same gap.
The question was: who authorised the AI to make that decision?
Not "who deployed the system." Not "who trained the team." Not "who wrote the governance policy." Those questions all had answers in the original six pillars. The question that had no home was more specific and more dangerous: at the exact moment the AI system executed a consequential action — filtering a candidate, triaging an alert, scoring a risk, routing a claim — was it explicitly allowed to do that? Did someone define the boundary? Did anyone document what the system was and was not permitted to decide?
In most organisations, the honest answer was no. The system decided what it decided because nobody told it not to.
That is not governance. That is authority by default. And the EU AI Act does not distinguish between authority you delegated deliberately and authority you delegated by neglect. [cite:eu-ai-act-article4]
This newsletter explains why we added a seventh pillar to the TwinLadder Standard — and what it measures.
— Alex
The Gap That Would Not Close
By Alex Blumentals, with Liga Paulina and Edgars Rozentals
Three things happened in the span of two weeks that made the seventh pillar inevitable.
The LinkedIn Threads
Sol Rashidi published a post about the talent pipeline collapse — how organisations are cutting the entry-level roles that produce the people who eventually supervise AI systems. Her framing was precise: "We're not facing a talent shortage. We're engineering one." The comments section filled with senior governance practitioners agreeing, but nobody was asking the upstream question: who authorised the AI systems to absorb those roles in the first place?
Alexandra Car raised a different angle — the gap between governance frameworks on paper and governance behaviour in practice. Her point was that most organisations confuse documentation with delegation. They have policies that describe what should happen. They do not have mechanisms that enforce what must happen.
Catherine Gunnell put it most sharply: "AI authority rarely moves through formal approvals. It migrates quietly through defaults, thresholds, routing logic." [cite:gunnell-mandate] She was describing what I now call authority creep — AI systems gradually acquiring more decision-making scope than anyone explicitly granted them. Not through a board decision. Through a settings toggle, a threshold adjustment, a workflow that nobody reviewed after launch.
The Dutch Banking Story
Then the ING story broke: the bank announced it would cut 1,250 AML compliance jobs — 950 in the Netherlands alone — as part of an AI-driven automation programme. [cite:ing-job-cuts] ABN Amro had already announced 5,200 job cuts by 2028, including 35% of its AML compliance division. [cite:abn-amro-cuts]
The numbers are significant, but the question behind the numbers is what matters.
These are not administrative roles. These are the people who manually review transaction alerts, assess suspicious activity reports, and decide whether to escalate. The AI now handles the triage. It filters the alerts. It scores the risk. It routes the cases. The remaining humans review what the system surfaces.
I put this to Liga.
"Under Article 4, the deployer must ensure sufficient competence among everyone operating or overseeing AI systems," she said. [cite:eu-ai-act-article4-context] "But when the AI is making the triage decision — deciding which alerts to surface and which to suppress — the human reviewing the surfaced alerts is not overseeing the consequential decision. The consequential decision already happened upstream."
"So who authorised the AI to make triage decisions?"
Liga paused. "That is the question that has no home in the original six pillars."
She was right. Pillar 1 (Awareness) asks whether people understand AI risk. Pillar 2 (Policy) asks whether rules exist. Pillar 3 (Training) asks whether people are equipped. Pillar 4 (Tools) asks what systems are deployed. Pillar 5 (Evidence) asks what you can prove. Pillar 6 (Governance) asks who oversees the programme.
None of them ask: at this specific decision point, was the AI system authorised to act, and who defined the boundary?
The ING case is instructive for another reason. In 2018, ING paid a EUR 775 million settlement — a EUR 675 million fine plus EUR 100 million disgorgement — specifically because it did not fund and staff its AML programme properly. [cite:ing-fine-2018] The prosecution service found "insufficient attention paid to compliance risk management." Eight years later, the bank has expanded its AML team from 2,000 to 6,000 — and is now cutting 1,250 of them. The stated reason is AI automation.
A regulator could reasonably ask: is this genuine efficiency, or is the pendulum swinging back to "business over compliance" with an AI justification?
Edgars was direct. "Nobody at ING has published a document that says: 'The AI triage system is authorised to suppress alerts below this threshold, and here is the named human who bears accountability for that decision.' If that document exists, I have not seen it. If it does not exist, the AI defined its own scope. And ING is liable for whatever it decides."
What Pillar 7 Measures
The seventh pillar is called Authority Delegation and Decision Boundaries. It assesses whether an organisation has explicitly defined, documented, and monitored the boundaries of AI decision-making authority.
The pillar is scored through seven assessment questions that together cover eight dimensions. Here is what each dimension evaluates.
Decision Inventory Completeness
Have all AI-assisted or AI-automated decision points in the organisation been identified and catalogued? You cannot govern what you have not mapped. Most organisations know which AI tools they have deployed. Far fewer have mapped every point in every workflow where those tools make or influence decisions.
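To make this concrete, here is the kind of record a decision inventory might hold, sketched in Python. The schema and field names are ours, invented for illustration; the standard does not prescribe a format.

    from dataclasses import dataclass

    @dataclass
    class DecisionPoint:
        """One AI-assisted decision point in one workflow (illustrative)."""
        system: str            # the deployed tool
        workflow: str          # where in the process the decision occurs
        decision: str          # what the system actually decides or influences
        automation_level: str  # "assists", "recommends", or "decides"

    # An AML triage entry in the spirit of the ING case:
    aml_triage = DecisionPoint(
        system="transaction-monitoring-ai",
        workflow="AML alert handling",
        decision="suppress or surface each incoming alert",
        automation_level="decides",
    )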
Authority Boundary Documentation
For each AI decision point, are the boundaries of what the system is and is not authorised to do explicitly documented? This is the difference between "we deployed Workday Recruiting" and "Workday Recruiting is authorised to filter candidates by qualification match but is not authorised to exclude candidates based on predicted tenure, employment gaps, or inferred demographics."
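What might that documentation look like? A minimal sketch, continuing the hiring example; every entry and name here is hypothetical.

    # Hypothetical boundary record for the candidate-filtering example.
    authority_boundary = {
        "decision_point": "candidate filtering",
        "authorised_to": [
            "rank candidates by documented qualification match",
        ],
        "not_authorised_to": [
            "exclude candidates based on predicted tenure",
            "exclude candidates based on employment gaps",
            "exclude candidates based on inferred demographics",
        ],
        "boundary_owner": "head of talent acquisition",  # a named human
    }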
Escalation Path Definition
When an AI system encounters a situation outside its authority boundaries, is there a defined escalation path to a human decision-maker? Not an informal "ask your manager" — a documented procedure with named contacts and response-time SLAs. Most organisations have escalation paths for system failures. Far fewer have escalation paths for authority boundary violations.
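Written down as data rather than tribal knowledge, an escalation path might look like the sketch below. Contacts and SLAs are placeholders, not recommendations.

    # Placeholder escalation path for authority boundary violations.
    escalation_path = {
        "trigger": "decision request outside the authority boundary",
        "first_contact": "duty compliance officer",  # a named rota, not "a manager"
        "response_sla_minutes": 30,
        "fallback_contact": "head of financial crime operations",
        "fallback_sla_minutes": 120,
        "action_while_waiting": "hold the case; do not auto-decide",
    }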
Human Override Capability
Can a human override or reverse any AI-assisted decision within a reasonable timeframe? The EU AI Act emphasises meaningful human oversight. If the override mechanism is undocumented, untested, or practically impossible under time pressure, oversight is ceremonial.
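The test is whether the override is a real, logged code path. Here is a sketch of the shape it might take; the function and its fields are ours, not any vendor's API.

    def override_decision(decision_id, reviewer, reason, audit_log):
        """Reverse an AI-assisted decision and record who did it and why.

        Illustrative only: the point is that an override must be a tested,
        logged mechanism, not an ad-hoc database edit under time pressure.
        """
        audit_log.append({
            "decision_id": decision_id,
            "action": "human_override",
            "reviewer": reviewer,  # a named person, not "the team"
            "reason": reason,
        })
        # ...followed by actually reversing the decision's downstream effect.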
Stakeholder Awareness of Delegation
Do affected stakeholders — employees, customers, partners — know when they are interacting with an AI decision versus a human decision? Hidden delegation erodes trust and creates transparency risk under the regulation.
Delegation Risk Assessment
Has the organisation assessed the risks specific to each delegation of authority? Not tool-level risk (which lives in Pillar 4) but delegation-level risk: what happens when the AI makes this specific decision incorrectly, and can the human meaningfully intervene before harm occurs?
Authority Creep Monitoring
Is there monitoring for AI systems gradually taking on more decision-making scope than originally delegated? This is Catherine Gunnell's territory — authority that "migrates quietly through defaults, thresholds, routing logic." [cite:gunnell-mandate] Without active monitoring, scope creep is invisible until something breaks.
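In code, the simplest version of that monitoring is a diff between the scope someone delegated and the scope the production logs show. A sketch, assuming your logs tag each decision with a type:

    def detect_authority_creep(delegated_scope, observed_decisions):
        """Return decision types the system made but nobody delegated."""
        undelegated = set(observed_decisions) - set(delegated_scope)
        for decision_type in sorted(undelegated):
            print(f"Authority creep: '{decision_type}' was never delegated")
        return undelegated

    # Example: alert triage was authorised; case closure was not.
    detect_authority_creep(
        delegated_scope={"suppress_alert", "surface_alert"},
        observed_decisions={"suppress_alert", "surface_alert", "close_case"},
    )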
Accountability Chain
For every AI-assisted decision, can the organisation trace the accountability chain from the AI output through to the human who bears responsibility? If nobody owns the outcome of an AI decision, the decision exists in a responsibility vacuum. That is precisely what regulators audit for.
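Traceability can be as simple as a recorded chain from output to owner. The roles and names below are invented for illustration.

    # Illustrative chain from one AI output back to an accountable human.
    accountability_chain = [
        ("ai_output", "alert #88231 suppressed"),
        ("system", "transaction-monitoring-ai v2.3"),
        ("boundary_owner", "head of financial crime operations"),
        ("accountable_executive", "chief compliance officer"),
    ]
    for link, holder in accountability_chain:
        print(f"{link}: {holder}")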
How This Connects to the Three Governance Models
Last month's newsletter described three governance models. [cite:eu-ai-act-article4] Model 1 (Instrumentation) logs risk but does not act on it. Model 2 (Behavioural) depends on humans doing the right thing every time. Model 3 (Structural) builds constraints into the system itself.
Pillar 7 is how you measure which model your organisation actually operates in — and whether it is defensible.
An organisation that scores high on Pillar 7 has done the work of Model 3: it has defined authority boundaries, built escalation paths, documented accountability chains, and established override mechanisms that do not depend on human consistency.
An organisation that scores low has delegated authority without knowing it — which means it is operating in Model 1 at best. It logged the deployment. It trained the staff. But it never defined what the system was allowed to decide. And it cannot demonstrate, under Article 4, that competence is being exercised at the point where decisions actually happen.
The fines for Article 4 non-compliance are up to EUR 15 million or 3% of global turnover. [cite:eu-ai-act-penalties] The enforcement question will not be "did you have a governance policy?" It will be "at this decision point, was the system authorised to act, and who is accountable for the outcome?"
If you cannot answer that question for every AI-assisted decision in your organisation, Pillar 7 is where you start.
What Changes for Assessments
The TwinLadder Standard moves from v1.0 to v1.1. Here is what that means in practice.
Seven pillars, equal weight. The overall score is now computed across seven pillars (14.3% each) instead of six (16.7% each); a quick arithmetic sketch follows this list. This is a methodology evolution, not a regression.
Compliance floor unchanged. The compliance floor remains at 52 (the Developing to Implementing transition). The threshold was designed to be pillar-count-agnostic.
Existing assessments are unaffected. Any assessment completed under v1.0 retains its six-pillar score. The dashboard shows both views during the transition period.
New assessments include Pillar 7. Seven structured questions now cover the authority delegation dimension. The Alma conversational assessment also evaluates authority boundaries as part of the governance exploration phase.
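For the reweighting itself, the arithmetic is just an equal-weight mean. A quick sketch with invented pillar scores:

    # Overall score under v1.0 (six pillars) vs v1.1 (seven).
    # Pillar scores are invented for illustration.
    v10 = [61, 54, 48, 70, 44, 58]  # six pillars, ~16.7% weight each
    v11 = v10 + [35]                # plus a weak Pillar 7, ~14.3% weight each

    print(f"v1.0 overall: {sum(v10) / len(v10):.1f}")  # 55.8
    print(f"v1.1 overall: {sum(v11) / len(v11):.1f}")  # 52.9

In this invented example, the new pillar pulls the overall score from 55.8 down to 52.9, just above the compliance floor of 52. An organisation that has never documented its authority boundaries can expect Pillar 7 to drag it below that line.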
No competitor isolates delegation as a standalone assessment dimension. We analysed 18 competing AI maturity frameworks. Several address governance. Several address risk. None separate the delegation act itself — the moment authority is granted, the boundaries of that authority, and the monitoring of whether it remains appropriate — into a measurable pillar.
That gap is why we built it.
What To Do This Week
- Pick your highest-risk AI workflow. The one that touches customers, employees, or regulated decisions. Ask: who authorised this system's decision scope? Not "who approved the procurement." Who defined what the system is allowed to decide?
- Check for invisible delegation. For every AI tool in that workflow, ask: what does the system decide before a human sees anything? If you cannot answer, you have delegated authority you did not know you delegated.
- Name the accountable human. For each AI decision point, identify the person who bears responsibility if the system makes a harmful decision. If the answer is "the team" or "the vendor" or silence — that is a Pillar 7 gap.
- Test your escalation paths. Do they exist? Are they documented? Have they been tested under realistic conditions? If the escalation path is "the analyst calls their manager," that is a behavioural safeguard, not a structural one.
Start Measuring Authority Delegation
Our Quick Scan assessment now includes Pillar 7. In 15 minutes, you can see where your organisation stands across all seven pillars — including whether your AI authority boundaries are documented, monitored, and defensible.
For teams that want to go deeper, our Article 4 and Foundation courses now include authority delegation modules: structured exercises where you map decision boundaries, classify safeguards, and build the evidence trail that Article 4 compliance requires.
Take the free Quick Scan | Explore our courses
Quick Reads
- EU AI Act Article 4 full text — The one sentence that created a EUR 15 million obligation. Now with a seventh pillar to measure whether you are actually meeting it.
- Catherine Gunnell: AI Governance Starts with Mandate, Not Monitoring — "AI authority rarely moves through formal approvals. It migrates quietly through defaults, thresholds, routing logic." The sharpest articulation of authority creep I have found.
- Catherine Gunnell: AI Governance Becomes Real in the Inner Rings — Where accountability diffusion and visibility gaps break enterprise governance in practice. Essential context for why Pillar 7 exists.
- ING to cut 1,250 AML jobs — The case study that made the seventh pillar inevitable. AI replaces triage. Nobody documents who authorised the AI to triage.
- OWASP AI Security and Privacy Guide — The emerging benchmark for AI system security. Pairs well with Pillar 7's focus on authority boundaries and override capability.
- TwinLadder Standard v1.1 — Seven pillars. Four maturity levels. The only open-source competence assessment framework that isolates authority delegation as a standalone dimension.
One Question
Take the AI system your organisation relies on most — the one that runs every day, that your team trusts, that has become part of the workflow.
Now answer: does a document exist, anywhere in your organisation, that states exactly what that system is authorised to decide and where its authority ends?
If the answer is yes — you are ahead of most. Pillar 7 will help you prove it.
If the answer is no — you have not delegated authority. Authority delegated itself. And when the regulator asks who is accountable for the outcome, the silence will be louder than any policy document you can produce.
TwinLadder Weekly | Issue #32 | April 2026
Helping European professionals build AI competence through honest education.
