More major errors among intensive AI users without judgment calibration
+48% performance with AI — but −17% without it after 18 months
Only 18% of HR departments have AI governance training, despite 45% of organizations deploying AI
Seven pillars in the Twin Ladder Standard — the open benchmark for AI competence excellence
Source: BCG/HBR Study, March 2026, N=1,488 · Bedard, Kropp, Hsu et al.
The threat that doesn’t appear on any dashboard
The employees making the most errors are not the ones avoiding AI. They are the ones using it most intensively — without the judgment calibration to match. Cognitive dependency doesn’t appear in monthly reports. It appears during a crisis.
The Competence Paradox
Performance rises 48% with AI — but drops 17% without it after 18 months. Employees hired after AI arrived never learned to work without it.
Invisible Error Accumulation
Output looks good. Targets are met. But errors accumulate silently behind the numbers. No one is measuring this.
Governance Gap
45% of organizations have deployed AI. Only 18% have governance training. The gap between adoption and accountability is widening every quarter.
No Standard of Excellence
Organizations know they need to build AI competence but have no shared yardstick for what excellence looks like. Leadership, governance, training, and technical depth all move at different speeds — without a standard, none of them compound.
Measuring what matters — people and organizations
Psychometric assessment of individuals, combined with organizational measurement against the Twin Ladder Standard — an open benchmark for AI competence excellence.
Individual AI Competence Assessment
Psychometric measurement of how well individuals calibrate trust in AI output — the AI Power Test and situational-judgment instruments. Validated across professional, leadership, university, and educator tracks. Produces individual reports people actually learn from.
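To make "calibrating trust in AI output" concrete, here is a minimal sketch of one common way such calibration can be quantified: splitting a respondent's accept/reject decisions on AI outputs into over-reliance (accepting wrong outputs) and under-reliance (rejecting correct ones). This is an illustrative example only — the function, field names, and metrics are hypothetical and do not describe TwinLadder's actual scoring model.

```python
# Illustrative sketch only: NOT TwinLadder's scoring method.
# Each item is (ai_was_correct, user_accepted).

def calibration_profile(items):
    """Summarize trust calibration from accept/reject decisions.

    over_reliance  = share of wrong AI outputs the user accepted
    under_reliance = share of correct AI outputs the user rejected
    A well-calibrated respondent keeps both rates low.
    """
    wrong = [accepted for ok, accepted in items if not ok]
    right = [accepted for ok, accepted in items if ok]
    over = sum(wrong) / len(wrong) if wrong else 0.0
    under = sum(not a for a in right) / len(right) if right else 0.0
    return {"over_reliance": over, "under_reliance": under}

# Hypothetical respondent: accepts 3 of 4 correct outputs
# and 1 of 2 wrong ones.
profile = calibration_profile([
    (True, True), (True, True), (True, True), (True, False),
    (False, True), (False, False),
])
print(profile)  # {'over_reliance': 0.5, 'under_reliance': 0.25}
```

Real instruments add item difficulty, confidence ratings, and norm-referenced scoring on top of raw rates, but the two-sided error structure above is the core idea behind judgment-calibration measurement.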
Organizational Assessment — Twin Ladder Standard
Seven-pillar organizational assessment mapping your AI competence across awareness, policy, training, tools, evidence, governance, and authority delegation. Scored against the open Twin Ladder Standard so you know where you sit on the path to excellence.
Benchmarks, Roadmaps & Local Normative Data
Sector and regional benchmarks built from the Indonesia baseline study and the wider TwinLadder dataset. Translate assessment findings into a prioritized competence roadmap — with cultural and market context only a local scientist can provide.
Compliance Platform
Evidence, records, and audit trail for AI governance — one source of truth across HR, Legal, Compliance, and Engineering.
“The organizations that will lead the next decade are not the ones adopting AI fastest. They are the ones building AI-competent people.”
A regional team — grounded in local science, connected to global rigour
Our Southeast Asia chapter combines TwinLadder’s global assessment platform with psychometric expertise and organizational practice from inside the region.
Ferdian Satriawan
Psychometrician and assessment scientist leading TwinLadder’s expansion across Indonesia and Southeast Asia.
- SEA Lead — TwinLadder: Building the Southeast Asian chapter of the AI Competence Platform
- Psychometrician & Assessment Scientist: M.Si. Psychometrics · UIN Jakarta · Trained under Prof. Jahja Umar
- Principal Investigator — AI Judgment Baseline: First structured psychometric study of AI judgment quality among Indonesian professionals
- Head of Assessment & Development: Mitra Automobile · 5 years leading organizational assessment systems
Our team brings a capability that does not yet exist in the Southeast Asian market: psychometric measurement of human judgment quality in AI-augmented environments — designed for regional organizations, not adapted from a foreign template.
We are not an AI tools training shop, and we are not digital transformation consultants. We measure — rigorously and credibly — whether the humans in your organization can still think clearly when the AI is wrong, unavailable, or confidently mistaken.
The work connects TwinLadder’s global assessment science to the organizational, cultural, and market realities of ASEAN — building normative data that makes the measurement locally credible and globally comparable.
Southeast Asia — Starting from Indonesia
Indonesia is the largest market in Southeast Asia and the entry point for TwinLadder’s APAC presence. The work being built here — normative data, institutional partnerships, and awareness infrastructure — creates the foundation that makes every subsequent market faster to enter.
TwinLadder — The Global Standard for AI Competence
TwinLadder’s AI Power Test is the world’s most rigorous behavioral assessment of AI judgment — validated across professional, leadership, university, and educator tracks in multiple countries.
As TwinLadder’s SEA Representative, I bring this scientific infrastructure to Indonesian and Southeast Asian organizations — grounded in local data, delivered in local context.
- AI Power Test — behavioral SJT across 4 tracks (professional, leadership, university, educator)
- Twin Ladder Standard v1 — open, CC BY-SA 4.0, seven-pillar organizational competence benchmark
- Individual & team scoring — from calibration gaps to band classification
- APAC normative dataset — Indonesia anchor, expanding across Southeast Asia
- Assessment science grounded in CFA/SEM methodology and field-validated psychometrics
