The Authority Gap
TwinLadder Research Series | May 2026
Why the two serious conversations about AI in organisations don't talk to each other — and what falls through the gap.
A composite case
A senior procurement manager in a European industrial group signs off a vendor contract on a Tuesday afternoon. The supplier evaluation memo is fluent, well-structured, and arrives with an executive summary that flags the right risks in the right order. She approves it.
She does not know that the memo was assembled by an AI tool a junior analyst has been quietly using for six months. The analyst does not know that the model's training data weights one supplier category in ways the procurement framework explicitly excludes. The vendor relationship manager who would have caught the framing problem two years ago no longer reviews these memos at this stage — the workflow was redesigned around the AI tool by a transformation office that does not own the operational risk.
Six weeks later, when the contract is challenged in an internal audit, three questions surface in sequence:
Who actually decided to award this contract? The signature is the manager's. The reasoning is the model's. The framing is the analyst's. The org chart says the manager. The operating reality says something else, and the audit trail cannot distinguish them.
Who was authorised to detect that the model's framing had drifted away from the procurement standard? Nobody, formally. The standard was version-controlled. The model's behaviour was not. Article 14(4)(b) of the EU AI Act requires the human overseer to "remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system" — but the regulation does not name a role to do that work, and the organisation has not assigned one.
Who has the standing to halt the workflow now that the problem is visible? The manager bears the formal accountability but did not choose the tool. The analyst chose the tool but does not bear the accountability. The transformation office that sponsored the deployment does not own the operational risk. The system runs because no one has the authority to stop it.
The case above is illustrative — a stylised composite of the pattern the framework in this piece was built to surface. It is not reportage; it is what the framework predicts you should find in any organisation where AI deployment has outrun role redesign. Whether the pattern is rare or common in your organisation is the question this piece is meant to help you answer. (See the methodology note at the end of this article for the audit format used.)
We are calling the underlying pattern the Authority Gap: the divergence between the de jure decision rights an organisation has documented (org chart, RACI, delegated authorities matrix, Article 14 oversight assignments) and the de facto decision pathways that AI tools have created. Stated precisely: decisions are being made by parties the organisation has not formally authorised to make them, and the authorised parties cannot reconstruct how the decisions were reached.
What's already in the literature — and what isn't
The de jure / de facto distinction is not new. The starting point for any honest treatment of this question is Philippe Aghion and Jean Tirole's 1997 paper Formal and Real Authority in Organizations (Journal of Political Economy, 105(1)). Their core move is to separate formal authority — "the right to decide" — from real authority — "the effective control over decisions." The two diverge whenever the formal principal lacks the information or expertise to evaluate the agent's recommendation. In their model, the principal then either rubber-stamps the agent's preferred decision (delegation by default) or invests in independent information acquisition (which is costly and often not done). The principal who delegates by default does so without the formal organisation noticing.
Aghion and Tirole's framework was built for a world in which the locus of information moved slowly — through promotions, through specialist hiring, through the accumulation of operational experience. The principal could plan around it. The Authority Gap is what happens to that framework under a new condition: AI tools that move information access faster than the organisation can re-allocate formal authority, in a regulatory environment that has begun to attach personal liability to formal authority specifically. Aghion and Tirole's principal in 1997 could afford to delegate by default because nothing was about to ask her to certify, in writing, that she retained meaningful oversight. Article 14(4) of the EU AI Act, SR 11-7's documentation requirements, and the model-risk literatures across multiple sectors now ask exactly that. The principal cannot delegate by default and also satisfy the documentation requirement.
The supporting literature behind this point — Galbraith (1973) on organisations as information-processing structures, Mintzberg (1979) on tacit authority in the operating core, Holmström (1979) on monitoring under information asymmetry, Weick and Sutcliffe on stop-work authority and deference to expertise — describes the same dynamic from different angles. None of it engages the AI-plus-personal-liability condition because none of it could.
The contribution this piece tries to make is therefore narrower than the diagnosis Aghion and Tirole already supplied. It is the four-moments decomposition below — pre-visibility, drift, aggregation, reversion — mapped onto the lifecycle of an AI-shaped decision. The four moments are the points at which formal and real authority can come apart specifically under AI, and each can be observed independently of the others.
The two conversations
There are two serious conversations happening about AI in organisations right now. They both make sense within their own frame. They are largely conducted in different rooms by different people with different vocabularies, and the integration between them is mostly aspirational.
It is fair to object that the integration is not entirely aspirational. ISO/IEC 42001 explicitly requires AI governance to interlock with operational risk management. McKinsey's Rewired names governance as one of six dimensions. The OECD AI Principles bridge productivity and accountability. The objection is correct on the page and weak in the room. In most large enterprises, the AI deployment programme reports to the COO or CTO; the AI Act conformity assessment reports to the General Counsel or DPO; their first joint meeting on a specific deployment typically happens after design lock. The dichotomy below is rhetorical, but it tracks an organisational separation we have observed repeatedly in the audit work cited at the end of this piece.
The productivity conversation
The first conversation is about how AI rewires work itself. Three findings carry the argument.
Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, working through Stanford's Digital Economy Lab in 2025, used ADP payroll data covering millions of US workers to track employment by age cohort and AI exposure. Early-career workers (ages 22–25) in the most AI-exposed occupations show a roughly 16% relative decline in employment since the late-2022 takeoff of generative AI; senior employment in the same occupations has grown over the same period. Software developers in that age band were down close to 20% from late-2022 peaks by July 2025. The shift is asymmetric, already measurable in payroll data, and not happening evenly across the org chart.
McKinsey's March 2025 State of AI: How Organizations Are Rewiring to Capture Value tested 25 organisational attributes against EBIT impact from gen AI. The single strongest predictor was whether the organisation had fundamentally redesigned at least some of its workflows. Only 21% of respondents had — a McKinsey survey finding, not an independent peer-reviewed result, and worth treating with the appropriate caution about self-reported data, but still the single most-cited datapoint in the productivity literature.
Manuel Hoffmann and colleagues at Harvard Business School (Working Paper 25-021, 2024) studied a weekly panel of 187,489 GitHub developers. Access to GitHub Copilot raised time spent on core coding by 12.4% and cut time spent on project-management activities by 24.9%. The study measures time allocation, not who decided what — that distinction matters for the argument that follows. The largest gains accrued to less-experienced developers. The authors are explicit that the assumption that AI lets firms cut junior hiring is a "profound strategic error," because Copilot's complementarity is what accelerates skill development.
Ethan Mollick's Leadership, Lab, and Crowd framework documents a recurring pattern across large organisations: the most advanced AI users are usually frontline staff working around an internal review process that takes months to convene. Sangeet Paul Choudary's February 2026 HBR pieces argue that AI's value comes from re-architecting how work is coordinated across newly decomposed units, not from accelerating tasks within the existing architecture. Jonathan Rosenthal and Neal Zuckerman (HBR, April 2026) propose that consensus-based decision-making needs to give way to explicit decision-ownership structures.
Pull the threads together: AI's payoff is structural, the structure is changing whether or not management notices, and the existing decision-making architecture is incompatible with the speed AI now imposes. What the conversation gets right is that AI is a structural intervention, not a productivity overlay. What it tends to skip is the legal, governance, and accountability layer underneath the structure being rearranged.
The compliance conversation
The second conversation is happening in EU AI Act briefings, ISO 42001 audits, and NIST AI RMF working groups. It is rigorous, careful, and almost entirely oriented around the existing org chart.
EU AI Act Article 4 came into application on 2 February 2025. It requires providers and deployers of AI systems to ensure a sufficient level of AI literacy for staff and other persons dealing with the operation and use of AI systems on their behalf. Supervisory authorities began applying it from August 2025; full enforcement from August 2026. Article 4 has no standalone penalty, but the absence of demonstrable literacy is treated as an aggravating factor in any other AI Act breach.
Article 14 layers human oversight obligations onto high-risk systems. Four sub-paragraphs of Article 14(4) carry the operational weight: (a) the overseer must understand the system's capacities and limitations; (b) they must remain aware of the tendency to over-rely on the output (automation bias); (c) they must correctly interpret the output; (d) they must be able to disregard, override, or reverse it. Article 9 imposes risk management. Article 50 adds transparency duties.
Outside the EU, the same logic appears under different statutes — each addressing different aspects of AI accountability rather than a single unified frame. ISO/IEC 42001 sets out a management-system standard for AI governance. NIST AI RMF provides the US-aligned govern/map/measure/manage frame. SR 11-7 governs model risk management for US banks and is the longest-standing of these regimes (2011), specifically designed to surface what banks call "shadow models" — a precursor to the Authority Gap problem in the financial sector. DORA imposes operational resilience and ICT third-party risk requirements on EU financial-sector firms; it is not an AI-specific regime, but its third-party risk provisions reach AI vendor relationships. Sectoral regulators — the FDA, EMA, MHRA on clinical AI; the FCA on financial AI — add jurisdiction-specific overlays. The Authority Gap is not a parochial EU problem, even though the EU AI Act is currently the most legible expression of it.
What the conversation gets right is that AI deployment creates real, named, enforceable obligations sitting on identifiable roles. What it tends to skip is that those obligations are assigned on the assumption that the org chart beneath them is static. Article 4 asks whether the right people are trained. Article 14(4) asks whether the assigned overseer can override the system. Neither asks whether the person assigned is still the person who can actually see what the system is doing.
Where the streams fail to meet
The two conversations operate on incompatible models of the organisation. The productivity stream treats the org chart as a variable to be redesigned. The compliance stream treats it as the surface on which obligations get assigned. Both are right within their own frame. The problem is that the org chart is the variable in the first frame and the constant in the second, and most organisations are running both frames simultaneously without reconciling them.
What's actually happening in the data is that the location of coordination work is changing, and neither stream is set up to see it. The Hoffmann et al. study shows developer time on traditional project-management activity falling 24.9% under Copilot. The intuitive reading is that coordination work is disappearing or being absorbed upward. The more accurate reading, once you look at what frontline AI users are actually doing with the freed time, is that coordination work has been redefined and relocated. The work that used to mean coordinating people now increasingly means coordinating agents — structuring prompts, orchestrating tool handoffs, designing the boundary between what the human does and what the model does, debugging the workflow when the model's output drifts. That work has to sit close to the production task, because only the person doing the task knows what the coordination needs to enable. So coordination is not disappearing or moving up the chart; it is being redefined and re-localised to wherever agentic work happens, which is the front line. Meanwhile, the Brynjolfsson, Chandar, and Chen data show entry-level work migrating out of the organisation entirely as the headcount that used to populate the bottom of the pyramid shrinks. The work is being relocated across the org chart, and some of it is being relocated off the org chart. The legal accountability has not moved.
A reader might reasonably push back: this is not new. Authority has always been ambiguously allocated; informal delegation is how organisations actually function; subsidiarity is a good thing in EU governance. Aghion and Tirole made exactly this point about real versus formal authority three decades ago. The objection is correct as far as it goes. What it misses is the set of conditions under which the Authority Gap moves from a healthy organisational dynamic to a regulatory and operational problem. Implicit delegation becomes a problem when:
- The implicitly delegated decision crosses a regulated threshold (Article 14 oversight, model risk management under SR 11-7, fundamental rights impact assessment).
- The decision pathway cannot be reconstructed in audit, so accountability cannot be assigned after the fact.
- The implicit delegate has no escalation path matched to the decisions they are now making — failures accumulate quietly rather than triggering visible alerts. (Leveson's distinction between fail-silent and fail-loud system behaviours, applied to organisational rather than technical systems.)
When all three apply, you do not have flexible decision-making. You have an authority gap.
That gap — between de jure decision rights and de facto decision pathways under AI — is the live tension. Neither stream is pricing it.
Authority Delegation, in four moments
Authority Delegation, as it usually appears in governance frameworks, asks one question: who is authorised to approve this AI workflow? The question works when the org chart is stable. It becomes inadequate the moment AI changes who has the relevant competence to make the decision the approval gate exists to manage.
A more useful version of the question maps to the four moments in the lifecycle of an AI-shaped decision: before the system is deployed, during its operation, across the roles it touches, and after it begins to fail. The four authority types below are the four points at which de jure and de facto decision rights can diverge, and each can be diagnosed independently. The framework owes its underlying distinction to Aghion and Tirole; what is added is the lifecycle decomposition under AI specifically.
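To make the decomposition concrete before the prose sections that follow, here is a minimal sketch of the four moments and a single observed divergence modelled as data. It is illustrative only: the type names, the fields, and the example values are ours, invented for this piece rather than drawn from any standard or framework.

```python
from dataclasses import dataclass
from enum import Enum

class AuthorityMoment(Enum):
    """The four points at which de jure and de facto decision rights can diverge."""
    PRE_VISIBILITY = "before deployment"  # a gate justified by an information asymmetry AI has collapsed
    DRIFT = "during operation"            # competence to detect drift has moved; formal authority has not
    AGGREGATION = "across roles"          # one role now makes decisions that used to need several sign-offs
    REVERSION = "after failure"           # nobody has standing to halt the workflow

@dataclass
class AuthorityFinding:
    """One observed divergence between documented and operative decision rights."""
    moment: AuthorityMoment
    decision: str        # the decision actually being made
    de_jure_owner: str   # who the org chart or delegated authorities matrix names
    de_facto_actor: str  # who is observed making or shaping the decision
    evidence: str        # approval record, overseer log, workflow trace, etc.

# Hypothetical finding, drawn from the composite case that opens the piece.
finding = AuthorityFinding(
    moment=AuthorityMoment.DRIFT,
    decision="Supplier category weighting in evaluation memos",
    de_jure_owner="Senior procurement manager",
    de_facto_actor="Junior analyst's AI drafting tool",
    evidence="No overseer review logged against the last 30 memo generations",
)
```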
1. Pre-visibility authority — before
Authority structures evolved partly as information-bottleneck management. Managers approved things because they had cross-functional visibility their reports didn't. The approval gate existed because the reports could not see what the manager could see — a Galbraithian point about information processing made operational.
When an AI system surfaces synthesis or generates output that gives a frontline operator the cross-functional view a manager used to monopolise, the gate's original justification collapses. The operator has the visibility; the manager retains the accountability and the legal exposure.
The question is not "should we keep the gate?" but: which approval gates exist because of information asymmetry, and which exist because of liability allocation? AI collapses the first kind. The second kind has to be redesigned, not removed. Treating the two as identical is how organisations end up with audit trails that no longer match the decisions actually being made.
We use pre-visibility authority — there is no settled term — to name the residue of the first kind: gates that persist after the information asymmetry that justified them has been collapsed by AI.
2. Drift authority — during
AI workflows operate inside a validity envelope at deployment. Conditions change. Models drift. Inputs shift outside the distribution the system was tested against. Article 14(4)(c) requires the overseer to "correctly interpret the high-risk AI system's output" — but interpretation requires being in front of the output, repeatedly, over time.
Who is authorised — and competent — to detect the drift, halt the workflow, and re-validate it? Is that the same person who approved the original deployment, or has the technical competence to recognise drift moved elsewhere in the organisation while the formal authority hasn't?
The pattern we see repeatedly is that the person who signed off the deployment is a senior leader whose competence lies in the business case, while the person who can actually see drift in operation is a technical analyst three levels below them. The formal authority sits where it did. The competence to exercise it has moved.
3. Aggregation authority — across
When an AI tool enables a single role to coordinate work that previously required handoffs across three or four roles, the authority to make the in-flight decisions of that work has effectively been delegated to that role. The Hoffmann et al. data are consistent with this dynamic: a 24.9% reduction in developer project-management activity is what you would expect when the coordination work itself has been redefined — from coordinating people on a roadmap to coordinating an AI agent's contribution to the developer's own task — and re-localised to the role doing the production work. The study itself measures time allocation, not decision rights, so the inference is directional rather than established. But the directional point is enough: a frontline role now makes coordination decisions in real time that used to be staged through formal sign-offs. Whether the delegated authorities matrix has been updated to reflect that is a separate question, and the answer in most organisations is no.
The question is whether anyone formally delegated that authority. In most cases, no. The workflow allowed it. The tool permitted it. The delegated authorities matrix did not catch up. The work is being done anyway, and the formal accountability is sitting where it always sat — somewhere else.
Whether Article 14(4) reaches aggregation authority is currently undetermined in supervisory practice. Organisations should not assume either reading is settled. The risk-prudent posture is to surface aggregated decision rights in advance and document them, regardless of how the supervisory question lands.
4. Reversion authority — after
When an AI workflow needs to be paused, rolled back, or replaced, who has the standing to make that call against the productivity loss?
This is not a technical question. It is political, and it parallels the stop-work authority literature in safety-critical industries, where the most reliable organisations are the ones that grant stop-work standing to roles that would not otherwise have it.
AI deployments in most organisations are sponsored by a function that doesn't bear the operational risk — a transformation office, a CTO's purview, a chief innovation officer. The line manager whose team is using the tool every day was not the buyer. The compliance lead who would catch a violation does not have the authority to halt a system the COO signed off on. So when the system needs to be paused — and they all do, eventually — reversion is nobody's job.
The fix is unglamorous: name, in writing, before deployment, the individual with explicit reversion authority and the conditions under which they can use it without seeking new approval. If you cannot name that person now, you do not have a governance posture. You have an audit waiting to happen.
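What that written designation can look like in practice is sketched below: a minimal, hypothetical record and gate check, with names and fields invented for illustration rather than taken from any standard or regulation.

```python
from dataclasses import dataclass

@dataclass
class ReversionAuthority:
    """Who may halt or roll back an AI workflow, under what conditions, without new approval."""
    system_id: str
    named_individual: str          # a person, not a committee or a function
    trigger_conditions: list[str]  # conditions under which they may act unilaterally
    notification_path: list[str]   # who is informed after the fact

def gate_check(reversion: ReversionAuthority | None) -> list[str]:
    """Return blocking issues; an empty list means the deployment gate can pass."""
    if reversion is None:
        return ["No reversion authority designated before deployment."]
    issues = []
    if not reversion.named_individual:
        issues.append("Reversion authority names no individual.")
    if not reversion.trigger_conditions:
        issues.append("No conditions defined for acting without new approval.")
    return issues

# Hypothetical designation for the composite case.
designation = ReversionAuthority(
    system_id="procurement-memo-assistant",
    named_individual="Head of Procurement Operations",
    trigger_conditions=["Output drifts from the procurement standard", "Internal audit challenge raised"],
    notification_path=["Transformation office", "General Counsel"],
)
print(gate_check(designation))  # [] -> the gate can pass
```

The check is deliberately boring. The governance value is in refusing to pass the gate until the record exists.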
A counter-case: when the Authority Gap doesn't open
The framework is more useful if we engage with a case where it does not apply. The strongest such case is Ingka Group's deployment of the Billie chatbot at IKEA from 2021 onwards.
Billie has handled approximately 47% of customer enquiries — about 3.2 million interactions resolved without a human agent — according to Ingka Group's own reporting (ingka.com newsroom; covered in Retail Gazette and RTÉ, June 2023). The conventional move would have been a headcount reduction matched to the displaced workload. Ingka did not do this. Instead, the company reskilled 8,500 call-centre workers into remote interior design advisors. Ingka reports €1.3 billion in revenue from the remote design channel in FY2022, representing 3.3% of total Ingka sales, with a stated target of growing that share to 10% over the following years.
The Authority Gap did not open at Ingka — not because Ingka used different AI, but because the workflow redesign and the role redesign were done together, in the same governance act, with the same people accountable for both. The transformation was not sponsored by a function that did not bear the operational risk; it was sponsored by the operator. Decision rights over the redesigned workflow were re-allocated explicitly: the reskilled workers had defined authority to conduct full design consultations, and the chatbot had defined authority to handle specified transactional queries. The boundary was named.
Ingka is an unusually favourable case — a vertically integrated retailer with a pre-existing interior design service it could expand into. Most enterprises cannot redirect displaced labour into a billion-euro revenue stream that was conveniently already on the shelf. The lesson is therefore not "every AI deployment can do what Ingka did." The lesson is narrower and more useful: the Authority Gap is avoidable when role redesign happens in the same governance act as workflow redesign. Where they are split — typically because the productivity conversation runs in one room and the compliance conversation runs in another — the gap opens by default.
A different kind of audit
The audit most organisations are running right now is one of two things. An AI maturity assessment (productivity stream) — which platforms, which workflows, which use cases. Or a compliance gap assessment (governance stream) — which articles, which controls, which documentation.
What is missing is an audit of the organisation itself, on the working assumption that AI is already changing it. Each authority type has a diagnostic question and a specific operating-model artefact that has to change:
- Pre-visibility — Diagnostic: Walk five recent approvals. Did the approver add information the requester did not have, or absorb liability the requester could not? Artefact to change: the AI deployment intake form. Every gate has to be classified by basis (information vs liability) and the information-only gates moved or removed.
- Drift — Diagnostic: Pull the last 30 days of overseer logs against deployment logs. Has the assigned overseer been in front of the system's outputs? Artefact to change: the Article 14 oversight assignment, which has to be re-attached to the role with operational visibility, with named escalation if visibility moves.
- Aggregation — Diagnostic: List the decisions the AI tool now makes that previously required two or more sign-offs. Are any documented in the delegated authorities matrix? Artefact to change: the delegated authorities matrix itself, which has to incorporate AI-mediated decisions explicitly rather than treating them as a property of the role using the tool.
- Reversion — Diagnostic: Name the individual with reversion authority for each high-risk AI system in production, and the conditions under which they can act without seeking new approval. Artefact to change: the change-management gate, which has to require explicit reversion-authority designation before deployment approval.
This is what a competence audit looks like when it takes seriously that AI is restructuring the organisation in real time. It is not a training plan. It is not a compliance scorecard. It is a snapshot of where authority lives now versus where the org chart says it lives — and a deliberate decision about which version to bring forward.
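As an illustration of how mechanical the drift diagnostic can be, the sketch below compares an output log against an oversight log over a 30-day window. The log schema (records carrying a date and an actor) is an assumption made for the example, not a format any particular tool produces; the other three diagnostics follow the same shape, comparing a documented assignment against an operational trace.

```python
from datetime import date, timedelta

def drift_diagnostic(output_log: list[dict], oversight_log: list[dict],
                     assigned_overseer: str, window_days: int = 30) -> dict:
    """Has the assigned overseer actually been in front of the system's outputs?

    Both logs are assumed to be lists of records with 'date' (datetime.date) and
    'actor' keys; the schema is illustrative, not taken from any particular tool.
    """
    cutoff = date.today() - timedelta(days=window_days)
    outputs = [r for r in output_log if r["date"] >= cutoff]
    reviews = [r for r in oversight_log
               if r["date"] >= cutoff and r["actor"] == assigned_overseer]
    reviewed_days = {r["date"] for r in reviews}
    unreviewed = [r for r in outputs if r["date"] not in reviewed_days]
    return {
        "outputs_in_window": len(outputs),
        "overseer_reviews_in_window": len(reviews),
        "outputs_on_days_with_no_review": len(unreviewed),
        "oversight_is_nominal_only": len(outputs) > 0 and len(reviews) == 0,
    }
```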
Closing: the unit of governance is no longer the role
The intellectual claim of this piece is narrow and load-bearing: the unit of AI governance is no longer the role.
Roles assume a stable assignment of decision rights to a position on the org chart. AI does not respect that assignment. Authority over AI-shaped decisions is increasingly a flow — by which we mean a path of decision rights that is determined by the workflow at runtime rather than assigned to a position at design time. Modelling authority as a flow requires governance frameworks to specify constraints on authority transfers — who may receive authority, under what conditions, with what audit trail — rather than fixed assignments to positions.
This is not a metaphor for organisational uncertainty. It is a specific design requirement. RACI, the delegated authorities matrix, and Article 14 oversight assignments all assume that decision rights can be drawn as a static map. Under AI, they cannot. The map has to become a graph with conditional edges, and the governance instruments have to operate on the graph.
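Here is a minimal sketch of what a graph with conditional edges can mean in practice. Everything in it, from the role names to the condition and the resolution rule, is hypothetical; the one point it carries is that the holder of a decision right is resolved from workflow state at runtime, with the reason recorded, rather than read off a static assignment.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AuthorityEdge:
    """A conditional transfer of decision rights, rather than a fixed assignment to a position."""
    decision: str
    from_role: str
    to_role: str
    condition: Callable[[dict], bool]  # evaluated against the workflow state at runtime
    audit_note: str                    # why the transfer is permitted, recorded for reconstruction

@dataclass
class AuthorityGraph:
    edges: list[AuthorityEdge] = field(default_factory=list)

    def holder(self, decision: str, default_role: str, state: dict) -> tuple[str, str]:
        """Resolve who holds a decision right given the current workflow state."""
        for e in self.edges:
            if e.decision == decision and e.from_role == default_role and e.condition(state):
                return e.to_role, e.audit_note
        return default_role, "static assignment (no transfer condition met)"

# Hypothetical edge: drift authority follows operational visibility when oversight has lapsed.
graph = AuthorityGraph(edges=[
    AuthorityEdge(
        decision="halt-and-revalidate",
        from_role="procurement-manager",
        to_role="technical-analyst",
        condition=lambda s: s.get("days_since_overseer_review", 0) > 30,
        audit_note="Assigned overseer has had no operational visibility for over 30 days.",
    ),
])
print(graph.holder("halt-and-revalidate", "procurement-manager", {"days_since_overseer_review": 45}))
# -> ('technical-analyst', 'Assigned overseer has had no operational visibility for over 30 days.')
```

One way to reconcile this with existing instruments is to treat RACI and the delegated authorities matrix as point-in-time views generated from the graph, rather than as the authoritative record.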
The productivity stream is right that organisations that don't restructure won't capture AI's value. The compliance stream is right that organisations that restructure without governance are accumulating liability faster than they're capturing value. Neither stream gives you a way to model authority as a flow. The next generation of AI accountability frameworks will not be improvements on RACI. They will be something else, because the org chart is no longer the unit they need to govern.
If your continuity plan currently reads "the platform will tell us what to do," and your governance plan reads "the org chart will tell us who's accountable," you have a gap large enough that someone is going to fall through it.
Glossary — for the non-governance reader
The piece uses several governance and regulatory terms that mean specific things in the rooms where compliance officers, lawyers, and risk managers work. Brief definitions, in the order they appear:
- RACI. A responsibility-assignment matrix used in project and process management. Each task or decision gets four roles: Responsible (who does the work), Accountable (the single person who signs off and carries the consequence), Consulted (whose input is sought), Informed (who is told the outcome). Built as a grid of tasks × people, with one letter per cell. RACI assumes decision rights can be drawn as a static map. The piece argues that under AI, they cannot — which is what is meant in the closing line that "the next generation of AI accountability frameworks will not be improvements on RACI."
- Delegated authorities matrix. A formal document that lists, for each category of decision in an organisation (procurement above a threshold, hiring decisions, financial commitments, system deployments), which named role has the authority to make it. Often legally binding. Tends to be reviewed annually; tends not to anticipate AI tools that absorb decisions across categories.
- Article 4 (EU AI Act). Requires providers and deployers of AI systems to ensure a "sufficient level of AI literacy" for staff and others operating AI on the organisation's behalf. In application since 2 February 2025; full enforcement August 2026. Has no standalone penalty but is treated as an aggravating factor in any other AI Act breach.
- Article 14 (EU AI Act). Imposes human oversight obligations on high-risk AI systems. Article 14(4)(a)–(d) specify what the assigned overseer must be able to do: understand capacities and limitations, remain aware of automation bias, correctly interpret the output, and override or reverse it.
- High-risk AI system. Defined in Article 6 and Annex III of the EU AI Act — systems used in employment, education, critical infrastructure, law enforcement, justice administration, biometric categorisation, and several other named domains. Triggers the heaviest set of obligations under the Act.
- ISO/IEC 42001. International management-system standard for AI governance, structured like ISO 27001 (information security) or ISO 9001 (quality). Voluntary but increasingly required by procurement and audit functions.
- NIST AI RMF. US National Institute of Standards and Technology's AI Risk Management Framework. Voluntary; structured as four functions — Govern, Map, Measure, Manage. The US-aligned counterpart to ISO 42001.
- SR 11-7. US Federal Reserve and OCC guidance from 2011 on model risk management for banks. Long-standing regime; the precursor literature on "shadow models" (banks running models the formal model-risk inventory did not capture) prefigures the Authority Gap problem in financial services.
- DORA. Digital Operational Resilience Act (EU), applicable from January 2025. Imposes ICT operational resilience and third-party risk management requirements on EU financial-sector firms. Not AI-specific, but its third-party provisions apply to AI vendors.
- Conformity assessment. Under the EU AI Act, the documented process by which a high-risk AI system is shown to meet the Act's requirements before being placed on the market or put into service. The audit artefact most likely to surface an Authority Gap during regulatory review.
- FRIA — Fundamental Rights Impact Assessment. Required under Article 27 of the EU AI Act for certain deployers of high-risk AI systems. A documented assessment of the system's impact on fundamental rights, including specific human-oversight arrangements.
Methodology note
The composite case opening this piece is illustrative, not reportage. It is constructed from organisational patterns observed across our advisory work with European mid-cap teams in 2025–26, in which we run a structured audit format we call the Competence Audit Sprint. The framework in this piece is the product of that work. Where the article uses phrases like "the pattern we see repeatedly," we are referring to that audit pipeline; we are not claiming statistical generalisability. The framework's value is diagnostic — it is built to surface the Authority Gap where it exists in a specific organisation, not to assert how prevalent the Gap is across organisations in aggregate.
Sources
- Aghion, P., & Tirole, J. (1997). Formal and Real Authority in Organizations. Journal of Political Economy, 105(1), 1–29.
- Brynjolfsson, E., Chandar, B., & Chen, R. (2025). Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence. Stanford Digital Economy Lab. digitaleconomy.stanford.edu
- Choudary, S. P. (February 2026). Why New Technologies Don't Transform Incumbents. Harvard Business Review. hbr.org
- Choudary, S. P. (February 2026). AI's Big Payoff Is Coordination, Not Automation. Harvard Business Review. hbr.org
- European Commission. (May 2025). AI Literacy – Questions & Answers (Article 4 guidance). digital-strategy.ec.europa.eu
- EU AI Act, Regulation (EU) 2024/1689 — Articles 4, 9, 14, and 50. Article 14(4)(a)–(d) on understanding capacities and limitations, automation bias, output interpretation, and override authority. eur-lex.europa.eu
- Galbraith, J. R. (1973). Designing Complex Organizations. Addison-Wesley.
- Hoffmann, M., et al. (2024). Generative AI and the Nature of Work. Harvard Business School Working Paper 25-021. (Panel of 187,489 GitHub developers; Copilot raised core coding time by 12.4% and lowered project-management time by 24.9%.) hbs.edu
- Holmström, B. (1979). Moral Hazard and Observability. Bell Journal of Economics, 10(1), 74–91. (The foundational paper on monitoring under information asymmetry.)
- Ingka Group. (2023). AI and Remote Selling bring IKEA design expertise to the many. Newsroom. ingka.com — the primary source for the 47% / 8,500 / €1.3B / 3.3% figures cited.
- ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system.
- Leveson, N. (2012). Engineering a Safer World: Systems Thinking Applied to Safety. MIT Press. (Source for the fail-silent vs fail-loud distinction adapted in this piece.)
- McKinsey & Company. (March 2025). The State of AI: How Organizations Are Rewiring to Capture Value. (Workflow redesign was the strongest of 25 attributes correlated with EBIT impact in McKinsey's survey; only 21% of respondents had redesigned. Self-reported survey data.) mckinsey.com
- Mintzberg, H. (1979). The Structuring of Organizations: A Synthesis of the Research. Prentice-Hall.
- Mollick, E. (22 May 2025). Making AI Work: Leadership, Lab, and Crowd. One Useful Thing. oneusefulthing.org/p/making-ai-work-leadership-lab-and
- NIST AI RMF 1.0 (2023) — AI Risk Management Framework.
- Rosenthal, J., & Zuckerman, N. (April 2026). Decision-Making by Consensus Doesn't Work in the AI Era. Harvard Business Review. hbr.org
- U.S. Federal Reserve & OCC. (2011). SR 11-7: Guidance on Model Risk Management. (Long-standing US banking regime; the model-risk literature's treatment of "shadow models" prefigures the Authority Gap problem.)
- Weick, K. E., & Sutcliffe, K. M. (2015). Managing the Unexpected: Sustained Performance in a Complex World. Wiley (3rd ed.). (The reference text on high-reliability organisations and the deference-to-expertise principle that grounds stop-work authority.)

