
TwinLadder Research Briefs

Aghion & Tirole (1997) — Formal and Real Authority in Organizations

May 3, 2026 · Source summary

A reference summary of the canonical paper that separates the formal right to decide from real control over decisions. The starting point for any honest analysis of the Authority Gap, and the framework the EU AI Act has now overtaken. What Aghion and Tirole proved in 1997, and where their model breaks under AI.


TwinLadder Research Brief · Source Summary · May 2026

Companion reference to The Authority Gap.


Why this paper matters

Almost nothing written about AI governance in 2024–2026 has noticed that its central problem was formalised in 1997. Philippe Aghion and Jean Tirole's Formal and Real Authority in Organizations, published in the Journal of Political Economy in February 1997, is the canonical paper on what happens when the person with the legal right to decide is not the person who actually controls the decision. It is the paper any serious treatment of the Authority Gap has to start from — and the paper our current generation of AI compliance frameworks have overtaken.

This brief summarises what the paper actually argues, in the language of the original, and identifies where its model needs extending under the conditions AI now imposes.


What the paper proves

Aghion and Tirole separate two concepts of authority that are usually conflated:

  • Formal authority: "the right to decide." The legally documented power to make a particular class of decision. In an organisation, this is what the org chart, the delegated authorities matrix, and (post-2024) Article 14 oversight assignments encode.
  • Real authority: "the effective control over decisions." The capacity to determine what gets decided, regardless of who signs off. Real authority flows to whoever has the relevant information.

The paper's central proposition is that formal and real authority diverge whenever the formal principal lacks the information or expertise to evaluate the agent's recommendation. In Aghion and Tirole's model, the principal then faces a binary choice:

  1. Rubber-stamp the agent's preferred decision — formal authority is exercised but real authority has been delegated by default. The formal organisation does not notice.
  2. Invest in independent information acquisition — the principal pays a cost (time, expertise, monitoring infrastructure) to retain real authority.

The paper's contribution is showing, formally, that under most plausible conditions the principal will choose option 1. Independent monitoring is expensive; the agent has every incentive to make the recommendation easy to approve; and the principal who chooses option 2 too aggressively destroys the agent's incentive to bring decisions forward in the first place.
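The cost comparison behind option 1 versus option 2 can be sketched numerically. The sketch below is our stylised illustration, not Aghion and Tirole's actual model (which works with project payoffs, a congruence parameter, and endogenous information-acquisition effort); the parameter names `monitor_cost`, `p_divergent`, and `loss_if_divergent` are assumptions introduced here for exposition.

```python
# Stylised sketch, in the spirit of Aghion & Tirole (1997), of a principal
# choosing between rubber-stamping and independent review. All numbers are
# illustrative assumptions, not values from the paper.

def principal_choice(monitor_cost, p_divergent, loss_if_divergent):
    """Return the option a cost-minimising principal picks for one decision.

    Option 1 (rubber-stamp): expected loss = p_divergent * loss_if_divergent,
        i.e. the chance the agent's preferred decision diverges from the
        principal's interest, times the loss when it does.
    Option 2 (monitor): pay monitor_cost and catch any divergent decision.
    """
    expected_loss_rubber_stamp = p_divergent * loss_if_divergent
    if monitor_cost < expected_loss_rubber_stamp:
        return "monitor"
    return "rubber-stamp"

# Cheap monitoring relative to the stakes: the principal retains real authority.
print(principal_choice(monitor_cost=2.0, p_divergent=0.3, loss_if_divergent=10.0))
# As per-decision review gets expensive (overload), she defaults to approval
# and real authority shifts to the agent.
print(principal_choice(monitor_cost=5.0, p_divergent=0.3, loss_if_divergent=10.0))
```

Note what this one-shot comparison omits: in the paper's full model, monitoring too aggressively also destroys the agent's incentive to bring decisions forward at all, which pushes the equilibrium even further toward option 1 than a pure cost comparison suggests.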

The model produces a list of conditions under which a subordinate's real authority increases inside a formally integrated structure: overload of the principal, lenient rules, urgency of the decision, the agent's reputation, weak performance measurement, and the multiplicity of superiors. Each condition has a direct analogue in the AI-deployment situation, which is why the paper is the right starting point for the Authority Gap.


What Aghion and Tirole did not model

The 1997 model was built for a world in which the locus of information moved slowly. Information accumulated in organisations through promotions, specialist hiring, and the slow build-up of operational experience — all processes the principal could plan around. The principal who delegated by default could afford to do so because the cost of being wrong was internal and reversible: the worst case was a poor decision the organisation could course-correct.

Two conditions Aghion and Tirole did not model are now load-bearing:

  • The locus of information moves at the speed of tool deployment, not promotion. When a frontline operator deploys an LLM-based research tool, they acquire — overnight — the cross-functional view that historically took five years and a senior title to assemble. The principal cannot plan around this; she did not authorise it and may not know it has happened.
  • Personal liability has been attached to formal authority specifically. Article 14(4) of the EU AI Act, SR 11-7 model risk management documentation, ISO/IEC 42001 traceability requirements, and a growing body of sectoral regulation (FDA on clinical AI, FCA on financial AI) now require the formal principal to certify in writing that she retained meaningful oversight. Rubber-stamping by default — the equilibrium Aghion and Tirole's model produced — is now itself a regulatory failure mode.

These two changes together break the equilibrium. The principal can no longer afford to delegate by default and cannot afford to monitor exhaustively. The model produces no third option because in 1997 there was no third option to model.
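The broken equilibrium can be put in stylised notation (ours, not the paper's): let $c$ be the principal's cost of substantive review per decision, $n$ the number of AI-shaped decisions she formally owns, $K$ her review capacity, and $\Lambda$ her expected personal liability from certifying oversight she did not substantively exercise. Under the 1997 conditions, $\Lambda \approx 0$, so delegating by default was cheap. Post-2024:

```latex
\underbrace{n \cdot c > K}_{\text{exhaustive monitoring infeasible}}
\qquad \text{and} \qquad
\underbrace{\Lambda > 0}_{\text{default delegation now sanctionable}}
```

When both inequalities hold, neither of the model's two options is affordable, which is the missing third option the text describes.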

This is the structural condition the Authority Gap names. It is not a failure of Aghion and Tirole's analysis; it is the limit of its applicability, reached approximately twenty-eight years after publication.


The conditions list, applied to AI

Aghion and Tirole's conditions for increased subordinate real authority — overload, lenient rules, urgency, reputation, performance measurement, multiplicity of superiors — read uncomfortably like a list of properties of an organisation in the middle of an AI deployment programme.

  • Overload of the principal. Senior managers in 2026 are typically overseeing more AI deployments than they can substantively review. The result is approval by exception rather than by inspection.
  • Lenient rules. Most enterprise AI policies are still general — "use AI responsibly," "avoid sensitive data" — rather than specific to particular workflows. Lenient rules increase agent latitude, exactly as the model predicts.
  • Urgency of decision. AI deployments are governed by competitive pressure to ship, which compresses the time window for principal review.
  • Reputation. Once an agent has shown that an AI-assisted output is faster and "looks right," the principal's psychological cost of disagreement rises. Aghion and Tirole call this reputation; in AI deployments, it is also called automation bias.
  • Performance measurement. AI workflow outputs are often evaluated on speed and volume. The principal has no easy measure of correctness, so agent latitude expands.
  • Multiplicity of superiors. When a tool is sponsored by a transformation office, owned by IT, deployed into a business unit, and overseen by compliance, the principal who could in principle revoke real authority is split across four functions. The model predicts what we observe: real authority remains with the agent.

Aghion and Tirole did not write about AI. The fit of their conditions to AI deployment is an indication that the underlying structural dynamics they identified are still operating — only faster, and with personal liability attached to the formal-authority side of the equation.


How this brief connects to the Authority Gap

The Authority Gap research piece builds a four-moments diagnostic framework — pre-visibility, drift, aggregation, reversion — for surfacing the divergence between formal and real authority over AI-shaped decisions. The framework's underlying distinction is Aghion and Tirole's. What the Authority Gap adds is the lifecycle decomposition: the four points at which formal and real authority can diverge during the operational life of an AI system, each diagnosable independently.

A reader who wants to think carefully about AI governance should read Aghion and Tirole's paper before they read another book on RACI, three-lines-of-defence, or the EU AI Act. The conceptual ground was laid in 1997. The compliance frameworks of 2025–26 are catching up.


Citation

Aghion, P., & Tirole, J. (1997). Formal and Real Authority in Organizations. Journal of Political Economy, 105(1), 1–29. doi.org/10.1086/262063

A widely circulated PDF copy is hosted by Harvard's DASH repository at dash.harvard.edu.


TwinLadder Research Briefs are short reference summaries of the foundational sources cited in our research pieces. They are not commentary; they are background reading. Companion to the Authority Gap launch series, May 2026.