
TwinLadder Research Briefs

Weick & Sutcliffe — Stop-Work Authority and Deference to Expertise

May 3, 2026 | source summary

A reference summary of Karl Weick and Kathleen Sutcliffe's work on high-reliability organisations, the deference-to-expertise principle, and stop-work authority in safety-critical industries. Why the safety-engineering literature on aviation, nuclear power, and pharmaceutical manufacturing is the source most relevant to the reversion-authority problem in AI governance.


TwinLadder Research Brief · Source Summary · May 2026

Companion reference to The Authority Gap.


Why this work matters

The AI governance frameworks of 2025–26 — RACI, three-lines-of-defence, the EU AI Act's human-oversight regime — assume that decisions about authority are made once, at design time, and then encoded into roles. They handle the operational state of an AI system poorly because they were built for stable systems, not for systems whose behaviour can drift away from their validity envelope while running.

There is, however, a body of literature that has been thinking about exactly this problem for forty years. It is the safety-engineering literature on high-reliability organisations — organisations that operate in conditions where failure is catastrophic, complexity is high, and information is uncertain. Aviation, nuclear power, naval flight operations, pharmaceutical manufacturing, emergency medicine. Karl Weick and Kathleen Sutcliffe synthesised this literature most influentially in Managing the Unexpected (first edition 2001, second 2007, third 2015) and a body of associated journal articles.

The connection to AI governance is direct, and almost completely absent from current AI compliance discourse. AI deployments share the structural properties of safety-critical systems — high consequence, drift-prone behaviour, distributed decision-making, time pressure on intervention — and the principles Weick and Sutcliffe identified for governing them apply with very few modifications.

This brief sets out the most relevant of those principles, with particular attention to the deference-to-expertise principle that grounds stop-work authority and that the Authority Gap research piece draws on for its treatment of reversion authority.


What high-reliability organisations are

A high-reliability organisation (HRO) is an organisation that produces fewer-than-expected accidents in conditions that should produce many. The canonical examples are US Navy aircraft carriers, the air-traffic control system, nuclear power plants, and certain hospital emergency departments. Empirical study of these organisations — much of it conducted by the Berkeley HRO Project under Karlene Roberts in the 1980s and 1990s, then synthesised by Weick and Sutcliffe — identified a pattern of practices that distinguished HROs from comparable organisations whose accident rates were higher.

Weick and Sutcliffe articulated five principles that characterise mindful operation in high-reliability conditions. Three are about anticipation (preoccupation with failure, reluctance to simplify, sensitivity to operations). Two are about containment (commitment to resilience, deference to expertise).

The fifth principle — deference to expertise — is the one that grounds stop-work authority and the one most directly relevant to AI governance under the Authority Gap framing.


The deference-to-expertise principle

The principle is straightforward to state and counter-intuitive to operate.

Decisions migrate, in real time, to the person with the most relevant expertise to make them — regardless of where that person sits in the formal hierarchy.

In a high-reliability organisation, when a situation arises in which the formal authority for a decision lies higher in the hierarchy than the relevant expertise, the formal authority defers. The pilot defers to the engineer who can see the warning indicator. The plant supervisor defers to the operator who has fifteen years of experience with the specific reactor cooling pattern that just appeared. The senior surgeon defers to the anaesthetist who has just spotted something on the monitor.

The crucial feature of this principle is that it is not informal. It is structurally encoded. The HRO has spent training time, planning time, and cultural-formation time making it possible for expertise to be recognised and authority to migrate quickly without status conflict, and then to migrate back when the local expertise problem has been resolved.

Weick and Sutcliffe's central claim is that organisations that do not practise deference to expertise produce more accidents than those that do, holding constant the underlying riskiness of the operation. This is not a normative argument. It is the empirical pattern.


Stop-work authority

Stop-work authority is the operational instantiation of the deference-to-expertise principle in industrial and clinical settings. It is the formally documented right of a designated role — often a relatively junior one — to halt a process when they observe a condition that warrants halting, regardless of the productivity consequence and regardless of whether they have the authority to restart the process.

Examples in the literature:

  • Aviation. The captain's authority to refuse a flight, the dispatcher's authority to ground a flight, and the maintenance engineer's authority to refuse to release an aircraft. Each is structurally protected from being overridden by commercial pressure.
  • Nuclear power. The operator's authority to manually scram a reactor, exercisable without prior approval and without the operator having to demonstrate post-hoc that the scram was necessary. The structure deliberately accepts false-positive shutdowns as the cost of avoiding false-negative meltdowns.
  • Pharmaceutical manufacturing. The QA representative's authority to halt a production line. Codified in cGMP regulation; enforceable against operational management.
  • Surgical and emergency medicine. The anaesthetist's authority to halt a procedure, the senior nurse's authority to call a stop, exercised under formal hospital protocols.

The structural feature shared across these cases is that stop-work authority is named, documented, pre-authorised, and protected from being overridden by the operational management whose throughput will be affected by the stop. Without each of those features, stop-work authority does not function in practice; the role designated to exercise it cannot exercise it under pressure.
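
Those four features are concrete enough to be written down and checked. The sketch below is a minimal illustration in Python of what such a record might look like; the class, the field names, and the is_effective check are our own assumptions for illustration, not anything specified by Weick and Sutcliffe or by the regulatory regimes above.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass(frozen=True)
    class StopWorkAuthority:
        """Hypothetical record of a single stop-work authority."""
        holder: str                     # named: a specific individual or role-holder, not a committee
        documented_in: str              # documented: where the authority is written down (SOP, protocol, charter)
        pre_authorised: bool            # pre-authorised: the holder may halt without seeking approval first
        protected_from_override: bool   # protected: operational management cannot overrule a halt
        last_exercised: Optional[date] = None  # useful for spotting authority that has become nominal

    def is_effective(authority: StopWorkAuthority) -> bool:
        """All four features must hold; missing any one, the authority does not function under pressure."""
        return (
            bool(authority.holder)
            and bool(authority.documented_in)
            and authority.pre_authorised
            and authority.protected_from_override
        )

    # Example: a pharmaceutical QA representative's line-stop authority.
    line_qa = StopWorkAuthority(
        holder="QA representative, filling line 3",
        documented_in="Site SOP 7.4 (line stoppage and batch release)",
        pre_authorised=True,
        protected_from_override=True,
    )
    assert is_effective(line_qa)

The point of the sketch is only that each feature is a yes-or-no property of a specific, named record, which is what makes the presence or absence of a functioning stop-work authority auditable.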


Why this applies to AI

The AI governance literature has been slow to recognise that AI deployments produce conditions structurally similar to those Weick and Sutcliffe describe.

  • High consequence. AI decisions in HR, finance, healthcare, and critical infrastructure can produce significant harm — to individuals, to firms, to fundamental rights — at scale.
  • Drift-prone behaviour. Models trained on historical data degrade as conditions shift. The drift can be silent until it produces a visible failure.
  • Distributed decision-making. AI systems generate outputs that are then acted on by humans across the organisation; the locus of decision-making is not concentrated in a single role.
  • Time pressure on intervention. Once an AI system is producing outputs at scale, the cost of halting it grows quickly. The decision to stop has to be made under pressure.

These are exactly the conditions for which Weick and Sutcliffe argue that deference to expertise and named, protected stop-work authority are the operationally necessary governance instruments.

The current generation of AI governance frameworks does not provide them. RACI assigns Accountable roles statically; it has no mechanism for real-time authority migration. Article 14(4)(d) of the EU AI Act gives the assigned overseer override authority, but does not protect that authority from being overridden by operational management, and does not specify how the assigned overseer should be selected in the first place. Article 14(4)(e)'s stop-button requirement mandates the capability to halt the system; it does not address who has the standing to use it.

The gap between what AI governance currently provides and what high-reliability operations require is the structural insight Weick and Sutcliffe's literature contributes to the Authority Gap framing.


What the literature warns against

Three failure modes documented in the HRO literature have direct AI parallels:

  • The drift to the status quo. When stop-work authority is not used, organisations conclude (wrongly) that it is not needed, and over time the documented authority becomes nominal. The role that nominally holds the authority lacks the practice and the standing to exercise it under pressure. The same dynamic is visible in AI deployments where Article 14 oversight has been formally assigned but the assigned overseer has not been in front of the system's outputs in three months.
  • The diffusion of responsibility. When stop-work authority is shared across multiple roles ("any of these four people can halt the line"), it is in practice exercised by none of them. HRO research finds that authority needs to be concentrated in named individuals, not distributed across committees. AI deployments routinely make the opposite design choice.
  • The override under pressure. Stop-work authority that is not protected from operational management is overridden the first time it produces a meaningful productivity loss. The HRO literature specifically argues that the right to stop must include the right to be wrong about stopping — false-positive halts must be tolerated structurally, or the authority is hollow. AI governance frameworks have not yet absorbed this point.
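
As a hedged illustration of how these failure modes could be made visible rather than discovered after the fact, the sketch below checks a register entry against all three. The record shape, the field names, and the ninety-day staleness threshold are assumptions of ours, not requirements drawn from the HRO literature or from any governance framework.

    from dataclasses import dataclass
    from datetime import date, timedelta
    from typing import List, Optional

    @dataclass
    class ReversionAuthorityRecord:
        """Illustrative register entry for one AI deployment's reversion authority."""
        system: str                             # the AI system or workflow the authority covers
        holders: List[str]                      # who can halt it; the HRO finding says one named individual
        last_reviewed_outputs: Optional[date]   # when a holder last looked at the system's live outputs
        halts_overridden: int = 0               # halts subsequently reversed by operational management

    def failure_mode_warnings(record: ReversionAuthorityRecord, today: date) -> List[str]:
        """Flag the three failure modes described above. The 90-day threshold is arbitrary."""
        warnings = []

        # Drift to the status quo: unexercised, unrefreshed authority becomes nominal.
        stale = (
            record.last_reviewed_outputs is None
            or today - record.last_reviewed_outputs > timedelta(days=90)
        )
        if stale:
            warnings.append("nominal authority: no holder has reviewed live outputs in 90+ days")

        # Diffusion of responsibility: shared authority is exercised by no one.
        if len(record.holders) != 1:
            warnings.append("diffused authority: not held by a single named individual")

        # Override under pressure: any overridden halt shows the authority is not protected.
        if record.halts_overridden > 0:
            warnings.append("unprotected authority: a halt was overridden by operational management")

        return warnings

A register check like this does not create the authority; it only makes the three failure modes show up as warnings rather than as post-incident findings.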

How this connects to the Authority Gap

The Authority Gap framework's "reversion authority" — after a problem surfaces, who has the standing to halt the workflow against the productivity loss — is the AI-specific instantiation of the stop-work authority Weick and Sutcliffe's literature describes. The Authority Gap argument that "if you cannot name the individual with reversion authority, you do not have a governance posture" is a paraphrase of forty years of safety-engineering research.

The Authority Gap research piece does not invent the reversion-authority principle. It points out that the existing AI governance frameworks have not yet incorporated it, and that organisations whose AI deployments are running without named, pre-authorised, protected reversion authority are running them outside the standard the safety-engineering literature has established for systems of comparable risk.


Citation

Weick, K. E., & Sutcliffe, K. M. (2015). Managing the Unexpected: Sustained Performance in a Complex World. (3rd edition.) Hoboken, NJ: Wiley.

Earlier editions: Jossey-Bass, 2001 (1st edition); Jossey-Bass, 2007 (2nd edition).

Foundational journal articles in the HRO literature include:

  • Roberts, K. H. (1990). Some Characteristics of One Type of High Reliability Organization. Organization Science, 1(2), 160–176.
  • Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (1999). Organizing for High Reliability: Processes of Collective Mindfulness. Research in Organizational Behavior, 21, 81–123.

TwinLadder Research Briefs are short reference summaries of the foundational sources cited in our research pieces. They are not commentary; they are background reading. Companion to the Authority Gap launch series, May 2026.