TWINLADDER

AI Strategy

Where Does the Company Remember? Institutional Knowledge in the Age of AI


April 22, 2026 · TwinLadder Research Team, Editorial Desk · 18 min read


I have been watching organisations lose their memory for thirty years.

Not the metaphorical kind. The literal kind. A procurement officer retires and takes forty years of supplier judgement with her. A senior product manager leaves for a competitor and three years of design rationale vanish with him. A customer-service team turns over twice in two years and nobody in the building can explain why the de-escalation script says what it says. Every large organisation I have worked with has survived this by replacing people with other people who, after a few painful years of apprenticeship, eventually learn the role well enough to keep it running.

AI is about to end that arrangement. And almost nobody is talking about what replaces it.

The tasks that juniors used to do are the tasks that juniors used to learn from. If AI does those tasks instead, the learning does not happen. The apprenticeship breaks. Ten years from now, there is no senior.

This is the problem. It is not a training problem. It is a memory problem — a problem about where, in an organisation that no longer trains its people through first-draft work, the knowledge of how the role actually thinks is going to live.


The problem, in one sentence per function

Every knowledge-work function has the same shape. A junior does a lot of repetitive work under a senior's eye. The work is partly output — the memo, the screening, the draft, the first-pass analysis — and partly tuition. After a few years the junior has absorbed enough of the role's tacit judgement to become a senior, and the cycle repeats.

AI is eating the repetitive-work half. The first drafts. The first-pass screening. The tier-one customer calls. The basic variance analysis. The starter marketing copy. The initial design variants. Everywhere the bottom of the pyramid used to live, there is now a copilot that does the work faster, cheaper, and often better than a human learning the ropes.

What disappears with that work is the tuition.

In procurement, the junior buyer who used to process hundreds of RFPs manually developed, over time, an intuition for which suppliers are spinning stories and which are telling the truth. That intuition was built by pattern recognition across hundreds of cases. AI does the screening now. The pattern recognition never builds.

In customer service, the agent who took five hundred escalated calls in their first year learned — without being taught — when a complaint carries regulatory exposure, when it signals a retention risk, and when it is a signal of something broken in product. AI summarises the call history now. The signal-recognition never builds.

In marketing, the junior who drafted two hundred campaign briefs learned, through rejection after rejection, where the brand's voice boundary actually lives — not where the brand guidelines say it lives, but where it actually lives. AI drafts the briefs now. The voice-judgement never builds.

In product design, the junior who built fifty wireframes learned which accessibility constraints are negotiable in which contexts, which design patterns signal dark-pattern territory, and why the team rejected the obvious flow two years ago. AI generates variants now. The rationale never builds.

In risk and compliance, the analyst who read a thousand transactions learned — viscerally — what a suspicious pattern looks like before a model flags it. AI flags it now. The visceral recognition never builds.

This is the same problem wearing five different uniforms. The answer cannot be "stop using AI." The work is genuinely better, faster, and cheaper, and no organisation is going to turn the copilots off. The answer has to be: build the tuition somewhere else.


What aviation figured out fifty years ago

Let me borrow a parallel that brings the shape of the solution into focus.

Commercial aviation manages a problem structurally identical to this one, and has managed it successfully for decades. The sky is a shared, high-consequence, rapidly changing environment. No individual pilot can be trusted to learn it through apprenticeship alone — the stakes of a mistake are too high, and the rate of change (new aircraft, new procedures, new airspace rules) is faster than on-the-job experience can track. So the industry solved the competence problem at three layers simultaneously:

The first layer is governance. Regulators — ICAO internationally, EASA in Europe, the FAA in the United States — set the rules of the shared airspace. Who may fly where, under what conditions, with what certifications, separated by what minima. Without this layer, there is no shared environment to be competent within.

The second layer is infrastructure. Airports, air traffic control, navigation aids, certified aircraft. This is the physical and procedural substrate that makes the governance enactable. Rules without infrastructure are fiction.

The third layer is individual competence, continuously maintained. Every commercial pilot spends recurring time in a simulator. Not as a one-off graduation event — as a recurring discipline. Type ratings are renewed. Emergency procedures are rehearsed. New aircraft types require new simulator programmes. The simulator does something important and specific: it preserves, in a testable form, the accumulated institutional judgement of the aviation industry — every edge case, every emergency, every failure mode that anyone has ever encountered, compressed into scenarios the pilot can practise against.

The crucial point is that the simulator is not a training product the pilot buys once. It is the living memory of the profession, and it trains the pilot against that memory on a continuous basis. When a new failure mode is discovered — a sensor fault in a particular aircraft type, a new pattern of controlled-flight-into-terrain incidents — the simulator programme updates. Every pilot flying that type encounters the new scenario in their next check ride. The memory of the profession and the competence of the individual are coupled by design.

This is why aviation can replace pilots without losing the profession's knowledge. The knowledge lives in the simulator, not only in the pilots' heads. When a captain retires, a first officer is promoted, because the first officer has already been calibrated against the same memory the captain was calibrated against. Nothing important leaves with the departing person that has not already been absorbed by the simulator.

Every knowledge-work function now needs exactly this third layer. Governance is being built — the EU AI Act, national AI strategies, sectoral regulation. Infrastructure is being built — AI governance frameworks, model registries, procurement policies, audit regimes. But the third layer — the equivalent of the simulator, the continuous maintenance of individual competence against a living memory of the role — is almost entirely missing.


What a living memory for a role actually is

Let me be precise about what is missing, because "training" is too soft a word and it invites the wrong solution.

A wiki is not a living memory. Neither is a policy document, a set of guidelines, or a Notion page full of process diagrams. These are artifacts of codification — snapshots of what someone, at some moment, decided to write down. They do not update themselves when the work changes. They do not test whether anyone has actually absorbed them. They do not know when a practitioner is about to make a decision that prior cases suggest they are not yet ready to make. They sit there, read by nobody, until an audit asks for evidence of governance and someone prints them.

Every large consultancy I have worked with learned this the hard way. I was at A.T. Kearney in the early 2000s when we were asked to find and catalogue everything innovative EDS — the company that essentially invented IT outsourcing — had ever built. Roboticised factories for General Motors. The first satellite-to-car system that became OnStar. A parade of boundary-pushing projects that had, over two decades, made EDS something more than a commodity supplier. We found them all. What we also found was that almost every person who had built them had left. EDS had codified knowledge into tidy operational categories — mainframe, midframe, desktop — and in the process made itself into a place where the people who did things that didn't fit the categories had nowhere to belong. The categories remained. The innovators did not. The knowledge they carried out the door was not in the wiki, because it had never been the kind of thing a wiki could hold.

That pattern — codification without living connection — is what "knowledge management" has been doing for three decades, and it is the pattern every organisation is about to repeat, much faster, with AI.

A living memory for a role is different in kind. It has four properties.

It contains scenarios, not documents. Not "our policy on supplier screening" but a thousand anonymised cases of actual supplier screenings — what was proposed, what the buyer decided, what constraints were in play, what the outcome was. Not "our brand voice guidelines" but two hundred instances of content that was approved, content that was rejected, and the specific reason the rejection happened. Not "our de-escalation framework" but the recordings of calls that escalated anyway, with annotations of what was missed. Documents describe a role in theory. Scenarios contain how the role actually thinks.

It is measurable. A living memory is not a library that hopes to be read. It is the ground truth against which current practitioners are continuously calibrated. Does the buyer catch the authority-limit breach when the AI's confidence score is 97%? Does the customer-service agent escalate the regulated-category complaint that the AI's sentiment analysis missed? Does the marketer identify the unsupported claim the generative tool confidently produced? Each of those is a measurement. Each measurement is a datum about the practitioner. Aggregated, they describe whether a person is competent in this role, in this organisation, now — not at the point they finished their induction three years ago.

It updates from use. When a practitioner makes a judgement that diverges from the memory's prediction — and turns out to be right, because they caught an edge case the memory did not yet contain — the memory absorbs that judgement. When a regulation changes, the constraint catalogue updates and the next set of scenarios reflects the new rule. When a new AI tool is introduced, its failure modes are added to the library of scenarios the practitioner must learn to catch. The memory grows from the work, continuously, instead of decaying between quarterly reviews.

It trains proactively. When the measurement shows a specific practitioner is weakening in a specific dimension — their authority-contraction is slipping, their tool-output discrimination is drifting, their confidence is rising faster than their clarity — the memory generates targeted practice against that specific weakness, drawn from real cases, before the weakness produces an incident. This is what the simulator does for the pilot. It is what no current system does for the procurement officer, the marketer, the designer, or the analyst.

Four properties. One system. This is the third layer.


Why this is not what vendors are currently selling

I want to be fair to the comparison, because several adjacent categories look like they might already solve this.

Enterprise search and RAG systems — Glean, Guru, and the enterprise knowledge-graph category — are a genuine advance over the old corporate wiki. They index everything in place, they respect source-system permissions, they let an AI assistant answer questions grounded in the company's actual documents. They are useful. They are not a living memory for a role. They make retrieval faster; they do not close the judgement loop. An employee who can find any document in three seconds still has to read it, judge it, and internalise it, and the system cannot tell whether they did.

Cognitive Workforce Twins and Human Digital Twins are a 2025 vendor category that sounds close to what we are describing. They are not. They model the workforce as an asset to plan around — skills inventories, role simulations, attrition forecasts — much as a supply-chain digital twin models a logistics network. The workforce is the system under observation. What we are describing is the inverse: the role is the system, preserved and enacted through a living memory, and the incumbent is its calibrated operator.

Corporate learning platforms — the Cornerstones, the Degreeds, the LinkedIn Learnings — deliver courses. Courses are combination-and-internalisation artifacts in Nonaka's sense: general knowledge, packaged, consumed by many. They do not contain the specific tacit judgement that lives in a specific role in a specific organisation, and they cannot, because that knowledge has never been codified and most of it never will be.

Vendor training — the "become a certified Harvey user," the Copilot enablement programmes — is sales material wearing an education costume. A lawyer trained on one AI research tool cannot necessarily evaluate a different one. Worse, vendor training is structurally incentivised to minimise limitation awareness. The vendor wants adoption. The practitioner needs competent, critical use. These interests diverge, and you cannot buy your way out of the tension.

The living-memory layer does not exist as a product you can procure today. It has to be built.


What companies are actually doing

The companies that depend most on apprenticeship are making four different bets at once. It is worth looking at them, because the split tells you something about who has noticed the problem and who has not.

The contraction bet. The Big Four are quietly shrinking the base of the pyramid. KPMG cut its UK graduate intake by roughly a third between 2023 and 2024, from 1,399 places to about 942. Deloitte cut 18%, EY 11%, PwC 6%. Across the sector, UK accountancy graduate job adverts fell 44% against 2023 (City AM). Inside PwC, global chair Mohamed Kande has said explicitly that the firm wants "a different set of people" — more engineers, fewer generalist analysts (Irish Times). EY has delayed graduate start dates three years running: 2025 hires are beginning work in March 2026. This is not a hiring pause. It is a pyramid becoming an obelisk.

The reconfiguration bet. The big law firms, facing the same pressure, are doing the opposite. They are still hiring first-year associates, but they are rebuilding what a first-year associate does. Latham & Watkins flew all four hundred of its US first-years to Washington in 2024 for a two-day AI Academy on Harvey and Microsoft Copilot, and has repeated the programme with every subsequent cohort (Above the Law). Ropes & Gray has gone further: first-year associates may now dedicate up to four hundred non-billable hours per year — roughly a fifth of their billable target — to AI training, tool experimentation, and mentoring circles across fifteen approved tools (Above the Law). Think about what that number means. The firm is explicitly paying for competence formation that used to arrive as a by-product of billable work. It is accounting, on the P&L, for the apprenticeship that AI has broken. A&O Shearman — the first law firm to deploy generative AI firmwide — now actively encourages trainees to trial agentic legal AI on live matters (Legal Cheek). These firms still believe in juniors. They just no longer believe juniors learn by drafting memos.

The contrarian expansion bet. One firm at scale is betting in the opposite direction. In February 2026, IBM's Chief HR Officer Nickle LaMoreaux said: "We are tripling our entry-level hiring, and yes, that is for software developers and all these jobs we're being told AI can do" (Fortune). This followed CEO Arvind Krishna's October 2025 commitment to hire more college graduates "over the next twelve months than we have in the past few years." IBM's wager is specific and worth naming: if the rest of the industry thins its pipeline, there will be a senior-talent shortage in the mid-2030s, and the firms that preserved their juniors will own it. It may or may not pay off. But it is the only major tech firm making a pipeline-preservation bet in public.

The contrarian-by-omission bet. And then there are the firms making none of these bets. JPMorgan's LLM Suite onboarded two hundred thousand users within eight months of its 2024 launch; bankers now generate pitch decks in thirty seconds that "previously took a junior analyst hours" (JPMorgan Chase). Goldman Sachs rolled out its AI Assistant firmwide in mid-2025; by July, forty-six thousand staff were issuing over a million prompts a month (American Banker). Both firms have deployed, at scale, the exact tool that eliminates the task juniors used to learn from. Neither has publicly addressed what replaces the apprenticeship. Headcount is not falling. The task is. Nobody in those firms has explained where the 2035 managing director is supposed to come from.

One structural outlier is worth noting, because it suggests the problem is not insoluble. The German dual-system apprenticeship — the three-year combination of vocational school and on-the-job training that has underpinned the Mittelstand for a century — is absorbing AI as a new regulated trade rather than hollowing out existing ones. TRUMPF, the Ditzingen-based laser manufacturer, launched the first cohort of a new dual-study degree in Data Science and AI with DHBW Karlsruhe in August 2025, with roughly a hundred apprentices in the first intake (TRUMPF press release). The Federal Employment Agency reports a 340% rise in AI-related apprenticeship positions since 2023. The dual system's answer to the problem is the dual system. It works because Germany has, for a hundred years, treated apprenticeship as a piece of national infrastructure — a shared simulator, in effect, maintained across companies, schools, and the state. Most countries do not have that infrastructure, and cannot wish it into being.

The pattern across all four bets is what matters. Almost no firm has a credible public answer for where senior professionals come from in 2035. IBM is acting on the problem institutionally. The big law firms are improvising an internal answer function by function. The German Mittelstand has the advantage of a pre-existing public answer. Everyone else — and this is the majority — is deploying the tool that breaks the apprenticeship and leaving the question of what replaces it to the next CEO.


The capability gap ahead

Here is the position organisations will find themselves in, two or three years from now.

The governance layer will be in place. The EU AI Act is already binding. Sector-specific rules are coming. Every competent company will have an AI use policy, a tools inventory, a model registry, an audit trail. Boards will demand evidence that the workforce is qualified to oversee the AI it deploys, because Article 4 will require it and because the first few enforcement cases will make the requirement concrete.

The infrastructure layer will be in place. Procurement teams will have vendor-assessment frameworks. Legal teams will have GDPR-integrated data classifications for AI. IT will have model access controls. This is where most budget is currently going, and it is necessary work.

What will not be in place, in most organisations, is the third layer. The pilot's simulator. The living memory of the role, updated from use, testing incumbents against itself, proactively training the weakest dimensions before they produce incidents. Organisations will be asked to show that their people can recognise when the AI is wrong — which is what the regulation actually asks — and they will reach for the evidence and find that all they have is attendance logs and certificates.

Attendance is not competence. A certificate is a receipt. What the regulation will eventually want, and what the business genuinely needs, is demonstrated practical judgement, sustained over time, in a role-specific context.

That is not a product anyone can sell from a catalogue. It is a capability organisations will have to build, function by function, with a specific shape: a living role memory coupled to a measurement instrument coupled to a proactive training loop. The three are one system. Any two without the third produce something that has been tried before and has failed before.

The organisations that build this layer — quietly, inside their critical functions, starting now — will still have senior practitioners in 2036. The organisations that treat this as a training problem to be solved with a procurement exercise will not.

The simulator is the point. Everyone else is building the airport.


This piece develops a thesis from the Twin Ladder research dossier. A subsequent functional deep-dive — how this layer looks specifically inside a procurement team — is in preparation.


Sources

  1. City AM — Big Four slash graduate jobs as AI takes on entry-level work, June 2025. https://www.cityam.com/big-four-slash-graduate-jobs-as-ai-takes-on-entry-level-work/
  2. Irish Times — Top consultancies freeze starting salaries as AI threatens pyramid model, December 2025. https://www.irishtimes.com/business/2025/12/01/top-consultancies-freeze-starting-salaries-as-ai-threatens-pyramid-model/
  3. Above the Law — The Grace To Dabble: Two Biglaw Firms Look To An AI-First Future, November 2025. https://abovethelaw.com/2025/11/the-grace-to-dabble-two-biglaw-firms-look-to-an-ai-first-future/
  4. Legal Cheek — A&O Shearman firm profile, 2025. https://www.legalcheek.com/firm/ao-shearman/
  5. Fortune — Tech Giant IBM Tripling Gen Z Entry-Level Hiring According To CHRO, February 2026. https://fortune.com/2026/02/13/tech-giant-ibm-tripling-gen-z-entry-level-hiring-according-to-chro-rewriting-jobs-ai-era/
  6. American Banker / JPMorgan Chase — LLM Suite: Innovation of the Year, 2025. https://www.jpmorganchase.com/about/technology/blog/llmsuite-ab-award
  7. American Banker — Goldman Sachs staff now write a million gen AI prompts a month, July 2025. https://www.americanbanker.com/news/goldman-sachs-staff-now-write-a-million-gen-ai-prompts-a-month
  8. TRUMPF — Start of apprenticeship program across Germany: TRUMPF is training AI professionals for the first time, August 2025. https://www.trumpf.com/en_INT/newsroom/global-press-releases/press-release-detail-page/release/start-of-apprenticeship-program-across-germany-trumpf-is-training-ai-professionals-for-the-first-time-9582/