
Your Compliance SOPs Were Written for a World Without AI

A medical products company wants to deploy an AI patient navigator. The SOP says pre-approve all content in final, static form. The AI generates dynamic responses. The compliance team has two options: block the project or build a new governance model. Most are choosing wrong.

16 March 2026 · Alekss Blumentāls, Founder and Managing Director · 7 min read


A real hiring test for a Legal & Compliance Manager reveals the gap that most organisations have not noticed yet.


A medical products company in the EU wants to launch a Digital Patient Navigator — an AI-powered tool on their website that helps patients find diagnostic centres, understand treatment pathways, and navigate the healthcare system.

The compliance team's Standard Operating Procedures say that "all digital content must be pre-approved in its final, static form before being published."

The AI generates dynamic responses based on what patients ask. You cannot pre-approve every possible answer. The SOP does not work. The Marketing Director is furious. "Compliance is supposed to help us innovate. Right now, you are the reason we are falling behind."

This is not a hypothetical. It is a real case study used to test candidates for a Legal & Compliance Manager position at a major medical products company. And it captures, in miniature, the problem that every organisation deploying AI is about to face — if they have not already.


The SOP is not wrong. It is static.

Five years ago, when that SOP was written, "digital content" meant a web page, a PDF, a social media post. Content that existed in finished form before it was published. Reviewing it before publication was not just possible — it was the only sensible approach.

AI-generated content breaks this model entirely. A patient asks the navigator a question. The AI assembles a response from its knowledge base, shaped by the patient's specific query, in real time. The response did not exist ten seconds ago. It will never be generated in exactly the same form again. There is nothing to "pre-approve."

The compliance officer who enforces this SOP is not being obstructive. She is competent at the job the SOP describes — reviewing static content against a checklist. She does not have the competence for the new job the AI creates — designing governance frameworks for dynamic systems that produce content she cannot see before it exists.

That is not a policy failure. It is a competence gap. And it is the same gap that Article 4 of the EU AI Act was written to address.


The shift: from approving outputs to governing inputs

You cannot pre-approve every response an AI system generates. But you can govern what goes in, set boundaries on what comes out, and monitor what actually happens.

Approve the input layer. Pre-approve the knowledge base the AI draws from — the medical content, the healthcare system information, the diagnostic centre database. Pre-approve the system prompt that defines the AI's behaviour. Pre-approve the list of topics the AI must never touch — no treatment recommendations, no drug comparisons, no off-label information.

This is approving the system, not the content. The same principle applies to a call centre: you approve the training, the scripts, and the escalation rules. You do not pre-approve every individual phone call.
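
Concretely, the approved input layer can live in version-controlled configuration, with the compliance sign-off recorded against a specific version. A minimal Python sketch; the field names and the ApprovalRecord structure are illustrative, not a reference to any particular tool:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class NavigatorConfig:
    """The input layer that compliance pre-approves, pinned to exact versions."""
    knowledge_base_version: str          # e.g. a content hash or release tag
    system_prompt: str                   # defines the AI's behaviour and tone
    prohibited_topics: tuple[str, ...]   # topics the AI must never touch

@dataclass(frozen=True)
class ApprovalRecord:
    """Compliance signs off on a config version, not on individual outputs."""
    config: NavigatorConfig
    approved_by: str
    approved_on: date

approved = ApprovalRecord(
    config=NavigatorConfig(
        knowledge_base_version="kb-2026-03-01",
        system_prompt="You help patients find diagnostic centres and understand treatment pathways.",
        prohibited_topics=(
            "treatment recommendations",
            "drug comparisons",
            "off-label information",
        ),
    ),
    approved_by="Legal & Compliance Manager",
    approved_on=date(2026, 3, 16),
)
```

Pinning the knowledge base to a version means "what compliance approved" is always an exact, auditable artefact rather than a moving target.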

Build guardrails. Output filtering for prohibited content categories. Mandatory disclaimers on every response. Escalation triggers when the AI detects a question outside its scope. Response length limits to prevent the system from generating extended medical commentary it was not designed to provide.
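
In code, the guardrail layer is typically a thin function that every response passes through before it reaches the patient. A sketch of the shape, with a simple keyword check standing in for whatever classifier a real system would use for prohibited content and out-of-scope detection:

```python
DISCLAIMER = "This assistant provides general information, not medical advice."
MAX_RESPONSE_CHARS = 1200   # length limit: no extended medical commentary
ESCALATION_MESSAGE = (
    "I can't help with that question. Please speak to a healthcare professional."
)
# Placeholder markers; a real system would use a trained content classifier.
PROHIBITED_MARKERS = ("dosage", "off-label", "instead of your prescription")

def apply_guardrails(response: str) -> str:
    """Run every generated response through the output checks before display."""
    # Output filtering: prohibited categories trigger escalation, not delivery.
    if any(marker in response.lower() for marker in PROHIBITED_MARKERS):
        return ESCALATION_MESSAGE
    # Length limit: truncate at a word boundary rather than let the model run on.
    if len(response) > MAX_RESPONSE_CHARS:
        response = response[:MAX_RESPONSE_CHARS].rsplit(" ", 1)[0] + " [...]"
    # Mandatory disclaimer: appended last so truncation can never remove it.
    return f"{response}\n\n{DISCLAIMER}"
```

The ordering is the design choice that matters: filtering runs before truncation, and the disclaimer is appended last so it can never be cut off.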

Monitor after deployment. Weekly sample review of AI interactions. Adverse event detection — any interaction where a patient expresses harm or confusion gets flagged. Quarterly review with Marketing and Medical. A clear incident response protocol. A kill switch that can disable the system within hours.
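
The monitoring side can be equally plain. Below is a sketch of weekly sampling, adverse-event flagging, and the kill switch. The patient_message field, the harm keywords, and the sample size are all placeholders for whatever the real deployment defines:

```python
import random

SAMPLE_SIZE = 50   # interactions pulled for human review each week
# Placeholder signals; a real deployment would use a classifier, not keywords.
HARM_SIGNALS = ("made it worse", "wrong information", "i am confused", "harmed")

def weekly_sample(interactions: list[dict]) -> list[dict]:
    """Pull a random sample of the week's interactions for human review."""
    return random.sample(interactions, min(SAMPLE_SIZE, len(interactions)))

def flag_adverse_events(interactions: list[dict]) -> list[dict]:
    """Flag any interaction where the patient signals harm or confusion."""
    return [
        i for i in interactions
        if any(s in i["patient_message"].lower() for s in HARM_SIGNALS)
    ]

system_enabled = True   # the kill switch: incident response flips this to False

def respond(query: str, generate) -> str:
    """Gate every request behind the kill switch before generation runs."""
    if not system_enabled:
        return "The patient navigator is temporarily unavailable."
    return generate(query)
```

None of this is sophisticated. What matters is that the review is scheduled and the kill switch is a single, tested control rather than an emergency improvisation.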

Map the regulatory landscape. Under the EU AI Act, this is likely a limited-risk system — not high-risk unless it makes clinical decisions. Article 50 requires transparency: patients must know they are interacting with AI. Article 4 requires the people operating the system to understand its capabilities and limitations. GDPR applies to any personal data collected. National healthcare advertising regulations apply to anything that could constitute promotion.

None of this is exotic. It is the standard governance model for any AI system that produces dynamic content. But it requires the compliance team to understand how AI works — what a knowledge base is, what a system prompt does, what output filtering can and cannot catch. Without that understanding, the only available answer is the one in the five-year-old SOP: pre-approve everything. Which means: block everything.


The competence gap is the real blocker

The Marketing Director in this case study is frustrated with compliance. But compliance is not the problem. The problem is that nobody in the compliance function has been trained on how AI systems work.

This is Article 4 in its purest form. Not as a regulation to comply with, but as a capability to build. The people responsible for governing AI deployment need to understand AI well enough to govern it intelligently. Without that literacy, every AI project hits the same wall: a policy written for a different technology, enforced by people who do not have the knowledge to adapt it.

Most organisations have invested in AI training for developers, for data teams, for business users. Almost none have invested in AI training for the compliance function. The people who will approve or block every AI deployment are the people who have received the least preparation for it.

The irony is structural. The function responsible for ensuring the organisation uses AI responsibly is the function least equipped to understand how AI works.


Every organisation has this SOP

The healthcare case is vivid because patient safety makes the stakes obvious. But the same pattern exists everywhere.

A law firm's client communications policy says "all client correspondence must be reviewed before external transmission." A lawyer uses AI to draft a response that is sent through the firm's email system. Was the AI's contribution "correspondence"? Does the policy apply? Nobody knows, because the policy was written before AI drafting existed.

A bank's model risk management framework requires "validation of all models before deployment." The framework defines a model as a quantitative system with documented inputs and outputs. A generative AI system that summarises customer complaints does not fit the definition. Does it need validation? The framework cannot answer, because it was written for statistical models, not language models.

An HR department's hiring policy says "all candidate assessments must be conducted by qualified personnel." The applicant tracking system uses AI to score and rank applicants before any human sees them. Is the AI conducting an assessment? The policy does not say, because when it was written, "assessment" meant a person reading a CV.

In every case, the SOP is not wrong. It is static. It was written for a world where content was fixed, models were mathematical, and assessments were human. AI broke all three assumptions simultaneously. The policies have not caught up.


What Article 4 actually requires here

Article 4 of the EU AI Act does not prescribe specific compliance frameworks. It requires organisations to ensure "sufficient AI literacy" for all staff involved in operating AI systems.

In this case, "all staff involved" includes:

  • The compliance officer who must approve or adapt the governance framework
  • The marketing team that will manage the AI tool day to day
  • The medical team that validates the knowledge base
  • The IT team that implements guardrails and monitoring
  • The General Manager who signs off on the risk

Each of these people needs a different kind of AI literacy. The compliance officer needs to understand input governance versus output approval. The marketing team needs to understand what the AI can and cannot do. The medical team needs to understand how a knowledge base works and what happens when it is incomplete. The GM needs to understand residual risk — what the guardrails catch and what they miss.

A generic AI awareness training does not build any of this. It produces people who know what AI is but cannot make decisions about how to govern it. The SOP stays static because nobody has the competence to rewrite it.


Compliance as competitive advantage

The medical products company that solves this first has something no competitor has: a documented, defensible compliance framework for AI-generated patient content. When regulators begin asking how healthcare companies govern their AI systems — and they will — the organisation with a framework has an answer. The organisation without one has a five-year-old SOP and a blocked project.

This is the reframe the compliance function needs. Not "how do we prevent the organisation from using AI" but "how do we become the team that makes AI deployment possible." The compliance officer who can design governance for dynamic AI systems is more valuable than one who can only enforce rules written for static content.

But that requires investment in the compliance team's AI literacy. Not a webinar. Not an e-learning module. Structured training on how AI systems work, what governance looks like for dynamic content, and how to design frameworks that manage risk without blocking innovation.

The SOP that blocks the project was not written by someone who was wrong. It was written by someone who did not have the information they needed. Article 4 says that is now the organisation's responsibility to fix.


This analysis draws on a real hiring case study used by a medical products company in the EU, March 2026. The scenario has been anonymised.

Take the free AI competence assessment at twinladder.ai/en/assess — ten minutes, no account required.
