

Building an AI Governance Framework for Your Firm

79% of firms have adopted AI. 10% have governance. That arithmetic explains every sanctions case you have read about.

5 November 2025 · Līga Pauliņa, Co-founder and Director of TwinLadder Academy · 14 min read


I spend a lot of my time helping legal professionals build AI competence. And the single question I hear most often from managing partners is not "how do we use AI better?" It is "how do we make sure nobody uses it badly?"

That is a governance question. And the fact that 79% of law firms have adopted AI tools while only 10% have implemented formal governance tells you everything you need to know about why the sanctions cases keep coming.

Governance is not about slowing down AI adoption. It is about making AI adoption survivable.

Why Governance Matters Now

The Mata v. Avianca case. The Morgan & Morgan citations incident. The Ayinde ruling in the UK. Every one of these cases shares a common root cause: lawyers used AI without adequate governance — no policies, no verification requirements, no supervision framework, no training that addressed how the tools actually fail.

ABA Formal Opinion 512, issued July 2024, makes governance an explicit professional obligation. Rules 5.1 and 5.3 require lawyers with managerial and supervisory authority to establish measures giving reasonable assurance of compliance with professional obligations. In the AI context, that means governance.

And for firms with EU exposure, Article 4 of the AI Act creates a separate regulatory obligation to ensure AI literacy — which requires governance structures to implement and document.

Governance is no longer optional. It is a professional duty and, for many firms, a legal requirement.

Choosing Your Model

The right governance structure depends on your firm's size and complexity. I work with three models.

Enterprise model (firms over 200 lawyers). A dedicated AI Governance Board with representatives from technology, risk management, ethics, practice group leadership, and information security. This model provides centralised policy development, coordinated vendor assessment, and unified training standards. The overhead is significant — but so is the risk exposure at scale.

80% of AmLaw 100 firms have established AI governance boards. If you are in this category and have not, you are behind your peers and exposed relative to the standard of care your size implies.

Distributed model (50-200 lawyers). Governance responsibilities are assigned to existing roles. The managing partner or executive committee sets strategic direction. The IT director handles tool evaluation and vendor management. An ethics partner monitors compliance. Practice group leaders oversee implementation within their groups.

This works when accountability is clear and coordination is regular. It fails when governance becomes "everyone's responsibility" — which in practice means nobody's responsibility.

Partner-led model (under 50 lawyers). A single partner or small committee owns governance. The focus is on the essentials: a written acceptable use policy, a defined tool approval process, mandatory training, an incident reporting mechanism, and periodic policy review.

Smaller firms sometimes believe they do not need governance because their size provides natural oversight. The Morgan & Morgan case should dispel that notion. Supervising attorney T. Michael Morgan was sanctioned for a filing he did not create because his name was on it and his supervisory obligation applied.

The Essential Policy Components

Regardless of firm size, every AI governance framework needs these elements.

Data classification. What information can be entered into which AI tools? At minimum, your policy should distinguish between prohibited inputs (client confidential information in public tools, privileged communications, personal data subject to protective orders), conditional inputs (anonymised case facts, general research queries in approved enterprise tools), and unrestricted inputs (publicly available legal materials, internal administrative tasks).

If your firm does not have clear categories for what can go into AI tools, every employee is making that decision independently. That is not a policy. That is a liability.
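By way of illustration, here is a minimal sketch of how those three categories might be encoded so that systems, not individuals, make the call. The category names, data labels, and tool identifier below are hypothetical, not a reference implementation.

```python
from enum import Enum

class InputClass(Enum):
    PROHIBITED = "prohibited"      # e.g. client confidences in public tools
    CONDITIONAL = "conditional"    # allowed only in approved enterprise tools
    UNRESTRICTED = "unrestricted"  # public materials, internal admin tasks

# Hypothetical policy table mapping data categories to their classification.
DATA_POLICY = {
    "client_confidential": InputClass.PROHIBITED,
    "privileged_communication": InputClass.PROHIBITED,
    "protected_personal_data": InputClass.PROHIBITED,
    "anonymised_case_facts": InputClass.CONDITIONAL,
    "general_research_query": InputClass.CONDITIONAL,
    "public_legal_materials": InputClass.UNRESTRICTED,
    "internal_admin": InputClass.UNRESTRICTED,
}

# Hypothetical list of firm-approved enterprise tools.
APPROVED_ENTERPRISE_TOOLS = {"enterprise-research-ai"}

def is_input_allowed(data_category: str, tool: str) -> bool:
    """Return True if this data category may be entered into this tool.

    Unknown categories default to prohibited: if the policy has not
    classified something, no individual should decide on their own.
    """
    classification = DATA_POLICY.get(data_category, InputClass.PROHIBITED)
    if classification is InputClass.PROHIBITED:
        return False
    if classification is InputClass.CONDITIONAL:
        return tool in APPROVED_ENTERPRISE_TOOLS
    return True
```

The design choice that matters is the default: anything the policy has not classified is treated as prohibited, which is the opposite of what happens when classification is left to each employee.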

Verification requirements. Every governance framework needs explicit, non-negotiable verification obligations. All AI-generated legal citations must be verified against primary sources. AI-drafted content must be reviewed before submission to courts or clients. Factual assertions from AI require independent confirmation.

The word "non-negotiable" is doing important work here. Verification that is "recommended" or "expected" will be skipped under time pressure. Verification that is mandatory — documented, monitored, enforced — will actually happen.
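A minimal sketch of what "mandatory, not recommended" can look like in practice: a work-product record that refuses finalisation while any AI-generated citation remains unverified. The record structure and field names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class CitationCheck:
    citation: str
    verified_against_primary_source: bool = False
    checked_by: str = ""                 # reviewing lawyer's initials
    checked_on: Optional[date] = None    # date of verification

@dataclass
class WorkProduct:
    title: str
    ai_assisted: bool
    citation_checks: list[CitationCheck] = field(default_factory=list)

def finalise(doc: WorkProduct) -> None:
    """Refuse to finalise AI-assisted work with unverified citations.

    Making the check a hard failure, rather than a warning, is what
    turns "recommended" verification into enforced verification.
    """
    if doc.ai_assisted:
        unverified = [c.citation for c in doc.citation_checks
                      if not c.verified_against_primary_source]
        if unverified:
            raise RuntimeError(
                f"Cannot finalise: unverified citations {unverified}")
```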

Disclosure obligations. Over 200 federal judges have issued standing orders requiring AI disclosure in court filings. Pennsylvania mandates it statewide. Your policy must address where disclosure is mandatory, what your firm's standard is for voluntary disclosure, how clients are informed about AI use, and what documentation is required.

Training and competence. Opinion 512 requires lawyers to have "a reasonable understanding of the capabilities and limitations of AI tools they use." That standard cannot be met without training. Your policy should specify what training is required before AI tool access is granted, how competence is assessed, what ongoing training is required, and how training completion is documented.

Supervision. Partners and managers are responsible for ensuring that associates, staff, and AI tools are appropriately supervised. Your governance framework must define who is responsible for supervising AI-assisted work, what review is required before AI-assisted work products are finalised, how compliance with AI policies is monitored, and what happens when violations are identified.

Implementation That Works

I have seen governance frameworks that look excellent on paper and are completely ignored in practice. The difference between a framework that works and one that does not usually comes down to implementation.

Phase it. Do not try to implement everything at once. Start with the highest-risk areas: court filings, client-facing work products, confidential information handling. Expand from there.

Support it with technology. Approved tool lists should be enforced through IT systems, not just communicated in memos. Access controls should limit AI tool availability to trained users. Logging should track usage for compliance monitoring.
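A minimal sketch of that idea, assuming a hypothetical user registry and approved-tool list: access is gated on training completion, and every request is logged for compliance monitoring.

```python
import logging
from datetime import datetime, timezone

# Hypothetical registry of users who have completed required AI training.
TRAINED_USERS = {"jdoe", "asmith"}
# Hypothetical list of firm-approved tools.
APPROVED_TOOLS = {"enterprise-research-ai"}

logging.basicConfig(level=logging.INFO)
usage_log = logging.getLogger("ai_usage")

def request_tool_access(user: str, tool: str) -> bool:
    """Grant AI tool access only to trained users on approved tools,
    and log every request so compliance can be monitored later."""
    allowed = tool in APPROVED_TOOLS and user in TRAINED_USERS
    usage_log.info(
        "%s | user=%s tool=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, tool, allowed,
    )
    return allowed
```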

Test it. Before rolling out governance firm-wide, test it with a pilot group. Identify friction points, workflow disruptions, and unrealistic requirements before they become firm-wide problems.

Review it regularly. AI capabilities change. Regulations evolve. Court requirements shift. A governance framework that is not regularly updated becomes a governance framework that is eventually inadequate.

The Supervision Trap

Let me be direct about something that many managing partners underestimate: supervisory liability is the biggest governance risk in AI-assisted practice.

The Morgan & Morgan case is instructive. The supervising attorney was sanctioned even though he was not involved in creating the problematic filing. His signature was on it. His supervisory obligation applied. End of analysis.

Under Rules 5.1 and 5.3, partners are responsible for ensuring that the firm has measures giving reasonable assurance of compliance. If your firm uses AI and you do not have governance measures in place, every partner is potentially exposed when an AI-related error occurs.

Governance is not bureaucratic overhead. It is the mechanism by which supervisory lawyers discharge their professional obligations. Without it, you are personally exposed to liability for every AI mistake made by everyone you supervise.

The Bottom Line

Building an AI governance framework is not exciting work. It does not generate revenue. It does not win clients. It does not produce interesting case studies.

What it does is prevent disasters. And in a profession where a single AI-related error can result in sanctions, malpractice liability, client loss, and reputational damage, disaster prevention is worth the investment.

Start with a written policy. Add verification requirements. Build in training. Define supervision obligations. Document everything. Review regularly.

It is not glamorous. But it is what competent AI governance looks like. And competence, as always, is the professional obligation that everything else depends on.