NIST AI Risk Management Framework: A Lawyer's Guide
Two states now offer legal safe harbour for NIST compliance. That changes the calculus for every firm with AI governance obligations.
The NIST AI Risk Management Framework was voluntary guidance when it was released. It is now something considerably more valuable: a liability shield.
Colorado and Texas have both enacted legislation that provides explicit safe harbour or affirmative defence for organisations that implement the NIST AI RMF. For lawyers — both those advising clients on AI governance and those governing AI in their own practices — this transforms the framework from recommended reading to essential compliance infrastructure.
Let me walk you through why, and how to implement it.
The Framework in Brief
NIST organises AI risk management around four core functions. Understanding the logic of the structure is important because the safe harbour provisions reference "substantial compliance" — which requires understanding what the framework actually asks you to do.
Govern. Establish the structures, policies, and accountability mechanisms for AI governance. Who is responsible? What are the rules? How are decisions made and documented? This function is about creating the organisational context within which AI risk management occurs.
Map. Understand what your AI systems do, where they operate, who they affect, and what can go wrong. This function documents the landscape — purpose, users, operating environment, risk categories, and potential impacts.
Measure. Assess and quantify AI risks through testing, monitoring, and evaluation. This function produces evidence — metrics on system performance, accuracy testing results, monitoring data, and incident records.
Manage. Allocate resources to address identified risks. This function implements the controls — mitigation strategies, human oversight requirements, response procedures, and ongoing improvement processes.
The framework is deliberately flexible. It does not prescribe specific controls for specific risks. It provides a structure within which organisations develop their own risk management approaches appropriate to their context.
Colorado: The Safe Harbour
The Colorado AI Act, signed May 2024 and effective June 30, 2026, covers high-risk AI systems in employment, housing, credit, healthcare, education, insurance, government services, and — notably — legal services.
Organisations that demonstrate consideration of the NIST AI RMF when developing their required Risk Management Policy and Program may qualify for an affirmative defence against enforcement actions.
What "consideration" means in practice: You do not need to implement every element of the NIST framework verbatim. You need to demonstrate that you considered the framework's structure and principles when developing your governance approach and that your approach reflects its logic. Documentation of this consideration process is essential.
Penalties under the Colorado AI Act reach $20,000 per violation under the Consumer Protection Act, with enforcement by the Attorney General. There is no private right of action.
For law firms: legal services are explicitly within scope. If your firm uses AI systems that affect client outcomes, Colorado's law applies to that use. The NIST safe harbour is your most efficient path to compliance.
Texas: The Affirmative Defence
TRAIGA — the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026 — provides a rebuttable presumption that an entity used reasonable care when the AI system substantially complies with the NIST AI Risk Management Framework or other similar recognised frameworks.
The Texas provision is stronger than Colorado's in one respect: it creates a presumption, not just a defence. Substantial NIST compliance shifts the burden to the enforcement authority to demonstrate that reasonable care was not exercised.
Additional affirmative defences apply when a third party misuses the AI system, when violations are discovered through good-faith testing, or when the entity follows state-established guidelines.
The strategic implication: For any organisation operating in Texas, NIST AI RMF compliance is the single most efficient investment in legal protection against AI-related enforcement.
The Federal Question
I need to address the uncertainty. On December 11, 2025, President Trump signed an Executive Order establishing policy to "sustain and enhance the United States' global dominance through a minimally burdensome national policy framework for AI." The order creates an AI Litigation Task Force whose sole responsibility is to challenge state AI laws inconsistent with this policy.
A proposed federal measure would impose a ten-year moratorium on state AI regulations unless designed to accelerate AI deployment. If enacted, this could override state frameworks including TRAIGA and the Colorado AI Act.
My assessment: comply with current state laws while monitoring federal developments. NIST framework compliance has value regardless of the state law landscape — it demonstrates reasonable care, supports due diligence, and provides operational improvement. If state safe harbours are preempted, the framework still serves its core purpose.
Implementation for Law Firms
Here is how I recommend law firms implement the NIST framework for their own AI governance.
Step 1: Govern.
Designate an individual or committee responsible for AI governance. Establish written policies covering AI tool procurement, deployment, acceptable use, and monitoring. Document decision-making processes — who approves new tools, who defines acceptable use, who handles incidents.
For most firms, this does not require a new position. It requires assigning clear accountability to existing roles and ensuring those roles have the authority and resources to fulfil the governance function.
Step 2: Map.
Inventory every AI system in use at the firm. Document for each system: its purpose, what data it processes, who uses it, what clients or matters it affects, and what risk category it falls into. Identify which systems are high-risk under applicable state laws.
This step often reveals that firms are using more AI tools than they realise. AI features embedded in research platforms, document management systems, and practice management software may not be obvious but still require governance.
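The inventory described above is easier to maintain and query as structured data than as prose. Here is a minimal sketch in Python — the field names, risk tiers, and example entry are illustrative assumptions, not a schema prescribed by NIST or either statute:

```python
from dataclasses import dataclass

# Illustrative risk tiers; actual classification depends on applicable state law.
RISK_TIERS = ("minimal", "limited", "high")

@dataclass
class AISystemRecord:
    """One entry in the firm's AI system inventory (Map function)."""
    name: str
    purpose: str
    data_processed: list[str]
    users: list[str]
    matters_affected: str
    risk_tier: str = "minimal"

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"Unknown risk tier: {self.risk_tier}")

# Hypothetical entry: an AI feature embedded in a research platform.
inventory = [
    AISystemRecord(
        name="Research platform AI assistant",
        purpose="Case law summarisation and drafting suggestions",
        data_processed=["client matter facts", "search queries"],
        users=["associates", "paralegals"],
        matters_affected="All litigation matters",
        risk_tier="high",
    ),
]

# Surface the systems that trigger high-risk obligations under state law.
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
```

Even a simple structure like this makes the later steps mechanical: the Measure and Manage functions operate per system, and the high-risk filter tells you where the heavier controls apply.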
Step 3: Measure.
Establish testing procedures for AI tool accuracy. Implement monitoring for performance degradation over time. Create incident tracking — when AI produces errors, record them, analyse them, and use them to improve processes.
For legal AI tools specifically: track hallucination rates, citation accuracy, and verification catch rates. These metrics tell you whether your verification processes are working and whether specific tools are becoming less reliable.
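The metrics above are simple ratios over logged verification outcomes. A sketch of how a firm might compute them, assuming each record represents one AI output that went through human review (the record fields are illustrative, not a prescribed NIST schema):

```python
# Each record is one AI output that passed through human verification.
# Field names are illustrative assumptions for this sketch.
reviews = [
    {"citations_checked": 10, "citations_bad": 1, "hallucination": False},
    {"citations_checked": 8,  "citations_bad": 0, "hallucination": False},
    {"citations_checked": 12, "citations_bad": 3, "hallucination": True},
]

def measure(reviews):
    """Compute the Measure-function metrics named in the text."""
    total = len(reviews)
    checked = sum(r["citations_checked"] for r in reviews)
    bad = sum(r["citations_bad"] for r in reviews)
    return {
        # Share of outputs containing at least one fabricated assertion.
        "hallucination_rate": sum(r["hallucination"] for r in reviews) / total,
        # Share of cited authorities that failed verification.
        "citation_error_rate": bad / checked,
        # Share of outputs where verification caught any problem before use.
        "verification_catch_rate": sum(
            1 for r in reviews if r["hallucination"] or r["citations_bad"]
        ) / total,
    }

metrics = measure(reviews)
```

Tracked over time, a rising citation error rate for a specific tool is exactly the performance-degradation signal the Measure function is meant to surface.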
Step 4: Manage.
Develop mitigation strategies for the risks you identified in the Map phase. Establish human oversight requirements based on risk level — higher-risk uses require more rigorous oversight. Create response procedures for AI failures. Allocate resources for ongoing compliance.
For law firms, the most important management controls are verification workflows, confidentiality protections, and supervision structures. These map directly to the professional obligations in ABA Opinion 512.
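The rule that oversight scales with risk is easier to enforce when it is written down as an explicit mapping rather than left to case-by-case judgment. A sketch — the tiers and controls below are illustrative assumptions, not statutory requirements:

```python
# Illustrative mapping from risk tier to minimum human-oversight controls.
OVERSIGHT_BY_TIER = {
    "minimal": ["spot-check outputs quarterly"],
    "limited": ["reviewer sign-off before client delivery"],
    "high": [
        "line-by-line verification of citations and facts",
        "partner sign-off before client delivery",
        "incident logged for every detected error",
    ],
}

def required_oversight(risk_tier: str) -> list[str]:
    """Return the minimum controls for a system's risk tier (Manage function)."""
    if risk_tier not in OVERSIGHT_BY_TIER:
        raise ValueError(f"No oversight policy defined for tier: {risk_tier}")
    return OVERSIGHT_BY_TIER[risk_tier]
```

A lookup like this also doubles as documentation: it records, in one place, what the firm has decided "proportionate oversight" means for each tier.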
Documentation: The Make-or-Break Factor
Safe harbour benefits require documentation. Not just policies, but evidence of systematic implementation. Incomplete or superficial documentation will not satisfy the safe harbour requirements in either Colorado or Texas.
Document:
- The governance structure and who is accountable
- The AI system inventory and risk classifications
- Testing procedures and results
- Incident records and corrective actions
- Training programmes and completion records
- Policy reviews and updates
The standard is not perfection. It is good-faith, systematic implementation documented thoroughly enough to demonstrate reasonable care. Build the documentation practice into your governance from day one — reconstructing it after the fact is far more expensive than recording it as you go.
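One way to make the documentation practice systematic rather than ad hoc is an append-only governance log whose categories mirror the checklist above. A minimal sketch — the file format, field names, and categories are assumptions for illustration, not a required record format:

```python
import datetime
import json

# Categories mirror the documentation checklist; names are illustrative.
LOG_CATEGORIES = {
    "governance", "inventory", "testing", "incident", "training", "policy",
}

def log_entry(path, category, description, author):
    """Append one timestamped governance record (JSON Lines, append-only)."""
    if category not in LOG_CATEGORIES:
        raise ValueError(f"Unknown category: {category}")
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "category": category,
        "description": description,
        "author": author,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only log is deliberately simple: timestamped entries accumulated as decisions are made are far more persuasive evidence of systematic implementation than a policy binder assembled after an enforcement inquiry begins.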
The Dual Benefit
NIST AI RMF compliance serves your firm in two ways simultaneously.
First, it provides legal protection: safe harbours in Colorado and Texas, plus the due diligence evidence that compliance creates everywhere else.
Second, it provides operational improvement. The discipline of systematically governing AI use — inventorying tools, assessing risks, testing performance, managing incidents — produces a firm that uses AI more effectively and more safely.
Most firms I work with find that the operational benefits exceed the compliance benefits. Better governance produces fewer errors, more confident use of AI tools, and more satisfied clients who trust that their data and interests are protected.
The safe harbour is the reason to start. The operational improvement is the reason to continue.
Invest in the framework. Document the implementation. And remember that the firms building this infrastructure now will have a significant advantage — legal, operational, and competitive — over those who wait.

