Regulatory Updates

TRAIGA Compliance: A Practitioner's Guide for Texas

Texas has enacted one of America's most comprehensive state AI laws. Here is what it actually requires — and what it does not.

May 12, 2025 · Līga Pauliņa, Co-founder and Director of the TwinLadder Academy · 13 min read

When Texas enacted the Texas Responsible Artificial Intelligence Governance Act in June 2025, it created a regulatory framework that every lawyer practising in the state needs to understand. Not because the law is perfect — it has significant gaps and open questions — but because it applies broadly, carries real penalties, and takes effect on January 1, 2026.

I have spent considerable time with this statute. Let me walk you through what matters.

Who TRAIGA Covers

The scope is expansive. TRAIGA applies to any individual or entity conducting business in Texas, any organisation offering products or services to Texas residents, and any developer or deployer of AI systems within Texas. If your AI system is accessible to Texas users, TRAIGA likely applies regardless of where your organisation is headquartered.

For law firms, this means two things. First, your own AI tools — research platforms, drafting assistants, analytics tools — fall under TRAIGA to the extent they serve Texas clients or are deployed in Texas. Second, your clients who deploy AI systems need compliance counsel, and TRAIGA creates a significant advisory opportunity.

What TRAIGA Prohibits

The law establishes categorical prohibitions that cannot be contracted around.

  • Behavioural manipulation: AI systems designed to manipulate human behaviour in ways that cause harm.
  • Government social scoring: systems that assign social scores to individuals when deployed by government entities.
  • Intentional discrimination: AI systems developed or deployed with intent to discriminate against protected classes.
  • Biometric data without consent: collecting biometric data without appropriate authorisation.

The remaining prohibitions cover constitutional rights infringement, child exploitation content, and unlawful deepfakes.

The critical word here is "intent." TRAIGA uses an intent-based liability framework for discrimination claims. Disparate impact alone is not sufficient — the claimant must demonstrate that the developer or deployer intended to discriminate. This is a significant departure from impact-focused regulations and provides clearer compliance guidelines for businesses. But I would caution against reading this as carte blanche. Intent can be inferred from circumstances, and this framework will be tested in litigation.

The NIST Safe Harbour

Here is the provision that should shape your compliance strategy: TRAIGA provides an affirmative defence for organisations that substantially comply with the NIST AI Risk Management Framework.

This is not a technical detail — it is the centrepiece of practical compliance. Substantial compliance with the NIST AI RMF creates a rebuttable presumption that the entity used reasonable care. If you are advising clients on TRAIGA compliance, the NIST framework should be the foundation of their governance programme.

The statute also references "similar recognised frameworks" as potential bases for affirmative defence, though it does not enumerate which frameworks qualify. Until that is clarified, NIST is the safe choice.

Sector-Specific Exemptions

TRAIGA carves out certain sectors:

  • Financial institutions compliant with all applicable federal and state banking laws
  • Insurance entities subject to anti-discrimination statutes
  • Uses of biometric data for security, fraud prevention, or healthcare under HIPAA

These exemptions are narrow. If you are advising clients in these sectors, do not assume blanket exemption — review the specific conditions carefully.

The Penalty Structure

Enforcement authority rests exclusively with the Texas Attorney General. There is no private right of action — individuals cannot bring suits under TRAIGA. This is important for both litigation strategy and risk assessment.

Before bringing an action, the Attorney General must provide notice and an opportunity to cure. This pre-enforcement procedure signals a regulatory philosophy focused on achieving compliance rather than maximising penalties.

The penalties:

  • Curable violations: $10,000 to $12,000 per violation
  • Non-curable violations: $80,000 to $200,000 per violation
  • Continued violations after notice: $2,000 to $40,000 per day

The per-day exposure for continued violations is the number that should focus attention. An organisation that receives notice of a violation and fails to cure faces rapidly accumulating penalties.
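To make the accumulation concrete, here is a minimal back-of-the-envelope sketch. The $2,000–$40,000 daily range is taken from the statute as summarised above; the 30-day scenario and all names are hypothetical, for illustration only.

```python
# Illustrative model of TRAIGA's per-day exposure for a continued
# violation after notice. Statutory daily range per the article;
# the scenario below is hypothetical.

PER_DAY_MIN = 2_000
PER_DAY_MAX = 40_000

def continued_violation_exposure(days_uncured: int, per_day: int) -> int:
    """Accumulated penalty for one violation left uncured for `days_uncured` days."""
    assert PER_DAY_MIN <= per_day <= PER_DAY_MAX, "rate outside statutory range"
    return days_uncured * per_day

# One violation left uncured for 30 days at the statutory maximum:
print(continued_violation_exposure(30, PER_DAY_MAX))  # prints 1200000
```

Even a single violation at the top of the daily range exceeds $1 million in a month, which is why the cure period deserves immediate attention.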

The Regulatory Sandbox

TRAIGA creates a 36-month regulatory sandbox programme overseen by the Texas Artificial Intelligence Council. The sandbox offers temporary relief from certain state licensing and regulatory requirements while organisations test new AI systems in a controlled environment.

The sandbox is not a compliance holiday. Prohibitions on manipulation, discrimination, and unlawful content remain in force within the sandbox. Quarterly reporting is required. But for organisations developing novel AI applications, the sandbox provides a structured path to market.

The Federal Preemption Question

This is the area of greatest uncertainty. TRAIGA was enacted while Congress debated federal AI regulation preemption. A proposed measure would have imposed a ten-year moratorium on state AI laws. That provision was removed in an early Senate vote, but the possibility of federal preemption remains.

My practical advice: comply with TRAIGA as if it will remain in force indefinitely. If federal legislation eventually preempts state AI regulation, you will have built governance infrastructure that serves you regardless. If it does not, you will be compliant. The downside risk of over-preparation is minimal; the downside risk of under-preparation is significant.

A Compliance Roadmap

Before January 1, 2026:

  1. Inventory your AI systems. Identify every AI system that touches Texas residents or is deployed in Texas. This includes AI features embedded in broader software platforms — not just standalone AI tools.

  2. Classify by risk. Determine which systems implicate prohibited practices or high-risk uses. Map each system against TRAIGA's prohibition categories.

  3. Audit policies and contracts. Review internal AI policies, vendor agreements, and data governance practices for compliance with TRAIGA's requirements on discrimination, biometric data, and transparency.
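The inventory-and-classify steps above can be sketched as a minimal data model. This is an illustrative sketch, not a statutory taxonomy: the category names mirror the prohibition list earlier in this article, and every field and function name is hypothetical.

```python
# Illustrative AI-system inventory with TRAIGA-oriented risk tags.
# Category names mirror the article's prohibition list; field names
# are hypothetical.
from dataclasses import dataclass, field

PROHIBITION_CATEGORIES = {
    "behavioural_manipulation",
    "government_social_scoring",
    "intentional_discrimination",
    "biometric_without_consent",
    "constitutional_rights_infringement",
    "child_exploitation_content",
    "unlawful_deepfakes",
}

@dataclass
class AISystem:
    name: str
    serves_texas: bool                 # accessible to Texas residents or deployed in Texas
    embedded: bool                     # AI feature inside a broader platform, not standalone
    flagged_categories: set = field(default_factory=set)

    def in_scope(self) -> bool:
        # Scope turns on whether the system touches Texas, not where HQ sits.
        return self.serves_texas

    def needs_review(self) -> bool:
        # Flag any in-scope system that maps onto a prohibition category.
        return self.in_scope() and bool(self.flagged_categories & PROHIBITION_CATEGORIES)
```

Even a spreadsheet implementing the same fields serves the purpose; the point is that every system, including embedded AI features, gets a scope determination and a category mapping before January 1, 2026.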

Structural compliance:

  1. Implement the NIST AI RMF. This is the most efficient compliance strategy. Document your implementation thoroughly — the safe harbour requires demonstration of substantial compliance, not perfect implementation.

  2. Establish disclosure mechanisms. For public-facing tools used by or on behalf of government agencies, disclosure requirements apply. Build these into the user experience from the start.

  3. Train your people. TRAIGA does not include an explicit training mandate comparable to Article 4 of the EU AI Act, but ensuring personnel understand the law's requirements is essential for compliance. This is also a professional development investment that aligns with broader competence obligations.

Ongoing:

  1. Monitor the Texas AI Council. The Council will issue reports and opinions that will shape TRAIGA's interpretation. Track these developments and adjust your compliance programme accordingly.

  2. Update risk assessments. As your AI systems change and regulatory guidance evolves, update your risk classification and governance documentation.

My Assessment

TRAIGA is not a perfect statute. The intent-based discrimination framework will be tested. The "similar recognised frameworks" language needs clarification. The interplay with federal preemption is uncertain.

But it is a serious attempt at practical AI governance, and it provides clear compliance pathways — particularly the NIST safe harbour. For Texas practitioners, TRAIGA compliance is not optional. For lawyers in other states, it is a preview of where state-level AI regulation is heading.

Build the governance infrastructure now. The investment pays dividends regardless of how the regulatory landscape evolves.