TWINLADDER

Regulatory Updates

TRAIGA Compliance Guide for Texas Practitioners

An in-depth guide to meeting the requirements of the Texas Responsible Artificial Intelligence Governance Act.

May 12, 2025 · Liga Paulina, Co-founder & TwinLadder Academy Director · 13 min read


TRAIGA Compliance: What Texas Lawyers Need to Know

Navigating the Texas Responsible AI Governance Act's requirements, penalties, and safe harbors.


On June 22, 2025, Texas enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), establishing one of the most comprehensive state-level AI regulatory frameworks in the United States. The law takes effect January 1, 2026.

For Texas lawyers, TRAIGA creates both compliance obligations for their own AI use and advisory opportunities for clients deploying AI systems. This guide covers the law's key provisions, prohibited practices, penalty structure, and available safe harbors.

Applicability

TRAIGA applies broadly to:

  • Any individual or entity conducting business in Texas
  • Organizations offering products or services to Texas residents
  • Developers or deployers of AI systems within Texas
  • Texas-based, out-of-state, and international organizations whose AI systems are accessible to Texas users

The geographic scope is notably expansive. If your AI system is accessible to Texas users, TRAIGA likely applies regardless of where your organization is headquartered.

Prohibited Practices

TRAIGA establishes categorical prohibitions on certain AI uses. These prohibitions cannot be contracted around and carry significant penalties.

Absolutely prohibited:

  1. Behavioral manipulation: AI systems designed to manipulate human behavior in ways that cause harm
  2. Government social scoring: AI systems that assign social scores to individuals (by government entities)
  3. Unlawful discrimination: AI systems developed or deployed with intent to discriminate against protected classes
  4. Constitutional rights infringement: AI systems that infringe on constitutional rights
  5. Biometric data capture without consent: Collecting biometric data without appropriate authorization
  6. Child exploitation content: AI systems creating or distributing CSAM
  7. Unlawful deepfakes: Creating or distributing prohibited synthetic media

Critical distinction: TRAIGA uses an intent-based liability framework for discrimination claims. Disparate impact alone is not sufficient to demonstrate an intent to discriminate. This represents a significant departure from impact-focused regulations that create strict liability for discriminatory outcomes.

The intent requirement provides businesses with clearer compliance guidelines while maintaining consumer protections against deliberate abuse. However, lawyers should note that this framework may be tested in litigation, and intent can often be inferred from circumstances.

Compliance Requirements

Beyond prohibited practices, TRAIGA establishes affirmative compliance obligations:

Transparency requirements:

  • Public-facing AI tools used by or on behalf of government agencies require disclosure mechanisms
  • Users must be informed when interacting with AI systems in covered contexts
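The disclosure obligation above can be operationalized in a deployed system. The sketch below shows one way to surface an AI-interaction notice before the first automated reply in a session; the notice wording, function names, and session structure are all hypothetical illustrations, not language drawn from the statute.

```python
# Illustrative sketch: showing an AI-interaction disclosure once per session.
# The disclosure text and API shape are hypothetical, not taken from TRAIGA.

AI_DISCLOSURE = (
    "Notice: You are interacting with an automated artificial intelligence "
    "system, not a human representative."
)

def respond(user_message: str, session: dict, generate) -> str:
    """Prepend the disclosure to the first AI reply of a session.

    `generate` is the underlying AI call (assumed to take a message and
    return a reply string).
    """
    reply = generate(user_message)
    if not session.get("disclosure_shown"):
        session["disclosure_shown"] = True
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```

The one-time-per-session approach is a design choice, not a statutory requirement; counsel should confirm how often and how prominently the notice must appear in a given covered context.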

Documentation requirements:

  • Organizations should maintain records of AI system development and deployment decisions
  • Risk assessments and mitigation measures should be documented

Data governance:

  • Biometric data handling must comply with consent requirements
  • Data practices must address discrimination and privacy concerns

Safe Harbors and Affirmative Defenses

TRAIGA provides meaningful compliance safe harbors:

NIST AI RMF compliance: Substantial compliance with the NIST AI Risk Management Framework serves as an affirmative defense against enforcement actions. Organizations that document NIST RMF implementation have significant protection.

Comparable frameworks: Compliance with "similar recognized frameworks" may also provide affirmative defense protection, though the statute does not enumerate which frameworks qualify.

Sector-specific exemptions:

  • Financial institutions compliant with all federal and state banking laws
  • Insurance entities subject to anti-discrimination statutes
  • Uses of biometric data for security, fraud prevention, or healthcare under HIPAA

The Regulatory Sandbox

TRAIGA establishes a 36-month regulatory sandbox program to encourage innovation:

Eligibility: Organizations can apply to test new AI systems in a controlled environment

Benefits: Temporary relief from certain state licensing and regulatory requirements

Ongoing requirements:

  • Prohibitions on manipulation, discrimination, and unlawful content remain in force even within the sandbox
  • Quarterly reports required on system performance, risk mitigation, and stakeholder feedback

Governance: The Texas Artificial Intelligence Council oversees the sandbox program and advises on ethical AI issues

Penalty Structure

The Texas Attorney General has exclusive enforcement authority. TRAIGA provides no private right of action—individuals cannot bring suits under the statute.

Pre-enforcement procedure: The Attorney General must provide notice and an opportunity to cure before bringing an action.

Penalty ranges (per Ropes & Gray's TRAIGA analysis):

  Violation Type                                   Penalty Range
  Curable violation (unfixed after cure period)    $10,000 - $12,000 per violation
  Non-curable violation                            $80,000 - $200,000 per violation
  Continued violation                              $2,000 - $40,000 per day

The cure opportunity and graduated penalty ranges suggest a regulatory approach focused on achieving compliance rather than maximizing penalties. However, non-curable violations and continued violations after notice face substantial exposure.
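To make the exposure arithmetic concrete, the sketch below multiplies out the statutory ranges quoted above for a hypothetical mix of violations. This is back-of-the-envelope illustration only; actual penalties are determined by the Attorney General and the courts, not by this arithmetic.

```python
# Illustrative exposure estimate using the per-violation and per-day ranges
# quoted in the penalty table above. For discussion purposes only.

CURABLE = (10_000, 12_000)        # per violation, if unfixed after cure period
NON_CURABLE = (80_000, 200_000)   # per violation
CONTINUED_PER_DAY = (2_000, 40_000)

def exposure(curable: int, non_curable: int, continued_days: int) -> tuple[int, int]:
    """Return (min, max) dollar exposure for the given violation counts."""
    low = (curable * CURABLE[0]
           + non_curable * NON_CURABLE[0]
           + continued_days * CONTINUED_PER_DAY[0])
    high = (curable * CURABLE[1]
            + non_curable * NON_CURABLE[1]
            + continued_days * CONTINUED_PER_DAY[1])
    return low, high

# Example: 2 uncured curable violations plus one violation continuing for
# 30 days after notice yields exposure between $80,000 and $1,224,000.
```

Even a modest hypothetical like this shows why the continued-violation per-day range dominates exposure once a cure period lapses.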

The Texas AI Council

TRAIGA creates the Texas Artificial Intelligence Council, composed of experts responsible for:

  • Advising on the regulatory sandbox program
  • Opining on ethics of certain AI uses
  • Addressing public safety issues
  • Identifying legal roadblocks hindering AI innovation
  • Issuing reports on AI compliance, ethics, data privacy and security, and legal risks

The Council represents both an advisory resource and a policy-shaping body. Its guidance will likely influence how TRAIGA is interpreted and enforced.

Federal Preemption Uncertainty

TRAIGA was enacted amid federal debate over AI regulation preemption. As Baker Botts noted, the One Big Beautiful Bill Act initially proposed a 10-year moratorium on state AI laws, which was stripped in an early Senate vote.

Current status: The federal moratorium is not moving forward, but this could change as Congress continues to debate AI policy.

Practical implication: Compliance with TRAIGA should proceed as if the law will remain in effect, but organizations should monitor federal developments that could preempt state requirements.

Compliance Roadmap

Immediate actions (before January 1, 2026):

  1. Inventory AI systems: Identify all AI systems that touch Texas residents or are deployed in Texas
  2. Classify risk levels: Determine which systems implicate prohibited practices or high-risk uses
  3. Review policies: Audit internal policies, contracts, and data governance practices for discrimination, biometric data, and transparency requirements
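The inventory and classification steps above lend themselves to a simple structured record per system. The schema below is a hypothetical sketch of what such a record might capture; the field names and triage flags are illustrative and not drawn from the statute.

```python
# Hypothetical inventory record for roadmap steps 1-2: one entry per AI
# system that may touch Texas residents. Fields and flags are illustrative.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    accessible_to_texas_users: bool
    uses_biometric_data: bool = False
    government_facing: bool = False
    risk_notes: list[str] = field(default_factory=list)

    def in_scope(self) -> bool:
        # TRAIGA's broad scope: accessibility to Texas users is the trigger,
        # regardless of where the organization is headquartered.
        return self.accessible_to_texas_users

    def flags(self) -> list[str]:
        """Rough triage flags for follow-up legal review (not conclusions)."""
        out = []
        if self.uses_biometric_data:
            out.append("review biometric consent requirements")
        if self.government_facing:
            out.append("review disclosure mechanism requirements")
        return out
```

A spreadsheet serves the same purpose; the point is that each system gets a scope determination and a set of open review items before the effective date.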

Structural compliance:

  1. Implement NIST AI RMF: The safe harbor makes NIST compliance the most efficient compliance strategy
  2. Establish disclosure mechanisms: For public-facing government-related tools
  3. Document decisions: Create records of AI development and deployment reasoning

Ongoing requirements:

  1. Monitor Council guidance: Track Texas AI Council reports and opinions
  2. Update risk assessments: As systems change and guidance evolves
  3. Train personnel: Ensure those deploying AI understand TRAIGA requirements

Implications for Legal Practice

For law firm AI use: Firms using AI for research, drafting, or client service delivery should ensure their tools comply with TRAIGA to the extent they serve Texas clients.

For client advisory: TRAIGA creates significant advisory opportunities—clients deploying AI in Texas need compliance counsel.

For litigation: The intent-based discrimination framework and AG-exclusive enforcement will shape how AI-related disputes develop in Texas.

Key Takeaways

  • TRAIGA takes effect January 1, 2026, applying to any AI system accessible to Texas users regardless of where the developer is located
  • Prohibited practices include behavioral manipulation, government social scoring, intentional discrimination, and unconsented biometric data capture
  • Intent-based liability for discrimination claims provides clearer compliance guidelines but differs from federal disparate-impact frameworks
  • NIST AI RMF compliance provides an affirmative defense — making NIST alignment the most efficient compliance strategy
  • The Texas Attorney General has exclusive enforcement authority with penalties ranging from $10,000 to $200,000 per violation and $40,000 per day for continued violations