
Issue #5

Texas HB 149 Explained: America's Bright-Line AI Law and Its European Parallels

Texas's new AI disclosure requirements take effect January 1, 2026. We compare the Texas approach with the EU AI Act's transparency obligations and identify which provisions will matter most for legal services.

TRAIGA
Texas Regulation
EU AI Act
Compliance
April 11, 2025 · 16 min read

TwinLadder Weekly

Issue #5 | April 2025


Editor's Note

I practised in Brussels long enough to develop a reflex: when an American jurisdiction passes technology regulation, I read it against the EU framework first. Not because Europe always gets it right — anyone who has wrestled with the GDPR's consent architecture knows better — but because comparing approaches reveals what each system values and what each is willing to tolerate.

Texas's new AI law is interesting precisely because of what it chose not to do. The EU AI Act builds an elaborate risk-tiering architecture with annexes, implementing acts, and enough delegated authority to keep compliance departments busy for years. Colorado classifies high-risk AI systems. Texas took a simpler path: here are the things you cannot do with AI, here are the penalties if you do them, here is a sandbox for experimentation. No risk categories. No compliance bureaucracy for low-risk uses. Just bright lines and enforcement.

Whether you find that refreshingly practical or dangerously underspecified probably depends on where you trained. Either way, if you have clients operating in Texas — and many European firms with US practices do — this matters from January 1, 2026. And for those of us navigating the EU AI Act, comparing these two regulatory philosophies is more than an academic exercise. It is preparation for advising clients who operate under both.


TRAIGA: When America Draws Bright Lines and Europe Builds Frameworks

On June 22, 2025, Governor Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA, HB 149) into law. It takes effect January 1, 2026, and unlike some state AI guidance we have seen, it has real enforcement teeth.

The philosophical differences between TRAIGA and the EU AI Act deserve careful examination, because they reveal fundamentally different theories about how to govern technology.

Dimension | EU AI Act | TRAIGA (Texas)
Approach | Risk-tiered, comprehensive, preventive | Prohibitions-based, targeted, reactive
Scope | All AI systems, all sectors | Specific prohibited uses only
Discrimination standard | Effect-based (disparate impact matters) | Intent-based (must show design intent)
Enforcement | Multiple authorities; private rights of action possible | Attorney General only; no private right of action
AI literacy | Mandatory (Article 4, in force February 2025) | No literacy requirement
Sandbox | Regulatory sandboxes in draft implementing acts | 36-month sandbox via Texas DIR
Penalties | Up to EUR 35M or 7% of global turnover | $10K–$12K per curable violation; $80K–$200K per uncurable violation; $2K–$40K/day continuing

TRAIGA prohibits AI systems intentionally designed to harm people, engage in criminal activity, infringe constitutional rights, discriminate against protected classes, manipulate behaviour through cognitive exploitation, assign government social scores, or capture biometric data without consent. The operative word is "intentionally" — disparate impact alone is not sufficient under the statute. Prosecutors must show design intent. If you are accustomed to EU discrimination frameworks where effect matters as much as intent, this is a significant philosophical difference. It means a system that produces discriminatory outcomes in Texas may not violate TRAIGA unless prosecutors can demonstrate the system was designed with that purpose.

The penalties are real: $10,000-$12,000 per curable violation, escalating to $80,000-$200,000 per uncurable violation, with continuing violations costing $2,000-$40,000 per day. Only the Attorney General can enforce — there is no private right of action — but clients can file complaints that trigger AG investigations. A 60-day cure period for fixable violations provides operational breathing room, though false representations about fixing a violation escalate penalties dramatically.
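
To make that arithmetic concrete, here is a minimal sketch of the exposure maths using the statutory ranges above. The scenario figures are invented for illustration; the statute's actual assessment rules will control.

```python
# Minimal sketch: rough TRAIGA penalty exposure using the statutory
# ranges quoted above. The scenario figures are invented for illustration.

def exposure(violations: int, days_continuing: int = 0,
             curable: bool = True) -> tuple[int, int]:
    """Return a (low, high) dollar range for a set of violations."""
    per_violation = (10_000, 12_000) if curable else (80_000, 200_000)
    per_day = (2_000, 40_000)  # continuing-violation range
    low = violations * per_violation[0] + days_continuing * per_day[0]
    high = violations * per_violation[1] + days_continuing * per_day[1]
    return low, high

# Three curable violations that continue 30 days past the cure period:
print(exposure(violations=3, days_continuing=30))  # (90000, 1236000)
```

Even at the curable end of the scale, the continuing-violation exposure dominates quickly, which is why the 60-day cure period matters so much in practice.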

The safe harbours deserve close attention from any firm advising on compliance. Substantial compliance with the NIST AI Risk Management Framework creates an affirmative defence, and it is significant that NIST compliance also provides cover under Colorado's AI Act: one framework, multiple US jurisdictions. Third-party misuse does not create deployer liability if you did not design the system for prohibited purposes, and good-faith internal testing that discovers violations will not trigger liability if the problems are addressed. For European firms, the parallel is instructive: firms that build EU AI Act compliance now will find themselves substantially prepared for whatever framework emerges in their other markets.

The most innovative feature is the regulatory sandbox administered by the Texas Department of Information Resources — up to 36 months of testing with certain regulatory exemptions. The EU AI Act contemplates similar sandboxes, but implementing acts are still in development. Texas moved faster on this front. For legal technology companies and law firms experimenting with AI, both sandboxes provide structured experimentation environments — watch for the EU's sandbox rules to learn from the Texas experience.

The federal preemption question adds uncertainty. President Trump's December 2025 executive order proposed federal AI policy that could preempt inconsistent state laws. Practical advice: comply with TRAIGA now. If federal preemption happens, you will be ahead of any federal framework. If it does not, you are already compliant. The same logic applies to European firms and the EU AI Act — build compliance infrastructure now, before enforcement crystallises the requirements.

For firms using legal AI tools — Harvey, LegalOn, Lexis AI — the third-party deployer question matters most. If the tool you deploy produces discriminatory outcomes in decisions about hiring, lending, or housing-related agreements, does deployer liability attach if you knew or should have known? TRAIGA says disparate impact alone is not sufficient, but continued use after discovering disparate impact might look like intentional discrimination. The statute does not resolve this clearly. Vendor due diligence matters more now than it ever has. Ask about NIST compliance. Get representations in contracts. Document your evaluation process.
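
For firms that want to screen a deployed tool before the question becomes adversarial, the most common first-pass check is the four-fifths (80%) rule from US employment analysis. A minimal sketch with invented counts follows; it is a screening heuristic, not the statutory standard under TRAIGA or EU law.

```python
# Minimal disparate-impact screen using the "four-fifths rule": flag any
# group whose favourable-outcome rate falls below 80% of the best group's
# rate. Counts are invented for illustration.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favourable, total); returns each group's
    selection rate divided by the highest group's rate."""
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = impact_ratios({"group_a": (45, 100), "group_b": (28, 100)})
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # {'group_a': 1.0, 'group_b': 0.62...}
print(flagged)  # ['group_b'] falls below the 0.8 threshold: investigate
```

Under TRAIGA's intent standard a failed screen is not itself a violation, but running the screen, documenting it, and acting on the result is exactly the paper trail that separates ordinary deployment from the appearance of intent.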


The Competence Question

A 40-lawyer firm in Dallas conducted a firm-wide AI audit before TRAIGA passed. They discovered twelve different AI tools in use across practice groups — some entirely unknown to management. Three had questionable data handling practices. They replaced them before the law took effect. The audit took twenty hours and probably saved them from a compliance nightmare.

I have heard identical stories from European firms. A mid-sized practice in Frankfurt discovered that associates in four different practice groups were using four different AI tools, none approved by management, each processing client data under different — and in one case, non-existent — data protection arrangements. The GDPR exposure alone was significant. Under Article 4 of the EU AI Act, the firm could not demonstrate that these staff had "sufficient AI literacy" because it did not even know what tools they were using.

Most firms have no idea what AI tools their lawyers are actually using. An associate signs up for a $20-per-month subscription, pastes client matter details into a prompt, and nobody in management knows it happened. That is not a hypothetical. It is the norm at mid-market firms I have spoken to across both continents.

Oregon's Formal Opinion 2025-205 addressed one dimension of this shadow AI problem: if AI cuts your research time from three hours to thirty minutes, you cannot bill three hours. But the competence question runs deeper than billing ethics. When your firm cannot inventory the AI tools in use, you cannot assess compliance with TRAIGA, the EU AI Act, or any other framework. You cannot evaluate whether client data is being processed appropriately. You cannot determine whether outputs are being verified before they reach clients or courts.

The first step in AI governance is not policy. It is knowing what is happening in your own practice. You cannot govern what you do not know exists. Every compliance framework in the world is useless if you do not know what you are trying to make compliant.


What To Do

  1. Conduct a firm-wide AI inventory this month. List every tool in use, who uses it, what data it processes, and what decisions it influences. Include personal subscriptions associates may be using without firm approval. You cannot govern what you do not know exists. In my experience, firms that conduct these audits discover three to five times more AI tools in use than management expected. A minimal inventory template appears after this list.

  2. Map your AI tools to the relevant compliance framework. The NIST AI Risk Management Framework creates safe harbours under both Texas TRAIGA and Colorado's AI Act. The EU AI Act's risk classification and Article 4 literacy requirements apply to all European deployers. Build compliance once, apply across jurisdictions. The investment is the same; the returns multiply.

  3. Add AI representations to vendor contracts. Ask vendors about NIST compliance, data handling, bias testing, and discrimination monitoring. For European firms, add GDPR and EU AI Act compliance representations. Document the responses. If a vendor cannot answer these questions, that tells you something important about their maturity.

  4. Build to the highest common denominator. If you operate across states or across the Atlantic, NIST compliance plus EU AI Act documentation plus bias monitoring covers most current requirements. Do not build separate compliance programmes for each jurisdiction. Build one programme that meets the most demanding standard and document the jurisdiction-specific exceptions.

  5. Watch the regulatory landscape actively. Roughly 50% of US states have now issued AI guidance. The EU AI Act's Article 4 is already in force. New York requires 2 CLE credits in AI competency by Q3 2025. The direction is consistent even where the details differ. Firms that build compliance infrastructure now will advise clients on these requirements later. Firms that wait will be playing catch-up.
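
As promised in step 1, here is a minimal sketch of what the inventory record can look like. The column names are my own assumptions; adapt them to your practice structure.

```python
# Minimal sketch of the firm-wide AI inventory from step 1. Column names
# are illustrative assumptions; adapt to your own practice groups.
import csv

FIELDS = ["tool", "practice_group", "users", "data_processed",
          "decisions_influenced", "firm_approved", "personal_subscription"]

rows = [
    {"tool": "example-research-assistant",   # hypothetical entry
     "practice_group": "Corporate", "users": "12",
     "data_processed": "client matter documents",
     "decisions_influenced": "research, first-draft memos",
     "firm_approved": "yes", "personal_subscription": "no"},
    # ... one row per tool discovered, including unapproved subscriptions
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

A spreadsheet works just as well; the point is that the fields map directly onto the questions TRAIGA and the EU AI Act will ask of you.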


Quick Reads

  • The regulatory patchwork is growing on both sides of the Atlantic. Texas (TRAIGA, January 2026), Colorado (AI Act, February 2026), California (SB 574 pending), plus the EU AI Act (Article 4 already in force, full enforcement August 2026). For firms advising transatlantic clients, mapping these overlapping frameworks is becoming a practice area in itself.

  • TRAIGA analysis from Ropes & Gray provides one of the better practical compliance walkthroughs for deployers. Worth reading alongside the EU AI Office's implementation guidance for a transatlantic perspective.

  • The Texas AI Council has been created to oversee the sandbox programme. The EU is developing parallel sandbox frameworks through national competent authorities. Worth monitoring both if you advise legal technology companies or clients evaluating sandbox-tested products.

  • Greenberg Traurig's key provisions summary and WilmerHale's analysis are both useful as quick references for client conversations about compliance obligations.


One Question

If continued use of an AI tool after discovering it produces discriminatory outcomes looks like intentional discrimination under TRAIGA's intent standard, how many firms — on either side of the Atlantic — have even tested their tools for disparate impact?


TwinLadder Weekly | Issue #5 | April 2025

Helping European professionals build AI competence through honest education.

Included Workflow

AI Disclosure Decision Tree

Decision framework for determining when and how to disclose AI use. Covers research, drafting, contract review, and client-facing scenarios with jurisdiction-specific notes.
