Failed Federal Moratorium Triggers State AI Regulation Revolution
Congress tried to hit pause on state AI regulation. The proposal for a 10-year moratorium on new state laws governing artificial intelligence was pitched as a way to prevent “regulatory chaos.” When the Senate voted it down 99 to 1 in July 2025, the result was the opposite. The rejection triggered a surge of state legislation as governors, attorneys general, and bar associations rushed to fill the vacuum. Washington may debate principles, but the states are now writing the rules.
From Preemption to Proliferation
The proposed moratorium originated in the House version of a 2025 innovation bill that sought to block states from regulating AI for a decade. Specifically, the provision was included in H.R. 1, which the House formally titled the “One Big Beautiful Bill Act,” a broad budget reconciliation package passed by the House in May 2025. Technology companies supported it, arguing that a national standard would prevent a patchwork of conflicting state laws. But when the Senate rejected the plan by a near-unanimous vote, lawmakers effectively handed authority to the states. Within weeks, legislatures from Connecticut to Washington began introducing comprehensive AI bills modeled on existing data-privacy statutes. The prospect of federal preemption gave way to 50 laboratories of governance.
Colorado and Texas: Two Paths to AI Compliance
Colorado moved first with Senate Bill 24-205, the nation’s first enforceable AI accountability law. Effective June 30, 2026, it defines “high-risk AI systems,” imposes risk-assessment duties on both developers and deployers, and requires consumer disclosures when automated tools shape consequential decisions. Enforcement authority rests with the state attorney general. Texas followed with a framework designed for innovation. Its Responsible Artificial Intelligence Governance Act (TRAIGA), signed June 22, 2025, includes mandatory prohibitions (such as those against unlawful discrimination) but promotes a different compliance model through an AI regulatory sandbox and exclusive enforcement authority vested in the attorney general, with a 60-day right-to-cure period. Together they form the bookends of the emerging compliance landscape: Colorado prioritizes strict algorithmic accountability, while Texas balances compliance with innovation.
For law firms and in-house counsel, these contrasts reveal the new challenge of advising clients across jurisdictions. The same algorithm that complies in Austin may violate disclosure requirements in Denver. Multistate compliance programs now require AI inventories, documented risk classifications, and jurisdiction-specific disclaimers.
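To make the multistate problem concrete, here is a minimal Python sketch of such an inventory. Every field name, the risk classification, and the disclosure triggers are hypothetical simplifications of the statutes summarized above, not statutory text:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a multistate AI inventory (hypothetical schema)."""
    name: str
    vendor: str
    use_case: str                     # e.g., "resume screening"
    consequential_decision: bool      # does the tool shape a consequential decision?
    generative_consumer_facing: bool  # does it interact with consumers (UT trigger)?
    jurisdictions: list[str] = field(default_factory=list)

def classify(record: AISystemRecord) -> str:
    """Simplified risk classification: Colorado's 'high-risk' category turns
    on whether the system shapes a consequential decision."""
    return "high_risk" if record.consequential_decision else "minimal"

def required_disclosures(record: AISystemRecord) -> dict[str, str]:
    """Map each jurisdiction to an (assumed) disclosure duty. The triggers
    below paraphrase this article's summary, not the statutes themselves."""
    duties: dict[str, str] = {}
    for state in record.jurisdictions:
        if state == "CO" and classify(record) == "high_risk":
            duties[state] = "consumer disclosure for high-risk system (SB 24-205)"
        elif state == "UT" and record.generative_consumer_facing:
            duties[state] = "disclose generative AI in consumer interactions (S.B. 149)"
        elif state == "TX":
            duties[state] = "TRAIGA prohibitions apply; document reasonable care"
    return duties

screener = AISystemRecord(
    name="resume-ranker-v2",
    vendor="ExampleVendor",
    use_case="employment screening",
    consequential_decision=True,
    generative_consumer_facing=False,
    jurisdictions=["CO", "TX", "UT"],
)
for state, duty in sorted(required_disclosures(screener).items()):
    print(f"{state}: {duty}")
```

In practice, counsel would key each record to the actual statutory definitions and track effective dates, since Colorado’s duties do not attach until June 30, 2026.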
Copycats and Countermodels
Utah was an early mover, passing S.B. 149 in 2024, which focuses on consumer transparency by requiring disclosures when generative AI is used to interact with consumers. After the federal vote, Connecticut and Washington advanced bills adopting similar “high-risk system” definitions, often modeling their language on the EU AI Act. Oregon drafted a transparency statute requiring AI-impact assessments for public agencies.
In California, a CalMatters investigation revealed that state departments reported “no high-risk models” despite using automated tools for benefits and employment decisions, prompting calls for independent audits. Illinois expanded its Biometric Information Privacy Act to cover AI facial-recognition and emotion-analysis tools, while New York introduced deepfake-disclosure laws aimed at political advertising.
The result is not uniformity but acceleration. Each legislature writes its own vocabulary for “algorithmic accountability,” producing overlapping but incompatible definitions. Businesses that once worried about 50 privacy laws must now map 50 AI ones.
Why Federal Silence Fueled State Action
The Senate’s rejection of the moratorium exposed the absence of a federal counterpart to the EU AI Act. The White House’s “Blueprint for an AI Bill of Rights” and the NIST AI Risk Management Framework remain voluntary guidance documents. Notably, the Texas law (TRAIGA) creates a rebuttable presumption of reasonable care for companies that demonstrate compliance with a recognized framework like the NIST RMF, effectively delegating standards-setting authority to states that incorporate federal guidance. The Carnegie Endowment observed that this vacuum left states with both the political incentive and the legal latitude to legislate. The TIME report on the vote described the outcome as “a repudiation of Big Tech’s attempt to federalize innovation policy.”
For now, the federal government exerts influence only through procurement rules and grant conditions. Congress has not passed a binding AI statute, and the Federal Trade Commission relies on existing consumer-protection authority to police algorithmic deception. Without a national framework, state enforcement is defining the operational boundaries of lawful AI use in real time.
Legal Friction on the Horizon
Fragmented regulation invites litigation. Companies operating across state lines are expected to challenge the most restrictive laws under the Commerce Clause, arguing that divergent disclosure and audit requirements unduly burden interstate commerce. Early suits are anticipated once Colorado’s law takes effect in 2026. Courts will face a familiar question: when does consumer protection become economic protectionism?
The constitutional landscape mirrors the early days of data privacy, when California’s CCPA collided with federal proposals that never passed. Unless Congress enacts a unifying statute, preemption disputes are inevitable. The outcome may determine whether states remain the primary laboratories of AI governance or yield to a belated federal standard.
Practical Lessons for Law Firms
For practitioners, the proliferation of state AI laws transforms compliance from a theoretical exercise into a client imperative. Law firms advising national businesses must build living inventories of AI systems, classify them by statutory risk level, and document disclosure protocols. Many are adopting internal “AI registers” inspired by frameworks like ISO/IEC 42001 to log model provenance, data sources, and audit outcomes.
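As an illustration, the sketch below shows what one row of such a register might look like in Python. The schema is hypothetical; ISO/IEC 42001 describes a management system rather than prescribing these exact fields:

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class RegisterEntry:
    """One row of a firm's internal AI register (illustrative fields only)."""
    system_name: str
    model_provenance: str        # who built, fine-tuned, or hosts the model
    data_sources: list[str]      # training or retrieval data lineage
    risk_level: str              # the firm's statutory classification
    last_impact_assessment: str  # ISO date of the most recent assessment
    audit_outcome: str           # e.g., "pass", "remediation required"

entry = RegisterEntry(
    system_name="contract-review-assistant",
    model_provenance="third-party foundation model, vendor-hosted",
    data_sources=["client-matter documents (access-controlled)"],
    risk_level="high_risk (CO SB 24-205 definition)",
    last_impact_assessment=date(2026, 1, 15).isoformat(),
    audit_outcome="pass",
)
print(json.dumps(asdict(entry), indent=2))
```

Keeping the register in a structured, serializable form makes it straightforward to export entries when a regulator, or an insurer, asks for documentation.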
Insurance carriers are responding as well. Cyber-liability underwriters increasingly ask whether firms maintain AI risk management programs. Verified documentation, such as impact assessments, vendor due diligence checklists, and human review policies, can lower premiums. Transparency is becoming not just a compliance requirement but a condition of coverage.
The Laboratories Are Open
The Senate’s vote did not produce chaos; it produced experimentation. States are defining what “responsible AI” means through the only method that ever works in American governance: trial, error, and iteration. Some frameworks will overreach and be struck down. Others will become national templates. In time, a handful of states may emerge as the Delaware or California of AI law, setting default standards for the rest.
Uniformity may come not from Congress but from market convergence. Companies and law firms already design their policies to satisfy the strictest jurisdiction in which they operate. In effect, federal inaction has delegated regulation to competition. The laboratories are open, and the experiments are already underway.
Sources
- Brookings Institution: “How Different States Are Approaching AI” (2025)
- CalMatters: “California Somehow Finds No AI Risks” (May 2025)
- Carnegie Endowment for International Peace: “State AI Law: What’s Coming Now That the Federal Moratorium Is Dead” (July 2025)
- Colorado General Assembly: “Senate Bill 24-205: Consumer Protections for Artificial Intelligence” (Signed May 17, 2024; Effective June 30, 2026)
- European Commission: “AI Act – Shaping Europe’s Digital Future” (2024)
- National Institute of Standards and Technology (NIST): “AI Risk Management Framework” (2023)
- TIME: “Senators Reject 10-Year Ban on State-Level AI Regulation in Blow to Big Tech” (July 2025)
- Utah Legislature: “S.B. 149: Artificial Intelligence Policy” (Signed March 13, 2024)
- White & Case: “AI Watch — United States Tracker” (June 2025)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All sources cited are publicly available through official publications and reputable outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: Colorado’s Groundbreaking SB 24-205: The AI Law Every Lawyer Should Know
