State Attorneys General Turn Legacy Laws Into Working AI Regulation

There is still no federal AI code, no single statute that tells American companies what they may or may not automate. Yet enforcement is already here. State attorneys general are reaching for privacy rules, unfair practices laws, civil rights statutes and new deepfake bills, then using them to police generative tools in hospitals, hiring platforms, lending models and child-facing chatbots. In practical terms, these offices now function as the country’s de facto AI regulators, even as Congress argues over preemption and federal agencies haggle over guidance.

State AGs Fill the AI Regulatory Void

By May 2025, legal commentators were already describing state attorneys general as filling the AI regulatory void: only a handful of states had passed AI-specific laws, yet AGs in California, Texas, New Jersey and Oregon were enforcing existing statutes against AI-driven harms. That pattern has only hardened since. New laws remain piecemeal, but enforcement actions, advisory letters and headline settlements continue to arrive.

At the federal level, the FTC has pursued its own AI enforcement strategy through Operation AI Comply, launched in September 2024. The initiative uses Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices, to challenge companies making false claims about AI capabilities. The FTC has taken action against firms like DoNotPay for claiming to offer the world’s first robot lawyer and Evolv Technologies for unsubstantiated security claims. Yet the FTC’s approach focuses primarily on deceptive marketing and business opportunity schemes, leaving state attorneys general to address the broader sweep of AI harms including employment discrimination, tenant screening and child protection.

By late November 2025, a bipartisan group of 36 attorneys general had warned Congress not to preempt state AI rules, arguing that blocking state authority would have disastrous consequences for children, consumers and public safety. In a companion release, California Attorney General Rob Bonta framed the issue bluntly: states are already using their own laws to manage AI risks and intend to keep doing so.

For U.S. lawyers, the practical lesson is simple. Even if federal AI bills stall or reverse, state attorneys general will continue to treat generative tools and automated decision systems as new vectors for familiar violations. The work now is to understand how that approach looks in the jurisdictions that matter most to clients.

California Builds Broad AI Enforcement Baseline

California is the clearest example of an AG’s office building AI oversight out of legacy law plus a growing stack of new statutes. In January 2025, Bonta issued twin legal advisories explaining how existing California rules on unfair competition, false advertising, consumer remedies, data security and civil rights already apply to AI tools. The advisories are explicit that companies cannot market AI systems as unbiased, safe or accurate without substantiation and cannot deploy chatbots, deepfakes or voice clones in ways that mislead consumers.

Those advisories land on top of an unusually dense legislative base. Analysis by the Center for Security and Emerging Technology found that California enacted 18 AI-related bills in 2024 alone, covering deepfake criminalization, model transparency in certain sectors, health care automation, automated decision tools in housing and employment, and governance structures for state use of AI. Law firm overviews frame these as the beginning of a statewide AI regime rather than one-off experiments.

Children and intimate imagery sit at the sharp edge of this strategy. Bonta has repeatedly warned that abusive uses of AI will be treated as violations of existing law, not novel gray zones. A January 2025 advisory emphasizes that nonconsensual deepfake sexual imagery likely violates California’s consumer protection and privacy statutes. In August 2025 he joined a bipartisan coalition of 44 attorneys general in a joint letter telling major AI firms that if they knowingly harm children, they should expect enforcement. By September, his office had sent a separate letter to OpenAI focused on youth safety and was supporting legislation that would restrict companion chatbot behaviors with minors.

For counsel, California’s message is not subtle. If a company markets or deploys AI within the state, it should assume that every representation about safety, bias, accuracy or emotional support can be tested against statutes that predate generative models and that the attorney general will supplement those tools with targeted AI legislation where needed.

Texas Targets AI Accuracy and Deception

If California represents the comprehensive model, Texas illustrates how a single enforcement action can reset expectations for AI vendors nationwide. On September 18, 2024, Attorney General Ken Paxton announced a first-of-its-kind settlement with Pieces Technologies, a Dallas-based company whose generative tools summarize and draft clinical documentation for hospitals. According to the attorney general’s office and subsequent law firm analyses, the investigation alleged that the company misrepresented the accuracy, safety and performance of its systems in ways that could have affected patient care.

The agreement required the company to stop making unsubstantiated claims about accuracy metrics, submit to independent testing, provide fuller disclosures to hospital customers and report compliance measures to the state. The legal hooks were not AI-specific rules but the Texas Deceptive Trade Practices Act and related consumer protection provisions. In other words, the novelty was factual rather than doctrinal: this was the first time a state AG had used standard unfair practices law to resolve concerns about the performance of a generative health care product.

The implications reach well beyond one vendor. Commentaries on the settlement point out that any company touting AI-driven accuracy, bias reduction or safety gains now has a clear template for what can trigger scrutiny. Metrics must be grounded in real testing, not optimistic projections or selective benchmarking. Marketing teams cannot describe statistical outputs as clinical guarantees. Hospitals and health systems, in turn, are on notice that AI procurement and oversight now sit within the scope of state consumer protection and health care fraud enforcement, not just private contracting.

Paxton has also joined the multistate front on children’s interactions with AI, signing on to coalition letters that accuse platforms and AI providers of repeating the governance mistakes of early social media. Texas therefore illustrates two sides of AG AI work: sector-specific enforcement around accuracy and safety, and coalition enforcement around child protection.

New Jersey Focuses on Algorithmic Discrimination

New Jersey has chosen a different focal point. In January 2025, Attorney General Matthew Platkin and the Division on Civil Rights issued guidance on algorithmic discrimination under the New Jersey Law Against Discrimination. The document defines automated decision-making tools broadly, covering everything from simple decision trees and classical statistical models to generative systems, and makes clear that using these tools does not insulate employers, landlords, lenders or service providers from liability.

Summaries from employment law firms such as Littler, Ogletree and Ansell underscore the key points. The guidance explains that the Law Against Discrimination applies whether bias arises from intentional design choices or from data and models that encode historic inequities. It stresses that covered entities cannot avoid responsibility by blaming vendors or algorithms, and that disparate impact theories remain available even when the underlying inputs seem facially neutral. It also encourages proactive testing, documentation and oversight for any automated screening or scoring system.

New Jersey legislators have paired that civil rights focus with criminal and civil remedies for deepfake abuse. On April 2, 2025, Governor Phil Murphy signed a law that makes creating or distributing deceptive AI-generated media a crime, with penalties of up to five years in prison and a private right of action for victims. The statute was motivated in part by a high school student’s experience of nonconsensual deepfake imagery and places New Jersey among more than 20 states that now treat certain AI-manipulated media as a specific offense.

Taken together, the guidance and the statute show how an AG’s office can reframe AI not as a special category of technology, but as a context in which existing civil rights and criminal rules now clearly apply. For companies deploying hiring tools, tenant screening software or generative image services, New Jersey has become one of the more consequential jurisdictions to watch.

Oregon and Privacy States Apply Existing Legal Frameworks to AI

In December 2024, Attorney General Ellen Rosenblum issued guidance titled “What you should know about how Oregon’s laws may affect your company’s use of Artificial Intelligence.” The document walks businesses through state unfair trade practices, consumer privacy, data breach and anti-discrimination laws, then explains how each applies to the training and deployment of AI tools.

The focus is less on announcing new prohibitions and more on clarifying expectations: do not feed personal data into models without honoring existing privacy rights, do not deploy opaque systems in ways that create discriminatory outcomes, and do not assume that the novelty of AI relaxes Oregon’s traditional standards of fairness and transparency. The message resonates beyond the state’s borders because it describes an enforcement approach that many AGs are now adopting.

Similar patterns appear in other privacy-focused states. Connecticut Attorney General William Tong issued guidance in December 2024 on opt-out preference signals under the state’s privacy statute, requiring covered entities to honor consumer signals starting January 1, 2025. While Connecticut’s comprehensive AI bill (Senate Bill 2) stalled in the legislature for the second consecutive year in 2025, the attorney general retains authority to enforce existing consumer protection and privacy laws against AI-driven harms.
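For engineering teams asking what honoring an opt-out preference signal looks like in practice, the most prominent such signal is the Global Privacy Control, which participating browsers transmit as a Sec-GPC HTTP header. The sketch below is a minimal illustration of detecting that signal in a Flask service; the route, helper names and responses are hypothetical, and nothing here should be read as the statute’s required implementation.

```python
# Illustrative sketch only: one way a web service might detect the Global Privacy
# Control (GPC) signal, which browsers send as the "Sec-GPC: 1" HTTP header.
# Route and helper names are hypothetical placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)

def honors_opt_out(req) -> bool:
    """Return True when the request carries a universal opt-out signal."""
    return req.headers.get("Sec-GPC", "").strip() == "1"

def generic_recommendations():
    return ["item-a", "item-b"]          # placeholder, non-personalized content

def personalized_recommendations():
    return ["item-x", "item-y"]          # placeholder, personalized content

@app.route("/recommendations")
def recommendations():
    if honors_opt_out(request):
        # Suppress sale/sharing of personal data, including use for targeted
        # advertising or cross-context personalization of model outputs.
        return jsonify(items=generic_recommendations(), personalized=False)
    return jsonify(items=personalized_recommendations(), personalized=True)

if __name__ == "__main__":
    app.run()
```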

Colorado presents a unique model among privacy states. On May 17, 2024, Governor Jared Polis signed the Consumer Protections for Artificial Intelligence Act into law, making Colorado the first state to pass comprehensive AI legislation focused on algorithmic discrimination. The law, which takes effect February 1, 2026, requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination in consequential decisions affecting education, employment, financial services, health care, housing, insurance and legal services. The Colorado Attorney General has exclusive enforcement authority and is currently engaged in a public rulemaking process to implement the law’s requirements. Unlike Texas’s reactive enforcement or Oregon’s guidance-based approach, Colorado represents a proactive legislative framework with built-in AG oversight from the start.

Utah took yet another path. On March 13, 2024, Governor Spencer Cox signed the Utah Artificial Intelligence Policy Act into law, which took effect May 1, 2024. The law focuses on disclosure requirements for generative AI rather than comprehensive regulation, requiring businesses using generative AI to interact with consumers in regulated contexts to clearly disclose that interaction. The Utah Division of Consumer Protection can levy fines of up to $2,500 per violation. In November 2025, Utah Attorney General Derek Brown partnered with North Carolina Attorney General Jeff Jackson to launch a bipartisan AI Task Force that includes Microsoft and OpenAI, taking a collaborative rather than purely adversarial approach to AI governance.

While not a generic consumer protection statute, Illinois’s Biometric Information Privacy Act (BIPA) represents another unique and powerful state-level legal mechanism that affects AI deployment. BIPA is widely considered the most aggressive biometric data protection law in the nation, providing a private right of action and statutory damages for the collection, retention and use of biometric identifiers (such as face geometry, fingerprints and voiceprints) without proper notice and consent. Because many AI models rely on biometric data for training or deployment, especially in facial recognition, security and authentication contexts, BIPA has become a significant source of liability for companies operating within Illinois. Major settlements under BIPA have confirmed that state law can impose massive compliance and financial risk on AI-adjacent technologies, even without explicit, dedicated AI legislation.

The pattern across privacy states is clear. Whether through comprehensive new legislation, as in Colorado, targeted disclosure requirements, as in Utah, or the application of existing statutes, as in Oregon and Connecticut, state attorneys general are insisting that AI tools respect the data minimization, purpose limitation, opt-out rights and anti-discrimination principles that predate generative models. The legal theory is not complex: if a company violates an existing privacy or consumer protection law by using AI, the presence of an algorithm does not excuse the underlying conduct.

Multistate Coalitions Coordinate National AI Enforcement Strategy

While individual states develop their own emphases, multistate coalitions function as a kind of informal AI regulator of national reach. On August 26, 2025, the National Association of Attorneys General announced a bipartisan letter from 44 attorneys general to leading AI companies, warning that sexually inappropriate or manipulative chatbot interactions with children will trigger enforcement under existing consumer protection and child safety laws. The underlying policy letter cites reports of flirtatious or violent chatbot responses and stresses that actions that would be unlawful if taken by a human do not become lawful when routed through automation.

Even where only a handful of AGs have launched formal investigations, dozens have publicly endorsed enforcement against harmful AI practices. The coalition’s approach signals that whatever the shape of federal AI law, states intend to retain their own enforcement space.

Federal Preemption Push Creates New Uncertainty

The state AG enforcement strategy now faces its most serious challenge. On November 19, 2025, a draft executive order surfaced showing the Trump administration’s intent to establish an AI Litigation Task Force specifically to challenge state AI laws. The draft order directs the Attorney General to pursue legal action against state regulations that allegedly interfere with interstate commerce or conflict with federal law, setting up a direct confrontation with the coalition of state attorneys general who have positioned themselves as frontline AI regulators.

According to reporting by Axios, NBC News and CNBC, the draft order tasks the Attorney General with establishing an AI Litigation Task Force within 30 days to challenge state AI laws, including on the ground that they unconstitutionally regulate interstate commerce. The dormant Commerce Clause argument closely mirrors a position published by venture capital firm Andreessen Horowitz in September 2025.

The draft order also directs multiple agencies to evaluate state AI laws for conflicts with federal policy. The Federal Communications Commission would be instructed to consider adopting a Federal reporting and disclosure standard for AI models that preempts conflicting State laws. The Federal Trade Commission would be directed to issue a policy statement explaining circumstances under which state laws requiring alterations to AI outputs are preempted by Section 5 of the FTC Act. Federal funding, including from broadband infrastructure programs, could be conditioned on states not enforcing certain AI regulations.

As of early December 2025, the order had not been signed, and a White House official told CNN that until officially announced by the White House, discussion about potential executive orders is speculation. Simultaneously, House Republicans were working to include AI preemption language in the National Defense Authorization Act, though similar efforts failed in July 2025 when the Senate voted 99-1 to reject a proposed 10-year moratorium on state AI enforcement.

Legal analysis by firms including Crowell & Moring and Covington notes that executive orders do not have the force of federal statute and that the President generally cannot unilaterally preempt state law through executive action. Whether the AI Litigation Task Force could successfully challenge state laws under dormant Commerce Clause theories remains an open question that would ultimately be resolved by federal courts.

The coalition’s November 25, 2025, letter to Congressional leaders explicitly rejected proposals for a federal moratorium on state AI enforcement, warning that broad preemption of state protections is particularly ill-advised because constantly evolving emerging technologies, like AI, require agile regulatory responses that can protect our citizens.

How Legal Counsel Should Respond to State AG AI Enforcement Actions

For in-house teams and outside counsel, the practical question is how to turn this patchwork into a plan. One starting point is to map out AI exposure against the jurisdictions that have moved most aggressively: California for comprehensive statutes and advisories, Texas for accuracy and marketing claims, New Jersey for algorithmic discrimination and deepfake liability, Colorado for proactive AI-specific legislation, and Oregon for guidance centered on privacy and unfair trade practices. That map should then be overlaid with the company’s product lines, data flows and customer footprint.
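As a purely illustrative aid to that mapping exercise, and not a substitute for legal analysis, a simple inventory can pair each jurisdiction’s enforcement emphasis with the product lines and data flows it touches. Every entry in the sketch below is a hypothetical placeholder to be replaced with a client’s actual footprint.

```python
# Purely illustrative sketch of a jurisdiction-to-exposure inventory.
# All products and data flows listed are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Exposure:
    enforcement_focus: str
    products: list = field(default_factory=list)
    data_flows: list = field(default_factory=list)

ai_exposure_map = {
    "California": Exposure("comprehensive statutes and AG advisories",
                           products=["consumer chatbot"],
                           data_flows=["California residents' personal data"]),
    "Texas": Exposure("accuracy and marketing claims under the DTPA",
                      products=["clinical summarization tool"]),
    "New Jersey": Exposure("algorithmic discrimination and deepfake liability",
                           products=["resume screening model"]),
    "Colorado": Exposure("high-risk AI systems under the 2024 AI Act",
                         products=["underwriting model"]),
    "Oregon": Exposure("privacy and unfair trade practices guidance",
                       data_flows=["training data containing personal information"]),
}

def jurisdictions_touching(product: str):
    """List the states whose tracked exposure overlaps a given product line."""
    return [state for state, exp in ai_exposure_map.items() if product in exp.products]

print(jurisdictions_touching("resume screening model"))  # -> ['New Jersey']
```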

Marketing and product groups should assume that any public assertion about AI accuracy, safety, fairness or bias mitigation will be read against consumer protection standards. The Pieces Technologies settlement shows that AGs are comfortable treating optimistic performance claims as deceptive trade practices when they are not backed by evidence. Legal and compliance teams should therefore insist on documented testing, clear model limitations and conservative language around what systems can actually do.
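What "documented testing" can look like in practice is suggested by the hedged sketch below: it reports a measured accuracy with a Wilson confidence interval rather than a bare point estimate, so that any public claim can be checked against the evidence behind it. The evaluation counts and the hypothetical ">99% accuracy" claim are invented for illustration.

```python
# Illustrative sketch: grounding an accuracy claim in measured performance.
# The evaluation numbers and the marketing claim tested are hypothetical.
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion (e.g., summary accuracy)."""
    if trials == 0:
        raise ValueError("no evaluation samples")
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - margin, center + margin

# Hypothetical evaluation: 480 of 500 reviewed outputs judged accurate.
low, high = wilson_interval(successes=480, trials=500)
print(f"Measured accuracy: 96.0% (95% CI {low:.1%} to {high:.1%})")

# A marketing claim of ">99% accuracy" would not be supported by this evidence,
# because even the upper bound of the interval sits well below 99%.
assert high < 0.99
```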

Organizations that use AI for screening, eligibility or pricing decisions will need a parallel civil rights posture. That means testing for disparate impact in high-risk contexts such as employment, housing and lending, insisting on contractual rights to audit vendor tools and avoiding architectures that make it impossible to explain adverse decisions. New Jersey’s guidance and Massachusetts enforcement in lending contexts demonstrate that regulators now see algorithmic discrimination as a straightforward extension of existing anti-discrimination doctrine.
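For teams standing up that testing, one common first pass is a four-fifths rule comparison of selection rates across groups. The sketch below, with hypothetical group labels and counts, shows the arithmetic; it is a screening heuristic for flagging potential disparate impact, not a legal safe harbor or a full statistical analysis.

```python
# Illustrative four-fifths (80%) rule check on an automated screening tool.
# Group labels and counts are hypothetical; this is a heuristic, not legal advice.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected: bool) pairs -> rate per group."""
    selected, total = Counter(), Counter()
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening results from a resume-ranking model.
outcomes = ([("group_a", True)] * 60 + [("group_a", False)] * 40 +
            [("group_b", True)] * 35 + [("group_b", False)] * 65)

rates = selection_rates(outcomes)
ratios = adverse_impact_ratios(rates)
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(rates, ratios, flagged)  # group_b at 0.35/0.60 ≈ 0.58 would be flagged
```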

Child-facing uses deserve their own category. Any experiment that puts generative chatbots in front of minors, whether as companions, content filters or educational tools, should be subject to strict governance and oversight. The NAAG letter and state press releases make clear that abusive or sexually explicit chatbot interactions will be treated as a serious enforcement priority under current law, not as a technical accident that regulators will overlook.

Finally, monitoring should adjust to the way AI law is actually being made. In practice, that now means reading state AG press releases, legal advisories and coalition letters alongside federal agency guidance. For many sectors, those state documents provide the most concrete picture of what regulators expect from AI governance, regardless of how long it takes Congress to settle its own approach or how current federal preemption efforts ultimately fare.

This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.

See also: Lost in the Cloud: The Long-Term Risks of Storing AI-Driven Court Records
