Three Regulatory Models Reshaping AI Compliance Across Jurisdictions

As artificial intelligence systems move from pilot projects into core infrastructure, lawmakers are converging on three distinct ways to regulate them: by policing outputs, constraining inputs, and imposing governance standards on the organizations that deploy them.

No Single Model Emerging

Across jurisdictions, governments are writing new rules for AI while simultaneously stretching older consumer protection, civil rights, and product safety laws to cover algorithmic systems. The result is not a single model of AI regulation but a set of overlapping approaches that target what an AI system produces, what it is built from, and how it is managed. Understanding those typologies has become a practical necessity for in-house counsel, regulators, and litigators who must translate between regimes that are evolving at different speeds.

In Europe, the Artificial Intelligence Act and the Council of Europe’s Framework Convention on Artificial Intelligence harden these approaches into binding law. In the United States, the picture is more fragmented, as federal agencies lean on longstanding statutes while states adopt new rules for automated decision tools and risk management obligations. International standards bodies add a third layer, promoting management system frameworks that regulators increasingly incorporate by reference. Together, these developments sketch three regulatory families that now shape how AI systems are designed, documented, and contested.

This article traces those families in detail. It examines how output-based, input-based, and standard-based models appear in current law and guidance, and how they combine in the European Union’s hybrid regime, state-level statutes such as Colorado’s Artificial Intelligence Act, and emerging treaty and standards architectures. The focus is descriptive rather than speculative, drawing on enacted legislation, official guidance, and major academic and media reporting through Nov. 2025.

Three approaches to regulating AI systems

Although statutes rarely describe themselves in typological terms, recent laws and guidance cluster into three functional approaches. Output-based regulation focuses on the consequences of AI systems, such as discrimination, deception, or unsafe outcomes. Input-based regulation governs the data, models, and computational resources that feed those systems. Standard-based regulation concentrates on organizational processes, requiring risk management, documentation, and oversight frameworks.

These approaches are not mutually exclusive. The European Union’s Artificial Intelligence Act, which entered into force in Aug. 2024, combines all three by classifying “high-risk” systems, imposing detailed data governance and documentation obligations, and requiring post-market monitoring of real-world performance. The Council of Europe’s Framework Convention on Artificial Intelligence, opened for signature in Sept. 2024, similarly obliges parties to ensure that AI activities remain consistent with human rights, democracy, and the rule of law through graduated measures across the lifecycle of AI systems. In parallel, national regulators and international bodies, including the U.S. Federal Trade Commission, the Organisation for Economic Co-operation and Development, and the National Institute of Standards and Technology, have developed guidance that reinforces these three lenses in practice.

Output-Based Regulation: Policing Consequences

Output-based regulation targets what AI systems do rather than how they are built. In the United States, this logic is most visible in the application of general consumer protection and civil rights statutes to algorithmic tools. The Federal Trade Commission has warned companies that misrepresentations about AI capabilities or performance may be treated as deceptive practices under Section 5 of the FTC Act, and that unfair practices principles apply when AI products cause foreseeable harm. The agency’s 2023 business guidance on AI claims stressed that performance claims must be substantiated and that firms cannot hide limitations behind technical jargon.

Civil rights enforcement follows a similar pattern. The Equal Employment Opportunity Commission’s 2023 technical assistance document on software, algorithms, and AI in employment selection procedures explains how Title VII disparate impact analysis applies to automated tools used for screening and promotion. It reiterates that employers can be liable if an AI system disproportionately screens out protected groups and is not job-related and consistent with business necessity. In New York City, Local Law 144 builds on this output focus by prohibiting employers from using automated employment decision tools unless they undergo an independent bias audit, publish a summary of the audit, and provide notice to candidates. The implementing rules require audit metrics that examine adverse impact along specified demographic lines.
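
To make the audit arithmetic concrete, the sketch below shows one way the impact-ratio metric used in Local Law 144-style bias audits can be computed for a binary selection tool: each category's selection rate divided by the selection rate of the most-selected category. The record format, field names, and helper function are illustrative assumptions, not anything prescribed by the implementing rules.

```python
from collections import defaultdict

def impact_ratios(records, group_key="category", selected_key="selected"):
    """Compute per-category selection rates and impact ratios for a binary
    selection tool, in the spirit of a Local Law 144-style bias audit.

    `records` is an iterable of dicts such as
    {"category": "Hispanic or Latino female", "selected": True}.
    The field names are illustrative, not regulatory text.
    """
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for rec in records:
        totals[rec[group_key]] += 1
        if rec[selected_key]:
            chosen[rec[group_key]] += 1

    rates = {g: chosen[g] / n for g, n in totals.items() if n > 0}
    if not rates:
        return {}
    best = max(rates.values())
    # Impact ratio: each category's rate relative to the most-selected category.
    return {
        g: {"selection_rate": r, "impact_ratio": (r / best) if best else None}
        for g, r in rates.items()
    }
```

A deployer could run a year of historical applicant data through a function like this and publish the resulting rates and ratios alongside the audit summary the law requires.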

Comprehensive AI statutes are beginning to embed similar concepts. Colorado’s Artificial Intelligence Act, enacted in 2024 and now scheduled to take effect on June 30, 2026, creates duties for “developers” and “deployers” of high-risk AI systems designed to make or materially influence consequential decisions in areas such as employment, housing, credit, health care, education, and insurance. The law defines an “unlawful discriminatory practice” to include outcomes produced by high-risk AI that result in algorithmic discrimination on the basis of protected characteristics, and ties compliance obligations directly to preventing those outcomes.

Outside the discrimination context, output-based reasoning also appears in financial regulation. The Consumer Financial Protection Bureau has issued supervisory guidance emphasizing that creditors remain responsible for adverse actions generated by credit scoring models, including those that rely on machine learning, and must provide specific reasons for credit denials. Securities regulators have highlighted the risk that AI-driven marketing might mislead investors, and have brought enforcement actions where firms overstated or mischaracterized their use of AI in investment products. In each case, the legal hook is the effect of AI outputs on consumers or investors rather than the technical details of training regimes.

Input-Based Regulation: Governing Data, Models, and Compute

Input-based regulation targets the components of AI systems. The European Union’s Artificial Intelligence Act is the clearest example. The regulation classifies certain systems as “high-risk,” including AI used in employment, credit scoring, essential public services, and various safety-related applications. Providers of high-risk systems must meet detailed obligations concerning data governance, including training, validation, and testing datasets that are relevant, representative, free of errors as far as possible, and complete with respect to the intended purpose. The Act also mandates technical documentation that describes model design, training processes, and performance characteristics, as well as logging capabilities to enable traceability.

The EU regime pays particular attention to foundation models and general-purpose AI systems. For providers of models that meet specified threshold criteria, including certain capabilities and scale, the Act requires documentation on training data sources, evaluations of systemic risks, and measures to address cybersecurity and misuse. These provisions effectively regulate inputs by demanding transparency and risk controls around the data and computing infrastructure that underlie large models.


In the United States, comprehensive federal AI legislation has not yet been enacted, but input-based concepts have appeared in executive policy and sectoral initiatives. President Biden’s 2023 Executive Order 14110 directed agencies and NIST to develop testing standards and reporting obligations for “dual-use foundation models,” focusing on safety testing and secure development. President Trump rescinded that order in Jan. 2025, removing many of the federal mandates but leaving intact ongoing technical work at NIST and other agencies. States have begun to fill part of that gap. California’s proposed frontier model legislation and related draft rules have explored obligations tied to compute thresholds, safety evaluations, and security controls for large-scale training runs, although those proposals continue to evolve in the legislative process.

Beyond traditional data protection, the use of copyrighted material to train foundation models has introduced a new layer of input-based control enforced through litigation and policy. The U.S. Copyright Office has continued to study how existing copyright law applies both to the data used to train AI systems and to the outputs they generate. Major lawsuits against model developers, centered on claims of mass infringement from training-data ingestion, exert significant commercial pressure, effectively acting as an input constraint by defining the legal risk attached to particular data sources.

Outside formal statutes, input-based controls are also embedded in privacy and data protection regimes. The OECD’s 2019 Recommendation on Artificial Intelligence, which has been endorsed by a broad group of countries, calls for robust data governance frameworks that include data quality, integrity, and security for AI systems. Data protection laws in Europe and elsewhere constrain the categories of personal data that may be used for training and profiling, and impose purpose limitation and minimization duties that indirectly shape AI training pipelines. Together, these rules define what data may legally flow into models and under what conditions.

Standard-Based Regulation: Institutionalizing Governance

Standard-based regulation addresses how organizations manage AI rather than only what systems do or what data they use. It typically takes the form of risk management frameworks, governance requirements, and conformity assessments. NIST’s Artificial Intelligence Risk Management Framework, released in 2023, is a central example. The framework provides a voluntary structure for mapping, measuring, managing, and governing AI risks across the AI lifecycle. In 2024, NIST released a Generative AI Profile that applies these principles to large language models and other generative systems, identifying specific risks and suggested controls in areas such as data provenance, synthetic content detection, and abuse mitigation.
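
As a rough illustration of how the framework's functions might be tracked internally, the sketch below tags risk-register entries with the RMF's Govern, Map, Measure, and Manage functions. The enum values mirror the framework's function names; the register structure, field names, and example entry are hypothetical, not anything NIST prescribes.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class RMFFunction(Enum):
    # The four functions named in the NIST AI Risk Management Framework.
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

@dataclass
class RiskEntry:
    """Hypothetical internal risk-register entry; not a NIST-defined schema."""
    system_name: str
    description: str
    functions: List[RMFFunction]                      # RMF functions that address the risk
    controls: List[str] = field(default_factory=list)
    owner: str = "unassigned"

register = [
    RiskEntry(
        system_name="resume-screener-v2",
        description="Model may rank candidates differently across demographic groups.",
        functions=[RMFFunction.MAP, RMFFunction.MEASURE, RMFFunction.MANAGE],
        controls=["quarterly adverse-impact testing", "human review of rejections"],
        owner="hiring-tools governance board",
    ),
]
```

Keeping the mapping explicit makes it easier to show, system by system, which parts of the framework an organization has actually operationalized.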

International standard-setting has followed a similar direction. ISO and IEC published ISO/IEC 42001:2023, described as the first management system standard focused specifically on AI. The standard outlines requirements for establishing, implementing, maintaining, and continually improving an AI management system, including policies, roles, risk assessment, monitoring, and improvement processes. Although ISO standards are voluntary, regulators and industry bodies have begun to reference this framework when describing expected practices for AI governance.

The Council of Europe’s Framework Convention on Artificial Intelligence also reflects a standards-oriented approach. The treaty requires parties to adopt legislative and other measures that are “graduated and differentiated” based on the severity and probability of risks, and to ensure transparency, oversight, and accountability throughout the AI lifecycle. Rather than prescribing technical specifications, it directs governments to embed AI considerations into existing human rights and rule-of-law compliance systems, including impact assessments, remedies, and supervision mechanisms.

In practice, standard-based regulation often operates through guidance and soft law. The OECD AI Principles, which promote the development of transparent and accountable AI systems that respect human rights and democratic values, have been incorporated into national AI strategies and multilateral initiatives. Sectoral regulators, including financial, health, and data protection authorities, have issued expectations for AI governance programs that emphasize documentation, human oversight, incident response, and board-level accountability. These measures rely on internal processes and institutional design more than on prescriptive rules about particular models.

Hybrid models in current law

Most comprehensive AI frameworks combine output, input, and standards elements. The European Union’s Artificial Intelligence Act is the clearest illustration. It begins with a risk-based classification that identifies unacceptable uses, such as certain forms of social scoring, which are prohibited outright. It then defines “high-risk” systems and subjects providers and deployers of those systems to obligations that include data governance, technical documentation, human oversight, robustness, cybersecurity, post-market monitoring, and incident reporting. These obligations draw heavily on conformity assessment and quality management system concepts that are familiar from other product safety regimes.

At the same time, the EU Act preserves national and sectoral enforcement through existing laws. Member states remain responsible for applying anti-discrimination law, privacy statutes, and consumer protection rules to AI-related harms. This dual structure means that an AI provider operating in Europe must simultaneously comply with input and standard-based requirements under the AI Act and output-based liabilities under other legal instruments.

Colorado’s Artificial Intelligence Act reflects a hybrid design in a single state statute. The law adopts output-based concepts by defining algorithmic discrimination in consequential decisions, but it also imposes governance duties on developers and deployers, including documentation, notice to consumers, impact assessments, and risk management procedures. Recent commentary from legal practitioners and civil society groups has highlighted that the 2025 amendment delaying its effective date to June 30, 2026, has not altered the core structure of these obligations, which still rely on a mix of outcome monitoring and process controls.

In the absence of a comprehensive federal AI statute in the United States, state and local rules further underscore the hybrid trend. New York City’s Local Law 144 relies on bias audits and transparency requirements that look like governance standards, but it enforces them based on measured outcomes across demographic groups. State attorneys general have invoked consumer protection and civil rights laws to investigate AI-enabled fraud, unfair or deceptive practices, and discrimination in hiring, credit, and housing. Recent reporting has documented how these enforcement efforts are filling a gap left by the rescission of the 2023 federal executive order and the absence of new congressional legislation.

Compliance implications for organizations

For organizations that develop or deploy AI systems, the coexistence of output, input, and standard-based models translates into layered compliance tasks. From an output perspective, companies must monitor the real-world behavior of their systems, collect evidence about how decisions are distributed across different groups, and be prepared to explain and remediate adverse outcomes. This requires logging, testing, and audit capabilities that can detect discrimination, safety failures, or misleading content before they lead to enforcement actions or litigation.
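
A minimal sketch of the kind of decision log that supports this sort of after-the-fact review appears below. The fields shown, such as model version, a pointer to the stored input, the outcome, and optional demographic data, are assumptions about what a deployer might retain rather than requirements drawn from any particular statute.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class DecisionLogEntry:
    """Illustrative record of one automated decision, kept for later audit."""
    timestamp: datetime
    model_name: str
    model_version: str
    input_reference: str                       # pointer to the stored input, not raw data
    outcome: str                               # e.g. "approved", "denied", "flagged"
    reason_codes: List[str] = field(default_factory=list)
    demographic_group: Optional[str] = None    # only where collection is lawful
    human_reviewed: bool = False

entry = DecisionLogEntry(
    timestamp=datetime.now(timezone.utc),
    model_name="credit-underwriting-model",
    model_version="2025.03",
    input_reference="applications/2025/04/12345",
    outcome="denied",
    reason_codes=["insufficient_credit_history"],
)
```

Aggregating entries like these by outcome and demographic group is what makes disparity testing and adverse-action explanations possible after the fact.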

From an input perspective, organizations must maintain visibility into their data pipelines and model supply chains. Providers subject to the EU AI Act’s high-risk provisions will need detailed records of training, validation, and testing datasets, including their sources and known limitations. Firms that license or integrate third-party models will have to obtain contractual assurances about data provenance and risk controls, and may need to align their data practices with privacy and data protection requirements that restrict the reuse of personal data for training or profiling.
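
One way to keep that visibility is a dataset record along the lines sketched below, with fields for source, licensing basis, personal-data status, and known limitations. The schema is a hypothetical illustration, not language drawn from the AI Act or any standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetRecord:
    """Hypothetical provenance record for one dataset used in model development."""
    name: str
    role: str                       # "training", "validation", or "testing"
    source: str                     # where the data came from
    license_or_basis: str           # licensing terms or legal basis for use
    contains_personal_data: bool
    known_limitations: List[str] = field(default_factory=list)

records = [
    DatasetRecord(
        name="loan-applications-2019-2023",
        role="training",
        source="internal underwriting system export",
        license_or_basis="first-party data under retention policy LR-7",
        contains_personal_data=True,
        known_limitations=["underrepresents applicants under 25"],
    ),
]
```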

Standard-based expectations add a further layer. Even where ISO/IEC 42001 and the NIST AI RMF remain formally voluntary, regulators increasingly treat them as benchmarks when assessing whether an organization has taken reasonable steps to manage AI risk. That trend appears both in federal guidance, where agencies reference NIST frameworks when describing secure development practices, and in private-sector contracts that require vendors to implement documented AI governance programs. Internal compliance teams therefore face a convergence of technical, legal, and organizational requirements that calls for cross-functional coordination.

The situation is complicated by changing federal policy signals. The rescission of Executive Order 14110 removed a set of formal directives that had instructed agencies to build out testing, reporting, and content authentication standards. However, technical work initiated under that order, including NIST’s generative AI profile and content provenance research, has continued. At the same time, states and city governments have expanded their own AI-focused rules. For many organizations, particularly those operating across borders, the practical response has been to assume that stricter regimes, such as the EU AI Act and Council of Europe treaty obligations, will shape global practices and to build governance programs that can be adapted to less prescriptive jurisdictions.

Regulatory architecture takes shape amid uncertainty

The regulatory landscape for AI remains unsettled, but its structure has become more legible. Output-based rules rely on general legal principles to police algorithmic harms after the fact. Input-based rules establish conditions for data and models before systems are deployed. Standard-based rules institutionalize governance expectations that run across both. Together, they form a composite architecture that lawmakers and regulators use to manage technologies that change faster than statutory text.

For now, the practical effect is to push organizations toward comprehensive AI governance programs that can satisfy multiple demands at once. Systems that are designed and documented with input and standard-based requirements in mind may be easier to defend when output-based enforcement arrives. Conversely, investigations and litigation focused on harmful outputs are already shaping how regulators interpret and update input and standards frameworks. As national legislatures, treaty bodies, and standards organizations continue to refine their approaches, the three typologies described here provide a way to track how new measures alter the balance between consequences, components, and controls in AI law.

This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, regulations, and sources cited are publicly available through official publications and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.

See also: Recalibrating Competence: Updating Model Rule 1.1 for the Machine Era
