Foundation Model Export Controls Create Fragmented Global Regulatory Landscape
Export controls used to sit at the edge of the AI story, confined to chip fabs, lithography tools, and freight routes. In 2025, they moved into the middle. The same Bureau of Industry and Security rules that once targeted advanced processors now reach into model weights, cloud clusters, and fine-tuning pipelines for frontier systems. As the United States reshapes export law around artificial intelligence, foundation model developers, cloud platforms, and heavy users of advanced APIs are discovering that export compliance is no longer a side issue. It is part of the governance layer for powerful models themselves.
Export Controls Move From Chips to Models
The modern story starts with hardware. In October 2022 and October 2023, the U.S. Department of Commerce’s Bureau of Industry and Security issued sweeping rules that restricted exports of advanced computing integrated circuits, supercomputer-scale systems, and semiconductor manufacturing equipment to China and other countries of concern. Those measures tightened licensing around high-end GPUs and data center clusters that could be used to train large-scale models, and they pushed compliance responsibilities onto U.S. and non-U.S. firms under the Export Administration Regulations.
In January 2025, BIS moved further up the stack. An interim final rule in the Federal Register created a Framework for Artificial Intelligence Diffusion that, for the first time, placed certain unpublished model weights for advanced systems under explicit export control classifications. A separate rule expanded licensing obligations for foundries and packaging companies that handle controlled computing items, reinforcing a two-track structure: one set of controls aimed at chips and clusters, and another aimed at the diffusion of the most powerful models themselves.
Client alerts from law firms such as Sidley Austin and Vinson & Elkins quickly translated the new framework into compliance terms. They highlighted how updated export control classification numbers captured not only advanced integrated circuits but also technology for certain frontier models, and how compliance deadlines in early 2025 and mid-2025 gave companies a short runway to understand whether any of their systems fell into scope.
Defining Frontier and Foundation Models
The AI diffusion rule does not regulate every large model. It focuses on a narrower category that BIS labels frontier systems. The rule ties that label to technical criteria, including the amount of compute used for training and qualitative indicators such as the model’s capability to perform certain high-impact tasks. Those thresholds are designed to capture the most powerful general-purpose systems and to exclude routine machine learning models, although practitioners have pointed out that the line will shift as hardware and training techniques continue to improve.
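For screening purposes, counsel and engineers can translate the compute criterion into a rough check. The sketch below uses the common approximation of six operations per parameter per training token for dense transformers, compared against a 10^26-operation figure of the kind cited in recent U.S. rulemaking; the heuristic, the threshold constant, and the flagging logic are illustrative assumptions, not a BIS-endorsed classification method.

```python
# Back-of-envelope screen for the compute criterion. The 6 * parameters
# * tokens heuristic is a common approximation for dense transformer
# training operations; the 1e26 threshold mirrors figures cited in
# recent U.S. rulemaking but should be confirmed against current rule
# text. None of this substitutes for a formal classification analysis.

FRONTIER_THRESHOLD_OPS = 1e26  # illustrative constant, not legal advice

def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Approximate total training operations for a dense transformer."""
    return 6.0 * parameters * training_tokens

def screen_model(name: str, parameters: float, training_tokens: float) -> None:
    ops = estimated_training_ops(parameters, training_tokens)
    verdict = ("flag for export-classification review"
               if ops >= FRONTIER_THRESHOLD_OPS
               else "below illustrative threshold")
    print(f"{name}: ~{ops:.2e} ops -> {verdict}")

screen_model("example-70b", parameters=70e9, training_tokens=15e12)   # ~6.3e24 ops
screen_model("example-2t", parameters=2e12, training_tokens=100e12)   # ~1.2e27 ops
```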
Earlier, the Biden administration’s Executive Order 14110 on artificial intelligence referred to dual-use foundation models that have tens of billions of parameters and broad applicability across sectors, while leaving detailed definitions to agencies and subsequent rulemaking. Congressional researchers summarized that approach in a CRS report on the AI executive order that emphasized safety testing and reporting obligations for high-risk systems. Although the order has since been withdrawn, its vocabulary persists in policy debates about which models require special governance.
The AI Diffusion Rule: Global Licensing and Metered Access
The BIS interim final rule explains that the agency added a new Export Control Classification Number for certain unpublished model weights and related technology associated with frontier AI systems, while reinforcing existing entries for advanced computing items. The rule targets exports, reexports, and in-country transfers of those weights to specified destinations, and it establishes licensing policies that vary by country group and end use.
Analyses by firms such as Mayer Brown and Perkins Coie note that compliance dates in May 2025 gave companies only a few months to classify their models and identify any controlled technology. They also emphasize that the rule draws a line between model weights and public research. Most publicly released model weights and research outputs remain outside the new control category, although other export rules may still apply.
One of the most consequential concepts in the AI diffusion framework is metered access. As Gibson Dunn puts it, BIS is trying to allow controlled, monitored access to frontier systems while restricting bulk transfers of the underlying technology. In practice, this means the agency is more comfortable with structured API access to powerful models, and more cautious about any arrangement that gives foreign entities custody of controlled weights or unrestricted training capacity.
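A minimal sketch of how a provider might encode that posture at the policy layer follows. The operation names, destination label, and decision logic are hypothetical illustrations of the metered-access idea, not terms drawn from the rule.

```python
# Illustrative access-policy gate for a "metered access" posture:
# inference calls are allowed and logged, while operations that would
# transfer controlled artifacts or grant bulk training capacity are
# held for license review. Endpoint names, the destination label, and
# the decision logic are hypothetical, not drawn from any rule text.

from dataclasses import dataclass

RESTRICTED_DESTINATIONS = {"restricted-destination"}  # placeholder label
CONTROLLED_OPERATIONS = {"download_weights", "export_checkpoint", "bulk_finetune"}

@dataclass
class AccessRequest:
    operation: str     # e.g. "inference", "download_weights"
    destination: str   # jurisdiction of the requesting entity
    entity_id: str

def evaluate(req: AccessRequest) -> str:
    if req.operation in CONTROLLED_OPERATIONS and req.destination in RESTRICTED_DESTINATIONS:
        return "DENY: potential controlled transfer; hold for license review"
    if req.operation == "inference":
        return "ALLOW: metered API access; log usage for monitoring"
    return "REVIEW: route to export compliance team"

print(evaluate(AccessRequest("inference", "restricted-destination", "acme-ltd")))
print(evaluate(AccessRequest("download_weights", "restricted-destination", "acme-ltd")))
```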
Model Weights and the First Amendment
Moving from chips to model weights raises constitutional questions that did not arise in the same way for lithography tools and etching equipment. Encryption export rules in the 1990s prompted litigation over whether source code is protected speech. Courts ultimately recognized code as expressive, yet they upheld some export restrictions where the regulation was framed as targeting functional effects tied to national security rather than the content of the message itself.
In a 2024 article for Lawfare, Alan Rozenshtein argues that there is no general First Amendment right to distribute machine learning model weights, distinguishing them from traditional source code. He notes that model weights function primarily as machine readable parameters, and that any expressive content is indirect compared with the role they play in enabling powerful capabilities. That framing is likely to influence how courts evaluate constitutional challenges to export controls on weights.
The control of model weights is complicated by the open-source movement. Companies and research groups, including Meta and Mistral AI, have released powerful foundation models with permissive licenses, allowing widespread download and local use of the weights. While the BIS rules primarily target unpublished, frontier systems, the existence of increasingly capable open-source models creates a regulatory paradox. Policymakers must continually assess whether technical thresholds set for proprietary systems remain effective when state-of-the-art capabilities are rapidly being democratized and are beyond the direct control of U.S. export law.
At the same time, model weights often sit at the intersection of trade secrets, copyright, and safety obligations. Companies treat them as confidential assets that embody valuable training investments, while policymakers increasingly view them as potential channels for harmful capabilities. Export controls on weights therefore interact with private law doctrines that protect secrecy and exclusivity, raising questions about how much visibility regulators and courts will demand into the parameters of controlled systems.
Cloud Access, APIs, and Remote Exports
Export controls do not only apply when something is placed on a ship or loaded onto a drive. Under the Export Administration Regulations, providing access to controlled technology through remote means can qualify as an export or reexport, depending on where the user is located and what they are permitted to do with the system. That is why the AI diffusion framework looks closely at situations where foreign persons gain deep access to controlled model weights, not only at physical shipments.
Law firm alerts, including guidance from Dentons, stress that companies need to map where their model artifacts are stored, which entities control the infrastructure, and how access controls are configured for employees, contractors, and customers. An advanced model that sits on a U.S. cloud provider’s servers but is administered by a foreign subsidiary can raise different questions than the same model accessed only through a tightly limited interface.
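One way to operationalize that mapping is a per-repository custody record with flags for configurations that warrant counsel review, as in the illustrative sketch below; the fields and flagging conditions are assumptions for demonstration, not legal tests.

```python
# Illustrative custody record for a weight repository, with flags for
# configurations that tend to raise deemed-export questions. The field
# names and flagging conditions are assumptions for demonstration, not
# legal tests; actual analysis depends on the EAR and counsel review.

from dataclasses import dataclass, field

@dataclass
class WeightRepository:
    model_name: str
    storage_region: str                   # e.g. "us-east-1"
    controlling_entity: str               # legal entity administering access
    entity_jurisdiction: str              # where that entity is organized
    admins_with_access: list[str] = field(default_factory=list)

def audit(repo: WeightRepository) -> list[str]:
    findings = []
    if repo.entity_jurisdiction != "US":
        findings.append("controlled artifacts administered by a foreign entity")
    if not repo.storage_region.startswith("us-"):
        findings.append("weights stored outside U.S. regions")
    if not repo.admins_with_access:
        findings.append("no named administrators on record")
    return findings or ["no flags; document and re-review periodically"]

repo = WeightRepository("example-70b", "eu-west-2", "Example GmbH", "DE", ["admin@example.de"])
for finding in audit(repo):
    print(finding)
```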
Commercial decisions are already reflecting that risk calculus. In September 2025, Anthropic announced that it would stop providing its Claude AI services to companies majority-owned by entities in China and other adversarial jurisdictions, citing legal, regulatory, and security risks. Subsequent reporting on the company’s service noted that the policy would apply even when those firms attempted to access models through foreign subsidiaries or cloud resellers.
For in-house counsel, the practical question is not only whether a particular model is controlled, but also whether its deployment patterns could be characterized as exports or reexports. Contract terms with customers, resellers, and hosting providers need to align with the company’s export classifications and licensing decisions, especially when foundation models sit at the heart of a broader SaaS offering.
Strategic Domains and Partial Convergence
Export control law has always been multilateral, at least on paper. Forums such as the Wassenaar Arrangement provide structures for participating states to coordinate control lists for dual-use items. In practice, geopolitical tensions have pushed major players toward more unilateral measures. Russia’s war in Ukraine and growing concern about Chinese access to critical technologies have intensified that trend.
On the European side, the dual-use framework remains grounded in Regulation 2021/821 and its common control list. The European Commission adopted a 2025 update to the EU control list that added new items in areas such as quantum technology, semiconductors, and advanced computing. Law firm analyses, including a November 2025 alert from Hogan Lovells and commentary from NautaDutilh, describe the update as a significant step in which the EU asserted more autonomy over emerging technologies instead of relying solely on Wassenaar consensus.
At the same time, the EU is building a separate framework for general-purpose AI models with systemic risk through the Artificial Intelligence Act. The act entered into force in August 2024, and by mid-2025 the Commission had issued guidelines for providers of general-purpose AI models, with Article 55 obligations applying to models with systemic risk from August 2026 onward. Foundation models from companies such as OpenAI, Anthropic, and European providers will face stricter transparency, evaluation, and incident reporting duties, even though those rules do not function as export controls in the narrow sense.
Commentary has described the EU’s shift as part of a broader economic security strategy in which quantum technologies, semiconductors, and advanced computing are treated as strategic domains. For practitioners, the result is a patchwork in which U.S. export restrictions on model weights and compute coexist with EU AI Act obligations and new EU-level dual-use controls, often applying simultaneously to the same companies and infrastructure.
Policy Shifts in Washington
The U.S. export control regime for AI now operates alongside a changing set of White House directives. Executive Order 14110 framed frontier and dual-use foundation models within a safety oriented policy that emphasized testing, reporting, and standards. The Federal Register entry for that order underscored the need to manage national security and civil rights risks while sustaining innovation.
That framework did not last. On January 20, 2025, President Trump revoked Executive Order 14110, as documented in legal commentary from Baker McKenzie and reflected in the text of Trump’s Removing Barriers to American Leadership in Artificial Intelligence order. That new order criticized the prior approach as a drag on innovation and directed agencies to identify and roll back policies that could hinder AI development, while promoting an American AI stack abroad.
Export controls, however, did not reset in the same way. BIS’s AI diffusion rule and advanced computing regulations continued to evolve under Commerce authority. Lawfare analysis by Christian Chung argues that enforcement capacity, not rule text, will determine whether these measures meaningfully constrain adversaries or merely add compliance overhead for U.S. companies. A May 2025 note from Hogan Lovells reported that Commerce initiated a rescission and replacement process for the original AI diffusion rule, underscoring how unsettled the regulatory design remains.
Compliance Burdens and Costs
The expanded export control framework imposes substantial compliance obligations on companies operating in the AI space. According to Sidley Austin, the new rules require companies to supply vast amounts of information to BIS to retain their export privileges, whether in pursuit of validated, authorized, or approved status, as part of license applications, or to qualify for existing license exceptions. These reporting and monitoring requirements are described as complex, time-consuming, and potentially expensive.
Industry observers have raised concerns about whether companies, especially medium and small ones, can handle these compliance demands. As Axios reported in October 2025, exporters now face a whole new set of due diligence obligations regardless of whether they are in a sensitive industry or dealing with China. The compliance challenges extend beyond direct costs to include supply chain disruptions, delays in accessing cutting-edge technologies, and the need to retrofit data centers with stringent security requirements.
For cloud providers and enterprises that rely on GPUs for AI development, industry analysis suggests these frameworks may disrupt GPU supply chains, delay projects, and raise operational costs, while pushing enterprises toward alternative technologies that could affect their competitiveness and profitability.
A Checklist for AI and Trade Counsel
For companies that build or rely on foundation models, export control risks now belong alongside privacy, cybersecurity, and competition law in the standard governance matrix. The details will vary by sector and footprint, but several baseline questions recur across clients.
First, inventory the models. Organizations need to know which systems they train, host, or fine-tune, what compute resources those systems used, and whether any of them approach the thresholds described in Commerce guidance and client alerts. Firms that advise on export classification recommend tying each significant model to specific hardware configurations and training runs, rather than treating AI as a single category.
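A sketch of what such an inventory entry might look like follows; the field names and the review trigger are hypothetical assumptions, not a regulatory standard.

```python
# Hypothetical inventory entry tying a model to its hardware and
# training runs, per the checklist. Field names and the review margin
# are illustrative assumptions, not a regulatory standard.

from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    training_hardware: str        # e.g. "1024x H100, cluster-a, run 2025-03"
    training_ops_estimate: float  # from training logs or a compute heuristic
    weights_published: bool       # published weights largely fall outside the new category
    hosted_regions: list[str]

def needs_review(record: ModelRecord, threshold: float = 1e26, margin: float = 0.1) -> bool:
    """Flag unpublished models whose estimated compute nears the threshold.

    The 10% margin is arbitrary; it exists to catch estimates whose error
    bars could cross the line, not to state where the line actually is.
    """
    return (not record.weights_published) and record.training_ops_estimate >= margin * threshold

record = ModelRecord("example-70b", "1024x H100, cluster-a", 6.3e24, False, ["us-east-1"])
print(needs_review(record))  # False: well below the illustrative threshold
```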
Second, map custody and access for model weights and associated training artifacts. That includes understanding which entities control repositories, where those repositories sit geographically, which administrators can see or move the artifacts, and whether any third parties have deep access. The more a model resembles a traditional exported product with foreign custody, the more likely it is to trigger licensing questions.
Third, integrate export analysis into contracting and transaction work. Mergers, joint ventures, cross-border research collaborations, and even large commercial API deals can implicate transfer restrictions when they involve controlled technology. Counsel are increasingly asked to draft representations and covenants around export compliance for AI assets, much as they already do for encryption, satellite components, or advanced materials.
Finally, align export control efforts with broader AI governance programs. Documentation prepared for internal AI risk committees, for NIST AI Risk Management Framework alignment, or for EU AI Act compliance can also support export assessments, provided that it is specific enough about models, data, and infrastructure. The goal is to avoid separate, siloed programs that pull the same technical teams in conflicting directions.
The Road Ahead: 2026 Implementation and Beyond
The export control picture for foundation models will not stand still. Commerce has indicated that it may revise compute and capability thresholds for frontier systems as technology advances, potentially sweeping more models into the controlled category without new legislation. Firms tracking the rules expect further adjustments to the balance between hardware, weight level controls, and due diligence obligations for intermediaries.
In Europe, the stepwise implementation of the AI Act will bring systemic-risk obligations for general-purpose AI models into force by August 2026, alongside the 2025 dual-use control list update and a growing emphasis on economic security. Law firm briefings such as Dentons’ overview of AI Act implementation and specialized trackers at sites like ArtificialIntelligenceAct.eu frame those developments as a second regulatory layer above export rules.
Geopolitical tensions will continue to shape both regimes. European policymakers have moved to consolidate more export control authority at the Commission level, and partners such as Taiwan have tightened dual-use controls for advanced semiconductor and quantum technologies, according to recent policy updates and official statements. Those moves sit alongside U.S. debates about whether the current mix of export controls, outbound investment review, and voluntary industry steps such as Anthropic’s access restrictions are sufficient to manage national security risks without freezing beneficial AI collaboration.
For lawyers working at the intersection of AI and trade, the practical implication is straightforward. Foundation models now live inside the export control conversation, not outside it. Advising clients on deployment, licensing, and partnerships for advanced systems increasingly requires fluency in both model governance and the logic of dual-use controls.
Sources
- Anthropic: “Updating Restrictions of Sales to Unsupported Regions” (Sept. 5, 2025)
- ArtificialIntelligenceAct.eu: “Article 55” (2025)
- ArtificialIntelligenceAct.eu: “High-Level Summary” (2025)
- Axios: “How a wonky Commerce rule could disrupt AI companies” (Oct. 3, 2025)
- Baker McKenzie: “United States: AI Tug of War – Trump Pulls Back Biden’s AI Plans” (Jan. 28, 2025)
- CIO: “Tech industry sounds alarm over US AI Export Control Framework” (April 21, 2025)
- Congressional Research Service: “Executive Order on Artificial Intelligence” (2023)
- Cybersecurity Dive: “Trump rescinds Biden executive order in AI regulatory overhaul” (Jan. 23, 2025)
- Dentons: “Commerce Department Releases New and Expansive Export Restrictions on Advanced Computing and AI” (Jan. 17, 2025)
- Dentons: “EU AI Act Implementation: New Obligations for General Purpose AI Models” (Sept. 5, 2025)
- European Commission: “AI Act” (2024)
- European Commission: “2025 Update of the EU Control List of Dual-Use Items” (Sept. 8, 2025)
- European Commission: “Guidelines for Providers of General-Purpose AI Models” (2025)
- Federal Register: “Framework for Artificial Intelligence Diffusion” (BIS Interim Final Rule, Jan. 15, 2025)
- Federal Register: “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (Executive Order 14110, Nov. 1, 2023)
- Gibson Dunn: “BIS Lays the Groundwork for Global and Metered Access to Frontier AI Models and the Computing Power to Train Them” (Jan. 30, 2025)
- Hogan Lovells: “BIS Announces Rescission of Biden-Era AI Diffusion Rule and Issues New AI Policy and Guidance” (May 27, 2025)
- Hogan Lovells: “EU Updates Dual-Use Control List: New Controls on Emerging Technologies and Shift in Export Control Policy” (Nov. 2025)
- Lawfare: “To Win the AI Race, Bolster Export Control Enforcement With Intelligence,” by Christian Chung (Sept. 24, 2025)
- Lawfare: “There Is No General First Amendment Right to Distribute Machine Learning Model Weights,” by Alan Rozenshtein (2024)
- Mayer Brown: “US Commerce Department Announces New Export Compliance Expectations Related to Artificial Intelligence” (May 16, 2025)
- NautaDutilh: “EU Export Control Update: New Licensing Requirements for Critical Technology” (2025)
- Sidley Austin: “New U.S. Export Controls on Advanced Computing Items and Artificial Intelligence Model Weights: Seven Key Takeaways” (Jan. 16, 2025)
- Skadden: “AI: Broad Biden Order Is Withdrawn, but Replacement Policies Are Yet To Be Drafted” (Jan. 23, 2025)
- Tom’s Hardware: “Anthropic blocks Chinese-controlled firms from Claude AI” (Sept. 5, 2025)
- Vinson & Elkins: “Sweeping New Framework Expands BIS Export Controls on Advanced Computing ICs and AI Technologies” (Jan. 2025)
- White House: “Removing Barriers to American Leadership in Artificial Intelligence” (Jan. 23, 2025)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All rules, orders, and sources cited are publicly available through official publications and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: Navigating The Transparency Paradox in AI Regulation

Jon Dykstra, LL.B., MBA, is a legal AI strategist and founder of Jurvantis.ai. He is a former practicing attorney who specializes in researching and writing about AI in law and its implementation for law firms. He helps lawyers navigate the rapid evolution of artificial intelligence in legal practice through essays, tool evaluation, strategic consulting, and full-scale A-to-Z custom implementation.
