Cross-Border AI Forces Companies to Design Around Overlapping Legal Rulebooks
Most AI systems are built to be global. The law is not. A U.S. provider can expose one model through a single API and discover the same system triggers risk-based obligations in the European Union, government filings and content filtering in China, and U.S. export restrictions that may block deployment altogether. These three jurisdictions have implemented the only binding AI frameworks that create immediate, enforceable compliance obligations with substantial penalties. Unlike emerging governance approaches in the United Kingdom, Japan, and Canada, the EU AI Act, Chinese algorithm regulations, and U.S. export controls require different product architectures rather than procedural adjustments. Companies must now design compliance into systems from the start, and those architectural choices increasingly shape how AI deployments work everywhere else.
From Single Stack to Regional Variants
On architecture diagrams, cross-border deployment still looks simple. U.S. companies expose model capabilities through cloud APIs, on-premise instances, software development kits embedded in customer products, or model-as-a-feature inside larger SaaS platforms. The same underlying weights can power multiple offerings in different regions.
Law treats those offerings very differently. The European Union’s Artificial Intelligence Act applies when systems are placed on the EU market or their outputs are used in the Union, regardless of where the provider is established, as described in the European Commission’s AI Act overview. China’s algorithm recommendation rules, deep synthesis regulations, and generative AI measures apply when services are provided to users in mainland China, even if the code is hosted abroad.
The result is a cross-border puzzle with many moving parts. Counsel must understand what the EU AI Act expects from providers and deployers, what Chinese rules expect from algorithm and deep synthesis services, where those expectations collide in practice, and how to design architectures and contracts that can withstand scrutiny on both sides.
EU AI Act: One Model, Several Legal Identities
The EU AI Act entered into force on Aug. 1, 2024, as the first comprehensive AI statute in a major jurisdiction. The regulation follows a risk-based structure that categorizes systems as prohibited, high-risk, limited-risk, or minimal-risk. The same underlying model can sit in more than one bucket depending on how it is integrated and deployed.
Providers of high-risk systems face the most intensive duties. Summaries from IBM and the AI law guide at aigl.blog explain that high-risk providers must implement quality management systems, document training and testing, perform conformity assessment, log system operation, ensure human oversight, and run post-market monitoring and incident reporting programs. High-risk status can attach when AI is used in safety components of regulated products, in employment screening, in essential services, or in other sensitive contexts listed in the Act’s annexes.
Deployers, often the business customers of U.S. providers, carry their own obligations. Article 26 requires deployers to use systems in line with instructions, assign human oversight, ensure input data are appropriate, monitor operation, inform workers when high-risk AI is used, and keep logs generated by the system for at least six months, according to the English text published at artificialintelligenceact.eu. For companies that combine third-party models with proprietary workflows, this divide between provider and deployer duties becomes a contractual and governance issue, not just a technical one.
General-purpose AI and high-impact foundation models add another layer. The International Association of Privacy Professionals highlights in its analysis of obligations for general-purpose AI models that providers must prepare technical documentation, perform risk and cybersecurity testing, and give integrators enough information to understand capabilities, limitations, and residual risks. A code of practice, published by the European Commission and detailed at the Commission’s GPAI code page, now serves as a reference path to compliance for general-purpose providers.
The Act’s obligations do not switch on all at once. The Commission’s implementation timeline explains that bans on certain prohibited uses apply first, followed by transparency duties for some AI uses, with obligations for high-risk systems and general-purpose models phasing in over the following years. Prohibited AI practices became enforceable on Feb. 2, 2025, according to the official implementation timeline. Governance rules and obligations for general-purpose AI models took effect on Aug. 2, 2025, while most high-risk system requirements become fully applicable on Aug. 2, 2026. High-risk AI systems embedded into regulated products have an extended transition period until Aug. 2, 2027.
Enforcement carries significant financial consequences. Article 99 of the AI Act establishes administrative fines up to 35 million euros or 7 percent of total worldwide annual turnover, whichever is higher, for violations of prohibited AI practices. Breaches of high-risk AI requirements carry fines up to 15 million euros or 3 percent of global turnover, while providing incorrect information to authorities can result in penalties up to 7.5 million euros or 1 percent of turnover.
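Because each ceiling is the greater of a fixed amount and a share of worldwide turnover, exposure scales with company size. A toy illustration of the Article 99 arithmetic for a hypothetical provider:

```python
def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Article 99 ceiling: the higher of a fixed cap or a share of worldwide turnover."""
    return max(cap_eur, pct * turnover_eur)

# Hypothetical example: a provider with 2 billion euros in worldwide annual turnover.
prohibited_practice_cap = max_fine(2e9, 35_000_000, 0.07)  # 140,000,000 euros
high_risk_breach_cap    = max_fine(2e9, 15_000_000, 0.03)  # 60,000,000 euros
incorrect_info_cap      = max_fine(2e9, 7_500_000, 0.01)   # 20,000,000 euros
```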
China’s Algorithm and Deep Synthesis Rules
China has taken a different approach by layering AI-related rules on top of its cybersecurity and data-protection framework. The Administrative Provisions on Algorithm Recommendation of Internet Information Services, effective March 1, 2022, apply to services that use recommendation algorithms to provide information within mainland China. English translations and commentary published by China Law Translate and the Digichina project explain that providers must implement security management, avoid discriminatory or addictive practices, and refrain from using algorithms to influence public opinion in ways that violate Chinese law.
Those provisions also create user-facing controls. Guidance from the compliance consultancy AppInChina notes that the rules require some services to give users options to turn off targeted recommendations or adjust basic profiling settings, as outlined in its summary of the algorithm recommendation provisions. These controls look modest compared with the EU AI Act’s transparency regime, but they signal that recommendation logic is not a purely internal matter.
Deep synthesis regulations then target AI that generates or alters content. The Provisions on the Administration of Deep Synthesis Internet Information Services, effective Jan. 10, 2023, apply to services that use deep learning and related techniques to create or edit text, images, audio, video, and virtual scenes. Commentary from law firms such as Allen & Gledhill, together with Fasken’s overview of China’s rules for generative AI, describes obligations to label synthetic content, prevent misuse for impersonation or disinformation, conduct security assessments when required, and maintain records that allow authorities to trace problematic material.
On top of these measures, China adopted the Interim Measures for the Management of Generative Artificial Intelligence Services, effective Aug. 15, 2023, which apply to services that use generative AI to provide content to the public in mainland China. The interim measures require providers to manage training data lawfully, carry out security assessments where services may influence public opinion, and align outputs with content requirements in Chinese law.
Regulators have also built an algorithm filing system. The International Association of Privacy Professionals reports in its overview of global AI governance in China that thousands of algorithms have been registered with Chinese authorities, many associated with search engines, content platforms, and e-commerce services. A separate IAPP analysis on key differences between EU and Chinese AI regulations notes that services meeting certain thresholds must submit information on their algorithms, service types, and risk controls through a dedicated online portal.
Where EU and Chinese Rules Collide
The collision between EU and Chinese approaches becomes visible in three areas: transparency, logging and localization, and content standards. Both regimes talk about accountability, but they push products and governance into different shapes.
Under the EU AI Act, providers of high-risk and general-purpose models must prepare detailed technical documentation, maintain logs, and make information available to regulators and deployers so that they can understand performance and residual risk. IBM’s summary of provider duties and the aigl.blog guide to the EU AI Act stress documentation, post-market monitoring, and cooperation with market surveillance authorities. Deployers must keep logs for defined periods and monitor systems for anomalies, as Article 26 explains.
Chinese rules also use the language of transparency, but with a different emphasis. Algorithm recommendation providers must publish basic information about the nature of their services and give users some ability to influence how recommendations work, according to commentary from Digichina. Deep synthesis and generative AI providers must label AI-generated content, disclose when synthetic material is presented, and offer complaint channels and tools to limit misuse.
Logging and data localization also interact. EU law expects logs that support oversight and post-market monitoring, but it restricts how the personal data in those logs may move across borders. These AI-specific logging requirements sit atop foundational data protection regimes: the EU’s General Data Protection Regulation limits international transfers through adequacy decisions and standard contractual clauses, while China’s Personal Information Protection Law requires security assessments before data leave the country. AI systems must therefore navigate both the sector-specific AI rules and the broader data protection frameworks that govern the personal information flowing through them.
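Some teams encode these divergent retention and residency floors directly in deployment configuration so that engineers cannot silently change them. The sketch below is illustrative only: the region keys, storage locations, and the 365-day Chinese retention figure are assumptions, while the EU entry reflects the Article 26 six-month minimum.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LoggingPolicy:
    """Illustrative per-region logging and residency settings."""
    region: str
    log_store: str             # where logs must physically reside
    min_retention_days: int    # regulatory floor for keeping system logs
    cross_border_export: bool  # whether logs may leave the region without review

# Hypothetical values for illustration; actual settings require legal review.
POLICIES = {
    "eu": LoggingPolicy(
        region="eu",
        log_store="eu-frankfurt",   # assumed EU-resident storage
        min_retention_days=183,     # at least six months per Article 26
        cross_border_export=False,  # transfers gated by GDPR mechanisms
    ),
    "cn": LoggingPolicy(
        region="cn",
        log_store="cn-shanghai",    # assumed in-country storage
        min_retention_days=365,     # assumption: longer horizon for security reviews
        cross_border_export=False,  # PIPL security assessment before export
    ),
}

def policy_for(region: str) -> LoggingPolicy:
    """Look up the logging policy for a deployment region."""
    return POLICIES[region]
```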
The Commission’s AI Act materials emphasize fundamental rights, risk management, and data protection. China’s framework requires many providers to retain logs and training data in ways that support security reviews and content investigations, and these obligations sit alongside broader localization rules in cybersecurity and data-protection statutes.
Content standards highlight the starkest divergence. The EU AI Act addresses risks to fundamental rights and bans specific practices, such as certain forms of social scoring, manipulative techniques, and real-time remote biometric identification, as outlined in the European Commission’s high-level regulatory framework. Chinese measures integrate AI governance with longstanding content controls, including provisions targeting false information, activity that endangers national security, or content deemed harmful to public order. The Carnegie Endowment’s analysis of China’s AI regulations traces how those content rules emerged from earlier cybersecurity and media frameworks.
For a U.S. provider operating in both environments, the same capability may therefore face divergent expectations. Logging and documentation practices that support EU accountability can expose more material to inspection under Chinese regimes. Content safeguards strong enough for Chinese requirements can exceed what European law demands, yet still need to be reconciled with EU and U.S. speech and discrimination rules.
Architectural Patterns for Cross-Border Compliance
Legal and technical teams are starting to solve these tensions with a set of design patterns. None is perfect, but each can be aligned with a particular risk appetite and market strategy.
One pattern is regional segmentation. Providers maintain separate model instances or configurations for EU and Chinese deployments, sometimes with different default prompts, logging settings, and content controls. High-risk EU uses might involve stricter oversight features and more detailed documentation, while Chinese deployments may rely on more conservative content generation, mandatory labeling, and closer integration with local partners for algorithm filings, as suggested in compliance commentary from IAPP and the global trackers maintained by White & Case.
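In code, regional segmentation often reduces to maintaining a complete, pre-approved settings bundle per market rather than toggling individual options at runtime. The sketch below is a minimal illustration; the model identifiers, filter profile names, and field layout are assumptions invented for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegionalVariant:
    """One pre-approved deployment bundle for a single market (illustrative)."""
    model_id: str             # which weights/checkpoint serve this region
    content_filter: str       # region-specific output filtering profile
    detailed_logging: bool    # EU-style documentation and event logging
    synthetic_labeling: bool  # label AI-generated content (deep synthesis rules)
    requires_filing: bool     # local algorithm filing before launch

VARIANTS = {
    "eu": RegionalVariant(
        model_id="model-v3-eu",                         # hypothetical name
        content_filter="fundamental-rights-profile",    # hypothetical profile
        detailed_logging=True,    # supports post-market monitoring
        synthetic_labeling=True,  # AI Act transparency duties
        requires_filing=False,
    ),
    "cn": RegionalVariant(
        model_id="model-v3-cn",
        content_filter="strict-content-profile",
        detailed_logging=True,
        synthetic_labeling=True,  # deep synthesis labeling
        requires_filing=True,     # algorithm filing via local partner
    ),
}
```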
Another pattern is feature gating based on geography. Rather than treating capabilities as all-or-nothing, companies restrict higher-risk features in certain jurisdictions. They may limit direct end-user access to rich deep synthesis tools in some markets while allowing more constrained business-to-business integrations. Geography-based feature flags can then be tied to regulatory assessments and internal approvals, so that adding a feature in a new market automatically triggers a legal review.
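A minimal sketch of geography-keyed feature gating, assuming a hypothetical approval record populated by legal review; the feature names and ticket identifiers are invented for illustration.

```python
# Hypothetical feature-flag check tied to a regulatory approval record.
APPROVED_FEATURES = {
    # (feature, region) -> sign-off ticket from legal review (invented IDs)
    ("deep_synthesis_api", "eu"): "LEGAL-1042",
    ("b2b_integration", "cn"): "LEGAL-0987",
    # ("deep_synthesis_api", "cn") is absent: direct end-user access not cleared
}

def feature_enabled(feature: str, region: str) -> bool:
    """A feature ships in a region only if legal review has signed off."""
    return (feature, region) in APPROVED_FEATURES

def request_feature(feature: str, region: str) -> str:
    """Adding a feature in a new market triggers a review, not a silent launch."""
    if feature_enabled(feature, region):
        return f"enabled via {APPROVED_FEATURES[(feature, region)]}"
    # In a real system this would open a ticket with the standing review group.
    return "blocked: pending legal review"
```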
These governance patterns also position companies to handle emerging requirements in other jurisdictions. The United Kingdom’s evolving AI framework, Japan’s AI governance guidelines, and regulatory initiatives following OECD AI principles share elements of both the EU’s risk-based approach and China’s sector-specific controls. Companies that build modular governance architectures around EU and Chinese requirements find it easier to add compliance layers for additional markets, as tracked in resources like White & Case’s global regulatory tracker and IAPP’s legislation database.
These patterns create technical infrastructure burdens that extend beyond legal compliance. Regional segmentation requires separate hosting environments to satisfy data residency requirements, duplicated model training pipelines when content filtering differs by jurisdiction, and model pruning techniques to remove capabilities that cannot be deployed in certain markets. The infrastructure cost of maintaining EU-compliant and China-compliant variants of the same system can exceed the initial development cost of a single global deployment.
Contracting becomes the third leg of the structure. EU-facing contracts often include appendices that allocate provider and deployer obligations under the AI Act, set expectations for logging and incident reporting, and describe the status of conformity assessments for high-risk uses, as outlined in updates such as White & Case’s AI Watch. China-facing contracts or joint venture arrangements may address who bears responsibility for algorithm filings, content labeling workflows, user complaints, and engagement with regulators, topics emphasized in the IAPP’s guidance on EU and Chinese AI compliance.
To support these patterns, companies are building internal governance that is specific to cross-border AI. Common elements include a centralized inventory of AI systems mapping each system to EU risk tiers and Chinese rule categories, a standing review group that includes legal, security, and product leaders for new deployments, and standard procedures for regulator queries and incident management across jurisdictions, informed by resources such as the IAPP’s global AI legislation tracker.
For smaller companies, the compliance burden presents additional challenges. The EU AI Act’s small business provisions offer some relief, including free access to regulatory sandboxes, simplified technical documentation requirements, and proportionate conformity assessment fees. Member States must establish dedicated communication channels for small- and medium-sized enterprises and organize training activities to support compliance. However, as implementation progresses, even these accommodations leave smaller providers managing substantial documentation and monitoring requirements across multiple regulatory frameworks.
Export Controls Determine What Can Cross Borders
Export controls sit in the background of these design decisions. Even when the EU AI Act and Chinese AI rules could both be satisfied in theory, U.S. export controls on advanced computing items limit where certain chips and AI model weights can be shipped or accessed.
The U.S. Bureau of Industry and Security has progressively tightened controls on advanced AI chips and, in 2025, expanded restrictions on advanced computing items and some AI model weights destined for China and other countries of concern. A Sidley Austin client alert on new U.S. export controls on advanced computing items and AI model weights and the BIS press release on strengthening restrictions on advanced computing semiconductors describe license requirements and due diligence expectations that now sit alongside AI-specific laws.
For U.S. counsel, this means AI deployment decisions must integrate export controls, sanctions, data-protection rules, and AI statutes into a single review. The fact that a deployment satisfies EU AI Act obligations or Chinese algorithm rules does not make it permissible under export control policy, and a single change in hardware or model access can shift the regulatory analysis.
A Practical Playbook for Cross-Border AI
Cross-border AI regulation is no longer an abstract risk. It has become a set of concrete conflicts between documentation demands, logging expectations, content standards, and trade controls. Lawyers who advise on deployment decisions need a structured way to surface those conflicts early.
One starting point is system inventory. Providers can maintain a living register of models and applications that records, for each system, where it is offered, how it is accessed, what data it processes, and which regulatory regimes it touches. That inventory should flag whether a system is likely to be treated as high-risk or general-purpose AI under the EU AI Act and whether it is subject to algorithm, deep synthesis, or generative AI measures in China.
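One way to keep such a register queryable is a structured record per system. The sketch below shows what that schema might hold; the field names and classification values are illustrative, not a complete taxonomy.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a living cross-border AI inventory (illustrative schema)."""
    name: str
    markets: list[str]             # where the system is offered, e.g. ["eu", "cn", "us"]
    access_modes: list[str]        # "cloud_api", "on_prem", "sdk", "saas_feature"
    data_categories: list[str]     # e.g. "personal_data", "biometric_data"
    eu_risk_tier: str              # "prohibited", "high", "limited", "minimal", "gpai"
    cn_rule_categories: list[str]  # "algorithm_recommendation", "deep_synthesis", "generative_ai"
    export_control_flags: list[str]  # e.g. advanced-computing license requirements

inventory = [
    AISystemRecord(
        name="resume-screening-assistant",  # hypothetical system
        markets=["eu", "us"],
        access_modes=["saas_feature"],
        data_categories=["personal_data"],
        eu_risk_tier="high",       # employment screening is an Annex-listed context
        cn_rule_categories=[],
        export_control_flags=[],
    ),
]

# Simple query: every system needing EU high-risk documentation.
eu_high_risk = [r.name for r in inventory if r.eu_risk_tier == "high"]
```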
Next is classification and segmentation. For each system, legal and technical teams can decide whether a single configuration will serve all markets or whether distinct regional variants are needed. The factors include documentation burdens under the EU AI Act, filing and content requirements in China, and any relevant export licenses. In many cases, segmentation becomes a governance choice rather than a purely technical preference.
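The segmentation decision itself can be written down as an explicit, reviewable rule rather than left implicit in architecture discussions. A deliberately simplified sketch, assuming boolean inputs that a real assessment would replace with fuller legal analysis:

```python
def needs_regional_variants(
    eu_high_risk: bool,
    cn_filing_required: bool,
    export_license_limits_hardware: bool,
    content_rules_conflict: bool,
) -> bool:
    """Decide whether one global configuration can serve all markets.

    Illustrative heuristic only: any single factor that forces divergent
    architecture (documentation, filings, hardware, or content controls)
    pushes toward separate regional variants.
    """
    return any([
        eu_high_risk,                    # EU conformity and logging duties
        cn_filing_required,              # Chinese algorithm or deep synthesis filings
        export_license_limits_hardware,  # U.S. controls on chips or model weights
        content_rules_conflict,          # incompatible output standards
    ])
```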
A third element is documentation that serves both regimes without undermining either. Technical documentation that explains capabilities, limitations, and training data at a level that meets EU expectations should be prepared with the understanding that parts of it may be requested by Chinese authorities in the context of filings or investigations. Counsel should work with security and product teams to decide how to structure documentation so that it supports accountability while managing disclosure risk in each jurisdiction.
Finally, U.S. teams can embed cross-border review into familiar operational processes. New product introduction, major model updates, and market expansion plans can all trigger a standard set of questions: what risk category this deployment will fall into under the EU AI Act, which Chinese AI rules might apply, and whether export controls or sanctions limit the deployment. Those questions sit alongside, not behind, traditional privacy impact assessments.
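Those trigger questions can live in code as well as in policy documents, so that routine engineering events surface them automatically. A sketch with invented event names and question lists:

```python
# Hypothetical mapping from operational events to mandatory review questions.
REVIEW_TRIGGERS = {
    "new_product_introduction": [
        "What EU AI Act risk category will this deployment fall into?",
        "Which Chinese algorithm, deep synthesis, or generative AI rules apply?",
        "Do export controls or sanctions limit hardware or model access?",
    ],
    "major_model_update": [
        "Does the update change the EU risk classification or GPAI status?",
        "Does it require a new or amended Chinese algorithm filing?",
    ],
    "market_expansion": [
        "Is a regional variant or feature gate needed for the new market?",
        "Are data residency and log retention rules satisfied locally?",
    ],
}

def questions_for(event: str) -> list[str]:
    """Return the standard cross-border questions for an operational event."""
    return REVIEW_TRIGGERS.get(event, [])
```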
As more jurisdictions draw from elements of both the EU and Chinese approaches, early architectural choices will matter. Companies that treat cross-border AI as a design problem, rather than as a string of exceptions to a single global product, are better positioned to adapt when new statutes and guidelines arrive.
Sources
- aigl.blog: “European Union Artificial Intelligence Act: A Guide”
- Allen & Gledhill: “China Seeks to Regulate Deep Synthesis Services and Technology” (Jan. 4, 2023)
- AppInChina: “Administrative Provisions on Algorithm Recommendation for Internet Information Services,” by Yoni Hao (Sept. 29, 2024)
- artificialintelligenceact.eu: “Article 26: Obligations of Deployers of High-Risk AI Systems”
- artificialintelligenceact.eu: “Article 99: Penalties”
- artificialintelligenceact.eu: “Implementation Timeline”
- artificialintelligenceact.eu: “Small Businesses’ Guide to the AI Act”
- Bureau of Industry and Security: “Commerce Strengthens Restrictions on Advanced Computing Semiconductors” (Jan. 15, 2025)
- Carnegie Endowment for International Peace: “Tracing the Roots of China’s AI Regulations,” by Matt Sheehan (Feb. 23, 2024)
- Digichina (Stanford): “Translation: Internet Information Service Algorithmic Recommendation Management Provisions” (March 1, 2022)
- Digichina (Stanford): “Translation: Measures for the Management of Generative Artificial Intelligence Services” (April 12, 2023)
- European Commission: “AI Act Enters into Force” (Aug. 1, 2024)
- European Commission: “AI Act: Regulatory Framework for Artificial Intelligence”
- European Commission: “The General-Purpose AI Code of Practice” (July 10, 2025)
- Fasken: “China’s New Rules for Generative AI: An Emerging Governance Model” (Aug. 2023)
- IBM: “What is the Artificial Intelligence Act of the European Union?” by Matt Kosinski and Mark Scapicchio
- International Association of Privacy Professionals: “Global AI Governance Law and Policy: China,” by Barbara Li
- International Association of Privacy Professionals: “Global AI Legislation Tracker”
- International Association of Privacy Professionals: “Preparing for Compliance: Key Differences Between EU and Chinese AI Regulations,” by Hunter Dorwart, Harry Qu, Tobias Bräutigam, and James Gong (Feb. 5, 2025)
- International Association of Privacy Professionals: “Top Impacts of the EU AI Act: Obligations for General-Purpose AI Models,” by Phillip Lee and Uzma Chaudhry (Aug. 2024)
- Sidley Austin: “New U.S. Export Controls on Advanced Computing Items and Artificial Intelligence Model Weights” (Jan. 16, 2025)
- White & Case: “AI Watch: Global Regulatory Tracker”
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All regulations, implementation dates, and sources cited are publicly available through official government publications and reputable legal and technology outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: Three Regulatory Models Reshaping AI Compliance Across Jurisdictions

Jon Dykstra, LL.B., MBA, is a legal AI strategist and founder of Jurvantis.ai. He is a former practicing attorney who specializes in researching and writing about AI in law and its implementation for law firms. He helps lawyers navigate the rapid evolution of artificial intelligence in legal practice through essays, tool evaluation, strategic consulting, and full-scale A-to-Z custom implementation.
