Trade Secret Law Confronts the Realities of Foundation Model Training
Artificial intelligence runs on training data, tuning pipelines, and prompt design that companies guard as critical competitive assets. As these systems scale, so do the disputes over how that information is collected, protected, and used. Courts are now sorting out which elements qualify as trade secrets, how developers must safeguard them, and how deeply discovery can probe into model internals. The litigation that follows is reshaping the boundaries of intellectual property, competition, and security in the AI industry.
AI Disputes Redefine Trade Secrets
The Defend Trade Secrets Act of 2016 and state analogues derived from the Uniform Trade Secrets Act provide the foundation for modern trade secret protection in the United States. These laws define a trade secret as information that derives economic value from not being generally known and is subject to reasonable secrecy measures. Foundation model developers frequently rely on this doctrine to protect training data curation processes, evaluation harnesses, model weights, red team logs, and system prompts. Because these assets are difficult to patent and, in their composite form, generally fall outside copyright protection, plaintiffs and defendants increasingly turn to trade secret law to secure or challenge competitive advantage in the AI marketplace.
According to analysis published in Feb. 2025, trade secret litigation saw a dramatic increase in 2024, with over 1,200 cases filed in federal courts. This rise has been driven by the robust framework of the DTSA, which provides federal jurisdiction without diversity requirements, potential for higher damages compared to patent disputes, swift injunctive relief for misappropriation claims, and extraterritorial protection allowing for global enforcement of U.S. trade secrets.
Trade secret analysis also intersects with federal competition and consumer protection law. The Federal Trade Commission has emphasized accuracy, documentation, and security obligations for companies deploying AI systems, and has brought enforcement actions under Section 5 against inadequate security practices involving machine learning pipelines. While not trade secret actions, these cases influence how courts evaluate reasonable secrecy measures as part of DTSA claims.
Training Data Emerges as Central Battleground
Training data is one of the most valuable and contested components of a foundation model. Plaintiffs have alleged in multiple suits that proprietary corpora, confidential internal documents, or licensed materials were included in training sets without authorization. Courts must determine how much of this material is relevant, whether alternative disclosures satisfy proportionality requirements, and how protective orders can reduce the risk of losing trade secret status.
The European Union’s Artificial Intelligence Act requires providers of general-purpose AI models to publish a summary of training data and maintain deeper documentation for regulators. The European Commission’s template for training data summaries, released on July 24, 2025, outlines information that must be disclosed without revealing proprietary materials or personal data. This regulatory approach influences litigation strategy because disclosures made in Europe may be referenced in U.S. proceedings, increasing the need for careful documentation that balances transparency with protection.
Organizations that create or license datasets have responded by adopting clearer provenance frameworks. Reports from the OECD AI Policy Observatory and technical guidance from the ISO/IEC 42001 AI Management System Standard, published in December 2023, highlight the importance of documenting data sources, permissions, and limitations. These frameworks do not mandate transparency of specific secrets but inform how courts may interpret reasonable secrecy measures, including access controls, data hygiene processes, and auditability.
System Prompts: The New Frontier in Trade Secret Protection
System prompts, prompt templates, and fine-tuning instructions have emerged as new forms of proprietary material. Some plaintiffs argue that prompts should be treated as protectable trade secrets because they encode domain expertise and evaluation logic. The legal significance of system prompts depends on whether they meet the DTSA’s definition of information that derives value from secrecy. Companies often argue that prompt templates used for safety, ranking, or specialization represent competitive advantage and therefore qualify as protectable secrets. At the same time, litigants challenge whether a prompt that can be inferred through model behavior truly remains confidential.
Academic research on prompt injection demonstrates that carefully structured prompts can elicit system behavior that reveals sensitive design details. These studies influence judicial assessment of whether certain attack patterns constitute improper means and whether companies must harden their models to prevent disclosure. The interplay between technical vulnerability and legal obligation will continue to shape trade secret claims involving prompt artifacts.
Model Extraction and Inversion Raise Secrecy Questions
Model extraction attacks attempt to recreate a foundation model by systematically querying it and training a surrogate system. Research from organizations such as Palo Alto Networks describes how extraction techniques can approximate the behavior of proprietary models. Plaintiffs argue that these extraction attempts can reveal internal decision boundaries or characteristics of training data, amounting to unauthorized acquisition or use of trade secrets.
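The basic mechanics of such an attack can be illustrated with a toy sketch. Here a simple hard-coded rule stands in for a proprietary model API, and a nearest-neighbour lookup stands in for real surrogate training; the decision rule, budget, and helper names are all illustrative assumptions, not any documented attack implementation.

```python
import random

# Hypothetical black-box target: a stand-in for a proprietary model API.
# The attacker observes only input -> label pairs, never the internals.
def target_model(x):
    return 1 if 2.0 * x[0] - x[1] > 0.5 else 0

def extract_surrogate(query_budget, rng):
    # Step 1: systematically query the black box and record its answers.
    queries = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(query_budget)]
    labeled = [(q, target_model(q)) for q in queries]

    # Step 2: "train" a surrogate; a 1-nearest-neighbour lookup stands in
    # for fitting a real student model on the harvested labels.
    def surrogate(x):
        _, label = min(
            labeled,
            key=lambda item: (item[0][0] - x[0]) ** 2 + (item[0][1] - x[1]) ** 2,
        )
        return label
    return surrogate

def agreement(model_a, model_b, n_test, rng):
    # Fidelity: how often the copy matches the original on fresh inputs.
    tests = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n_test)]
    return sum(model_a(t) == model_b(t) for t in tests) / n_test

rng = random.Random(0)
surrogate_copy = extract_surrogate(query_budget=500, rng=rng)
print(f"surrogate fidelity: {agreement(surrogate_copy, target_model, 1000, rng):.2f}")
```

Even this crude copy agrees with the target on the large majority of fresh inputs, which is why plaintiffs characterize high-fidelity surrogates as acquisition of the secret itself rather than mere observation of public outputs.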
Model inversion and membership inference attacks pose additional challenges. Studies document how these techniques can infer sensitive attributes or reconstruct inputs from model outputs. These vulnerabilities may undermine reasonable secrecy measures because they allow third parties to glean aspects of the training data or internal architecture. Courts evaluating trade secret claims must determine whether failure to mitigate such vulnerabilities weakens a developer’s secrecy arguments or whether the attacks constitute improper acquisition by the adversary.
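A confidence-threshold membership inference test can be sketched in miniature. The "model" below is a deliberately extreme stand-in that memorizes its training records, and all record names and scores are invented; real models leak far less cleanly, but the mechanism is the same.

```python
# Hypothetical overfit model: memorized training records receive
# near-certain confidence scores, unseen records do not.
train_records = {f"patient_{i}" for i in range(50)}
other_records = {f"visitor_{i}" for i in range(50)}

def model_confidence(record):
    # Stand-in scoring function; the gap between members and
    # non-members is what membership inference exploits.
    return 0.99 if record in train_records else 0.55

def infer_membership(record, threshold=0.9):
    # The attacker sees only the confidence score, never train_records,
    # yet can guess membership whenever confidence clears the threshold.
    return model_confidence(record) > threshold

true_positives = sum(infer_membership(r) for r in train_records)
false_positives = sum(infer_membership(r) for r in other_records)
print(f"flagged members: {true_positives}/50, flagged non-members: {false_positives}/50")
```

The legal significance is that the attacker recovers facts about the training set (here, that specific records were in it) using nothing but permitted queries, which complicates the question of whether acquisition was "improper."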
Organizations increasingly adopt privacy enhancing technologies, including differential privacy and secure multiparty computation, to reduce inversion and extraction risks. Reports from the OECD and guidance from NIST’s Privacy Enhancing Cryptography initiative show how these measures support compliance and reduce exposure. While these technologies do not eliminate all risks, they shape how courts assess reasonable secrecy measures in AI development environments.
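Of the technologies mentioned above, differential privacy is the most mechanical to illustrate: the classic Laplace mechanism adds calibrated noise to a query so that any single record's presence or absence is statistically masked. The dataset and query below are illustrative assumptions, not a production configuration.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon, rng):
    # A counting query has sensitivity 1: adding or removing one record
    # changes the true answer by at most 1, so Laplace noise with scale
    # 1/epsilon gives epsilon-differential privacy for this one release.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
records = [{"age": 20 + (i % 50)} for i in range(1000)]
noisy = private_count(records, lambda r: r["age"] > 40, epsilon=1.0, rng=rng)
print(f"noisy count: {noisy:.1f}")
```

The released count remains useful in aggregate while no individual answer can be attributed to one record, which is the property courts may weigh when assessing whether a developer took reasonable steps against inversion-style disclosure.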
Discovery Disputes Shape Litigation Outcomes
Discovery is one of the most contentious aspects of foundation model litigation. Plaintiffs often seek training data lists, data lineage documentation, model weight snapshots, red team logs, system prompt archives, and evaluation benchmarks. Defendants argue that disclosing these materials risks destroying trade secret value. Federal courts rely on proportionality requirements under the Federal Rules of Civil Procedure to balance the value of information against the burden and competitive harm of disclosure.
Law firms have published analyses detailing how courts approach these disputes. Publicly accessible commentaries from firms such as Debevoise & Plimpton and Sterne Kessler describe strategies that include tiered disclosures, in camera review of sensitive documents, secure data rooms, and neutral experts. These approaches aim to provide plaintiffs with necessary information while preserving trade secret status. They also reflect the reality that courts are still developing consistent principles for handling AI-specific technical evidence.
Protective orders also play a central role. Courts must decide whether attorneys’-eyes-only designations, access restrictions, or multi-tier review processes are sufficient to prevent dissemination of sensitive materials. Because trade secret protection can be lost through inappropriate disclosure, discovery procedures can be outcome determinative even in early stages of litigation. Companies therefore adopt documentation policies and access controls that anticipate potential discovery obligations.
Regulators Influence Secrecy Obligations
Regulatory frameworks increasingly define what reasonable secrecy measures should look like. The EU AI Act includes documentation and traceability requirements for high-risk systems and general-purpose AI models. These documents often include descriptions of datasets, design processes, and evaluation results. While the Act provides trade secret protections, regulators may still request information that litigants later seek in civil disputes. Companies must ensure that disclosures to regulators are consistent with secrecy claims in litigation.
In the United States, the NIST AI Risk Management Framework promotes documentation of data, model behavior, and system limitations, while NIST Special Publication 800-63-4, the Digital Identity Guidelines released in July 2025, addresses identity and security controls. While these materials are voluntary outside federal systems, they influence courts evaluating whether a developer adhered to recognized best practices. The FTC’s enforcement posture also affects judicial expectations regarding data governance, accuracy testing, and incident reporting.
International bodies reinforce these obligations. The OECD AI Principles and ISO/IEC 42001 emphasize transparency, safety, and oversight. Even though these standards do not dictate specific secrecy mechanisms, they describe governance architectures that regulators increasingly reference in guidance and enforcement actions. As these frameworks become more widely integrated into procurement rules and regulatory requirements, they shape the evidentiary environment in which trade secret disputes unfold.
Insurance and Liability Coverage Gaps
The intersection of AI development and professional liability insurance presents emerging challenges for organizations seeking coverage for trade secret disputes. Traditional professional liability policies were not designed to address AI-specific risks, creating potential coverage gaps when trade secret misappropriation occurs in the context of AI systems.
According to analysis published in 2025, affirmative AI insurance coverages have begun to emerge. On April 30, 2025, Armilla Insurance Services launched an AI liability insurance policy underwritten by certain underwriters at Lloyd’s, including Chaucer Group. The following month, Google announced a partnership with insurers Beazley, Chubb, and Munich Re to introduce tailored cyber insurance solutions specifically designed to provide affirmative AI coverage, including protection for trade secret losses linked to malfunctioning AI tools.
The Delaware Superior Court held in Precision Medical Group Holdings, Inc. v. Endurance American Specialty Insurance Co. (Aug. 27, 2025) that trade secret disclosure can qualify as a “Privacy Event” under professional liability insurance policies. The court determined that trade secrets constitute “non-public information” and therefore implicate coverage provisions designed to protect against unauthorized disclosure. This ruling demonstrates how courts are adapting traditional insurance frameworks to cover AI-related trade secret disputes.
Companies integrating third party foundation models must conduct diligence to ensure that vendors maintain appropriate insurance coverage and secrecy controls. Contracts should require documentation of training data provenance, evaluation methods, and security measures. They may also include warranties regarding the lawful acquisition of training data and indemnities for trade secret claims arising from vendor misconduct.
Cross-Border Enforcement Challenges
Trade secret litigation involving foundation models frequently implicates multiple jurisdictions, creating complex enforcement challenges. According to analysis from Finnegan, trade secrets transcend international borders in ways that other intellectual property rights do not. A Chinese company can be found to have violated U.S. trade secret law for conduct that occurred exclusively in China, and U.S. courts can apply Taiwanese trade secret law to disputes between UK and Taiwanese companies.
The Seventh Circuit held in Motorola Solutions v. Hytera Communications (2024) that the federal Defend Trade Secrets Act has extraterritorial reach, affirming a damages award that consisted entirely of the defendant’s foreign sales. The court determined that an “act in furtherance” of misappropriation within the United States, such as advertising products at a trade show, suffices to establish jurisdiction over foreign conduct. This made the Seventh Circuit the first federal appeals court to explicitly find extraterritorial reach under the DTSA.
Cross-border disputes involving AI systems present unique challenges in establishing jurisdiction over parties and enforcing judgments. Different countries maintain distinct legal frameworks for trade secret protection, and cross-border litigation often carries distinct procedural requirements. Organizations developing or deploying foundation models across multiple jurisdictions must navigate these overlapping frameworks when asserting or defending trade secret claims.
International coordination mechanisms remain limited. While treaties such as the Agreement on Trade-Related Aspects of Intellectual Property Rights provide baseline standards for trade secret protection, enforcement mechanisms remain largely national. Companies operating globally must develop trade secret policies that account for varying jurisdictional requirements and anticipate cross-border enforcement scenarios.
How Organizations Should Respond
Organizations developing or deploying foundation models face increasing pressure to document how their models are built and secured. Entities asserting trade secret protection should maintain version histories, access logs, data provenance records, and documentation describing training and fine-tuning procedures. Technical teams must also work closely with legal counsel to align secrecy measures with regulatory obligations and potential litigation exposure.
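One way to make such version histories and provenance records audit-ready is a hash-chained log, in which each entry commits to the exact dataset contents and to the entry before it, making later alteration detectable. This is a minimal sketch; the field names and helper functions are illustrative, not any standard schema.

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(log, dataset, content, note, timestamp):
    # Each entry stores a content hash plus the hash of the previous entry,
    # so any later alteration anywhere in the log breaks the chain.
    entry = {
        "dataset": dataset,
        "sha256": hashlib.sha256(content).hexdigest(),
        "note": note,
        "recorded_at": timestamp,
        "prev_hash": log[-1]["entry_hash"] if log else GENESIS,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_log(log):
    # Recompute every hash and link; any inconsistency fails verification.
    prev = GENESIS
    for entry in log:
        if entry["prev_hash"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

log = []
append_entry(log, "corpus_v1", b"raw text snapshot", "licensed from vendor A", "2025-01-15")
append_entry(log, "corpus_v2", b"deduplicated snapshot", "derived from corpus_v1", "2025-02-01")
print("log verifies:", verify_log(log))  # prints "log verifies: True"
```

Because every entry is content-addressed, a party can later prove in discovery exactly which dataset version existed on a given date without producing the dataset itself, which supports both secrecy claims and proportional disclosure.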
Insurance considerations have become critical. Organizations should review existing professional liability and cyber insurance policies to understand coverage for AI-related trade secret disputes. As affirmative AI insurance products emerge, companies should evaluate whether additional coverage is warranted based on their specific risk profile and business model.
Litigants asserting trade secret misappropriation must gather evidence demonstrating ownership, confidentiality, and economic value. Plaintiffs often rely on internal documentation, employee testimony, and expert analysis to demonstrate how specific components of a model pipeline function as trade secrets. Defendants must show that they implemented reasonable secrecy measures and that the alleged secrets were either independently developed, publicly known, or not protectable. Courts evaluate these claims through detailed technical and legal analysis, often requiring expert testimony.
A Framework Takes Shape
Trade secret litigation involving foundation models reveals a maturing legal framework. As courts evaluate training data, prompts, model weights, extraction vulnerabilities, and secrecy measures, they are defining the boundaries of protectable information in the AI era. Regulators influence these boundaries through documentation and transparency requirements, while international standards bodies promote governance structures that shape judicial expectations. Organizations must navigate these overlapping frameworks when asserting or defending trade secret claims.
Although legal doctrines vary across jurisdictions, common principles emerge. Courts require evidence that claimed secrets provide economic value, that reasonable secrecy measures exist, and that defendants acquired or used the information through improper means. Foundation model cases add new technical dimensions, but they remain grounded in longstanding trade secret doctrine. As litigation evolves, companies that proactively document processes, implement robust governance controls, secure appropriate insurance coverage, and align regulatory disclosures with litigation strategy will be better positioned to protect their intellectual assets.
Sources
- Bloomberg Law: EU AI Act Demands Informed, Disclosure-Aware Patent Strategies by Lestin L. Kenton, Jr. and Roozbeh Gorgin of Sterne Kessler (Oct. 23, 2025)
- Centre for International Governance Innovation: Into Uncharted Waters: Trade Secrets Law in the AI Era (May 2024)
- Debevoise & Plimpton: Debevoise Digest: Securities Law Synopsis (June 2025)
- EU Template for Training Data Summary (July 24, 2025)
- Federal Trade Commission: Biometric Information Policy Statement
- Finnegan: Across the Border – Global Enforcement of Trade Secrets
- Hunton Andrews Kurth: Affirmative Artificial Intelligence Insurance Coverages Emerge (2025)
- ISO/IEC 42001: AI Management System Standard (Dec. 2023)
- McGuinness, Patrick, “The Era of Foundational Models,” Substack: AI Changes Everything, March 29, 2023
- Mondaq: Litigation Year in Review 2024 IP Highlights (Feb. 18, 2025)
- National Law Review: Trade Secret Damages and Legal Trends in 2024 (Nov. 4, 2024)
- NIST Special Publication 800-63-4: Digital Identity Guidelines (July 2025)
- NIST Privacy Enhancing Cryptography Initiative
- OECD AI Observatory
- OECD AI Policy Papers & Publications
- OECD AI Principles
- Palo Alto Networks: AI Model Security: What It Is and How to Implement It
- Schulich School of Law, Dalhousie University – Schulich Law Scholars: Legal Risks of Adversarial Machine Learning Research (2020)
- UK Government Digital Service: AI Insights: Prompt Risks (HTML) (Nov. 4, 2025)
- United States Congress: Defend Trade Secrets Act of 2016
- Wiley Executive Summary Blog: Trade Secret Disclosure as Privacy Event (Aug. 27, 2025)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, regulations, and sources cited are publicly available through official publications. Readers should consult professional counsel for specific legal or compliance questions related to AI development and trade secret law.
See also: Blockchain Stamping Creates Verifiable Audit Trails for AI Evidence

Jon Dykstra, LL.B., MBA, is a legal AI strategist and founder of Jurvantis.ai. He is a former practicing attorney who specializes in researching and writing about AI in law and its implementation for law firms. He helps lawyers navigate the rapid evolution of artificial intelligence in legal practice through essays, tool evaluation, strategic consulting, and full-scale A-to-Z custom implementation.
