USPTO Ties AI Filing Errors to Sanctions Risk in Patent and Trademark Practice

Generative AI now sits in the middle of IP practice, and the U.S. Patent and Trademark Office (USPTO) has drawn a bright line: automation does not dilute the duty to file accurate, supportable papers. “Hallucinated” citations, fabricated quotes, and confident but wrong statements are record-integrity failures that can follow a matter for years. Examination and adjudication run on the written record, so a bad cite or invented authority becomes administrative friction, litigation fuel, and professional-conduct exposure. The USPTO’s guidance does not ban the tools; it makes accuracy a workflow requirement, tied to rules under which lawyers can be sanctioned, papers can be stricken, and credibility can be damaged across trademark and patent filings.

Accuracy as a Filing Duty

The USPTO’s April 2024 guidance on AI-based tools in practice before the Office reads less like a tech announcement and more like a reminder of baseline professional obligations. The document frames AI as a productivity tool that can also generate inaccurate or non-existent authorities, then points practitioners back to existing rules that already govern filings in patents, trademarks, and Board proceedings.

Those existing rules matter because they convert “AI made a mistake” into a certification problem. Submitting a paper is not a neutral act. USPTO practice treats a signature as a representation about purpose, factual support, and reasonable inquiry, which creates a direct line from a hallucinated citation to sanctions risk. The Office’s point is structural: the public record is the product, and record integrity drives examination quality, adjudication fairness, and downstream reliance by courts, investors, and competitors.

Federal courts have been writing the same lesson in louder ink. The Southern District of New York’s sanctions order in Mata v. Avianca, Inc. became a shorthand warning about fabricated case citations produced with generative AI. The court found that the lawyers had acted in subjective bad faith, supporting sanctions under Federal Rule of Civil Procedure 11, and noted that the filings included non-existent citations and quotations produced by a generative AI tool. The decision underscored a basic dynamic: once a court or tribunal believes the citations cannot be trusted, everything else in the paper becomes suspect.

USPTO Maps the AI Risks

The USPTO’s AI guidance focuses on practical failure modes that show up in real filings: inaccurate legal citations, misquoted holdings, overconfident factual assertions, and disclosures that do not match underlying support. The guidance also flags confidentiality risk when practitioners paste client information into third-party tools, and it warns that privilege and trade secrets can be compromised by tool terms, data retention, or model-improvement pipelines. The USPTO’s accompanying announcement frames the same message in plainer terms: the Office wants lawyers and parties to understand the risks and mitigate them, not pretend the tool absolves responsibility.

The Office has also been explicit about accountability before the Boards. The Director’s February 6, 2024 memorandum on AI use, addressed to the PTAB and TTAB, emphasized that parties before the Boards remain accountable for the accuracy and integrity of their submissions. The PTAB points practitioners to AI materials through its resources and guidance page, where the February 2024 guidance appears alongside other Board policies, signaling that the Office expects lawyers to treat AI literacy as part of competent practice before administrative tribunals.

That posture is not uniquely American. The European Patent Office added an explicit reminder in its Guidelines that responsibility for submissions does not change because an AI tool helped draft them. The 2025 EPO Guidelines state that parties and their representatives are responsible for the content of their patent applications and submissions to the EPO, and for complying with the requirements of the EPC, regardless of whether a document was prepared with the assistance of an artificial intelligence tool. The language appears in the General Part of the EPO’s Guidelines for Examination, showing where global administrative bodies are converging even when their substantive law differs.

Hallucinations Break the Record

Trademark and patent practice is unusually sensitive to “record pollution” because later decisions rely on earlier submissions. A hallucinated case citation in a TTAB brief is not just embarrassing; it can send an administrative judge on a needless hunt, waste client money, and shift attention from the merits to credibility. A fabricated quote in an Office action response does something worse: it can distort prosecution strategy, induce incorrect examiner assumptions, and create a file history that future litigators will have to defend or explain.

AI hallucinations also create a distinctive asymmetry: the error often looks polished. Generative systems tend to supply plausible case names, plausible reporters, and plausible pin cites, which means the mistake can survive multiple internal handoffs if nobody runs the citations to ground. The USPTO’s guidance treats that as a foreseeable risk, which pushes “verification” from a best practice into a predictable expectation whenever AI tools touch legal authorities or factual assertions. The guidance is explicit that AI can produce inaccurate information that must be checked, not trusted.

Patent practice adds a second layer: duty-driven disclosures. When AI tools are used to summarize prior art, extract claim charts, or draft technical characterizations, a small hallucination can become a big problem if it skews how references are described or how distinctions are framed. The USPTO has long treated candor and accuracy as core to prosecution integrity, and AI use does not soften that expectation.

Certification under § 11.18

The compliance hinge in USPTO practice is often 37 C.F.R. § 11.18, which ties signature to certification. A practitioner’s signature is not merely an identifier. Section 11.18 treats signing as a representation that the paper is not presented for an improper purpose and that factual contentions have evidentiary support after an inquiry reasonable under the circumstances. The USPTO’s MPEP section on representations to the Office emphasizes the same framework and links the certification duty to patent and trademark correspondence.

That matters for AI because a hallucinated citation can trigger two separate questions at once: whether the practitioner made a reasonable inquiry, and whether the filing was presented in a manner that undermines the integrity of the proceeding. The Office does not need a new “AI rule” to address that. The existing signature regime already treats unsupported assertions as sanctionable in appropriate cases.

USPTO trademark sanctions orders show how the Office uses § 11.18 in real life, even outside AI fact patterns. The agency has imposed sanctions for false or fictitious information in trademark submissions, including misrepresentations tied to signatures and attorney information. One published sanctions order illustrates how the Office frames improper-purpose filings and ties them to § 11.18 consequences. That enforcement history is the backdrop: once the Office believes a filing process is producing unreliable records, the sanction tools are already on the shelf.

Confidentiality and Privilege Traps

Hallucinations are not the only USPTO-identified risk. The April 2024 guidance also emphasizes confidentiality exposure when client data is entered into third-party AI tools, particularly when tool terms allow retention or secondary use. The USPTO’s guidance is pointed about mitigation: treat AI platforms as potential disclosure channels unless terms, settings, and deployment architecture support confidentiality expectations.

IP practice has special sensitivity here because prosecution files can contain trade secrets, commercialization plans, licensing strategy, and pending product details that are not yet public. A “helpful” prompt that asks an AI tool to rewrite claim language can inadvertently include the very technical specifics counsel would never email to an unknown recipient. A tool that stores prompts or uses them for product improvement can turn that disclosure into a longer-lived risk than the user intended.

Mitigation is not philosophical. Firms that want to use AI for drafting and editing can route sensitive work through approved enterprise deployments, restrict prompts to non-confidential abstractions, and enforce a rule that facts from privileged matters never enter consumer-grade systems. The practical objective is auditability: the firm should be able to explain which tools were approved, what settings were used, what data could be retained, and why the workflow protected client confidences.

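As a concrete illustration only, the sketch below shows how a firm might encode an approved-tools rule and an audit trail in a short script. The tool names, fields, and policy choices are hypothetical assumptions made for the sketch, not anything prescribed by the USPTO or tied to a particular vendor.

    # Hypothetical sketch of an approved-tools gate and audit record for AI-assisted drafting.
    # Tool names, fields, and policy choices are illustrative assumptions, not USPTO requirements.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    APPROVED_TOOLS = {"enterprise-llm", "enterprise-llm-restricted"}  # firm-vetted deployments
    CONFIDENTIAL_OK = {"enterprise-llm-restricted"}                   # retention disabled, no model training

    @dataclass
    class PromptRequest:
        tool: str
        matter_id: str
        purpose: str                        # e.g. "style edits", "summarize public prior art"
        contains_client_confidences: bool   # flagged by the drafter before submission

    def policy_violations(req: PromptRequest) -> list[str]:
        """Return policy violations; an empty list means the request may proceed."""
        violations = []
        if req.tool not in APPROVED_TOOLS:
            violations.append(f"{req.tool} is not an approved deployment")
        if req.contains_client_confidences and req.tool not in CONFIDENTIAL_OK:
            violations.append("client confidences are limited to retention-disabled enterprise tools")
        return violations

    def audit_record(req: PromptRequest, violations: list[str]) -> dict:
        """Capture which tool and settings were used, so the choice can be explained later."""
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": req.tool,
            "matter": req.matter_id,
            "purpose": req.purpose,
            "approved": not violations,
            "violations": violations,
        }

Even a gate this small supports the auditability described above: a record of which tool was used, for which matter and purpose, and whether the firm’s own policy was satisfied.
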
Evidence for AI Use

The most durable compliance move is to assume a future dispute will ask how a filing was produced. A tribunal will not be satisfied by “the associate used an AI tool and then checked it.” A defensible program preserves enough process evidence to show what was verified and how. The USPTO hosted a public webinar and posted slides reflecting the agency’s view of practitioner risk and mitigation, both of which are useful for training and policy design.

For IP teams, “evidence” does not mean saving every prompt forever. Evidence means establishing a repeatable review and sign-off workflow. A common approach keeps a lightweight internal record that answers four questions (a minimal template is sketched after the list):

  • Tool and mode: Which AI tool was used, which plan or deployment, and whether retrieval, connectors, or uploads were enabled.
  • Scope: What the tool did, such as style edits, summarization, cite-checking assistance, or first-draft text.
  • Verification: Which citations were independently checked, which factual statements were tied back to the record, and which technical assertions were validated by a human with subject-matter competence.
  • Confidentiality control: What steps prevented disclosure of client confidential information, including redaction choices and tool settings.

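One way to make that record repeatable is to give it a fixed shape. The template below is a minimal, purely illustrative sketch that maps the four questions onto a structured record; the field names and example values are assumptions, not a prescribed USPTO format.

    # Minimal, illustrative template for an AI-use record; field names and values are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class AIUseRecord:
        filing: str                                      # which paper the record covers
        # Tool and mode
        tool: str
        deployment: str                                  # plan or tenant; retrieval, connectors, uploads on or off
        # Scope
        tasks: list[str] = field(default_factory=list)   # e.g. "style edits", "summarization"
        # Verification
        citations_checked: list[str] = field(default_factory=list)
        facts_tied_to_record: list[str] = field(default_factory=list)
        technical_reviewer: str = ""                     # human with subject-matter competence
        # Confidentiality control
        confidentiality_steps: list[str] = field(default_factory=list)

    example = AIUseRecord(
        filing="Hypothetical TTAB brief",
        tool="enterprise-llm",
        deployment="firm tenant; uploads and connectors disabled",
        tasks=["style edits on attorney-written argument"],
        citations_checked=["every cited case pulled from a primary-source database before signature"],
        facts_tied_to_record=["factual statements cross-checked against the evidentiary record"],
        technical_reviewer="prosecuting attorney",
        confidentiality_steps=["no client confidences entered in prompts"],
    )
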
That kind of record can be short and still powerful. When a question lands from a client, a Board, or a malpractice carrier, the team can show that verification and confidentiality were engineered into the workflow, not improvised at filing time.

Playbook for IP Filings

A filing playbook for AI use in IP practice is mostly about forcing the right checks at the right moment. The goal is not to ban tools. The goal is to prevent AI from becoming an unmonitored ghostwriter of the administrative record. A practical playbook typically includes seven controls.

Decide which tasks are “AI allowed.” Style edits on non-confidential text and organizational help with attorney-written analysis are easier to govern than tool-driven legal research, citation generation, or factual summarization.

Prohibit fabricated authorities by design. Require that every citation in a filing be pulled from an authoritative database or primary source before signature. Courts punished fabricated citations in Mata v. Avianca because nobody performed that basic check. The same failure mode can surface in PTAB and TTAB papers when deadlines compress review time.

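A lightweight automated pass can support, though never replace, that check. The sketch below assumes a loose reporter-style citation pattern and a firm-maintained set of already-verified cites, and flags anything in a draft that has not been run to ground; it is illustrative only and would not catch every citation format.

    # Hedged sketch: flag citations in a draft that have not been independently verified.
    # The regex and the verified set are illustrative; a real workflow would rely on a
    # primary-source database check, not pattern matching alone.
    import re

    # Loose pattern for reporter-style citations such as "573 U.S. 208" or "678 F. Supp. 3d 443".
    CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z\.\s]{0,20}?\d?d?\s+\d{1,4}\b")

    def unverified_citations(draft_text: str, verified: set[str]) -> list[str]:
        """Return citations found in the draft that are absent from the verified set."""
        found = {match.group(0).strip() for match in CITATION_PATTERN.finditer(draft_text)}
        return sorted(found - verified)

    draft = "As held in Alice Corp. v. CLS Bank, 573 U.S. 208 (2014), and in 999 F. Supp. 3d 123 ..."
    verified = {"573 U.S. 208"}  # citations already pulled from an authoritative database
    # The second cite is a placeholder that no one has verified, so it gets flagged.
    print(unverified_citations(draft, verified))  # -> ['999 F. Supp. 3d 123']
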
Treat signature as a certification event. Section 11.18 is a reminder that a filing is a representation. Build a signature checklist that asks whether the record support exists, whether citations were verified, and whether the filing language accurately describes evidence and authorities.

Keep technical assertions traceable. When AI summarizes prior art or proposes claim language, the drafter should preserve the source excerpts used to support the summary. A later dispute should not require reverse-engineering what the AI “meant.”

Lock down confidentiality defaults. Use the USPTO’s own framing on confidentiality risk as a policy anchor: assume a tool can retain, log, or reuse inputs unless proven otherwise. Approved tools, restricted inputs, and disabled connectors reduce the risk that trade secrets become training data somewhere else.

Train to the real workflow. Training should show lawyers how hallucinations appear, how to verify citations efficiently, and how to avoid creating errors by “fixing” AI text that was wrong to begin with. The USPTO’s posted webinar materials help as a baseline training asset.

Define consequences. A policy without enforcement becomes a suggestion. A policy with defined escalation paths, remedial steps, and reporting expectations can prevent repeat mistakes that damage the firm’s credibility in front of examiners and administrative judges.

Looking Beyond the USPTO

AI-related filing errors are not an isolated American story, which is useful for U.S. lawyers because comparative practice can sharpen internal controls. The EPO’s Guidelines statement on AI-assisted drafting underscores the same premise the USPTO is advancing: responsibility stays with the party and representative. That convergence suggests a practical prediction for U.S. practice: more tribunals will treat AI use as normal, and more tribunals will treat citation integrity as non-negotiable.

USPTO practice is also trending toward “AI governance by reference.” Rather than creating a separate AI discipline code, the Office is mapping AI behavior onto existing duties: truthful filings, reasonable inquiry, confidentiality discipline, and tribunal integrity. That approach makes the compliance target easier to define and harder to evade. A practitioner who would never invent a case cite manually cannot treat an AI-invented cite as a lesser sin. The signature still means what it always meant.

The result is a simple operational standard for IP teams: AI can accelerate drafting, but humans must own accuracy. Policies that make verification fast and routine will protect clients, protect the record, and protect the credibility that matters most in trademark and patent practice.

This article is provided for informational purposes only and does not constitute legal advice. Readers should consult qualified counsel for guidance on specific legal or compliance matters.

See also: Lost in the Cloud: The Long-Term Risks of Storing AI-Driven Court Records
