Design an AI Use Policy That Governs Real Law Firm Workflows
AI now shows up in legal work as infrastructure, not a special project. Courts, clients, and bar authorities keep drawing the same line: lawyers can use powerful tools, but responsibility cannot be delegated. Inside many firms, the gap is not intent but design. A policy that reads like etiquette will not survive connectors, transcription bots, draft generation, and agent-style workflows that move client data fast. A defensible AI use policy defines its scope and sets tool controls, data boundaries, verification gates, and measurable enforcement.
Oversight Shifts from Principles to Proof
Generative AI stopped being a novelty the moment courts and ethics authorities started describing AI governance as ordinary professional responsibility applied to a new production engine. ABA Formal Opinion 512 frames the core duties in familiar terms, including competence, confidentiality, supervision, candor, client communication, and reasonable fees. Several jurisdictions have since moved from general caution to concrete, workflow-facing guidance, including Washington’s Advisory Opinion 2025-05 (Nov. 20, 2025) on AI-enabled tools in law practice and the New York City Bar’s Formal Opinion 2025-6 (Dec. 22, 2025) on AI recording, transcription, and summarization of client conversations.
Outside the bar-opinion lane, courts are building policy infrastructure of their own. California’s Judicial Council adopted rule 10.430, effective Sept. 1, 2025, requiring any California court that does not prohibit generative AI to adopt a generative-AI use policy for court staff and judicial officers by Dec. 15, 2025. The rule applies to the superior courts, Courts of Appeal, and Supreme Court. When courts are writing AI policies for their own staff, a firm policy has to assume more questions, not fewer, from judges, clients, opposing counsel, and insurers.
Most “AI use” policies fail for a simple reason: the document describes values while the tools shape conduct. A policy can declare “no confidential information in public AI,” then a meeting platform records by default, a browser extension captures text, and a connector indexes a shared drive full of client files. Behavior follows defaults.
Controls convert a policy into action. Think in four verbs: allow, block, require, and log. A workable policy specifies what the firm allows, what the firm blocks, what the firm requires before work product goes external, and what the firm logs to prove those rules held in practice.
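For firms that want the four verbs to be enforceable rather than aspirational, the controls can live in a machine-readable form that IT, risk, and practice leaders share. The sketch below is illustrative only, not a product configuration; the tool names, fields, and values are hypothetical stand-ins for a firm’s own approved-tools list.

```python
# Minimal sketch, assuming a firm wants one shared source of truth for the four
# policy verbs (allow, block, require, log). All names and values are hypothetical.

TOOL_CONTROLS = {
    "approved-drafting-assistant": {
        "allow": ["contract drafting", "internal research memos"],
        "block": ["filing-ready briefs without lawyer review"],
        "require": ["citation gate", "fact gate", "matter-level access controls"],
        "log": ["prompt metadata", "document exports", "connector changes"],
    },
    "consumer-chatbot.example.com": {
        "allow": [],                                   # unapproved tool: nothing allowed
        "block": ["all firm and client data"],
        "require": ["exception approval before any use"],
        "log": ["blocked-domain telemetry"],
    },
}


def is_permitted(tool: str, use_case: str) -> bool:
    """True only if the tool is on the approved list and the use case is allowed."""
    controls = TOOL_CONTROLS.get(tool)
    return bool(controls) and use_case in controls["allow"]


print(is_permitted("approved-drafting-assistant", "contract drafting"))   # True
print(is_permitted("consumer-chatbot.example.com", "contract drafting"))  # False
```

The point of the structure is not the code itself but the discipline: every approved tool gets an explicit allow, block, require, and log entry, and anything absent from the list is blocked by default.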
Define Scope as Operations, Not Aspirations
Scope decides whether the policy matches how the firm actually works or only covers a small, convenient slice of it. “AI” cannot mean only chatbots. The scope should cover at least four categories:
- Generative AI drafting tools (chat, document assistants, email assistants, brief and contract drafting)
- Retrieval tools and connectors (RAG search, drive indexing, matter workspaces, knowledge bases)
- Recording and transcription tools (meeting capture, summaries, call recording, voicemail transcription)
- Agent-style workflows (tools that call other tools, run actions, or pull data from multiple systems)
Scope language should also define where the policy applies: firm devices, firm accounts, firm networks, and any work performed for firm matters, even on personal devices. That last clause closes the common loophole where the policy applies “at work” while the risky copy-paste happens at home.
Make Approved Tools the Default and Everything Else an Exception
A policy that “permits AI” without naming approved tools becomes an invitation to shadow AI. Approved-tool governance is the behavior lever that lawyers actually feel. Firm policy should include an approved-tools list with owners, permitted use cases, and required settings.
Exception handling matters more than the list. Build a one-page process that answers: who can approve a new tool, what review steps are required, what data can be used during evaluation, and how the firm documents the decision. A short exception process reduces the incentive to ignore policy when a new tool shows up mid-matter.
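Some firms keep exception decisions in a structured record rather than scattered emails, so the approval trail survives staff turnover and client or insurer audits. The sketch below shows one possible shape for such a record, assuming the one-page process described above; every field name and value is illustrative, not a prescribed form.

```python
# Minimal sketch of an exception-request record for a new, unapproved tool.
# Field names and values are hypothetical examples only.

exception_request = {
    "tool_name": "hypothetical-transcription-service",
    "requested_by": "associate, litigation group",
    "approver": "IT and security lead, with legal sign-off",     # who can approve a new tool
    "review_steps": [                                            # what review is required
        "vendor data-use and training terms",
        "retention and deletion settings",
        "confidentiality and privilege review",
    ],
    "evaluation_data": "synthetic or public data only",          # data allowed during evaluation
    "decision": "pending",                                       # approved / denied / conditional
    "decision_record": "link to the memo documenting the decision",
}
```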
Security frameworks help here because they treat governance as a repeatable cycle. NIST’s Cybersecurity Framework 2.0, published Feb. 26, 2024, emphasizes risk outcomes and organizational governance rather than vendor promises. AI governance standards aim in a similar direction. ISO/IEC 42001, published Dec. 2023, describes an AI management system approach that fits well with firm policy design: define responsibilities, set controls, run reviews, and improve based on evidence.
Build a Data Boundary Rule People Can Follow at Speed
“Do not enter confidential information” fails because lawyers rarely stop to debate whether a detail counts as confidential while drafting under deadline. A behavior-governing policy needs a short data boundary rule with examples, plus a simple decision trigger.
One practical trigger uses four buckets and one question. The question: Does the input include client-confidential information, privileged material, regulated personal data, or deal-sensitive terms? A yes answer routes the user to an approved tool configured for that bucket, or to a no-AI workflow where necessary.
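Firms that automate intake or tooling checks sometimes express the trigger as a simple routing rule. The sketch below assumes the four buckets named above; the routing targets are placeholders for the firm’s own approved tools and no-AI workflows.

```python
# Minimal sketch of the four-bucket trigger. Bucket names follow this section;
# the routing targets are hypothetical placeholders.

SENSITIVE_BUCKETS = {
    "client_confidential",
    "privileged",
    "regulated_personal_data",
    "deal_sensitive_terms",
}


def route_input(buckets_present: set[str]) -> str:
    """Answer the one question: does the input hit any sensitive bucket?"""
    if buckets_present & SENSITIVE_BUCKETS:
        # Yes: route to an approved tool configured for that bucket,
        # or to a no-AI workflow where no approved tool fits.
        return "approved-tool-or-no-ai-workflow"
    return "general-approved-tool"


print(route_input({"privileged"}))        # -> approved-tool-or-no-ai-workflow
print(route_input({"public_marketing"}))  # -> general-approved-tool
```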
Recording and transcription require special handling. NYC Bar Formal Opinion 2025-6 highlights how AI-enabled recording and summarization can raise confidentiality and supervision issues, especially when tools store data or train on user inputs. A policy that ignores transcription defaults will miss one of the most common ways client data exits a controlled channel.
Require Verification Gates Before Anything Goes External
Verification is the difference between “AI assistance” and “AI substitution.” Formal Opinion 512 warns that uncritical reliance on AI output can produce inaccurate advice or misleading filings, and it ties review expectations to competence and candor duties. A policy that governs behavior should define verification gates for common workflows.
Three verification gates cover most legal work product:
- Citation gate: verify every case, statute, quotation, and pin cite against authoritative sources before filing or sending to a client.
- Fact gate: confirm key factual assertions against the record, the client file, or primary documents, not model output.
- Instruction gate: confirm the output matches the assignment, including jurisdiction, procedural posture, defined terms, and required constraints.
Gate language should specify who performs the check and when the check occurs. A good default assigns the verifying lawyer as the accountable signer, with escalation to a practice leader for high-risk filings or time-sensitive motions. Clear gates reduce the temptation to treat fluent output as final work product.
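Where a firm tracks gate completion in its document-management or matter workflow, the three gates can be expressed as a short pre-release checklist. The sketch below is a hypothetical illustration; the gate names follow this section, and the signer field stands in for the accountable lawyer.

```python
# Minimal sketch of the three gates as a pre-release checklist. Gate names and
# the signer field are illustrative assumptions, not a prescribed workflow.

REQUIRED_GATES = ("citation_gate", "fact_gate", "instruction_gate")


def cleared_for_release(completed_gates: dict[str, str], signer: str) -> bool:
    """True only if every gate records who performed the check and a lawyer signs off."""
    checks_done = all(completed_gates.get(gate) for gate in REQUIRED_GATES)
    return checks_done and bool(signer)


draft_status = {
    "citation_gate": "verified against official reporters by J. Associate",
    "fact_gate": "confirmed against client file by J. Associate",
    "instruction_gate": "",  # jurisdiction and posture check not yet done
}
print(cleared_for_release(draft_status, signer="Accountable Partner"))  # -> False
```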
Assign Ownership So Somebody Gets Called When Things Break
Policy without ownership becomes policy without enforcement. Washington’s Advisory Opinion 2025-05 organizes AI duties around competence, diligence, confidentiality, communication, candor, supervision, and billing. Those duties map cleanly onto internal roles when the policy names owners.
A straightforward ownership map usually includes:
- Policy owner: general counsel, risk partner, or professional responsibility leader
- Tool approval owner: IT and security lead with a legal sign-off requirement
- Practice workflow owner: practice group leaders who set verification gates and review norms
- Training owner: professional development lead who tracks completion and updates
- Incident owner: security or privacy lead who runs containment and notification workflows
Ownership should include authority. The tool approval owner needs the power to disable access, revoke connectors, or restrict features such as chat history, training retention, or external sharing. Enforcement cannot depend on persuasion alone.
Write the Vendor Terms the Policy Depends On
Vendor language decides whether a firm can keep its own promises. A policy can ban training on client data, but contract terms and settings determine whether the ban holds. Formal Opinion 512 points lawyers back to confidentiality duties when using tools that require input of client information. The policy should require procurement checklists that match those duties.
Core vendor terms to require for any approved AI tool include data-use limits, retention limits, segregation by tenant, access controls, breach notification, subprocessor controls, audit rights, and clear statements about whether user inputs train models. Naming these terms in the policy also improves client confidence, because many outside-counsel guidelines now ask the same questions; firms should review those guidelines for AI-specific requirements and build policy provisions that can satisfy them.
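Procurement teams sometimes turn the term list into a checklist that stalls approval until every item is confirmed or a documented exception exists. The sketch below is one illustrative way to do that; the pass/fail logic is an assumption, not a standard form.

```python
# Minimal sketch of a procurement checklist built from the vendor terms listed above.
# The gap-check logic is an illustrative assumption.

REQUIRED_VENDOR_TERMS = [
    "data-use limits",
    "retention limits",
    "tenant segregation",
    "access controls",
    "breach notification",
    "subprocessor controls",
    "audit rights",
    "no training on user inputs (or documented opt-out)",
]


def vendor_gaps(confirmed_terms: set[str]) -> list[str]:
    """Return the required terms the vendor contract has not yet confirmed."""
    return [term for term in REQUIRED_VENDOR_TERMS if term not in confirmed_terms]


# Approval stalls until every gap is closed or a documented exception exists.
print(vendor_gaps({"data-use limits", "retention limits", "audit rights"}))
```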
NIST’s AI Risk Management Framework 1.0, published Jan. 2023, treats risk management as a lifecycle discipline, not a one-time vendor choice. That same principle applies to law firms: approval cannot be permanent when models, features, and data flows change.
Map the Regulatory Landscape Beyond Ethics Rules
Firms with international operations or clients should note that the EU AI Act becomes fully applicable on Aug. 2, 2026. AI systems used in the administration of justice fall within the high-risk category under Annex III of the Act. This includes AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts. High-risk AI systems are subject to strict obligations including risk assessment, data quality standards, documentation requirements, transparency obligations, human oversight, and accuracy requirements.
The Act applies not only to providers placing AI systems on the EU market but also to deployers using those systems, and even to providers and deployers located outside the EU when the AI system’s output is used within the EU. The rules for high-risk AI systems take effect Aug. 2, 2026, with extended transition periods running to Aug. 2, 2027, for high-risk AI embedded in regulated products. Firms handling cross-border matters or advising multinational clients should integrate EU AI Act requirements into their vendor evaluation and tool-approval processes.
At the state level, several jurisdictions have enacted AI regulations that become effective in 2026. The Colorado AI Act takes effect June 30, 2026 (delayed from Feb. 1, 2026), and regulates high-risk automated decision systems, including those used in employment, housing, education, and access to financial services. Illinois House Bill 3773, effective Jan. 1, 2026, amends the Illinois Human Rights Act to prohibit discriminatory use of AI in employment decisions and to require notification when AI influences employment-related decisions.
California has enacted Assembly Bill 2013, effective Jan. 1, 2026, requiring developers of generative AI systems to disclose detailed information about training data. California also enacted Senate Bill 942, the California AI Transparency Act, with an effective date of Aug. 2, 2026 (delayed by AB 853), which requires covered providers to make available AI detection tools and include provenance data in AI-generated content. Firms operating in these jurisdictions should review state-specific requirements and incorporate relevant obligations into their AI use policies.
Track Behavior, Not Training Completion
Training completion rates measure attendance, not behavior. Controls generate better signals. A policy that governs behavior should specify how the firm detects drift without turning every matter into surveillance theater.
Useful signals include audit logs for approved tools, connector change logs, blocked-domain telemetry for unapproved tools, and periodic sampling of AI-assisted work product for verification compliance. Trend lines matter more than perfect coverage. Teams can use signals to focus training on the workflows that produce risk, rather than repeating generic warnings.
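Sampling can be as simple as pulling a handful of AI-assisted documents each month and checking whether the gates were completed. The sketch below assumes an internal log of AI-assisted work product exists; the record fields are hypothetical.

```python
# Minimal sketch of periodic sampling for verification compliance, assuming an
# internal log of AI-assisted documents. Record fields are hypothetical examples.

import random

ai_assisted_log = [
    {"doc_id": "2026-0141", "gates_completed": True},
    {"doc_id": "2026-0142", "gates_completed": False},
    {"doc_id": "2026-0143", "gates_completed": True},
    # in practice, pulled from document-management or tool audit logs
]


def sample_compliance(log: list[dict], sample_size: int = 2) -> float:
    """Sample AI-assisted work product and report the share that cleared all gates."""
    sample = random.sample(log, min(sample_size, len(log)))
    cleared = sum(1 for record in sample if record["gates_completed"])
    return cleared / len(sample)


print(f"Verification compliance in this sample: {sample_compliance(ai_assisted_log):.0%}")
```

Tracking that percentage over time, by practice group, says more about policy health than any training completion report.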
Incident response needs a clean, blame-free channel. A rapid report and containment process protects clients and protects the firm. A policy should say what to do when a user pastes the wrong content, when an AI summary misstates advice, or when an output includes fabricated citations. Fast escalation should feel normal, not punitive.
Turn the Policy into a One-Page Workflow Appendix
Behavior changes when the policy fits on the screen where the work happens. Supervision benefits from a one-page appendix that lawyers can open mid-matter. Keep the appendix operational, not philosophical.
- Approved tools and permitted use cases: with links to internal instructions
- Data boundary rule: with four buckets and examples
- Verification gates: for citations, facts, and instructions
- Recording and transcription rules: including consent and retention checks
- Escalation channel: for mistakes and suspected tool misconfiguration
Firms rarely need longer policy documents to govern behavior; they need tighter documents and stronger defaults. A policy that names tools, restricts data movement, requires verification, and assigns owners will govern conduct under deadline, which is the only environment that counts.
Sources
- American Bar Association: News Release on Formal Opinion 512, “Generative Artificial Intelligence Tools” (July 29, 2024)
- California Legislative Counsel: Assembly Bill 2013, “Generative Artificial Intelligence: Training Data Transparency” (2023-2024 Regular Session)
- California Legislative Counsel: Senate Bill 942, “California AI Transparency Act” (2023-2024 Regular Session)
- Colorado General Assembly: Senate Bill 24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (2024)
- Colorado General Assembly: Senate Bill 25B-004, “Increase Transparency for Algorithmic Systems” (2025)
- European Commission: AI Act, “Shaping Europe’s Digital Future” (entered into force August 1, 2024)
- Illinois General Assembly: House Bill 3773, Amendment to the Illinois Human Rights Act (103rd General Assembly)
- International Organization for Standardization: ISO/IEC 42001, “Artificial Intelligence Management System” (publication date: December 2023)
- Judicial Council of California: California Rules of Court, rule 10.430, “Generative Artificial Intelligence Use Policies” (adopted July 18, 2025; effective Sept. 1, 2025)
- National Institute of Standards and Technology: NIST AI 100-1, “Artificial Intelligence Risk Management Framework (AI RMF 1.0)” (Jan. 2023)
- National Institute of Standards and Technology: NIST CSWP 29, “The NIST Cybersecurity Framework (CSF) 2.0” (Feb. 26, 2024)
- New York City Bar Association: Formal Opinion 2025-6, “Ethical Issues Affecting Use of AI to Record, Transcribe, and Summarize Conversations With Clients” (Dec. 22, 2025)
- Washington State Bar Association: Advisory Opinion 2025-05, “Artificial Intelligence-Enabled Tools in Law Practice” (Nov. 20, 2025)
This article is provided for informational purposes only and does not constitute legal advice. Readers should consult qualified counsel for guidance on specific legal or compliance matters.

Jon Dykstra, LL.B., MBA, is a legal AI strategist and founder of Jurvantis.ai. He is a former practicing attorney who specializes in researching and writing about AI in law and its implementation for law firms. He helps lawyers navigate the rapid evolution of artificial intelligence in legal practice through essays, tool evaluation, strategic consulting, and full-scale A-to-Z custom implementation.
