AI Drafting Tools Force Legal Profession to Redraw Unauthorized Practice Boundaries
Generative AI can draft a motion to dismiss, analyze discovery, write a demand letter and summarize case law. It cannot verify its own output, explain its reasoning, or take responsibility when citations prove fictitious. Courts and regulators have spent much of 2025 figuring out which side of that equation matters more when determining who is practicing law.
When AI Crosses Into Legal Practice
The stakes are already visible in enforcement files and court dockets. In February 2025, the Federal Trade Commission finalized an order against DoNotPay that bans deceptive “AI lawyer” claims, requires a $193,000 payment, and obliges the company to notify current and former subscribers about the settlement terms. In November 2025, multiple news outlets reported that the Nevada County District Attorney’s office had filed an AI-drafted motion containing nonexistent case citations, raising due process concerns once the errors surfaced in a criminal case. In September 2025, a California appellate court sanctioned a Los Angeles lawyer for filing a ChatGPT-assisted brief that relied on fictitious case citations, signaling that judges now view AI-related missteps as sanctionable conduct. At the same time, firms and courts are building AI tools into daily routines for research, drafting and discovery, guided by a growing stack of ethics notices from bar regulators.
For lawyers, vendors and regulators, the core problem is no longer whether AI can draft legal text. It is whether those drafting tools sit on the safe side of the line between technology assistance and the unauthorized practice of law (UPL), and what happens when that line blurs for consumers who think the software itself is their advocate.
Old Rules Meet New Tools
Unauthorized practice rules were built for human intermediaries. Most U.S. jurisdictions treat it as UPL when an unlicensed person offers tailored legal advice, drafts instruments that affect legal rights, or represents clients in court, even if a licensed lawyer never appears on the letterhead. The details vary by state, but the central idea is that legal judgment on another person’s behalf is reserved for licensed professionals and tightly controlled entities.
Generative AI complicates every piece of that framework. A language model can now produce a demand letter, a motion to dismiss or a separation agreement that looks as if it came from an experienced practitioner. It can adjust the content to a user’s facts and jurisdiction, sometimes more confidently than accurately. GenAI tools make it trickier to navigate the line between legal advice and legal information, particularly when interfaces feel conversational and authoritative.
Traditional UPL analysis focuses on people and entities that deliver services. AI tools raise a harder question: if a vendor trains and markets a system that generates strategy, picks authorities and drafts filings, is the “practice of law” happening in the model weights, in the user’s prompts, or in the business decisions about how the tool is deployed to the public?
Robot Lawyer Claims Meet Real Law
DoNotPay has become the emblem of what happens when marketing gets ahead of doctrine. The company promoted itself as “the world’s first robot lawyer” and offered consumers tools to contest tickets, negotiate bills and handle small-claims disputes. Those claims drew a wave of criticism, private litigation alleging unauthorized practice and, eventually, federal enforcement.
The February 2025 FTC order prohibits DoNotPay from advertising that its products are comparable to or better than a human lawyer unless it has competent substantiation, and bars the company from calling its service a “robot lawyer” without clear proof. The order follows earlier FTC allegations that consumers were misled about the nature and quality of its legal services. North American commentary, including an analysis by Harrison Pensa LLP, has flagged the case as a warning that claims about replacing lawyers are no longer just brand positioning; they are potential consumer protection and UPL issues.
The litigation and enforcement record is still relatively thin, but a pattern is visible. Regulators have not banned AI legal tools outright. Instead, they focus on four red flags: promising lawyer-like performance without evidence; implying that a model can represent consumers in court; failing to disclose that no lawyer is reviewing the output; and using terms like “law firm” or “legal services” when no licensed professional stands behind the product. Each of those elements appears in current complaints and orders.
At the same time, access-to-justice research suggests that generative AI can meaningfully help self-represented litigants when used carefully. A field study by Colleen Chien and Miriam Kim in the Loyola of Los Angeles Law Review documents dozens of use cases in which the tools draft letters, summarize case law and translate legal language, with human advocates still in the loop. That tension runs through the entire UPL debate: the same tools that can mislead consumers if oversold can expand basic access if they are framed as assistance rather than representation.
When Lawyers Let AI Hold the Pen
Inside firms and government agencies, the question is different. Lawyers are unquestionably licensed, yet they increasingly rely on AI for first drafts of briefs, contract language and client alerts. Ethics regulators have responded with a wave of guidance that treats AI as a powerful but risky form of “nonlawyer assistance” subject to existing duties of competence, supervision and confidentiality.
ABA Formal Opinion 512, issued in July 2024, sets the tone. The opinion instructs lawyers who use generative AI tools to fully consider their obligations of competent representation, protection of client information, communication about how work is performed, candor toward tribunals, supervision of staff and vendors, and reasonable fees. The message is direct: using AI does not dilute existing responsibilities. If AI output is wrong, the lawyer, not the model, answers to the court and the client.
State and provincial bars have gone further into the practical details. The State Bar of California’s Practical Guidance for the Use of Generative Artificial Intelligence warns lawyers to avoid disclosing confidential information to public tools, to verify all citations and factual claims, and to maintain transparency with clients where AI is materially involved in their matters. In Colorado, a December 2024 Colorado Lawyer article examines how AI tools aimed at nonlawyers can bleed into UPL territory when they appear to provide tailored advice and not just information.
Canadian regulators echo the same themes. The Canadian Bar Association’s ethics toolkit on AI urges firms to treat AI selection and use as a risk-management exercise anchored in professional obligations, not just cost or convenience, and offers detailed guidelines on responsible use. The Law Society of Alberta’s Generative AI Playbook, along with its rules of engagement for Canadian lawyers, recommends meaningful human control over AI output and cautions that lawyers remain fully responsible for work product even when models produce the first draft.
Real-world incidents have sharpened those warnings. The September 2025 fine against Los Angeles lawyer Amir Mostafavi illustrates the consequences of inadequate verification. According to CalMatters, the California 2nd District Court of Appeal found that 21 of 23 citations in Mostafavi’s opening brief were fabricated by ChatGPT. The court issued a published warning that no brief should contain citations an attorney has not personally verified, regardless of whether AI or another source provided them. Mostafavi told CalMatters he used ChatGPT to improve his draft but did not review the output before filing. The $10,000 sanction appears to be the largest AI-related fine issued by a California state court. A Nevada County case reported in November 2025 underscores why courts are beginning to demand disclosure of AI use and verification of authorities. For regulators, the lesson is not that AI drafting is inherently improper, but that relying on it without rigorous review can fall below the standard of competence and candor that courts expect.
Sandboxes Reframe Service Models
Regulatory sandboxes are the laboratory where some of these questions are being tested in practice. Utah’s Supreme Court approved a legal services sandbox that allows nontraditional providers, including entities with nonlawyer ownership, to offer services under close supervision. IAALS, which helped design the framework, describes in its analysis of the Utah sandbox how regulators can monitor innovative delivery models, identify consumer risks early and adjust conditions as needed.
Five years in, a Stanford study summarized by LawNext finds that most sandbox participants still rely heavily on human lawyers, even when they use automation for intake, triage or document assembly. Fully automated “no-lawyer” services are rare. The study suggests that, in practice, innovators view AI more as an extender of limited legal capacity than as a complete substitute for counsel.
Policy advocates argue that better-designed sandboxes could support more ambitious AI drafting tools while managing risk. A paper from the National Taxpayers Union Foundation calls for clearer sandbox rules to encourage experimentation while protecting consumers from opaque or untested systems that purport to deliver legal outcomes. For UPL analysis, sandboxes matter because they show regulators are willing to redraw the line between legal information and legal representation, at least for tightly supervised pilots.
New Lines Between Help and Representation
Scholars are already sketching out doctrinal models for AI-assisted practice. “ChatGPT, Esq.,” published in the Yale Journal on Law & Technology, argues that current UPL regimes protect a professional monopoly that has not delivered adequate access, and that AI might be a catalyst for more flexible licensing structures that separate high-stakes advocacy from routine assistance. The authors suggest regulators could focus on how systems are integrated into workflows and what kind of human oversight exists, rather than treating every AI tool as a virtual lawyer.
Drew Simshaw, writing in the Yale Law Journal Forum, pushes in a complementary direction. He calls for “interoperable legal AI,” in which courts, agencies and legal tech providers adopt common standards for legal data and interfaces so that AI tools can connect to trustworthy sources and support consistent, transparent outcomes. Simshaw notes that without procedural reforms on the court side, even well-designed AI tools may deliver only incremental improvements for self-represented litigants.
The access-to-justice literature suggests a middle path. Generative AI is most promising when it handles translation, explanation and first-draft work that humans then review, rather than when it attempts to decide strategy or file documents autonomously. That framing aligns with the direction regulators are taking: AI may be a powerful drafting assistant, but legal judgment and accountability must remain human.
Staying on the Right Side
For practicing lawyers, the immediate task is less philosophical and more practical. The safest course is to treat AI drafting tools as extensions of the traditional research and template systems that have always supported legal work, not as stand-alone advisors. Internal policies should require human review of every AI-generated sentence that reaches a client, a court or a counterparty, and firms should document when and how AI is used so that they can answer questions from judges or regulators.
Ethics guidance points to concrete checkpoints. Following ABA Opinion 512, firms should evaluate how AI affects competence, confidentiality, client communication, supervision and fees, and should revisit engagement letters if they plan to pass through AI costs or rely heavily on automation. Guidance from state and provincial bars reinforces that lawyers remain responsible for all outputs and must disclose AI use when it plays a significant role in work product.
Vendors face their own set of guardrails. After the DoNotPay order, any company that markets AI tools for legal use has a clear example of what not to do: promise lawyer-like performance without evidence, blur the difference between software and representation, or advertise a “robot lawyer” when no licensed professional is involved. Safer positioning emphasizes that tools support legal professionals, generate drafts for human review and help users understand procedures, without claiming to substitute for counsel.
As doctrine develops, the details will change, but the central divide is already visible. On one side are AI drafting tools that extend the reach of competent lawyers and legal aid organizations. On the other are tools that present themselves as lawyers in their own right and invite users to delegate legal judgment to an opaque system. For regulators and practitioners, drawing that line clearly, and respecting it in both practice and marketing, is what will keep AI-assisted drafting on the right side of the unauthorized practice boundary.
Sources
- American Bar Association – ABA issues first ethics guidance on a lawyer’s use of AI tools (July 29, 2024)
- CalMatters – California issues historic fine over lawyer’s ChatGPT fabrications by Khari Johnson (Sept. 22, 2025)
- Canadian Bar Association – Ethics of Artificial Intelligence for the Legal Practitioner (November 2024)
- Canadian Lawyer – GenAI tools are making it trickier to navigate the line between legal advice and legal information (Nov. 5, 2025)
- Colorado Lawyer – Can Robot Lawyers Close the Access to Justice Gap? Generative AI, the Unauthorized Practice of Law, and Self-Represented Litigants (Dec. 2024)
- Federal Trade Commission – FTC finalizes order with DoNotPay that prohibits deceptive “AI lawyer” claims (Feb. 11, 2025)
- Government Technology – California Prosecutor Says AI Caused Errors in Criminal Case by Sharon Bernstein (Nov. 26, 2025)
- Harrison Pensa LLP – “Robot lawyer” accused of practicing law (May 3, 2023)
- IAALS – Utah Sandbox Inspires Similar Regulatory Initiatives in Canada and other States (Nov. 20, 2024)
- LawNext – Five Years After Reform: Stanford Study Offers Comprehensive Look at Legal Innovation in Arizona and Utah by Bob Ambrogi (June 4, 2025)
- Law Society of Alberta – The Generative AI Playbook: How Lawyers Can Safely Take Advantage of the Opportunities Offered by Generative AI (Jan. 9, 2024)
- Loyola of Los Angeles Law Review – Generative AI and Legal Aid: Results from a Field Study and 100 Use Cases to Bridge the Access to Justice Gap (2025)
- National Taxpayers Union Foundation – Why the United States Needs Better-Designed AI Sandboxes (Oct. 15, 2025)
- State Bar of California – Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (Nov. 16, 2023)
- Yale Journal on Law & Technology – ChatGPT, Esq.: Recasting Unauthorized Practice of Law in the Era of Generative AI by Joseph J. Avery, Patricia Sánchez Abril, Alissa del Riego (Vol. 26, Issue 1, 2023)
- Yale Law Journal Forum – Interoperable Legal AI for Access to Justice (2024)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: Engineering Constitutional Safeguards Into Algorithmic Code

Jon Dykstra, LL.B., MBA, is a legal AI strategist and founder of Jurvantis.ai. He is a former practicing attorney who specializes in researching and writing about AI in law and its implementation for law firms. He helps lawyers navigate the rapid evolution of artificial intelligence in legal practice through essays, tool evaluation, strategic consulting, and full-scale A-to-Z custom implementation.
