Recalibrating Competence: Updating Model Rule 1.1 for the Machine Era
Competence has long been the lawyer’s first commandment. But when the tools of competence begin to think, write, and learn on their own, the rule that once defined professionalism starts to blur. Model Rule 1.1 requires “legal knowledge, skill, thoroughness and preparation,” yet nowhere does it imagine a lawyer sharing those traits with a machine. In 2025, artificial intelligence has forced the profession to ask a new question: can a lawyer be competent without being technologically fluent?
Model Rule 1.1: A Duty Older Than the Digital Age
When the American Bar Association adopted the Model Rules of Professional Conduct in 1983, competence meant diligence and preparation. A competent lawyer understood precedent, evidence, and procedure. Tools were incidental: the fountain pen, the fax machine, the online database. The assumption was that technology served the law, not that it shaped it.
That assumption began to fracture in the 2010s, when e-discovery, predictive coding, and document automation blurred the line between research and reasoning. The ABA’s Formal Opinion 477R (2017) on cybersecurity and Formal Opinion 498 (2021) on virtual practice hinted at a new category of competence: technological awareness. By late 2025, 40 states, the District of Columbia, and Puerto Rico had formally recognized that lawyers must understand the benefits and risks of relevant technology. What began as a digital housekeeping rule now defines professional survival.
From Legal Mastery to Machine Literacy: AI’s Impact on Competence
The rise of generative AI has expanded competence from knowledge of law to knowledge of algorithms. Lawyers now use tools like Lex Machina, Blue J, and Thomson Reuters’ AI-Assisted CoCounsel to predict case outcomes, generate arguments, and analyze judges’ tendencies. AI has moved beyond litigation analytics. Contract review platforms now draft NDAs, analyze M&A agreements, and flag non-standard terms in commercial leases. Due diligence tools summarize thousands of pages of discovery in minutes. The ABA’s Formal Opinion 512 (July 2024) officially recognized this shift, finding that competence now requires lawyers to understand the capabilities, limitations, and risks of AI systems they use in practice.
In other words, lawyers need not become programmers, but they must know enough to recognize when a model hallucinates, misstates precedent, or breaches confidentiality. The once-abstract “duty of knowledge” now extends to understanding how data enters and leaves the algorithmic black box. Failing to verify an AI-generated citation, or to disclose AI assistance when required by court rule, is no longer a technical error. It is a professional one.
Model Rule 1.1’s language has not changed, but its interpretation has. Comment [8], which first introduced the idea of technological competence in 2012, was drafted before generative models existed. Today, the “relevant technology” clause encompasses everything from client-side chatbots to AI-assisted due diligence systems. The duty extends beyond competence alone. Rule 1.6’s confidentiality requirements are directly triggered when client information is uploaded to third-party AI systems. Many generative AI platforms use prompts for training unless explicitly configured otherwise, creating the risk of disclosure without client consent: an ethics violation that arises independently of any question of technological competence.
State bars have filled the gap with guidance. The Florida Bar’s Ethics Opinion 24-1 (January 2024) requires reasonable understanding of how AI tools source, process, and store client data. The California State Bar issued Formal Opinion 2015-193 requiring competence in e-discovery technologies. The North Carolina State Bar has warned that relying on AI without independent verification may breach both Rule 1.1 and Rule 5.3, which governs the supervision of non-lawyer assistants. The subtext is clear. When machines perform legal work, they must be supervised like paralegals: documented, trained, and controlled.
Sanctioned by ChatGPT: When Competence Meets Automation
Generative AI challenges the assumption that the lawyer’s mind is the final checkpoint of quality. In Mata v. Avianca (S.D.N.Y. 2023), a lawyer submitted a brief generated by ChatGPT containing fictitious citations. The case became a parable for the machine age: a lawyer’s ignorance of technology produced a failure of diligence, preparation, and verification, all elements of Rule 1.1. The court imposed a $5,000 sanction on the attorneys and their firm. It also previewed a deeper issue. If lawyers cannot detect errors generated by the systems they use, can they still claim to be “competent” under the Rule?
Federal judges are now codifying generative AI rules directly into their court procedures. In the Northern District of Texas, Judge Brantley Starr requires attorneys to certify that any AI-generated text in their filings has been checked for accuracy by a human reviewer. In the Northern District of Illinois, Judge Gabriel A. Fuentes mandates disclosure whenever a generative AI tool is used, including naming the specific platform and confirming that citations and factual statements have been verified. What began as informal ethics guidance is turning into a procedural standard, as more judges adopt standing orders that hold lawyers accountable for the reliability of AI-assisted submissions.
The legal profession’s stance on generative AI has shifted from guarded curiosity to active exploration. New research from Thomson Reuters shows that adoption of AI tools within law firms has accelerated sharply over the past year, accompanied by a growing conviction that these systems belong in the fabric of daily legal work. What was once treated as an experimental aid is now edging toward becoming standard infrastructure. The profession’s debate has evolved, too: the focus is no longer whether lawyers should use AI, but how to integrate it responsibly, balancing efficiency with the accuracy, judgment, and confidentiality that define legal competence.
Algorithmic Due Diligence: The New Professional Standard
Competence today also means conducting due diligence on the systems themselves. The principle mirrors the duty to supervise under Model Rule 5.3. If a law firm uses an AI vendor to summarize discovery, that vendor is a de facto assistant whose processes must be understood and documented. Lawyers must verify data-handling practices, model training sources, and retention policies. They must ask whether the system can delete prompts on request and whether any of its training data could reveal client information.
This “algorithmic due diligence” has emerged as a new professional habit. Some firms now maintain internal AI registers, listing approved tools, vendor agreements, and usage conditions. Others follow frameworks such as the NIST AI Risk Management Framework or ISO/IEC 42001 to audit AI governance and accountability. What was once an IT checklist is becoming an ethics document.
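To make the idea of an AI register concrete, the sketch below shows one way a firm might structure a register entry as a simple record; it is a minimal illustration only, and the field names, tool name, and values are hypothetical rather than drawn from any particular framework or vendor.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AIToolRegisterEntry:
    """One row in a firm's internal register of approved AI tools (illustrative fields only)."""
    tool_name: str
    vendor: str
    approved_uses: list[str]           # tasks the firm permits, e.g. first-pass summaries
    prohibited_uses: list[str]         # tasks that require direct attorney work
    trains_on_prompts: bool            # does the vendor use firm prompts for model training?
    prompt_deletion_available: bool    # can prompts be deleted on request?
    data_retention_days: int           # vendor's stated retention period
    last_reviewed: date                # most recent due-diligence review of the vendor
    reviewed_by: str                   # responsible partner, committee, or risk officer


# Hypothetical example entry; every name and value is a placeholder.
example_entry = AIToolRegisterEntry(
    tool_name="ContractSummarizer Pro",
    vendor="ExampleVendor, Inc.",
    approved_uses=["first-pass lease summaries", "clause comparison"],
    prohibited_uses=["final client advice", "court filings without attorney review"],
    trains_on_prompts=False,
    prompt_deletion_available=True,
    data_retention_days=30,
    last_reviewed=date(2025, 10, 1),
    reviewed_by="AI Governance Committee",
)
```

Even a lightweight record like this captures the questions Rule 1.6 and Rule 5.3 push lawyers to ask: what the tool may be used for, whether prompts feed training, and who reviewed the vendor last.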
Regulatory Osmosis: Global AI Standards and U.S. State Experiments
Other jurisdictions have already begun to rewrite competence rules for the algorithmic age. The Canadian Bar Association’s toolkit “Ethics of Artificial Intelligence for the Legal Practitioner” (also see my article about the CBA and AI) defines competence as including the ability to assess, select, and oversee AI systems used in client service. In the U.K., the Solicitors Regulation Authority has warned that a lack of digital skills among solicitors implementing AI systems in legal work creates risk for both firms and clients. The European Union’s AI Act goes further, classifying legal decision-support tools as “high-risk” systems requiring documentation, explainability, and human review.
Against this backdrop, the United States remains largely self-regulated. The ABA’s opinions are advisory, not binding, and no federal body defines “AI competence.” Yet global clients increasingly expect verifiable standards. Multinational firms now find themselves aligning to the strictest regime in their portfolio, not the weakest. The result is regulatory osmosis: AI competence may become a de facto international duty before it becomes a domestic one.
Some U.S. states have begun experimenting. Colorado’s Artificial Intelligence Act (signed May 2024, effective June 30, 2026) requires developers and deployers of “high-risk AI systems” to implement risk-management and transparency measures. While directed at businesses, its principles echo those of professional ethics: disclosure, explainability, and accountability. These pilots are not just regulatory curiosities; they are stress tests for what a rewritten Model Rule 1.1 might require.
In Colorado, competence is being treated as verifiable behavior rather than a self-proclaimed trait. Firms must document how they test and monitor AI tools. In time, that approach could form the template for nationwide reform: competence as a workflow, not an aspiration.
Redefining Competence for the AI Era
One of the emerging lessons from AI adoption is that competence is only as good as its paper trail. Professional liability carriers are already asking whether firms maintain AI-use policies and audit logs. In malpractice claims, plaintiffs’ lawyers will soon demand discovery of prompt histories to test diligence. The question “Did you verify this output?” may become as routine as “Did you cite-check this case?”
As insurers adapt, the cost of ignorance will rise. Carriers may require firms to implement internal training or AI disclosure checklists as conditions of coverage. Some are exploring endorsements excluding coverage for “unverified AI output.” In this environment, documenting competence becomes not just ethical self-protection but financial necessity.
Updating Model Rule 1.1 does not require reinventing it. The core text has survived every technological revolution. What must change is the commentary and interpretation. The ABA could revise Comment [8] to explicitly include artificial intelligence among the technologies lawyers must understand. It could also recommend that competence includes awareness of algorithmic bias, data provenance, and explainability.
Several scholars have proposed creating a supplemental comment, perhaps Comment [9], defining “algorithmic competence” as the ability to evaluate and supervise AI systems that affect client interests. Others argue that competence should be integrated with Rule 5.3 to form a joint obligation: supervising both human and machine assistants under a unified standard of oversight. Either approach would clarify that ignorance of technology is no defense to professional misconduct.
Law firms implementing AI competence protocols should consider specific, actionable measures. First, establish clear verification workflows requiring human review of all AI-generated legal work before submission to clients or courts. Document each verification step, noting what was checked and by whom. Second, create vendor assessment criteria that evaluate AI tools for data security, accuracy rates, training data sources, and retention policies. Third, develop internal policies specifying which tasks may be delegated to AI and which require direct attorney involvement.
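As a purely illustrative sketch of the first measure, a verification log might record each human review of AI-assisted work product in a structured way. The record type, function, and sample values below are hypothetical assumptions, not a prescribed format or any existing system’s API.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class VerificationRecord:
    """Documents one human review of AI-assisted work product (illustrative)."""
    matter_id: str                 # internal matter or client reference
    tool_used: str                 # which approved AI tool produced the draft
    output_type: str               # e.g. "research memo", "brief section", "NDA draft"
    checks_performed: list[str]    # e.g. citations pulled and read, quotations confirmed
    issues_found: list[str]        # hallucinated citations, misstated holdings, etc.
    reviewed_by: str               # attorney responsible for the work
    reviewed_at: datetime


def log_verification(record: VerificationRecord, log: list[VerificationRecord]) -> None:
    """Append a completed review to the firm's audit log.

    In practice this would write to a document-management or compliance system;
    an in-memory list keeps the sketch self-contained.
    """
    if not record.checks_performed:
        raise ValueError("A verification record must document at least one check.")
    log.append(record)


# Example usage with placeholder values.
audit_log: list[VerificationRecord] = []
log_verification(
    VerificationRecord(
        matter_id="2025-0417",
        tool_used="ContractSummarizer Pro",
        output_type="research memo",
        checks_performed=["all cited cases pulled and read", "quotations confirmed against sources"],
        issues_found=["one citation could not be located; removed from draft"],
        reviewed_by="Associate A. Example",
        reviewed_at=datetime(2025, 11, 3, 16, 30),
    ),
    audit_log,
)
```

The point of such a log is less the tooling than the habit: an answer to “Did you verify this output?” that exists before anyone asks.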
Training programs should go beyond one-time CLE seminars. Regular skill-building sessions on prompt engineering, output verification, and bias detection help lawyers develop practical competence. Some firms designate “AI liaisons” who stay current on emerging tools and best practices, serving as internal resources. Others create tiered approval processes, requiring partner-level review before deploying new AI tools on client matters. The most progressive approach treats technological competence as an ongoing professional development requirement, not a compliance checkbox.
AI and the Future of Law: A Profession at Its Inflection Point
Competence cannot be legislated by rule alone. The most progressive firms treat AI literacy as part of professional culture, not compliance. They host internal “AI academies,” require CLE credits on generative systems, and create cross-functional governance boards combining lawyers, technologists, and risk officers. Some firms are introducing certification badges indicating staff trained in responsible AI use, an internal signal of credibility for clients wary of algorithmic errors.
For younger lawyers, this shift redefines mentorship. Learning to practice law now includes learning to manage machines that practice it alongside you. The profession that once prized memory now prizes judgment: knowing when to question the model, when to override it, and when to leave it alone.
The debate over competence is really a debate over authority. If algorithms become more accurate than humans at certain tasks, does the lawyer’s role diminish or evolve? The answer lies in responsibility. Machines may assist, but only humans can be held accountable. Model Rule 1.1 was written to ensure that lawyers remain the custodians of judgment, not just consumers of information. AI does not erode that principle; it forces its rigorous reapplication.
Rewriting the Rule for the machine era would reaffirm a timeless truth: technology can augment competence, but it cannot replace conscience. The lawyer’s duty remains what it has always been: to know, to verify, and to stand behind the work, even when the first draft was written by code.
Sources
- American Bar Association: “Model Rule 1.1: Competence”
- American Bar Association: “Model Rule 1.6: Confidentiality of Information”
- American Bar Association: “Formal Opinion 512: Generative Artificial Intelligence Tools” (July 29, 2024)
- American Bar Association: “Formal Opinion 498: Virtual Practice” (March 10, 2021)
- American Bar Association: “Formal Opinion 477R: Securing Communication of Protected Client Information” (Revised May 22, 2017)
- American Bar Association: “Model Rule 5.3: Responsibilities Regarding Nonlawyer Assistance”
- California State Bar: “Formal Opinion 2015-193: Attorney Competence in E-Discovery” (June 30, 2015)
- Canadian Bar Association: “Ethics of Artificial Intelligence for the Legal Practitioner” (Toolkit, November 2024)
- Colorado General Assembly: “Senate Bill 24-205: Consumer Protections for Artificial Intelligence” (Signed May 17, 2024; Effective June 30, 2026)
- European Commission: “AI Act – Shaping Europe’s Digital Future” (2024)
- The Florida Bar: “Ethics Opinion 24-1: Generative Artificial Intelligence” (January 19, 2024)
- International Organization for Standardization: “ISO/IEC 42001 – AI Management System Standard” (2023)
- LawSites: “Tech Competence” (State-by-state tracking of technology competence rules)
- Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. June 22, 2023)
- National Institute of Standards and Technology: “AI Risk Management Framework” (January 26, 2023)
- North Carolina State Bar: “2024 Formal Ethics Opinion 1”
- Solicitors Regulation Authority (UK): “Risk Outlook report: The use of artificial intelligence in the legal market” (2023)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All sources cited are publicly available through official publications and reputable outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: Data Provenance Emerges as Legal AI’s New Standard of Care
