Protecting Client Trust in an AI-Driven North Carolina Law Practice

North Carolina’s first formal ethics opinion on artificial intelligence delivers a clear message: lawyers may use AI, but they must protect client confidences and supervise these systems as rigorously as any human assistant. The guidance defines how the traditional duties of competence and confidentiality apply when machine learning enters the practice of law.

North Carolina’s Framework for AI Use

The North Carolina State Bar adopted 2024 Formal Ethics Opinion 1 on Nov. 1, 2024, authorizing the use of AI in law practice while anchoring that permission to three duties: competence, confidentiality, and supervision. The opinion extends Rule 1.6(c), which requires lawyers to make reasonable efforts to prevent unauthorized disclosure of client information, to any circumstance in which confidential data is entered into an AI tool.

To meet that standard, the lawyer must understand how the system stores and processes information, what security measures exist, and whether the vendor uses client data to train its model. The opinion treats these inquiries as part of competence under Rule 1.1, not optional technical literacy.

North Carolina joins a growing number of states providing formal guidance on AI ethics. California released its practical guidance in November 2023, Florida issued Advisory Opinion 24-1 in January 2024, and Pennsylvania published Joint Formal Opinion 2024-200 in June 2024. The District of Columbia, New York, and Kentucky have also issued opinions addressing lawyer use of artificial intelligence, creating a national pattern of bar association engagement with these technologies.

What “Reasonable Efforts” Actually Means

Reasonable efforts now mean reading the vendor’s terms of service, confirming encryption, storage location, and deletion rights, and ensuring that data will not be used for model training without consent. A lawyer who cannot verify those protections must avoid inputting client material altogether.

The opinion builds on earlier guidance. In 2011 FEO 6, the State Bar approved use of cloud-based software as long as lawyers exercised due diligence to safeguard confidential information. The same logic extends to generative AI: technological convenience never excuses lapses in confidentiality.

The State Bar’s opinion identifies specific considerations borrowed from the cloud computing context. Lawyers must evaluate the experience, reputation, and stability of the AI vendor; review contractual agreements on how the company will handle confidential information and what security measures protect that data; and determine whether the terms clarify data retrieval procedures or safe destruction protocols if the vendor ceases operations or services terminate. These requirements apply whether the AI system operates through an external platform or an in-house installation using local servers.

Supervision Extends to AI

Rule 5.3 requires lawyers to supervise nonlawyer assistants. The State Bar interprets that duty to include AI vendors and systems. Lawyers must test, monitor, and verify all AI outputs. Human review remains mandatory; delegation of professional judgment is not permitted.

This mirrors the American Bar Association’s Formal Opinion 512, released in July 2024, which directs lawyers to maintain documentation of human review, verify all AI-generated authority, and communicate with clients about material AI use. North Carolina’s opinion tracks these national standards while grounding them in state rules already familiar to practitioners.

The ABA’s guidance emphasizes several principles that inform North Carolina’s approach. Competence requires lawyers to understand the capacity and limitations of AI and to update that understanding periodically as the technology evolves. Confidentiality obligations mean that lawyers must know how AI uses data and must implement adequate safeguards to prevent unwitting or unauthorized disclosure. The ABA opinion clarifies that lawyers may charge for time spent on AI-assisted work, including time spent crafting inputs and reviewing outputs, but may not charge for time saved by using AI, and generally may not bill clients for learning how to use AI tools.

Competence in a Technological Age

Rule 1.1 of the North Carolina Rules of Professional Conduct requires a lawyer to provide competent representation, defined by the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the task. Comment 8 to ABA Model Rule 1.1, which many states, including North Carolina, have incorporated into their own commentary, explicitly requires lawyers to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.

That mandate effectively extends to AI literacy. Lawyers must understand how AI systems function, their limitations, and the potential risks of misuse or inadvertent disclosure. Competence now means the ability to evaluate whether AI tools enhance accuracy or threaten client confidentiality. Failing to understand those implications may itself constitute a lapse in professional competence.

The duty of competence applies throughout the lifecycle of AI use. Before adoption, lawyers must educate themselves on the benefits and risks of specific tools, reviewing current resources on both AI generally and the particular program intended for use. During deployment, lawyers must understand how inputs are processed, where data is stored, and what security protocols govern access. After implementation, lawyers must monitor for degradation in AI functionality and reassess whether the system continues to serve client interests without creating unacceptable confidentiality risks.

Disclosure and Client Consent

While 2024 FEO 1 does not expressly require client consent for AI use, Rule 1.4 obligates lawyers to communicate adequately with clients and explain matters to the extent reasonably necessary for informed decisions. Some practitioners read this to mean that disclosure is already prudent whenever a generative AI system contributes significantly to analysis, drafting, or fact evaluation, especially when data is transmitted outside the firm.

Given the well-documented risk of hallucinated citations and unpredictable outputs, disclosure serves both ethical and practical functions. Transparency allows clients to understand how their information will be handled and how much human review occurs. Even if not yet mandatory, obtaining informed consent for material AI use strengthens trust and mitigates future disputes over reliance or data exposure.

North Carolina’s opinion notes that lawyers need not inform clients about AI use for ordinary tasks such as conducting legal research or generic case management. However, if a lawyer delegates substantive tasks to an AI tool, the use becomes analogous to outsourcing legal work to a nonlawyer or third-party service, for which client consent is required under the State Bar’s 2007 FEO 12. Additionally, if decisions about using or not using AI affect fees, lawyers must inform clients and obtain their input.

When Confidentiality Collides with Data Breach Law

North Carolina’s data breach statute, G.S. 75-65, adds another layer. Firms that own or license personal information must provide prompt notice to affected individuals and, in many cases, to the Attorney General if a security incident occurs. Because most AI tools operate through third-party vendors, lawyers remain responsible for ensuring contract terms cover notification and remediation.

Ethical compliance therefore overlaps with statutory compliance. A vendor’s failure to report or secure data could trigger both professional discipline and regulatory enforcement. The safest course is to treat breach planning as part of confidentiality itself.

The breach statute defines a security breach as an incident of unauthorized access to and acquisition of unencrypted and unredacted records or data containing personal information where illegal use has occurred or is reasonably likely to occur, or that creates a material risk of harm to a consumer. The statute requires notice without unreasonable delay and mandates specific content including a description of the incident, the type of personal information affected, protective measures taken, a contact number for further information, and advice directing individuals to review account statements and monitor credit reports. For breaches affecting more than one thousand persons, firms must also notify consumer reporting agencies.

Verification as Defense

AI can assist with drafting, summarizing, or research, but it cannot exercise judgment. Every citation, quotation, and factual assertion produced by an AI system must be checked against primary sources. Verification is the final barrier between professional diligence and digital negligence.

The State Bar’s opinion emphasizes that errors generated by AI are still lawyer errors. Accuracy requires documentation, supervision logs, and human sign-off. The opinion’s structure leaves little ambiguity: using AI without oversight is using it unethically.

This verification requirement extends beyond legal citations. When AI generates contract language, litigation strategy, or factual summaries, lawyers must independently confirm that the output accurately reflects legal authority, serves client interests, and contains no fabricated information. The duty applies regardless of the sophistication of the AI tool or the reputation of its vendor. As the opinion notes in Inquiry 4, a lawyer’s signature on a pleading certifies good faith belief in the factual and legal assertions within, and that certification cannot be delegated to an algorithm.

Billing Practices and Fee Transparency

North Carolina’s opinion addresses billing directly in Inquiry 6, following the ABA’s framework. A lawyer may use AI to increase efficiency but may not bill a client for three hours of work when only one hour was actually expended. If AI reduces the time required to draft documents from three hours to one hour, the lawyer bills for one hour, not for the value that the documents might represent absent AI assistance.

Billing practices must remain accurate, honest, and not clearly excessive, consistent with Rules 7.1, 8.4(c), and 1.5(a). The State Bar references its 2022 FEO 4 to clarify that lawyers enjoying efficiency gains from AI may complete more work for more clients but may not inflate individual client bills based on hypothetical time expenditures.

As an alternative to hourly billing, lawyers may charge flat fees for document drafting even when using AI, provided the flat fee is not clearly excessive and the client consents to the billing structure. Lawyers may also bill clients for expenses incurred related to AI use, including costs specifically identified and directly related to legal services provided during the representation, or general administrative fees covering generic expenses such as copies, printing, postage, or technology implemented to improve services or client convenience. All such charges must be accurate, not clearly excessive, and disclosed to the client, preferably in writing.

Preparing for Broader Regulation

North Carolina’s ethics framework exists alongside state and national regulation of automated systems. The Colorado Artificial Intelligence Act and the federal government’s OMB Memorandum M-24-10 illustrate the direction of policy: transparency, documentation, and human accountability. Lawyers practicing across jurisdictions will need to adapt to these overlapping regimes while maintaining the same confidentiality baseline.

The Colorado Act, enacted in May 2024, requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination. The statute creates a rebuttable presumption of reasonable care when entities comply with specified provisions including impact assessments, risk management policies, annual reviews, consumer notification of AI-driven consequential decisions, and opportunities to correct incorrect data or appeal adverse decisions. The Act defines high-risk AI as systems making or substantially contributing to consequential decisions concerning consumers in areas such as education, employment, financial services, healthcare, housing, insurance, and legal services.

OMB Memorandum M-24-10, issued in March 2024, establishes requirements for federal agencies using AI but signals broader governmental approaches to managing AI risks. The memorandum requires agencies to designate Chief AI Officers, conduct impact assessments before deploying rights-impacting or safety-impacting AI, perform ongoing monitoring and periodic human reviews, provide adequate human training and oversight, ensure public notice and documentation, and assess AI impacts on equity and fairness while mitigating algorithmic discrimination. These federal requirements, though not directly binding on private lawyers, indicate the regulatory trajectory and the seriousness with which government entities approach AI governance.

The State Bar has signaled that future opinions may address disclosure and client communication duties as AI use expands. For now, the controlling rule remains simple. A lawyer may employ AI only when doing so protects client information as effectively as a locked file cabinet and a supervised paralegal.

Professional Consequences of Violations

Violations of the Rules of Professional Conduct carry disciplinary consequences ranging from private admonition to disbarment. A lawyer who fails to maintain client confidentiality through inadequate AI vendor vetting, who submits AI-generated pleadings containing fabricated citations without verification, or who bills clients dishonestly for AI-assisted work risks investigation by the State Bar and potential sanctions.

Beyond bar discipline, lawyers face potential malpractice liability. Professional liability insurance policies typically cover negligent acts, errors, or omissions in the rendering of legal services, but coverage questions may arise when AI tools contribute to client harm. Some insurers have begun requiring disclosure of AI use in practice, and policy exclusions for certain technology-related claims continue to evolve. Prudent risk management includes confirming that malpractice coverage extends to AI-assisted work and that adequate cyber liability protection exists for data breaches involving client information processed through AI platforms.

This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, statutes, and sources cited are publicly available through official publications and reputable outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.

See also: Can AI Build a Legal Argument, or Only Mimic One?
