When Ignoring AI Becomes Malpractice And Using It Becomes Negligence
Professional standards of care have evolved alongside every major technological advance. X-ray imaging, electronic discovery, and digital records each shifted what counted as reasonable practice. Artificial intelligence is now compressing that evolution into a much shorter time frame. Courts and regulators are beginning to ask not only whether professionals are misusing AI, but whether there are circumstances where ignoring reliable systems may itself fall below the standard of care.
How The Standard Of Care Worked Before AI
Across negligence regimes, plaintiffs still have to prove duty, breach, causation, and damages. For licensed professionals, the benchmark is not an ordinary reasonable person but a reasonably prudent practitioner under similar circumstances. In the United States, courts look to expert testimony, practice guidelines, and custom to decide whether conduct met that standard in a given field.
Technology has always been part of that analysis. In medicine, widespread adoption of diagnostic imaging and electronic health records changed what reasonable diagnosis and documentation required. In law, the same dynamic unfolded around electronic discovery, encryption, and secure communications. Courts did not create entirely new doctrines. They asked whether prudent professionals had reason to know about available tools, understand their limits, and deploy them in ways that protected clients and patients.
Artificial intelligence now enters the same doctrinal channel. The question is not whether AI creates an entirely new standard of care, but how it modifies the content of established duties as tools move from experimental to ordinary. That inquiry is unfolding first in professional guidance and academic work, even as litigated negligence cases are still rare.
Ethics Rules Begin To Name AI Explicitly
In law, the starting point remains the ABA Model Rules of Professional Conduct. Rule 1.1 requires competent representation and speaks in general terms about legal knowledge, skill, thoroughness, and preparation. Comment 8, amended in 2012, makes technology part of that obligation by urging lawyers to keep abreast of the benefits and risks associated with relevant technology. The Model Rules are not binding on their own, but they are incorporated or mirrored in most U.S. jurisdictions.
AI moved from implication to explicit treatment in July 2024, when the American Bar Association issued Formal Opinion 512 on generative AI tools. The opinion emphasizes that lawyers remain fully responsible for accuracy, confidentiality, supervision, and billing, and that they must understand enough about AI systems to verify outputs and protect client interests. A detailed explainer in The Bar Examiner walks through how these duties apply to research, document drafting, and discovery workflows.
State bars and courts are now filling in the operational detail. The North Carolina State Bar’s 2024 Formal Ethics Opinion 1 explains that if a lawyer adopts an AI tool’s output as her own, she is professionally responsible for that work product. The New York City Bar Association’s 2025 survey of AI ethics opinions collects similar guidance from multiple jurisdictions and flags a trend: bars do not say lawyers must use AI today, but they note that a time may come when understanding and using generative AI tools is part of the competence baseline.
Media coverage has reinforced that message for practitioners. The ABA’s announcement of Opinion 512 highlighted that the guidance responds to real sanctions incidents, including lawyers who submitted briefs with fabricated case citations. These themes map easily onto negligence analysis when AI use goes wrong.
Medicine Becomes The First Liability Testbed
Health care has generated the most sustained analysis of AI and standard of care. Clinical AI tools now assist with imaging, decision support, triage, and documentation. A 2023 systematic review in Frontiers in Medicine surveyed medical professional liability literature on AI-based diagnostic algorithms and found growing concern about how to allocate responsibility when tools influence clinical judgment without clear legislative guidance. The regulatory status of a tool (e.g., FDA clearance) is fast becoming a key evidentiary factor, suggesting that ignoring a cleared or approved system may be less reasonable than ignoring an experimental one.
In a separate, open-access book chapter on medical AI and the standard of care in negligence and tort law, Gary Chan frames the core question this way: courts still judge clinicians by what a reasonable physician would have done, but that inquiry must now account for both the performance of AI systems and the reasonableness of relying on them. If a tool is widely adopted, validated, and embedded in guidelines, ignoring it may eventually look unreasonable. If a tool is opaque, poorly validated, or known to be biased, heavy reliance without verification may be equally problematic.
Practitioner guidance reflects similar tensions. The Canadian Medical Protective Association’s policy paper, “The medico-legal lens on AI use by Canadian physicians”, cautions that AI should support, not replace, clinical judgment and urges physicians to understand regulatory status, data limitations, and workflow impacts before deployment. A companion resource, “AI in medical practice”, emphasizes that clinicians remain fully accountable for decisions even when AI tools are in the loop.
In 2025, commentary has begun to explore liability on both sides of the adoption line. A Missouri Medicine article, “How Physicians Might Get in Trouble Using AI (or Not Using AI)”, explains that under current malpractice doctrines, courts still judge conduct by the reasonable physician standard. The authors note, however, that as AI systems demonstrably reduce certain types of diagnostic error, plaintiffs may argue that failure to use widely available tools is itself negligent, while overreliance on unvalidated tools can also trigger claims.
Overuse Of AI: When Reliance Becomes Negligence
The first wave of legal scrutiny focuses on overuse and uncritical reliance. In law, sanctions decisions such as Mata v. Avianca, Inc. illustrate how courts respond when lawyers submit filings with fictitious AI-generated citations and then fail to verify or correct them. Ethics surveys collected by the New York City Bar Association’s 2025 report catalog similar incidents and stress that delegating professional judgment to a tool is incompatible with existing rules.
Corporate counsel have reached the same conclusion in their own risk analyses. A 2025 Association of Corporate Counsel program on legal ethics concerns when using generative AI in client matters identifies hallucinations, hidden training data, and confidentiality risks as reasons why unverified outputs can breach duties of competence and confidentiality. The guidance urges lawyers to document when and how AI is used, verify substantive outputs, and ensure they do not bill clients for time spent correcting avoidable AI errors.
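What that documentation might look like in practice can be sketched briefly. The record below is a minimal illustration in Python, not a format any bar or insurer has prescribed; the AIUsageRecord name and its fields are hypothetical, chosen only to show the kinds of details, such as which tool was used, who verified the output, and whether the time was billed, that could later help demonstrate reasonable care.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """Hypothetical entry in a firm's AI-usage log (illustrative only)."""
    matter_id: str               # client matter the output relates to
    tool: str                    # which AI system produced the output
    task: str                    # e.g., "first draft of a research memo"
    prompt_summary: str          # what was asked, without confidential detail
    output_location: str         # where the raw output is stored for review
    verified_by: str             # the lawyer who checked the output
    verification_notes: str      # citations confirmed, errors corrected, etc.
    time_billed_to_client: bool  # whether related time was billed
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: recording a draft memo that was verified before any reliance.
record = AIUsageRecord(
    matter_id="2025-0042",
    tool="general-purpose LLM (vendor name withheld)",
    task="first draft of a limitation-periods summary",
    prompt_summary="Summarize limitation periods for contract claims",
    output_location="dms://2025-0042/ai-drafts/memo-v1",
    verified_by="reviewing associate",
    verification_notes="All cited authorities pulled and confirmed.",
    time_billed_to_client=False,
)
```

The specific fields matter less than the habit: a contemporaneous record of use and verification is far more persuasive than an after-the-fact reconstruction.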
In medicine, similar concerns arise when clinicians rely on vendor-supplied algorithms without understanding training data, performance metrics, or potential bias profiles. The failure to mitigate known algorithmic bias could itself be used to prove a breach of the reasonable practitioner standard. The Frontiers in Medicine systematic review concludes that there is still no consensus on how to allocate liability between clinicians, hospitals, and developers, but notes that most proposed frameworks keep primary responsibility with the human professional who integrates AI recommendations into care decisions. That approach mirrors legal ethics guidance that treats AI as a tool whose misfires ultimately belong to the professional user.
Underuse Of AI: When Refusal Becomes Negligence
The more novel theory points in the opposite direction. As AI systems demonstrate superior performance on narrow tasks, plaintiffs and commentators have begun to ask whether refusal to use them can itself fall below the standard of care. In radiology, for example, decision support systems may outperform humans in detecting subtle patterns on certain imaging modalities, at least in controlled studies.
The Missouri Medicine article on how physicians might get in trouble using or not using AI highlights this emerging risk. The authors suggest that if clinical practice guidelines or institutional protocols eventually integrate specific AI tools as part of ordinary workflow, a clinician who ignores those tools without good reason could face arguments that she failed to employ readily available risk-reducing measures. Similar questions have been raised in scholarship on stroke detection, sepsis alerts, and population risk scoring, especially where tools have cleared regulatory review and are widely deployed.
For now, most professional bodies stop short of declaring any AI tool mandatory. The CMPA’s medico-legal paper and other guidance documents stress that clinicians should understand and evaluate tools, not that they must use them. The NYC Bar’s 2025 survey likewise states that no jurisdiction currently requires lawyers to use AI, while acknowledging that the competence baseline is likely to evolve. That leaves courts with flexibility to recognize underuse arguments in future cases without freezing today’s technology landscape into doctrine.
Insurance Implications And Emerging Coverage Questions
Professional liability insurers are beginning to grapple with how AI affects coverage and risk assessment. The Association of Corporate Counsel’s 2025 guidance advises lawyers to examine their malpractice insurance policies to determine the extent of coverage for AI tool usage, noting that different policies, including medical device insurance, cyber insurance, and traditional malpractice insurance, may overlap when AI is involved. The guidance recommends coordinating with attorneys and insurance brokers to identify appropriate coverage levels.
As AI adoption accelerates, insurers face questions about whether AI use increases or decreases malpractice risk, how to price coverage for firms using AI tools, and whether separate cyber or technology liability policies are needed to cover AI-related failures. These insurance market responses will likely influence how quickly AI becomes embedded in standard professional workflows.
Proving Breach And Causation When AI Is In The Loop
Even if duty and breach theories adapt to AI, causation remains a practical hurdle. Plaintiffs must still show that using, misusing, or failing to use an AI system made a material difference to the outcome. In medicine, that often means reconstructing counterfactual decision paths, comparing what a reasonable clinician would have done with and without the tool, and examining logs that show what recommendations were issued and how they were framed.
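A concrete, if hypothetical, log structure makes the evidentiary point easier to see. The sketch below assumes a decision-support system that records each recommendation, how it was displayed, and what the clinician did with it; the schema, field names, and helper function are illustrative and are not drawn from any particular product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RecommendationLogEntry:
    """Hypothetical audit entry from a clinical decision-support tool."""
    patient_ref: str       # de-identified reference, not raw identifiers
    model_version: str     # which model produced the recommendation
    recommendation: str    # what the system suggested
    confidence: float      # score shown to the clinician, 0.0 to 1.0
    displayed_as: str      # how it was framed, e.g., "advisory banner"
    clinician_action: str  # "accepted", "overridden", or "not viewed"
    override_reason: Optional[str] = None

def divergence_entries(
    log: list[RecommendationLogEntry],
) -> list[RecommendationLogEntry]:
    """Return entries where the tool and the clinician diverged.

    These are the entries a causation analysis would focus on: situations
    where following, or declining to follow, the recommendation could
    plausibly have changed the outcome.
    """
    return [e for e in log if e.clinician_action in {"overridden", "not viewed"}]
```

Whether records of this kind exist, are retained, and can actually be obtained is often the practical battleground.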
The Frontiers in Medicine review describes how many proposed liability models depend on access to technical evidence, including validation data, performance metrics, and post-deployment monitoring records. In practice, that discovery may collide with trade secret and regulatory protections for medical devices and software. Similar tensions exist in legal practice when firms deploy third-party AI systems whose training data and architectures are proprietary. Courts are only beginning to address how far litigants can probe these details to prove negligence.
In law, ethics opinions sidestep some of these evidentiary complications by refocusing on process. The Bar Examiner explainer on ABA Opinion 512 and the North Carolina opinion on AI use both stress documentation of how tools are selected, calibrated, and supervised. In a malpractice context, that documentation may become central evidence when courts decide whether reliance on AI was reasonable or whether a firm ignored obvious warning signs.
Governance Frameworks As Evidence Of Reasonable Care
Organizational AI governance frameworks are already influencing how regulators and courts talk about reasonable use. The U.S. National Institute of Standards and Technology released the AI Risk Management Framework 1.0 in 2023 and maintains an overview page on its AI RMF initiative. The framework is voluntary, but it lays out concrete practices for mapping, measuring, and managing AI risks, including documentation, impact assessment, and human oversight. Agencies, bar groups, and private organizations now cite the AI RMF when describing prudent governance.
Court and bar-led initiatives echo those themes. The Arizona Supreme Court’s steering committee on artificial intelligence issued ethical best practices for lawyers and judges using generative AI, which urge institutions to inventory AI tools, restrict use to approved systems, and provide training. The NYC Bar’s AI ethics survey likewise encourages firms to adopt written AI policies, including disclosure expectations and verification protocols.
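What a written policy and tool inventory might capture can also be sketched in a few lines. The structure below is purely illustrative; the field names and the vendor reference are hypothetical, and nothing in the bar guidance mandates this form.

```python
from dataclasses import dataclass

@dataclass
class ApprovedToolEntry:
    """Hypothetical row in an organization's approved-AI-tool inventory."""
    tool_name: str                    # the approved system
    permitted_uses: tuple[str, ...]   # tasks staff may use it for
    prohibited_uses: tuple[str, ...]  # tasks it must not be used for
    training_required: bool           # users must complete training first
    verification_protocol: str        # how outputs are checked before use
    monitoring_owner: str             # who reviews performance over time

inventory = [
    ApprovedToolEntry(
        tool_name="contract-review assistant (hypothetical vendor)",
        permitted_uses=("clause extraction", "first-pass issue spotting"),
        prohibited_uses=("final legal advice", "court filings without review"),
        training_required=True,
        verification_protocol="attorney reviews every flagged clause",
        monitoring_owner="general counsel's office",
    ),
]
```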
In a negligence case, these governance artifacts may operate much like industry guidelines or internal policies do today. An organization that can show it followed a recognized framework, trained professionals on specific tools, and monitored performance can argue that it exercised reasonable care. An organization that adopted policies but ignored them in day-to-day practice may find that these same documents supply plaintiffs with a roadmap to breach.
Cross-Border Pressures On Professional Duty
Standard of care analysis is becoming more complex as AI tools and professional services cross borders. Medical AI systems approved under European or Canadian regulatory regimes may be deployed in U.S. hospitals. Legal AI tools may be designed to comply with both domestic ethics rules and foreign privacy or data localization laws. Professionals who rely on these systems must navigate overlapping regulatory expectations even when liability will be decided under a single jurisdiction’s law.
International guidance increasingly frames AI as a professional responsibility issue. The Canadian Judicial Council’s guidelines on AI use in courts emphasize that AI should assist, not replace, judicial decision-making and that risks must be managed proactively. Medical regulators such as the College of Physicians and Surgeons of Saskatchewan have issued guidance on AI in medical practice that underscores continued professional accountability. These statements provide additional reference points for courts assessing what reasonable professionals should know about AI risks even when local statutes are silent.
What A Defensible AI Standard Of Care Looks Like
No jurisdiction has yet declared that professionals must use AI in particular tasks, and there are few reported negligence cases decided squarely on AI use or nonuse. Yet the direction of travel is visible. Ethics opinions, medico-legal guidance, and governance frameworks all converge on a simple proposition. Professionals remain responsible for the tools they choose, the outputs they accept, and the workflows they design.
In practice, that means at least five expectations are taking shape. Professionals should understand the capabilities and limitations of the AI tools they adopt. They should verify substantive outputs before relying on them in high-stakes decisions. They should protect confidential data when interacting with third-party systems. They should address the duty of disclosure by informing patients or clients when AI substantially influences a high-stakes decision. Finally, they should be prepared to explain and document how AI fits into their practice. As courts and regulators continue to integrate these expectations into doctrine, the standard of care will not be defined by AI itself, but by how professionals govern it.
Sources
- American Bar Association: “ABA issues first ethics guidance on a lawyer’s use of AI tools” (News release, July 29, 2024)
- American Bar Association: Model Rules of Professional Conduct
- Arizona Supreme Court Steering Committee on Artificial Intelligence: “Generative AI: Ethical Best Practices for Lawyers and Judges” (Nov. 14, 2024)
- Association of Corporate Counsel: “Legal Ethics Concerns When Using Generative Artificial Intelligence in Client Matters” (June 4, 2025)
- The Bar Examiner: “Generative Artificial Intelligence Tools – ABA Formal Opinion 512 Provides Needed Guidance” (Fall 2024)
- Canadian Judicial Council: “Guidelines on the Use of Artificial Intelligence in Canadian Courts” (Sept. 2024)
- Canadian Medical Protective Association: “AI in medical practice” (Oct. 2024)
- Canadian Medical Protective Association: “The medico-legal lens on AI use by Canadian physicians” (Sept. 2024)
- Cestonaro, Clara et al.: “Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review,” Frontiers in Medicine 10:1305756 (Nov. 27, 2023)
- Chan, Gary Kok Yew: “Medical AI, Standard of Care in Negligence and Tort Law,” in AI, Data and Private Law: Translating Theory into Practice (2021)
- Chew, Kimberly et al.: “How Physicians Might Get in Trouble Using AI (or Not Using AI),” Missouri Medicine 122(3):169-172 (May-June 2025)
- College of Physicians and Surgeons of Saskatchewan: “Guidance on AI in Medical Practice” (Nov. 2024)
- Mata v. Avianca, Inc., No. 1:2022cv01461 – Document 54 (S.D.N.Y. 2023)
- New York City Bar Association: “Current Ethics Opinions and Reports Related to Generative Artificial Intelligence” (May 28, 2025)
- North Carolina State Bar: 2024 Formal Ethics Opinion 1 – Use of Artificial Intelligence in a Law Practice (Nov. 1, 2024)
- U.S. National Institute of Standards and Technology: AI Risk Management Framework 1.0 (Jan. 2023)
- U.S. National Institute of Standards and Technology: AI Risk Management Framework (overview page)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: When Legal AI Gets It Wrong, Who Pays the Price?

Jon Dykstra, LL.B., MBA, is a legal AI strategist and founder of Jurvantis.ai. He is a former practicing attorney who specializes in researching and writing about AI in law and its implementation for law firms. He helps lawyers navigate the rapid evolution of artificial intelligence in legal practice through essays, tool evaluation, strategic consulting, and full-scale A-to-Z custom implementation.
