New Jersey Building the Playbook for AI-Written Briefs Through a Layered Approach

As courts nationwide wrestle with how to govern generative AI in filings, New Jersey has been laying the groundwork for a functioning model. The state Supreme Court has issued guidance for lawyers, the judiciary has defined how judges may use AI, and a federal judge now requires disclosure of AI-assisted briefs. Together, these measures form an ethical, institutional, and procedural framework that treats AI as a tool requiring discipline, transparency, and human accountability.

Three Layers of Oversight

Released on Jan. 24, 2024, the New Jersey Supreme Court’s Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers tie generative AI directly to the state’s Rules of Professional Conduct, emphasizing the duties of competence (RPC 1.1), confidentiality (RPC 1.6), candor (RPC 3.3), and supervision (RPC 5.1–5.3). Lawyers may use AI in drafting briefs, but they must verify accuracy and protect client data. The guidelines invite public comment, signaling a living policy rather than a final rule.

The next day, the judiciary released its companion Statement of Principles for the Judiciary’s Ongoing Use of Artificial Intelligence. The policy permits AI for “preliminary gathering and organization of information” but bars its use for judicial decision-making. This dual-track approach, governing both lawyers and judges, remains rare. California’s new Rule 10.430 similarly limits internal court AI, but New Jersey published both sides of the equation first.

At the federal level, District Judge Evelyn Padin of the District of New Jersey requires lawyers to disclose whether AI helped draft a filing and to certify that all citations and factual assertions have been checked by a human. That filing-level disclosure, where ethics meets procedure, gives New Jersey the full stack: statewide guidance, judicial principles, and courtroom enforcement.

A Data-Driven Regulatory Culture

The New Jersey judiciary is collecting data rather than just issuing warnings. Its attorney survey on generative AI use mapped how lawyers already integrate such tools into practice, and the results informed continuing-education planning and future updates to the preliminary guidelines. The courts have also issued an AI notice for self-represented litigants warning about the risks of generative tools, an unusual public-education move that underscores New Jersey’s public-facing approach. Few jurisdictions are measuring real-world adoption; New Jersey’s empirical approach suggests governance based on observation, not fear.

Where Other States Diverge

California’s Rule 10.430 requires every court to adopt an internal AI policy by 2025 but does not yet mandate disclosure from outside counsel. Florida’s Ethics Opinion 24-1 stresses confidentiality and billing honesty but leaves courtroom disclosure optional. Texas federal judges have issued their own orders following AI-generated citation scandals, but statewide guidance remains fragmented. New Jersey alone aligns all three layers—lawyer ethics, judicial use, and procedural disclosure—within a single framework.

Ethics Duties, Reaffirmed Not Rewritten

The New Jersey Supreme Court’s message is that no new ethics rules are needed, only renewed attention to existing ones. Competence now includes understanding AI’s limits. Confidentiality means knowing what happens to client data fed into public models. Supervision extends to technology vendors. Candor requires verifying every cite. The guiding sentence appears early in the document: “A lawyer remains fully responsible for any AI-generated content used in their work.”

The CLE and Competence Frontier

New Jersey is also weighing a formal duty of technological competence and a mandatory CLE credit in technology. If adopted, New Jersey would join only a handful of states that connect AI literacy to continuing-education compliance. The move reflects a forward-looking view: normalization, not novelty, will define legal AI.

Implications for Practice

Firms practicing in New Jersey should establish written AI-use policies, train staff on verification protocols, and vet third-party tools for confidentiality. Before filing, lawyers should confirm whether the presiding judge requires disclosure of AI assistance. In-house counsel should ensure outside firms mirror these standards. The safest rule across all tiers: if AI touched the document, a human must be able to defend every word.

The Broader Meaning

New Jersey’s model treats AI not as an ethical novelty but as a governance problem. By codifying responsibilities for both bench and bar, the approach demonstrates that regulation can evolve faster than panic. In doing so, it positions the state as a laboratory for a post-panic era, where law and machine coexist under verifiable human command.

This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.

See also: When Machines Decide, What Are the Limits of Algorithmic Justice?
