Guiding Without Advising: How State Courts Are Testing AI for Pro Se Assistance

Across the United States, courts are searching for ways to bridge the widening gap between those who can afford counsel and those who cannot. Generative artificial intelligence now promises instant guidance for self-represented litigants, but it also tests the limits of judicial ethics, data governance, and procedural fairness. As several states explore AI in court operations, the debate is shifting from technology to trust.

From Static Forms to AI Conversations

State courts have spent decades refining online self-help portals and document libraries. The next phase is conversational: using large language models to explain procedures, locate the right forms, and triage filings. Consider a self-represented tenant facing eviction who types “I got a notice to leave but I paid my rent” into a court-based chatbot. An AI system could respond with jurisdiction-specific deadlines, suggest relevant defenses, and generate a response form. The question is not whether such systems are technically feasible but whether courts can deploy them without crossing ethical lines.
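
To make that boundary concrete, the sketch below shows how a rule-based triage layer might map the tenant's message to curated procedural information rather than generated advice. Everything in it is hypothetical: the keywords, the five-day deadline, and the form number LT-ANSWER-01 are illustrative placeholders, not any court's actual rules.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProceduralGuidance:
    case_type: str
    response_deadline_days: int  # jurisdiction-specific; placeholder value
    form_id: str                 # placeholder form identifier
    note: str

# Hypothetical keyword-to-guidance rules a court could curate and review.
TRIAGE_RULES = [
    ({"eviction", "notice to leave", "notice to vacate"},
     ProceduralGuidance(
         case_type="landlord-tenant",
         response_deadline_days=5,    # placeholder, not a real deadline
         form_id="LT-ANSWER-01",      # placeholder form number
         note="Procedural information only; not legal advice.",
     )),
]

def triage(user_text: str) -> Optional[ProceduralGuidance]:
    """Map plain-language input to curated procedural information.

    Unrecognized input returns None so a human clerk handles it
    instead of the system guessing.
    """
    text = user_text.lower()
    for keywords, guidance in TRIAGE_RULES:
        if any(keyword in text for keyword in keywords):
            return guidance
    return None

if __name__ == "__main__":
    print(triage("I got a notice to leave but I paid my rent"))
```

The design choice matters: because every answer is drawn from a reviewed rule table rather than generated text, the system stays on the information side of the line the next section describes.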

Utah has advanced furthest. Within its regulatory sandbox, the Office of Legal Services Innovation authorizes entities to test innovative legal-service models under court supervision. These models may include software tools, document-generation platforms, or non-traditional providers serving small-claims and landlord-tenant matters. The sandbox is designed to measure access-to-justice outcomes and consumer risk. While many states remain in exploratory phases, Utah’s framework demonstrates how courts can balance innovation with accountability through structured experimentation.

Where Information Ends and Legal Advice Begins

The key distinction is between information and advice. Court staff may explain procedure but cannot suggest legal strategy, and a chatbot that crosses that line from inside a court website may raise unauthorized-practice-of-law concerns. The American Bar Association’s Formal Opinion 512, issued in July 2024, directs lawyers to verify AI outputs, maintain confidentiality, and disclose when technology assists their work. For courts, the same principles apply: transparency, competence, and accountability. The National Center for State Courts (NCSC) urges jurisdictions to adopt clear AI policies that define procurement, data governance, testing, and oversight before deployment.

Arizona has focused on expanding access through technology. The judiciary created a Steering Committee on Artificial Intelligence and the Courts and has piloted AI-generated avatars that deliver summaries of rulings to the public. Officials emphasize that such tools must simplify procedures and communication without straying into counsel territory.

California joined the discussion through the Judicial Council of California, which established an Artificial Intelligence Task Force in May 2024 to evaluate responsible use of generative AI in court administration, records management, and self-help services. The task force developed Rule 10.430 and Standard 10.80, adopted in July 2025 and effective September 1, 2025, establishing comprehensive guidelines for AI use in California courts. These initiatives show how courts are moving from experimentation toward structured governance of AI use in the judicial system.

Who Owns the Data?

Every AI initiative in the judicial sphere raises the same structural question: who holds the data? Litigant information, including addresses, claims, and personal identifiers, qualifies as court-record material protected by state and federal privacy statutes. The Organisation for Economic Co-operation and Development warns that even anonymized judicial datasets can be used to re-identify individuals when merged with external sources. For that reason, Utah requires sandbox participants to store data locally and prohibits reuse for commercial model training. Other states considering AI deployment have echoed similar conditions in preliminary discussions, emphasizing that any future deployment would exclude live-case data until formal safeguards are approved.
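
Conditions of this kind can be encoded as machine-checkable configuration rather than policy prose alone, so that a deployment fails closed when a rule is violated. The sketch below is illustrative only; the key names and values are assumptions for this article, not the Utah sandbox's actual requirements.

```python
# Hypothetical sandbox data-handling policy expressed as configuration.
# Every key and value here is an illustrative assumption, not a real rule.
SANDBOX_DATA_POLICY = {
    "storage_region": "local",            # no vendor-cloud or out-of-state copies
    "allow_model_training_reuse": False,  # bars commercial training on court data
    "live_case_data": False,              # synthetic or redacted data only
    "retention_days": 90,                 # placeholder retention window
}

def policy_permits(action: str) -> bool:
    """Gate data-handling actions against the declared policy.

    Unknown actions are denied by default, the fail-closed posture
    a court would presumably want.
    """
    if action == "train_model":
        return SANDBOX_DATA_POLICY["allow_model_training_reuse"]
    if action == "ingest_live_case":
        return SANDBOX_DATA_POLICY["live_case_data"]
    return False
```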

Auditing remains another unresolved area. The NCSC recommends independent review of accuracy, bias, and explainability before any AI system goes public. Some states are adapting the NIST AI Risk Management Framework to judicial contexts, treating each tool as a high-risk system requiring traceable decision logs. Without such documentation, courts risk undermining both appellate review and public confidence. Cost considerations further complicate deployment: enterprise-grade AI systems can require substantial initial investment and ongoing licensing fees, placing them beyond reach for many smaller jurisdictions without state or federal funding support.
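
A traceable decision log of the kind such a framework contemplates might look like the minimal sketch below. The schema is an assumption made for illustration, not an NCSC or NIST specification; storing hashes rather than raw text is one possible way to support audits without retaining litigant-identifying content, though whether that satisfies a given state's records rules is a policy question.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_interaction(model_version: str, prompt: str, response: str,
                       reviewer: str = "") -> dict:
    """Append a traceable record of one AI interaction to a JSONL log.

    Hashes stand in for the raw prompt and response so auditors can
    verify integrity without the log itself holding personal data.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
        "human_reviewer": reviewer or None,  # empty until a clerk signs off
    }
    with open("ai_decision_log.jsonl", "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return record
```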

Expanding Access, Expanding Risk

For proponents, AI offers the first scalable tool to address chronic underrepresentation in civil court. Research by the Pew Charitable Trusts shows that many people navigate civil matters such as eviction, debt collection, and family disputes without legal counsel. Properly designed AI guides could help narrow that gap by clarifying deadlines, terminology, and procedural steps that often confuse self-represented litigants. Yet these same systems expose courts to new forms of liability. If a litigant relies on faulty machine guidance, questions arise about due process and institutional responsibility. Courts therefore face a paradox: expanding help may also expand risk.

Legal technologists suggest phased deployment. Early versions should handle neutral tasks such as fee calculations or jurisdiction checks, while human clerks continue to review all filings. Only after measurable success should systems progress to generative explanations. This cautious approach reflects a common philosophy among courts exploring AI: study first, test later, disclose always.
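
A "phase one" helper in this spirit might look like the sketch below: a deterministic fee lookup with no generative component, where any out-of-scope input escalates to a clerk. The dollar amounts and the small-claims cap are invented placeholders, not any jurisdiction's published schedule.

```python
from typing import Optional

SMALL_CLAIMS_LIMIT = 10_000   # placeholder jurisdictional cap, in dollars
FILING_FEE_TIERS = [          # (maximum claim amount, filing fee) placeholders
    (2_500, 35),
    (10_000, 75),
]

def small_claims_filing_fee(claim_amount: float) -> Optional[int]:
    """Return the filing fee for a claim, or None to escalate to a clerk.

    Deterministic lookup only: nothing is generated, so every answer
    can be traced back to a published fee schedule.
    """
    if claim_amount <= 0 or claim_amount > SMALL_CLAIMS_LIMIT:
        return None  # out of scope; route to a human rather than guess
    for cap, fee in FILING_FEE_TIERS:
        if claim_amount <= cap:
            return fee
    return None
```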

National Pattern Emerges

The convergence across states is visible. Utah’s sandbox formalizes experimentation under judicial oversight. Arizona explores AI tools for public communication. California’s Judicial Council has developed formal rules and standards for AI governance. Together they illustrate a national pattern: incremental modernization constrained by constitutional duty. As courts digitize dockets and integrate analytics, the line between administrative efficiency and legal reasoning must remain visible.

Throughout 2024 and 2025, national judicial organizations moved toward AI governance. The Conference of State Court Administrators published guidance on generative AI in August 2024, and the Council on Criminal Justice launched a Task Force on Artificial Intelligence in June 2024 to develop standards for responsible use in criminal justice.

Alongside the ABA’s Formal Opinion 512, multiple state court systems, including California, Illinois, Arizona, and Delaware, have established commissions or task forces to study AI governance, reflecting a national pattern of cautious oversight in which efficiency must never outpace accountability.

Accountability Over Automation

Whether through Utah’s regulated sandbox, Arizona’s technological pilots, or California’s comprehensive rules, the judiciary’s use of AI remains bounded by ethics and constitutional design. Courts may digitize assistance, but they cannot digitize judgment. The path forward lies in transparency, narrow purpose, and continuous human review. For lawyers observing these experiments, the question is not when machines will practice law but how courts will practice oversight.

This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, statutes, and sources cited are publicly available through court filings, government publications, and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use in the justice system.

See also: Georgia’s AI Framework: A Three-Year Strategy to Transform the Courts
