When AI Algorithms Help Decide Who Gets Charged
Prosecutorial discretion has always existed in the uneasy space between judgment and power. But as artificial intelligence begins to assist in charging decisions, case screening, and predictive policing, that discretion is acquiring a digital accent. Algorithms now sort defendants by probability, rank neighborhoods by “risk,” and recommend charges through unseen data models. If those systems encode bias, the question becomes constitutional: can an algorithm commit selective prosecution?
From Human Judgment to Machine Recommendation
Across the United States, prosecutorial offices are experimenting with AI-assisted tools to triage caseloads, analyze evidence, and identify patterns of criminal behavior. Some systems operate as dashboards that visualize data on prior convictions, while others estimate a defendant’s likelihood of reoffending. Local prosecutors in Los Angeles County and Cook County have piloted analytics platforms such as Palantir Gotham, Axon’s Justice Data Platform, and Truleo to manage case backlogs and assess charging consistency. The goal is efficiency, but the risk is algorithmic bias that reinforces the very inequities the data was meant to dissolve.
Unlike sentencing algorithms that judges can review in open court, prosecutorial algorithms typically function inside the black box of discretion. Their models are trained on historical data that already reflects unequal policing patterns, from stop-and-frisk to narcotics enforcement. When that bias is digitized, it replicates historical disparities under a veil of objectivity: the result is not a transparent machine but a mechanical echo of past prejudice presented as neutral analysis.
The Equal Protection Standard: Intent Versus Impact
Under United States v. Armstrong, a defendant claiming selective prosecution must show that others similarly situated were not prosecuted and that the decision was motivated by a discriminatory purpose. The bar is high because courts presume prosecutors act in good faith. Statistical disparities alone are insufficient; proof of intent is required. But artificial intelligence complicates that calculus: algorithms discriminate by pattern, not prejudice. Bias emerges through training data and model design rather than overt intent.
This creates a constitutional blind spot. Equal Protection doctrine is built on human motive, not machine learning. When prosecutors rely on algorithmic recommendations that amplify racial or socioeconomic disparities, defendants may experience unequal treatment without any identifiable human actor intending it. The traditional standard of intent collapses in a system where discrimination is embedded in code rather than conscience.
Legal scholars now question whether the Equal Protection Clause can meaningfully address discrimination generated by machine-learning systems. Aziz Z. Huq, in A Right to a Human Decision (Virginia Law Review, 2020), argues that automated government decision-making erodes the constitutional requirement of individualized judgment. Ari Ezra Waldman’s Power, Process and Automated Decision-Making (Fordham Law Review, 2019) demonstrates how algorithmic scoring reproduces structural bias even without human intent. Barry Friedman and Danielle Citron’s Indiscriminate Data Surveillance (Virginia Law Review, 2024) extends the concern to privacy law, warning that unchecked automation can amplify inequality. Together these works suggest that when government actors adopt biased tools, they inherit not just efficiency but the system’s embedded discrimination.
When Data Bias Becomes a Constitutional Risk
Algorithmic bias in prosecution can originate from several sources: skewed policing data, selective data labeling, and unexamined proxy variables such as zip code or employment history. These factors may serve as indirect indicators of race or class. In predictive policing, for example, crime prediction maps often reinforce over-policing of minority neighborhoods, generating self-perpetuating cycles of surveillance and arrest. If prosecutors rely on those outputs to justify charging decisions, the effect may mirror selective enforcement even if no individual intended it.
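To make the proxy-variable mechanism concrete, here is a minimal, illustrative sketch in Python. The data is synthetic and the feature names (a “high-enforcement zip code” flag, a historical charge label) are hypothetical; the point is only that a score computed without any racial input can still diverge across groups when a correlated proxy carries the signal.

```python
# Illustrative sketch only: synthetic data, hypothetical feature names.
# Shows how a charging-risk score that never sees race can still be skewed
# when trained on a race-correlated proxy (here, a zip-code enforcement flag).
import random

random.seed(0)

def make_case(group):
    # Assumption for illustration: historical over-policing means cases from
    # group "B" are far more likely to come from a heavily enforced zip code,
    # so the proxy feature correlates with group membership.
    p_zip = 0.8 if group == "B" else 0.2
    high_enforcement_zip = 1 if random.random() < p_zip else 0
    # The historical label reflects enforcement intensity, not true offending.
    p_charge = 0.6 if high_enforcement_zip else 0.3
    prior_charge = 1 if random.random() < p_charge else 0
    return {"group": group, "zip": high_enforcement_zip, "label": prior_charge}

cases = [make_case("A") for _ in range(5000)] + [make_case("B") for _ in range(5000)]

# "Model": the observed historical charge rate for each zip value.
# Race is never an input, yet the proxy carries the group signal.
rate = {}
for z in (0, 1):
    subset = [c for c in cases if c["zip"] == z]
    rate[z] = sum(c["label"] for c in subset) / len(subset)

def risk_score(case):
    return rate[case["zip"]]

for g in ("A", "B"):
    group_cases = [c for c in cases if c["group"] == g]
    avg = sum(risk_score(c) for c in group_cases) / len(group_cases)
    print(f"average recommended-risk score, group {g}: {avg:.2f}")
```

The gap in the printed averages arises entirely from the historical data and the proxy feature, with no discriminatory intent anywhere in the pipeline, which is precisely the gap the Armstrong intent standard struggles to capture.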
The National Institute of Standards and Technology’s AI Risk Management Framework (January 2023) remains the leading voluntary guidance for mitigating such bias. It emphasizes traceability and documentation as cornerstones of accountability. Without clear records of data provenance, testing, and model performance, prosecutors cannot prove their systems meet fairness standards or that human review was meaningful.
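What such traceability might look like in practice is sketched below. This is a hypothetical record format assumed for illustration; the NIST framework describes documentation goals rather than any particular schema, and every field name here is invented.

```python
# Illustrative sketch of a provenance/audit record a prosecutor's office might
# keep for each algorithm-assisted recommendation. Field names are hypothetical;
# the NIST AI RMF sets documentation goals, not a specific schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class RecommendationAuditRecord:
    case_id: str
    model_name: str
    model_version: str
    training_data_source: str      # provenance of the underlying data
    training_data_cutoff: str      # when the training data window ends
    inputs_used: list              # features shown to the model
    model_output: str              # e.g., recommended priority or score
    fairness_tests_run: list       # disparity checks and when they were run
    human_reviewer: str            # who made the final decision
    human_decision: str            # final decision, which may differ from the model
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = RecommendationAuditRecord(
    case_id="2025-CF-001234",
    model_name="case-triage-model",     # hypothetical tool name
    model_version="1.4.2",
    training_data_source="county arrest records, 2015-2023",
    training_data_cutoff="2023-12-31",
    inputs_used=["offense_code", "prior_charges", "case_age_days"],
    model_output="priority: high",
    fairness_tests_run=["charge-rate disparity by race, 2025-09-01"],
    human_reviewer="ADA J. Doe",
    human_decision="declined to charge",
)

print(json.dumps(asdict(record), indent=2))  # human-readable audit trail
```

Keeping a record like this for each algorithm-assisted recommendation would let an office show, after the fact, what data the tool relied on, what it recommended, and whether a human reviewer actually exercised independent judgment.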
The Department of Justice’s Artificial Intelligence and Criminal Justice Final Report (December 2024) offers similar guidance, urging that AI tools “enhance, not replace, human judgment.” It also calls for independent auditing, bias assessments, and documentation of any algorithmic outputs used in prosecutorial or investigative contexts. These recommendations remain voluntary, but they reflect a federal understanding that transparency must precede trust.
European frameworks go further. The European Union’s AI Act classifies predictive policing and criminal-risk assessment systems as “high-risk,” requiring human oversight, data documentation, and fairness evaluations. The United Kingdom’s Crown Prosecution Service mandates verification whenever AI tools assist in evidence analysis. These models illustrate a global shift from post-hoc defense to proactive accountability.
Emerging Test Cases and Comparative Signals
Few U.S. cases have directly addressed algorithmic bias in prosecutorial decision-making, but analogues exist. In State v. Loomis (2016), the Wisconsin Supreme Court upheld a sentencing court’s consideration of the proprietary COMPAS risk assessment, but only alongside written warnings about the tool’s limitations, cautioning that opaque or determinative reliance on such scores could raise due process concerns. That same reasoning could extend to prosecutorial systems influencing charging or diversion recommendations. If a defendant is charged because an algorithm predicts higher risk based on race-correlated data, the question becomes whether Equal Protection applies to the machine’s bias or to the human who used it.
Comparative jurisprudence provides early indicators. The European Court of Human Rights has begun examining algorithmic discrimination under Article 14 of the European Convention, focusing on measurable outcomes rather than intent. Canada’s Office of the Privacy Commissioner has urged limits on automated decision-making in law enforcement and parole. Both frameworks treat discrimination as a process, not a motive – a distinction that U.S. Equal Protection law may soon have to reconcile.
Prosecutorial Accountability in the Algorithmic Age
The challenge for prosecutors is balancing innovation with constitutional fidelity. AI tools can help identify wrongful convictions, streamline evidence review, and flag inconsistent sentencing. But absent disclosure and auditability, they also risk transforming discretion into delegation. If algorithms recommend charges or plea offers, prosecutors must ensure that human judgment remains the final arbiter, not a rubber stamp.
California’s SB 524, signed in October 2025, governs law enforcement use of AI in report writing. It requires disclosure whenever AI assists in drafting police reports, mandates retention of first AI-generated drafts, and prohibits vendors from sharing law enforcement data. Though not directed at prosecutors, it reflects the growing expectation that every stage of justice administration—from investigation to reporting—must leave a human-readable audit trail.
Colorado’s SB 24-205, signed in May 2024 and effective June 30, 2026, takes a consumer-focused approach. It regulates “high-risk” AI systems used in consequential decisions such as employment, housing, and financial services, emphasizing transparency and bias mitigation. While not targeted at prosecutorial tools, its structure hints at how states might eventually legislate oversight for government or justice-related AI systems. The statute grants enforcement authority to the Attorney General but offers no private right of action.
The Council on Criminal Justice launched its Task Force on Artificial Intelligence in June 2025. Its Guiding Principles Framework, released October 30, 2025, calls for transparent procurement, independent audits, and fairness testing. The report warns that “accountability must evolve alongside automation,” signaling that ethical use of AI in prosecution is no longer optional but foundational to legitimacy.
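One concrete form such fairness testing could take is a periodic disparity audit of a tool’s charging recommendations. The sketch below computes a simple demographic-parity gap and an “80% rule”-style ratio on hypothetical aggregate counts; a real audit would control for offense severity, criminal history, and other legitimate factors.

```python
# Illustrative fairness audit: demographic-parity gap in algorithm-recommended
# charges. Counts are hypothetical; a real audit would adjust for legitimate
# case characteristics before drawing conclusions.

recommended = {"group_A": 210, "group_B": 340}   # cases flagged for charging
screened = {"group_A": 1000, "group_B": 1000}    # total cases screened

rates = {g: recommended[g] / screened[g] for g in screened}
parity_gap = abs(rates["group_A"] - rates["group_B"])
ratio = min(rates.values()) / max(rates.values())  # "80% rule"-style ratio

for g, r in rates.items():
    print(f"{g}: recommended-charge rate = {r:.1%}")
print(f"demographic-parity gap: {parity_gap:.1%}")
print(f"rate ratio (lower/higher): {ratio:.2f}")

# A hypothetical office policy: escalate for independent re-audit whenever
# the ratio falls below 0.80.
if ratio < 0.80:
    print("disparity exceeds review threshold; escalate for independent audit")
```

Publishing figures like these alongside the audit methodology would be one way to operationalize the task force’s call for transparency and independent review.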
Rethinking Equal Protection
Selective prosecution claims based on AI bias could force courts to confront the gap between human intent and systemic effect. The Equal Protection Clause has historically required proof that a prosecutor acted “because of” rather than “in spite of” discriminatory impact. But algorithms operate on correlation, not motive. As technology becomes integral to charging and diversion, the line between human and machine agency may blur beyond recognition. Equal Protection may need to evolve from a doctrine of motive to one of design.
Legal scholars and policymakers are beginning to recognize this shift, arguing that constitutional safeguards such as due process must extend to automated decision-making through transparency, bias audits, and contestability. They also warn that unchecked automation can erode procedural fairness and equality in practice. Internationally, the Organisation for Economic Co-operation and Development’s Governing with Artificial Intelligence: Are Governments Ready? (June 2024) urges member nations to ensure that algorithmic decisions are explainable and reviewable in court. The United Kingdom’s Solicitors Regulation Authority similarly warns that delegating legal judgment to automated systems without human oversight may breach ethical duties.
In the United States, the first true test may arise when a defendant challenges a prosecutorial decision influenced by algorithmic outputs. Whether framed as selective prosecution, due process, or equal protection, such a case would ask courts to define intent in the absence of a human actor. The result could determine how deeply automation can enter the prosecutorial domain before colliding with constitutional limits.
Prosecutorial discretion once depended on individual conscience; soon it may depend on code. As artificial intelligence becomes a fixture of justice administration, courts will face a defining question: when does delegation become discrimination? The answer will shape not only the future of AI in prosecution but the meaning of equality itself in the digital age.
Sources
- California Legislature: “SB 524: Law Enforcement Agencies: Artificial Intelligence” (2025)
- Colorado General Assembly: “SB 24-205: Consumer Protections for Artificial Intelligence” (2024)
- Council on Criminal Justice: “Task Force on Artificial Intelligence: Guiding Principles Framework” (October 2025)
- Department of Justice: “Artificial Intelligence and Criminal Justice Final Report” (December 2024)
- European Commission: “Regulatory Framework for AI / AI Act” (2024)
- Fordham Law Review: “Power, Process and Automated Decision-Making” by Ari Ezra Waldman (2019)
- National Institute of Standards and Technology: “AI Risk Management Framework” (January 2023)
- Organisation for Economic Co-operation and Development: “Governing with Artificial Intelligence: Are Governments Ready?” (June 2024)
- UK Crown Prosecution Service: “How the CPS Will Use Artificial Intelligence” (2025)
- UK Solicitors Regulation Authority: “Compliance Tips for Solicitors Regarding the Use of AI and Technology”
- United States v. Armstrong, 517 U.S. 456 (1996)
- Virginia Law Review: “A Right to a Human Decision” by Aziz Z. Huq (2020)
- Virginia Law Review: “Indiscriminate Data Surveillance” by Barry Friedman and Danielle Citron (2024)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, statutes, and sources cited are publicly available through official publications and reputable outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: AI Evidence on Trial: When Algorithms Contradict Forensic Experts
