When Algorithms Suppress Evidence, Brady Rules Still Apply

Artificial intelligence is now embedded in the criminal justice process, from digital forensics to predictive policing. When an algorithm’s output favors a defendant but never reaches the defense, the omission may look less like oversight and more like a constitutional failure. Courts must now decide whether suppressing machine-generated evidence is any different from suppressing human testimony, and whether prosecutors who hide what an AI found should face sanctions under the same standards that govern every other form of disclosure.

Recoding Brady for the Algorithmic Age

For six decades, the rule in Brady v. Maryland has required prosecutors to disclose evidence favorable to the accused. That obligation extends to impeachment material under Giglio v. United States and remains one of the profession’s most fundamental ethical duties. Yet none of these precedents anticipated evidence created by software. Algorithms that flag inconsistent witness statements, alternative suspects, or low confidence scores are producing new categories of potential exculpatory data. When that output is ignored or deleted, the defense never learns what the machine detected, and the line between negligence and suppression disappears.

Federal discovery obligations under Rule 16 increasingly intersect with Brady when digital outputs constitute part of the prosecution’s case file. Modern prosecutorial tools now include machine learning systems trained to search phone dumps, classify body camera footage, and summarize thousands of text messages. The Department of Justice’s December 2024 report on Artificial Intelligence and Criminal Justice documents the widespread adoption of algorithmic systems across federal law enforcement, including facial recognition, digital forensics, and predictive analytics. These same systems can also identify errors or bias in state reports, material that may favor defendants. If those results never appear in discovery, suppression is not a metaphor. It is a literal deletion of due process carried out by design choice, data filter, or ignorance of how an algorithm’s findings should be preserved.
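To make the "data filter" failure mode concrete, the sketch below shows how a hypothetical triage script might silently drop low-confidence or alternative-suspect matches before anything reaches the case file. It is illustrative only: the function names, fields, and threshold are invented for this example and do not depict any actual vendor system.

```python
# Illustrative sketch only: a hypothetical forensic triage filter.
# Names, fields, and the 0.85 threshold are invented for this example.

from dataclasses import dataclass

@dataclass
class Match:
    subject_id: str        # person the system matched
    confidence: float      # model-reported confidence, 0.0 to 1.0
    supports_charge: bool  # whether the match points at the charged defendant

def filter_for_report(matches: list[Match], threshold: float = 0.85) -> list[Match]:
    """Keep only high-confidence matches for the investigative report.

    Everything below the threshold -- including alternative suspects the
    model surfaced -- is silently dropped, never logged, and therefore
    never reaches discovery.
    """
    return [m for m in matches if m.confidence >= threshold]

candidates = [
    Match("defendant", 0.91, True),
    Match("alternative_suspect", 0.74, False),   # exculpatory lead
    Match("alternative_suspect_2", 0.69, False),
]

report = filter_for_report(candidates)
# Only the 0.91 match survives; the two alternative-suspect hits vanish by
# design choice, exactly the "data filter" suppression described above.
```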

Courts have long treated nondisclosure as a matter of constitutional fairness rather than intent. That logic applies equally to algorithms. Whether the concealment results from human will or machine opacity, the effect on the defendant’s right to a fair trial is the same. A prosecutor who relies on AI to find incriminating evidence must also be responsible for ensuring that its exculpatory traces reach the defense.

When Digital Tools Become Witnesses

Algorithms now function like silent investigators. Facial recognition platforms produce ranked candidate lists, probabilistic DNA software assigns match probabilities, predictive systems recommend which cases deserve resources. Each generates digital statements that can either reinforce or undermine the prosecution’s theory. Treating these outputs as non-evidentiary data allows favorable results to surface while inconvenient ones vanish. Scholars have described this as the “missing algorithm” problem, a void where transparency should be. Recent research shows that trade secret protections and licensing contracts often prevent defense access to model internals, even when those systems drive charging decisions.

In practice, that means a defendant may face a conviction supported by machine-assisted identification without ever seeing the software’s error rates, alternate matches, or low-confidence flags. Courts confronted similar issues in State v. Loomis, where a proprietary risk assessment algorithm informed sentencing but could not be scrutinized for bias. The same logic now extends to evidentiary AI. If a prosecutor cherry-picks which algorithmic outputs to disclose, the system effectively becomes a witness whose unfavorable testimony was silenced. The Brady rule was designed precisely to prevent that.

Defense attorneys have begun to respond. The National Association of Criminal Defense Lawyers has called on governments to refrain from contracting with facial recognition providers that do not make their algorithm source code, training data, and system parameters available for external validation and disclosure to the defense. In State v. Arteaga, a 2023 New Jersey appellate decision, a court held for the first time that defendants are entitled to detailed information about facial recognition searches under Brady, including system error rates, candidate lists, and analyst procedures. The principle is clear: once prosecutors use algorithmic tools to build a case, those tools’ internal records become part of the evidentiary chain, not proprietary curiosities beyond review.

Constitutional Rights vs. Proprietary Code

Prosecutors increasingly rely on commercial vendors whose systems remain closed to outside inspection. When defendants request access, agencies invoke intellectual property rights or cybersecurity exemptions. Yet due process cannot depend on a company’s licensing terms. Courts have confronted this tension in multiple jurisdictions, with the New Jersey appellate decision in Arteaga establishing that the government cannot outsource its constitutional obligations to private code. A forensic algorithm’s inner workings, including error rates and alternative outputs, are part of the prosecution’s evidence and therefore subject to disclosure.

Transparency mechanisms are emerging. The Stanford Institute for Human-Centered Artificial Intelligence and the Georgetown Law Center on Privacy & Technology both advocate for model logging requirements similar to chain-of-custody records: structured documentation showing how an algorithm processed inputs, generated outputs, and flagged anomalies. Such documentation could satisfy both proprietary concerns and Brady’s constitutional mandate. Without it, courts are left with evidence whose provenance is unknowable, and defendants who cannot challenge the digital witness against them.
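As a rough illustration of what such chain-of-custody style logging might look like, the sketch below appends one structured record per algorithm run, capturing the input hash, model version, every output, and any anomaly flags. The file name, field names, and example values are hypothetical; they are not drawn from any existing standard, vendor format, or the specific proposals cited above.

```python
# Minimal sketch of a hypothetical algorithmic chain-of-custody log entry.
# Field names and the audit_log.jsonl path are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone

def log_algorithm_run(input_bytes: bytes, model_version: str,
                      outputs: list[dict], anomalies: list[str],
                      path: str = "audit_log.jsonl") -> None:
    """Append one structured record per algorithm run.

    Recording *every* output, including low-confidence and contradictory
    ones, is what lets the log function like a chain-of-custody record
    that defense experts can later review.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
        "model_version": model_version,
        "outputs": outputs,      # full candidate list with scores
        "anomalies": anomalies,  # e.g. quality warnings, bias flags
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a facial recognition run that produced two candidates.
log_algorithm_run(
    input_bytes=b"<probe image bytes>",
    model_version="vendor-model 4.2 (hypothetical)",
    outputs=[
        {"candidate": "defendant", "score": 0.91},
        {"candidate": "alternative_suspect", "score": 0.74},
    ],
    anomalies=["low image quality flag raised"],
)
```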

Ultimately, the argument that “the algorithm is confidential” mirrors an older era’s claim that “the informant’s identity must remain secret.” Both erode fairness when applied categorically. Once a government actor relies on a tool to generate incriminating proof, that tool becomes part of the case file. Concealing its limitations is not innovation but obstruction under new branding.

Enforcement and Ethical Accountability

Traditional remedies for disclosure violations were built for human misconduct: new trials, dismissals, or professional discipline. Algorithmic suppression introduces a layer of deniability in which prosecutors can claim they never saw the hidden data because the machine filtered it out. That defense, while technologically plausible, is ethically hollow. The prosecutor remains responsible for what the government knows, including knowledge stored inside its digital systems. The American Bar Association’s Model Rule 1.1 on competence and Rule 3.8 on the special responsibilities of a prosecutor together demand reasonable familiarity with the technology used in practice. Ignorance of an algorithm’s capabilities cannot excuse suppression of its exculpatory findings.

Maryland courts have already rejected this logic. In Johnson v. State, a 2025 appellate decision, the court reversed a conviction after prosecutors claimed they did not know which facial recognition system police had used to identify the defendant. Prosecutorial ignorance of algorithmic tools, the court held, cannot excuse the failure to disclose their use, and the conviction was vacated because the defense was denied the ability to test the technology’s reliability.


Some reformers suggest that sanctions for AI evidence suppression should follow a graduated scale. Minor nondisclosures might trigger judicial orders to produce additional data, while repeated or deliberate concealment could justify dismissal or bar referral. Federal judges already possess supervisory authority to impose evidentiary sanctions for discovery misconduct. Extending that framework to algorithms would simply update existing doctrine.

More broadly, the profession must treat algorithmic literacy as part of prosecutorial competence. Training programs in model oversight, data validation, and disclosure procedures could prevent unintentional suppression before it occurs. The alternative is a generation of prosecutors who rely on tools they do not understand, producing cases they cannot fully vouch for. Ethical modernization is not optional; it is constitutional hygiene.

Building Judicial Oversight

Courts are beginning to craft procedural guardrails. The New Jersey appellate decision in Arteaga required pre-trial disclosure of algorithmic logs and system validation studies when prosecutors use facial recognition in investigations. Other jurisdictions have authorized in camera review by technical experts under protective orders. These methods preserve proprietary code while allowing meaningful defense access to reliability data. The approach parallels how courts manage confidential informants: limited disclosure, judicial supervision, and balancing tests that favor fairness over secrecy.

Institutional oversight will likely follow. The Department of Justice appointed its first Chief AI Officer in 2024 and established an Emerging Technology Board to develop comprehensive AI governance programs. Academic proposals suggest integrating algorithmic disclosure duties into Rule 16 of the Federal Rules of Criminal Procedure, explicitly defining digital outputs and confidence metrics as discoverable. Such reforms would formalize what ethical reasoning already demands: if the prosecution benefits from an algorithm’s intelligence, it must also bear the burden of its transparency.

Ultimately, judicial sanctioning power may become the primary enforcement tool. Courts that treat algorithmic nondisclosure as ordinary Brady misconduct will deter selective reliance on machine evidence. Those that ignore it will invite a new generation of appeals arguing that artificial intelligence quietly reshaped what defendants were allowed to see. The technology may be novel, but the remedy is not: disclosure remains the oxygen of fairness, and due process still requires that every piece of state-generated knowledge, human or machine, be subject to sunlight.

This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, studies, and sources cited are publicly available through official publications and reputable outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.

See also: What Happens When Artificial Intelligence Learns a Client’s Secret
