Municipal AI Procurement Tests the Limits of Public Records Law
The algorithm that calculates a teacher’s performance rating, predicts which families will neglect their children, or decides where police should patrol tonight is probably invisible to the public whose lives it affects. Municipal AI systems now make consequential decisions across every domain of local government, yet the documentation of how they work, if it exists at all, sits scattered across vendor portals, private foundation servers, and cloud dashboards that may not even be accessible to the agencies that rely on them.
AI Procurement Collides with Public Records Law
Municipalities procure these systems the same way they once bought office furniture: through standard vendor contracts, on compressed timelines, with minimal public input. The difference is scale and stakes. AI tools now sort benefit applications, triage 311 calls, flag buildings for inspection, and generate risk scores that influence bail decisions. Studies of automated decision systems in cities such as Washington, D.C., have documented dozens of such tools operating across housing, criminal justice, and social services, often discovered only through freedom-of-information requests and investigative reporting rather than proactive disclosure.
State public-records statutes were drafted for a world in which government information lived inside agencies, on servers they controlled. When a city subscribes to a cloud service that classifies procurement records, triages resident complaints, or screens benefit applications, the logs and model documentation often sit on vendor infrastructure. Requesters then face a three-part puzzle: the city’s own emails and contracts, the data that move through the system, and the algorithmic logic that drives decisions.
Early guidance is emerging, mostly at the state and local level. In Washington, the Secretary of State’s office has advised, in its guidance on managing generative AI records, that prompts and outputs from generative AI tools count as public records when they relate to official business. North Carolina’s local-government specialists likewise warn, in a bulletin on developing guidelines for the use of generative artificial intelligence in local government, that records created with generative AI are subject to the same retention and disclosure duties as email.
Police Tech, Private Foundations, and Shadow Procurement
The most explosive public-records disputes so far have involved policing technology. Predictive-policing pilots, real-time crime centers, and protest-monitoring tools are often funded or routed through quasi-private entities such as police foundations or economic-development authorities. That structure can move contract negotiations and data-sharing agreements out of the plain view of city councils and records clerks.
In Atlanta, years of controversy over the proposed public-safety training center known as “Cop City” triggered litigation over whether the Atlanta Police Foundation must respond to open-records requests. The Georgia Recorder reported that a state-court judge ordered the foundation to comply with Georgia’s Open Records Act in June 2025, and the Electronic Frontier Foundation described the ruling as a significant step toward preventing cities from outsourcing controversial projects to avoid transparency.
Similar dynamics surface wherever police foundations play an intermediary role in surveillance procurement. Records about contracts, vendor performance, and algorithmic risk assessments may sit on the foundation’s servers while the system operates in police precincts. Requesters then must persuade courts that a formally private body is effectively an arm of the government for public-records purposes, a theory some open-government precedents support but that remains contested in many jurisdictions.
Public-Records Lawsuits as Algorithm Discovery
For lawyers and advocates, public-records litigation has become a stand-in for algorithm discovery. In Chicago, the ACLU of Illinois sued the city under the state freedom-of-information law, in ACLU of Illinois v. City of Chicago, to obtain documents about police social-media monitoring after the city initially resisted disclosure. Investigations into predictive policing in Los Angeles and elsewhere began with records requests that sought contracts, policy memos, and basic descriptions of how risk scores were generated.
These disputes rarely deliver source code or full model documentation. Instead, they pry loose procurement files, system descriptions, vendor slide decks, and internal audits. Policy analyses of data-driven policing, such as Ronald J. Coleman’s article on capacity measurement in the New Mexico Law Review and Brookings’ essay on the threats data-driven policing poses to constitutional rights, show that even high-level descriptions of inputs and outputs can reveal feedback loops, neighborhood concentration effects, and disparate-impact risks that would otherwise remain hidden behind trade-secret claims.
In some cases, public-records litigation pushes past disclosure toward structural change. A December 2024 settlement over a predictive-policing program in Pasco County, Florida, required the sheriff’s office to pay four plaintiffs and permanently terminate its Intelligence-Led Policing program, and it acknowledged that the system had violated constitutional rights, a result detailed by the Institute for Justice in its case summary on predictive policing in Pasco County. Elsewhere, litigation over delayed or incomplete responses has led courts to order improved recordkeeping practices, which indirectly shape how agencies log AI-related decisions and retain vendor communications.
The stakes of these disputes extend beyond disclosure. Recent privacy enforcement actions signal that municipalities could face orders to delete AI systems entirely. When the Federal Trade Commission settled with Weight Watchers over its Kurbo app in 2022, the agency required the company to destroy algorithms trained on illegally collected data from minors. That precedent raises an uncomfortable question for city attorneys: if a municipality loses a public-records or privacy lawsuit over an AI system, could a court order algorithmic disgorgement, requiring the city to delete not just the training data but the model itself? The question remains untested in the municipal context, but the risk adds urgency to getting procurement and documentation practices right from the start.
Trade Secrets, Vendor Portals, and the Scope of Agency Records
The hardest questions arise at the intersection of trade-secret law and public-records statutes. Vendors routinely assert that model architecture, training data, and performance metrics are proprietary, even when governments rely on those systems for high-stakes adjudications. Public-records laws usually include exemptions for trade secrets, but definitions and burdens vary widely, leaving courts to decide how far secrecy can stretch when decisions affect liberty or benefits.
In practice, municipal lawyers must parse two linked questions: whether AI-related information is an agency record at all and, if so, whether an exemption applies. Documents on a vendor’s portal, or on a police foundation’s servers, may still be deemed agency records if the government uses them to make decisions or has the right to obtain them under contract. Case law on outsourced prison healthcare, private universities policing public spaces, and city-created nonprofits suggests that functional control often matters more than formal ownership.
Scholars of algorithmic transparency have urged governments to treat procurement as a leverage point. Paul Ohm and others have argued for “access to algorithms” through contracts, while Andrew Selbst, Margot Kaminski, and colleagues have documented how disclosure regimes struggle with automated systems. In the smart-city context, scholars including Harry Surden and the team of Robert Brauneis and Ellen Goodman have focused on transparency tools; Brauneis and Goodman’s article on algorithmic transparency for the smart city captures how procurement choices decide whether oversight bodies can ever see meaningful documentation.
Algorithm Registers and Proactive-Disclosure Experiments
Some cities have tried to move beyond case-by-case disclosure by publishing algorithm or AI registers. Amsterdam and Helsinki maintain public lists of municipal AI systems, with descriptions of purpose, input data, and responsible departments. The Dutch national government now operates a broader algorithm register for high-impact public-sector systems, and European city networks have developed an “algorithmic transparency standard” that outlines common fields for such registries.
International policy work is starting to systematize these experiments. A state-of-the-art report prepared under the Global Partnership on AI and hosted by the OECD’s AI Observatory maps dozens of algorithm registers and other transparency tools worldwide. An OECD chapter on AI governance frameworks emphasizes that proactive disclosure can reduce litigation risk by clarifying what systems exist and what documentation is available before controversies erupt.
These registers are not a cure-all. Reviews of early deployments have found gaps, inconsistencies, and sometimes narrow coverage limited to a handful of pilot systems. But even imperfect registries give requesters a roadmap. Once a system appears in a register, it becomes easier to frame targeted requests for contracts, risk assessments, and vendor correspondence, rather than guessing at product names or internal project codes.
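To see why, consider what even a minimal register entry contains. The sketch below, in Python, is hypothetical: the field names are loosely patterned on the purpose, input-data, and responsible-department descriptions that the Amsterdam and Helsinki registers publish, not drawn from any official schema, and the system, vendor, and contract identifiers are invented.

```python
# Hypothetical algorithm-register entry; every value is invented
# and the field names are illustrative, not an official schema.
register_entry = {
    "system_name": "Benefit Application Screener",
    "department": "Department of Human Services",
    "vendor": "ExampleVendor Inc.",
    "purpose": "Flag incomplete or high-risk benefit applications for manual review",
    "input_data": ["application forms", "case history", "income records"],
    "decision_role": "advisory",          # advisory vs. fully automated
    "contract_id": "RFP-2024-EX-001",     # placeholder procurement identifier
    "documentation_on_file": [
        "vendor contract",
        "rights-impact assessment",
        "annual evaluation report",
    ],
}
```

Even this much structure changes the requester’s position: a request can now cite the rights-impact assessment listed for contract RFP-2024-EX-001 instead of describing the system from scratch.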
Procurement Checklists: Embedding Public Access Requirements
The emerging lesson is that AI governance for municipalities now lives partly inside procurement boilerplate. City attorneys can reduce open-records risk by treating public access as a non-negotiable requirement, no different from data-security or indemnity clauses. That means specifying which categories of documents the city must be able to obtain on demand, from training-data descriptions and evaluation reports to configuration settings and change logs.
Contracts can also build in disclosure mechanics. Vendors can be required to support export of logs and model outputs in common formats, to segregate city data so that public-records searches are feasible, and to notify agencies when they materially change model versions or retrain systems on new data. For systems that rely on continuous vendor tuning, governments may need clear obligations around version control so that records reflect which model was in production at the time of a contested decision.
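To make the version-control point concrete, here is a minimal sketch, in Python, of the kind of decision record a contract might require a vendor to export. Everything in it is an assumption for illustration: the field names, the example system, and the values are invented, and a real clause would specify the fields to capture rather than any particular code.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One exportable record per automated decision.

    A contract clause might require the vendor to emit a record like
    this for every consequential output, so a later public-records
    request can show exactly which model produced a given decision.
    """
    decision_id: str             # unique identifier for the decision
    timestamp: str               # when the decision was made (UTC, ISO 8601)
    model_name: str              # deployed system, e.g., a scoring tool
    model_version: str           # version in production at decision time
    training_data_snapshot: str  # identifier for the training set used
    inputs_summary: dict         # the inputs the model actually saw
    output: dict                 # score, classification, or recommendation

record = DecisionRecord(
    decision_id="2025-04-17-000123",  # placeholder values throughout
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_name="inspection-priority-scorer",
    model_version="3.2.1",
    training_data_snapshot="train-2025-Q1",
    inputs_summary={"building_age": 74, "prior_violations": 2},
    output={"priority_score": 0.81, "queue": "inspect_within_30_days"},
)

# Export in a common, machine-readable format, as the contract requires.
print(json.dumps(asdict(record), indent=2))
```

The pairing of model_version with a training-data identifier is the design choice that matters: without it, no log can answer which model was actually in production when a contested decision was made.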
AI-specific procurement guidance from international bodies and open-contracting groups now recommends risk-based tiers, where higher-impact systems trigger more demanding documentation and audit rights. In its note on user-friendly public procurement guidance on AI, the Open Contracting Partnership, for example, calls for checklists that tie technical performance and rights-impact assessments to specific contract clauses. City lawyers can adapt those frameworks to local law by mapping each tier to defined records categories and retention obligations.
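A tier structure like that can be expressed almost directly as contract boilerplate. The sketch below is one hypothetical encoding: the tier names, example systems, records categories, and retention periods are invented for illustration and are not taken from the Open Contracting Partnership guidance or from any statute.

```python
# Illustrative risk-tier map: higher-impact systems trigger more
# demanding documentation, audit rights, and retention obligations.
# All tier names, categories, and periods below are hypothetical.
PROCUREMENT_TIERS = {
    "low": {
        "examples": ["spellcheck", "internal document search"],
        "required_records": ["contract", "system description"],
        "retention_years": 3,
        "independent_audit": False,
    },
    "moderate": {
        "examples": ["311 triage", "building-inspection flagging"],
        "required_records": ["contract", "system description",
                             "evaluation report", "change log"],
        "retention_years": 6,
        "independent_audit": False,
    },
    "high": {
        "examples": ["benefit screening", "risk scoring"],
        "required_records": ["contract", "system description",
                             "evaluation report", "change log",
                             "training-data description",
                             "rights-impact assessment"],
        "retention_years": 10,
        "independent_audit": True,
    },
}

def records_clause(tier: str) -> str:
    """Draft a one-line records requirement for a given tier."""
    t = PROCUREMENT_TIERS[tier]
    docs = ", ".join(t["required_records"])
    return (f"Vendor shall maintain and produce on demand: {docs}; "
            f"each retained for {t['retention_years']} years.")

print(records_clause("high"))
```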
Some jurisdictions are beginning to align procurement standards with the National Institute of Standards and Technology’s AI Risk Management Framework, which provides structured documentation requirements for AI systems. San José, California, conducted a comprehensive self-assessment using the NIST AI RMF, evaluating its AI governance practices across the framework’s functions. The framework’s emphasis on transparent documentation, testing protocols, and governance structures offers city attorneys a ready-made checklist for contract language, though adoption remains voluntary and implementation varies across municipalities.
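The framework’s four core functions (Govern, Map, Measure, Manage) come from NIST AI RMF 1.0, but the pairing of each function with example contract exhibits in the sketch below is an illustrative reading, not language from the framework or from San José’s assessment.

```python
# NIST AI RMF 1.0 core functions, each paired with example contract
# exhibits a city attorney might demand. The function names are real;
# the exhibit mappings are hypothetical illustrations.
RMF_CONTRACT_CHECKLIST = {
    "Govern": ["named vendor accountability contact",
               "incident-response and notification terms"],
    "Map": ["intended-use statement", "training-data description"],
    "Measure": ["pre-deployment test results",
                "ongoing performance and bias metrics"],
    "Manage": ["change log and model-version history",
               "decommissioning and data-return plan"],
}

def missing_exhibits(provided: set[str]) -> list[str]:
    """List checklist items a vendor's submission has not covered."""
    required = {item for items in RMF_CONTRACT_CHECKLIST.values()
                for item in items}
    return sorted(required - provided)

print(missing_exhibits({"intended-use statement",
                        "change log and model-version history"}))
```

A checklist like this is most useful at the RFP stage, while gaps in a vendor’s submission are still negotiable.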
Labor Agreements as Transparency Tools
Public-sector unions have emerged as unexpected transparency advocates. Municipal employees often discover surveillance and scoring systems before the public does. Union contracts increasingly include provisions requiring notification before implementation of monitoring technologies, access to the data used to evaluate members, and documentation of algorithmic decision criteria.
The Public Services International Digital Bargaining Hub documents numerous collective bargaining clauses worldwide that address AI transparency, including requirements that algorithmic parameters and decision-making logic be disclosed to workers and their representatives. These collective bargaining agreements can function as a parallel transparency mechanism, forcing disclosure that open-records litigation might take years to achieve.
Generative AI, Chatbots, and the Next Records Wave
Generative AI introduces a new generation of records questions. City staff may use general-purpose tools to draft correspondence, summarize reports, or prototype policies, while agencies deploy chatbots to answer resident questions and guide public-records submissions. State and local guidance has begun to treat prompts and outputs as records when they relate to public business, but retention and retrieval practices are uneven and often depend on vendor defaults.
When a school district uses a chat application that auto-deletes messages, or a city fields resident complaints through a vendor-hosted chatbot with limited export features, those design choices collide with statutory duties to preserve and disclose communications. Public-records disputes in this space are likely to ask whether government entities can rely on tools whose default settings undermine archiving and search, and who bears the cost of retrofitting retention into systems that were not built with transparency in mind.
For city counsel, the safest approach is to treat generative AI as another front end on existing records systems. If a chatbot steers residents through public-records requests, the underlying submissions should be captured in the same repositories that handle email or web forms. If staff use AI tools for research, agencies should decide when drafts or notes become retainable records and whether platform logs need to be preserved as part of the administrative history of a decision.
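A minimal sketch of that “front end” approach, in Python: the repository class and function names here are invented for illustration, since a real deployment would call whatever API the city’s existing records system exposes.

```python
from datetime import datetime, timezone

class RecordsRepository:
    """Hypothetical stand-in for the city's existing records system,
    the same one that archives email and web-form submissions."""
    def __init__(self):
        self._records = []

    def archive(self, record: dict) -> None:
        """Store a record alongside other public-business communications."""
        self._records.append(record)

def handle_chatbot_turn(repo: RecordsRepository, session_id: str,
                        resident_message: str, bot_reply: str) -> None:
    """Capture each chatbot exchange as a retainable record, rather
    than leaving it on vendor servers under the vendor's default
    (and possibly auto-deleting) retention settings."""
    repo.archive({
        "channel": "public-records-chatbot",
        "session_id": session_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "resident_message": resident_message,
        "bot_reply": bot_reply,
    })

repo = RecordsRepository()
handle_chatbot_turn(repo, "sess-001",
                    "How do I request the police department's drone contracts?",
                    "You can file a request here; I've started one for you.")
```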
Litigation, Not Just Legislation, Sets the Ground Rules
National AI statutes and executive orders attract most of the headlines, but much of the practical law of municipal AI is taking shape in public-records disputes, state-court opinions, and settlements that never reach appellate review. Open-government lawsuits in Chicago, Atlanta, Washington, D.C., and other jurisdictions have already forced disclosures that reveal how cities buy and operate algorithmic systems, even as trade-secret and privacy arguments limit what requesters can see.
Those cases, combined with academic work and international guidance on algorithmic transparency, sketch a working agenda for municipal lawyers. Treat AI systems as record-generating infrastructure, not black-box appliances. Negotiate procurement contracts with public access in mind. Align local practices with emerging algorithm-register standards and AI-governance frameworks, so that records requests do not have to start from scratch each time a new system appears.
Public-records law was not drafted with AI in mind, yet it is rapidly becoming one of the main mechanisms for aligning municipal automation with democratic accountability. For city attorneys, that means AI procurement failures are no longer just budget or IT problems. They are live tests of whether long-standing rights of access to information still function when the city’s most consequential decisions run on someone else’s servers.
Sources
- ACLU of Illinois: “ACLU of Illinois v. City of Chicago” (June 21, 2018)
- Brookings Institution: “Data-driven policing’s threat to our constitutional rights,” by Angel Diaz (Sept. 13, 2021)
- Electronic Frontier Foundation: “Georgia Court Rules for Transparency over Private Police Foundation,” by Jose Martinez (June 27, 2025)
- Federal Trade Commission: “FTC Takes Action Against Company Formerly Known as Weight Watchers for Illegally Collecting Kids’ Sensitive Health Data” (March 4, 2022)
- University of Florida Journal of Law & Public Policy: “Let’s Not Be Dumb: Government Transparency, Public Records Laws, and ‘Smart City’ Technologies,” by Amy Kristin Sanders and Daxton R. Stewart (Vol. 33, Iss. 2, Art. 1, 2023)
- Georgia Recorder: “Atlanta Police Foundation ordered to comply with open records requests over ‘Cop City’ documents,” by Maya Homan (June 4, 2025)
- Institute for Justice: “Case Closed: Pasco Sheriff Admits Predictive Policing Program Violated Constitution” (Dec. 4, 2024)
- Knight First Amendment Institute: “Transparency’s AI problem” (June 17, 2021)
- National Freedom of Information Coalition: “Is It Just Dumb Luck? The Challenge of Getting Access to Public Records Related to Smart City Technology,” by Amy Kristin Sanders, Daxton ‘Chip’ Stewart and Steven Molchanov (2022)
- New Mexico Law Review: “Big Data Policing Capacity Measurement,” by Ronald J. Coleman (Vol. 53, No. 2, 2023)
- NIST AI Risk Management Framework: “City of San José AI RMF Self-Assessment”
- OECD AI Observatory: “Algorithmic Transparency in the Public Sector”
- OECD: “Governing with Artificial Intelligence” (2025)
- Open Contracting Partnership: “User-friendly public procurement guidance on AI,” by Kaye Sklar (Feb. 4, 2025)
- Public Services International: “Digital Bargaining Hub: Digital Tools, Artificial Intelligence, and Algorithms”
- UNC School of Government: “Developing guidelines for the use of generative artificial intelligence in local government” (March 14, 2024)
- Washington Secretary of State: “Managing Generative AI Records”
- Yale Journal of Law & Technology: “Algorithmic Transparency for the Smart City,” by Robert Brauneis & Ellen P. Goodman (Vol. 20, 2018)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, settlements, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: Delegating Justice: The Human Limits of Algorithmic Law

Jon Dykstra, LL.B., MBA, is a legal AI strategist and founder of Jurvantis.ai. He is a former practicing attorney who specializes in researching and writing about AI in law and its implementation for law firms. He helps lawyers navigate the rapid evolution of artificial intelligence in legal practice through essays, tool evaluation, strategic consulting, and full-scale A-to-Z custom implementation.
