How a New York City Ordinance Became the Blueprint for AI Regulation
In 2021, New York City adopted Local Law 144, requiring independent bias audits and public disclosure for algorithms used in hiring and promotion. The law’s importance rests not only on its subject matter but on its origins. While federal and state lawmakers debate frameworks, the city moved first with enforceable obligations. For law firms advising employers and technology vendors, it signaled the beginning of a new era in local AI compliance and legal accountability.
From Borough Hall to Boardrooms: A Local Law with National Impact
Local Law 144 is a municipal ordinance enacted by the New York City Council on November 10, 2021, and enforced by the Department of Consumer and Worker Protection (DCWP). The law took effect on January 1, 2023, with enforcement beginning July 5, 2023. It governs employers and employment agencies that use automated tools in hiring or promotion decisions for jobs located in the city. That alone makes it rare. In the United States, most AI regulation originates from federal agencies such as the EEOC or from state legislatures. New York City’s choice to legislate directly, complete with its own enforcement authority, put municipal government at the center of AI governance.
Because national companies recruit in New York City and many vendors are based there, the law’s reach extends beyond the five boroughs. Employers hiring remote workers must still meet its audit and notice requirements when the location associated with the remote position is a New York City office. The result is a local ordinance functioning as a de facto national benchmark for algorithmic accountability.
Home Rule and Legal Authority
The AEDT law derives from the city’s home-rule authority under Article IX of the New York Constitution and Section 10 of the Municipal Home Rule Law. These provisions allow cities to enact laws protecting public welfare so long as they do not conflict with statewide general laws. Courts interpret pre-emption narrowly, granting local governments wide discretion unless the state clearly occupies the field. Since New York State has no comparable statute on automated hiring tools, the city acted within that open lane.
This home-rule framework gives Local Law 144 a dual identity. It is both an employment regulation within city limits and the nation’s first enforceable bias-audit mandate for algorithmic hiring. For law firms counseling multi-jurisdictional employers, it became the first real test of how local AI rules intersect with federal anti-discrimination statutes and professional-ethics obligations.
Defining the AEDT: What Tools Are Covered?
Local Law 144 defines an Automated Employment Decision Tool as any computational process derived from machine learning, statistical modeling, or a similar technique that issues a score, classification, or recommendation used to substantially assist or replace discretionary decision-making in hiring or promotion. The statute excludes tools that do not automate, support, substantially assist, or replace discretionary decision-making processes and that do not materially impact natural persons, including junk email filters, firewalls, antivirus software, calculators, spreadsheets, databases, or other data compilations.
According to DCWP guidance, the law applies only to candidates who have applied for a specific position and to jobs located in New York City, including remote roles where the location associated with the position is a city office. General sourcing or resume-bank tools are excluded. That distinction determines whether an employer must commission an audit or simply review data-handling practices, a crucial line for lawyers advising national clients.
Audit, Notice, and Publication Requirements
The law requires every covered AEDT to have undergone an independent bias audit no more than one year before its use, with audits repeated annually thereafter. The audit must calculate selection rates and impact ratios across sex, race/ethnicity, and intersectional categories, following the standards of the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. Part 1607).
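To make the audit arithmetic concrete, the sketch below computes selection rates and impact ratios for a set of hypothetical applicant counts. The four-fifths benchmark comes from the Uniform Guidelines; Local Law 144 requires the ratios to be calculated and published but does not itself set a pass/fail threshold.

```python
# Minimal sketch of the selection-rate and impact-ratio arithmetic an
# independent auditor performs. All applicant counts are hypothetical.

# (candidates selected, total candidates) per demographic category
outcomes = {
    "Male": (48, 120),
    "Female": (30, 100),
    "Hispanic or Latino / Female": (9, 40),  # intersectional category
}

# Selection rate = candidates selected / candidates in the category
rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
best = max(rates.values())

# Impact ratio = category selection rate / highest selection rate.
# Ratios below 0.8 are commonly flagged under the four-fifths rule
# of 29 C.F.R. Part 1607.
for cat, rate in rates.items():
    ratio = rate / best
    flag = "  <-- below four-fifths benchmark" if ratio < 0.8 else ""
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

In this example the highest-scoring category sets the denominator, so an impact ratio of 0.75 for one group signals exactly the kind of disparity an audit summary must disclose.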
The audit must be conducted by an independent auditor, meaning a person or entity that was not involved in developing or offering the AEDT and is not a current or potential user of it. Results must be summarized and posted publicly on the employer’s website before deployment. The employer must disclose the date of the audit and the categories assessed.
Employers must also provide applicants or employees with at least 10 business days’ notice before using an automated tool. The notice must specify the job qualifications and characteristics evaluated, identify the data collected, and describe the right to request an alternative evaluation. These obligations turn algorithmic transparency from an abstract ideal into a practical, repeatable compliance duty.
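Because the 10-business-day clock excludes weekends, counting it by hand is error-prone. The helper below is a minimal sketch of the deadline arithmetic; it ignores public holidays, which a real compliance calendar would need to account for.

```python
# Sketch: earliest permissible AEDT use date after notice is sent,
# counting 10 business days (weekends excluded; public holidays
# ignored here as a simplifying assumption).
from datetime import date, timedelta

def earliest_use_date(notice_sent: date, business_days: int = 10) -> date:
    d = notice_sent
    counted = 0
    while counted < business_days:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday through Friday
            counted += 1
    return d

# Notice sent Monday, March 3, 2025 -> earliest use Monday, March 17, 2025
print(earliest_use_date(date(2025, 3, 3)))
```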
Penalties and Enforcement
The DCWP enforces audit and notice provisions through administrative penalties. According to the full text of the law, fines are set at not more than five hundred dollars for a first violation and each additional violation occurring on the same day as the first violation, and not less than five hundred dollars nor more than one thousand five hundred dollars for each subsequent violation. Each day of unlawful use and each applicant denied notice can constitute a separate offense. Discrimination complaints arising from biased tools fall under the New York City Commission on Human Rights, which may investigate under the city’s Human Rights Law. Together, the two agencies form a local enforcement model with breadth unusual for a municipal authority.
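Because each day of unlawful use, and each applicant denied notice, can count as a separate violation, exposure compounds quickly. A rough illustration, assuming a single hypothetical violation per day at the statutory maximums:

```python
# Back-of-the-envelope penalty exposure under the statute's fine
# schedule, assuming one violation per day (each applicant denied
# notice could multiply this further).
FIRST_VIOLATION_FINE = 500     # first violation and same-day violations
SUBSEQUENT_FINE_MAX = 1500     # upper bound for each later violation

days_of_noncompliant_use = 30  # hypothetical scenario
exposure = FIRST_VIOLATION_FINE + (days_of_noncompliant_use - 1) * SUBSEQUENT_FINE_MAX
print(f"Maximum exposure over {days_of_noncompliant_use} days: ${exposure:,}")
# Maximum exposure over 30 days: $44,000
```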
For law firms, that enforcement posture means municipal authorities, not federal regulators, can now trigger investigations into algorithmic bias. Compliance reviews, once viewed as optional, have become prerequisites for maintaining operations within New York City’s jurisdiction.
Compliance Costs and Market Development
While the law does not mandate specific pricing, compliance requires securing auditor access to the AEDT and its data and commissioning annual independent audits. Employers must budget for audit fees, vendor cooperation costs, and internal documentation systems. The market for independent auditors is still developing, with firms ranging from specialized AI governance consultancies to established audit providers. For organizations conducting substantial volumes of automated screening, the cumulative expense of maintaining compliant tools represents a significant operational consideration. Small businesses with limited resources may find the financial and time commitments substantial, leading some to abandon automated screening altogether in favor of traditional hiring methods.
Why the Municipal Level Matters
Municipal regulation represents a new stage in AI governance. Local governments can act faster than legislatures and tailor obligations to immediate community concerns. New York City moved ahead after public debates about bias in hiring and facial recognition. Its law forces employers to perform annual bias audits and make them public, an operational requirement rather than a voluntary pledge. That immediacy makes it a model for other jurisdictions exploring algorithmic oversight.
Because municipal laws coexist with federal and state frameworks, law firms must now give layered advice. Counsel must evaluate whether a client’s tool complies with federal anti-discrimination rules under Title VII, whether any state privacy or fairness obligations apply, and whether local mandates such as bias audits and candidate notices are triggered. Meeting that layered standard of diligence has become an essential part of professional competence under Model Rule 1.1.
Advisory Duties for Law Firms
Lawyers advising employers or vendors under Local Law 144 should organize their guidance around five steps: scope assessment, vendor diligence, documentation, audit readiness, and disclosure. Each ties directly to competence, confidentiality, and supervision under the ABA Model Rules.
Scope assessment: Determine whether a tool qualifies as an AEDT and whether its use involves a position located in New York City.
Vendor diligence: Require vendors to provide audit cooperation clauses, impact-ratio data, and permission to publish summaries before deployment.
Documentation: Maintain an internal register of AEDTs, including audit dates, vendor names, and system versions, and retain all audit summaries and notices for at least three years; see the sketch after this list.
Audit readiness: Align methodologies with the Uniform Guidelines on Employee Selection Procedures and ensure intersectional demographic categories are included.
Disclosure: Update engagement letters, privacy statements, and applicant notices to describe how automated tools are used and what data are collected.
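As a concrete starting point, here is one way such a register entry might be structured. The dataclass layout and field names are illustrative assumptions; the one-year reliance window reflects the law’s annual audit requirement.

```python
# Sketch of an internal AEDT register entry. Field names are
# hypothetical; the one-year audit window mirrors the statute.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AEDTRecord:
    tool_name: str           # vendor's product name
    vendor: str
    system_version: str
    last_audit_date: date    # date the independent bias audit was conducted
    audit_summary_url: str   # public summary posted before deployment

    def audit_current(self, today: date) -> bool:
        """An audit supports continued use for one year from its date
        (approximated here as 365 days)."""
        return today <= self.last_audit_date + timedelta(days=365)

# Hypothetical entry for illustration only
record = AEDTRecord(
    tool_name="ExampleScreener",
    vendor="Example Vendor Inc.",
    system_version="2.3",
    last_audit_date=date(2025, 1, 15),
    audit_summary_url="https://employer.example/aedt-audit-summary",
)
print(record.audit_current(date(2025, 6, 1)))  # True: audit still current
```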
Integrating these practices helps demonstrate diligence under both Local Law 144 and the Model Rules. It also reduces exposure under malpractice and cyber policies, as underwriters increasingly view AI-governance documentation as evidence of professional care.
How Other States Compare: Illinois, Colorado, and Beyond
Other jurisdictions have adopted different approaches. Illinois regulates AI video interviews through its Artificial Intelligence Video Interview Act, requiring candidate consent and data deletion but no audit. Colorado enacted SB 24-205 in May 2024, introducing risk-based impact assessments for high-risk systems across industries, with full compliance required by February 1, 2026. Neither mandates the annual, third-party audit and public-posting regime that New York City already enforces. In practice, the municipal model remains the most operationally demanding in the United States.
For multi-state employers, building audit infrastructure to satisfy Local Law 144 now will ease compliance later as other states adopt similar standards. For law firms, the ability to interpret municipal AI laws has become as important as tracking federal rulemaking. The city’s early move effectively turned local compliance into a blueprint for national best practice.
International Context: The EU AI Act
While New York City pioneered municipal AI hiring regulation in the United States, the European Union enacted the EU AI Act, which entered into force on August 1, 2024. The regulation classifies AI systems by risk, with employment-related AI tools for recruitment, selection, and promotion designated as high-risk systems. Unlike Local Law 144’s focus on bias audits, the EU Act requires comprehensive conformity assessments, data protection impact assessments, human oversight mechanisms, and transparency obligations. Prohibited AI practices, including emotion recognition in the workplace, have been banned since February 2, 2025. Most high-risk system requirements take effect on August 2, 2026, with full compliance required by August 2, 2027.
The EU framework carries extraterritorial reach, affecting any organization whose AI outputs are used within the EU, regardless of where the provider is located. Penalties can reach up to seven percent of global annual revenue for the most serious violations. For multinational employers and law firms advising them, the EU AI Act represents a parallel compliance track that must be navigated alongside U.S. municipal and state requirements, creating complex multi-jurisdictional obligations.
Practical Implications for the Legal Profession
Municipal AI laws alter how firms monitor legislation. Compliance once focused on state and federal dockets. It now requires scanning city councils, agency bulletins, and local human-rights commissions. Firms advising employers, recruiters, or technology vendors must recognize that a single municipal rule can redefine national risk exposure. That new reality elevates the importance of regulatory mapping and cross-office coordination.
Local Law 144 has already influenced insurers, auditors, and HR consultancies, each demanding evidence of algorithmic oversight. Law firms able to integrate bias-audit results, vendor contracts, and disclosure templates into unified compliance frameworks are shaping the emerging playbook for AI risk management. The precedent demonstrates that a city, acting through home-rule power, can set enforceable national expectations for algorithmic fairness.
Frequently Asked Questions
Does Local Law 144 apply to remote workers?
Yes, if the location associated with the remote position is a New York City office. According to DCWP guidance, the law applies when the job location is an NYC office, at least part-time, or when the job is fully remote but the location associated with it is an NYC office.
What happens if my AEDT fails a bias audit?
The law requires employers to conduct bias audits but does not mandate specific actions based on results. However, employers must still comply with all federal, state, and city anti-discrimination laws. The NYC Commission on Human Rights may investigate discrimination claims involving AEDTs.
Who is responsible for conducting the bias audit?
Employers and employment agencies are responsible for ensuring bias audits are conducted. The vendor that created the AEDT is not responsible for the audit, though vendors may coordinate audit efforts. The audit must be performed by an independent auditor with no financial interest in the employer or vendor.
How often must bias audits be performed?
Annually. Employers can only rely on a bias audit for one year from the date it was conducted. To continue using an AEDT, they must ensure the tool has undergone a bias audit within the past year.
Sources
- American Bar Association: Model Rules of Professional Conduct (2025)
- Colorado General Assembly: SB 24-205 (Consumer Protections for Artificial Intelligence, enacted May 17, 2024)
- EU Artificial Intelligence Act: Official Information Portal
- European Commission: AI Act Implementation Timeline
- Illinois Legislature: Artificial Intelligence Video Interview Act (820 ILCS 42)
- New York City Council: Local Law 144 of 2021 (full text)
- New York City Department of Consumer and Worker Protection: Automated Employment Decision Tools (Enforcement Information)
- New York City Department of Consumer and Worker Protection: Automated Employment Decision Tools FAQ (June 29, 2023)
- New York State: Municipal Home Rule Law
- U.S. Equal Employment Opportunity Commission: Uniform Guidelines on Employee Selection Procedures (29 C.F.R. Part 1607)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All statutes, opinions, and frameworks cited are publicly available through official publications and reputable outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: Can Machines Be Taught to Obey Laws They Can’t Understand?

Jon Dykstra, LL.B., MBA, is a legal AI strategist and founder of Jurvantis.ai. He is a former practicing attorney who specializes in researching and writing about AI in law and its implementation for law firms. He helps lawyers navigate the rapid evolution of artificial intelligence in legal practice through essays, tool evaluation, strategic consulting, and full-scale A-to-Z custom implementation.
