Illinois Workplace Surveillance Law Sets Global Standard for AI Employee Monitoring
Workplace surveillance used to mean a timeclock on the wall and a camera over the loading dock. Now it means fingerprint readers, facial-geometry scans, keystroke trackers, call-center analytics, wearables, and computer-vision tools that score how fast and how safely employees move. In Illinois, that stack runs straight into the Biometric Information Privacy Act, and recent amendments show what happens when a single state statute becomes the pressure valve for global AI monitoring trends.
BIPA Transforms Timeclocks Into Litigation Targets
The Illinois Biometric Information Privacy Act of 2008 was written for a world just starting to use fingerprints and face geometry in business settings. It defines “biometric identifiers” as retina or iris scans, fingerprints, voiceprints, and scans of hand or face geometry, and requires written consent, public retention schedules, and limits on disclosure for any covered use. The statutory text sits on the Illinois General Assembly site and has barely changed in structure since enactment.
Employers were early and frequent targets because biometric timeclocks spread faster than compliance programs. Hourly workers clocked in with fingerprints that routed through third-party vendors, often under form policies that did not fully explain how templates were stored, shared, or deleted. The law’s private right of action and statutory damages turned those convenience systems into a predictable class-action business model.
By the late 2010s, biometric workplace class actions were a standing feature of the Illinois docket. Suits clustered in manufacturing, logistics, retail, and health care, where shift work and badging habits made biometric timekeeping attractive. Early cases wrestled with threshold questions such as what counted as a biometric identifier, whether health care systems wrapped in HIPAA were treated differently, and whether workers’ compensation or union contracts displaced statutory claims.
Another line of argument focused on accrual and damages. Plaintiffs framed each scan and each transmission of a fingerprint template as a separate statutory violation. Defendants argued for a single claim tied to first collection or first disclosure, warning that per-scan damages would produce what one court later called “annihilative” exposure. Those disputes set the stage for two Illinois Supreme Court decisions that reshaped the risk math.
Two Supreme Court Rulings Changed Everything
In 2023, the Illinois Supreme Court resolved the limitations question in Tims v. Black Horse Carriers, Inc. The court held that all BIPA claims are subject to Illinois’s five-year catch-all statute of limitations rather than the one-year period that governs defamation and other publication-based privacy claims, significantly extending the window for employees to bring suit over older systems and practices.
Weeks later, in Cothron v. White Castle System, Inc., the same court accepted the plaintiff’s view that a new claim accrues each time a person’s biometrics are scanned or transmitted without proper consent. The opinion describes a fingerprint system that, when multiplied across a large workforce and a multiyear period, exposed White Castle to potential damages exceeding $17 billion under a per-scan calculation. Together, those decisions made clear that ordinary biometric timekeeping could support staggering statutory damages even when employees suffered no traditional financial loss.
Legislative Response and Settlement Reality
Illinois lawmakers responded in 2024 with targeted amendments designed to narrow that exposure without dismantling the statute. Public Act 103-0769, signed on Aug. 2, 2024, limits statutory damages to a single recovery per person per type of violation rather than per scan, and clarifies that electronic signatures count as a “written release” for consent purposes.
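To see what that change is worth, a minimal sketch of the damages arithmetic helps. The figures below are hypothetical and chosen for illustration, not the actual White Castle numbers; the statutory amounts come from BIPA’s damages provision, which sets liquidated damages of $1,000 per negligent violation and $5,000 per reckless or intentional violation.

```python
# Minimal sketch of BIPA damages arithmetic under the two accrual
# regimes. Workforce figures are hypothetical, not the actual White
# Castle numbers. Statutory damages: $1,000 per negligent violation,
# $5,000 per reckless or intentional violation (740 ILCS 14/20).

EMPLOYEES = 9_500          # hypothetical class size
SCANS_PER_DAY = 4          # clock in/out plus meal breaks
WORKDAYS_PER_YEAR = 250
YEARS = 5
NEGLIGENT = 1_000          # statutory damages per negligent violation

# Pre-amendment (Cothron): each scan accrues a separate violation.
per_scan_violations = EMPLOYEES * SCANS_PER_DAY * WORKDAYS_PER_YEAR * YEARS
per_scan_exposure = per_scan_violations * NEGLIGENT

# Post-amendment (P.A. 103-0769): one recovery per person per type of
# violation -- here, a single collection claim per employee.
per_person_exposure = EMPLOYEES * NEGLIGENT

print(f"Per-scan exposure:   ${per_scan_exposure:,}")    # $47,500,000,000
print(f"Per-person exposure: ${per_person_exposure:,}")  # $9,500,000
```

Even conservative assumptions about scan frequency put per-scan exposure thousands of times above the per-person figure; that gap is precisely what the amendment was designed to close.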
Client alerts and news coverage describe the amendments as a business-friendly overhaul that reins in the most extreme damages scenarios but keeps the private right of action intact. News coverage of the signing notes that many companies had warned of ruinous liability under the per-scan regime. Federal courts are still sorting through whether the amendments apply retroactively to pending cases. An early decision in Gregg v. Central Transport LLC treated the 2024 changes as a clarification that applied retroactively, but the court later vacated that ruling and held that the BIPA amendment is a substantive change that applies only prospectively. In contrast, Schwartz v. Supply Network, Inc. reached the same bottom line without the initial detour, finding that the amendment cannot be applied to conduct that occurred before enactment.
The financial consequences of BIPA non-compliance became clear through a series of high-profile settlements. In February 2021, a federal judge approved a $650 million settlement between Facebook and Illinois users over the company’s facial-recognition Tag Suggestions feature, which allegedly created and stored face templates without proper consent. The settlement covered approximately 1.4 million Illinois Facebook users and resulted in individual payments around $397.
In the workplace context, BNSF Railway reached a $75 million settlement in 2024 after a jury found the company had recklessly violated BIPA 45,600 times by requiring truck drivers to scan fingerprints for facility access. At the $5,000 statutory rate for reckless violations, that finding translated into an initial $228 million jury award before settlement negotiations reduced the figure. White Castle settled its landmark case for $9.39 million in 2024, covering approximately 9,750 employees who used fingerprint timeclocks.
These headline settlements triggered a rapid shift in insurance markets. By late 2024, carriers began adding express biometric exclusions to cyber liability and employment practices liability insurance policies or tightening underwriting standards for employers with biometric systems. In September 2024, an Illinois appellate court in Tony’s Finer Foods v. Lloyd’s held that a cyber policy provided no coverage for BIPA claims absent allegations of a data breach or security failure, narrowing one avenue employers had expected for protection. Courts have reached split decisions on whether commercial general liability, errors and omissions, or EPLI policies respond to BIPA litigation, leaving many employers exposed to statutory damages that insurance no longer covers.
AI Expands the Definition of Biometric Monitoring
Most early BIPA cases involved what the statute names explicitly. Fingerprints, hand geometry, facial geometry, and voiceprints fall squarely within the definition of biometric identifiers, and systems that generate templates from those traits are plainly within the law’s scope.
AI surveillance tools muddy those lines by extracting additional signals from familiar data streams. Courts have distinguished between simple photographs and scans of face geometry, and between ordinary voice recordings and the creation of a voiceprint template. Call-center analytics, webcam-based monitoring, or security footage that feeds identification or scoring models may tip an employer from generic monitoring into biometric processing when the system generates templates tied to named employees.
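That legal line often turns on a single technical step. The sketch below is illustrative only: embed_face() is a hypothetical stand-in for any commercial face-embedding model, and the point is that computing and retaining the numeric template keyed to a named employee, rather than merely recording video, is the act courts scrutinize.

```python
# Illustrative sketch of the step that can turn ordinary video into
# biometric processing. embed_face() is a hypothetical stand-in for a
# commercial face-embedding model; no specific vendor API is implied.

from dataclasses import dataclass

@dataclass
class FaceTemplate:
    employee_id: str        # the template is keyed to a named person
    embedding: list[float]  # numeric "scan of face geometry"

def embed_face(frame_pixels: bytes) -> list[float]:
    # A real system would return a learned face-geometry vector; this
    # dummy derives numbers from the raw bytes so the sketch runs.
    return [b / 255 for b in frame_pixels[:128]]

def process_frame(frame_pixels: bytes, employee_id: str) -> FaceTemplate:
    # Storing the raw frame is arguably just a photograph. Computing and
    # retaining this vector, linked to an identity, is the step that
    # plaintiffs frame as collecting a biometric identifier under BIPA.
    return FaceTemplate(employee_id, embed_face(frame_pixels))
```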
At the edges, AI tools are starting to make keystroke patterns, gait analysis, eye tracking, and mood or sentiment scores part of the worker profile. Plaintiffs argue that where those signals are used to identify or evaluate a specific employee, they should be treated as biometric information even if the statute does not name them explicitly. That argument has not yet produced a definitive Illinois Supreme Court ruling, but it fits a broader regulatory trend that treats full-spectrum analytics as a biometric proxy when they enable persistent recognition or profiling of individuals.
BIPA plaintiffs are no longer focused solely on timeclocks. Suits now target voice analytics used in call centers, facial-geometry systems in access control and loss prevention, and vendor platforms that score productivity or safety with biometric inputs. Public reports and firm trackers describe a maturing plaintiffs’ bar that looks for any AI feature marketed as tracking faces, voices, or emotional states in the workplace and tests it against the statutory definition.
Wearables add another layer. Employers are experimenting with smartwatches, belts, and badges that monitor location, motion, and sometimes heart rate in warehouses and field settings. Federal civil rights agencies have begun to warn that health-related monitoring, even when described as safety-oriented, can cross into medical-examination territory or support proxy discrimination when the data are used for discipline, scheduling, or promotion decisions. A December 2024 EEOC fact sheet on wearables highlights those risks, noting that employers using wearable technology to collect information about employees’ physical or mental conditions may be conducting medical examinations under the Americans with Disabilities Act.
Federal and Global Enforcement Converges
Illinois remains the center of private biometric litigation, but federal agencies have started to build their own enforcement record on AI-driven workplace monitoring. In 2023, the Federal Trade Commission issued a Policy Statement on Biometric Information and Section 5 of the FTC Act, warning that misleading accuracy claims, opaque models, and failure to mitigate foreseeable harms from biometric systems may be treated as unfair or deceptive practices.
The Equal Employment Opportunity Commission has folded AI and workplace monitoring into its Artificial Intelligence and Algorithmic Fairness Initiative. In May 2023, the Commission released technical assistance on the use of software, algorithms, and AI in employment selection procedures under Title VII, with a particular focus on adverse impact and vendor-provided tools.
The Department of Labor has gone a step further by publishing non-binding but detailed principles for AI and worker well-being. Its “Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers” frames AI systems that monitor or rank workers as tools that must support, not undermine, existing labor rights, collective bargaining, and job quality.
The Consumer Financial Protection Bureau has added Fair Credit Reporting Act pressure for employers that buy or generate background dossiers and algorithmic scores about workers. In Circular 2024-06, the Bureau explains that surveillance-based, black-box scores used for hiring, promotion, or discipline can qualify as consumer reports, triggering consent, accuracy, and dispute rights.
Other U.S. jurisdictions do not yet replicate BIPA’s combination of detailed statutory duties and a private right of action, but several are moving toward more structured oversight of AI in employment. Texas and Washington have narrower biometric privacy statutes without a broad private enforcement mechanism, while California has begun to treat automated decision systems and profiling as subject to California Privacy Rights Act rules for employees and applicants.
The European Union is pushing on a different front. The AI Act’s Article 5 list of prohibited practices bans AI systems that infer emotions of individuals in the workplace, as well as social scoring and certain other exploitative uses. Legal analyses of Article 5 stress that webcam or microphone-based emotion tracking in offices and schools is off limits except in narrow safety or medical contexts.
Multinational employers now have to assume that rules on emotion recognition, biometric identification, and AI-driven scoring will converge toward the most protective standard in force. That reality is already visible in vendor roadmaps, where computer-vision and productivity-analytics tools marketed in Europe are being redesigned to drop emotion-recognition features outright rather than toggling them on and off by jurisdiction.
Compliance Playbook for Counsel
The starting point is inventory. Counsel need a simple but complete map of every system that tracks workers or applicants in any automated way, from access control and timekeeping to call recording, webcam monitoring, keystroke logging, productivity dashboards, and wearable programs. Vendor marketing materials that promise AI scoring, sentiment analysis, behavior prediction, or insider threat detection are often more honest about capabilities than internal policy documents.
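One way to make that inventory concrete is a structured record per system. The sketch below is a minimal illustration under assumed field names, not a prescribed compliance schema; the fields reflect the questions BIPA and the federal guidance discussed below tend to ask.

```python
# Minimal sketch of a monitoring-system inventory record. Field names
# are illustrative assumptions, not a prescribed compliance schema.

from dataclasses import dataclass, field

@dataclass
class MonitoringSystem:
    name: str                       # e.g., "warehouse badge reader"
    vendor: str
    data_captured: list[str]        # "fingerprint", "face geometry", ...
    creates_biometric_template: bool
    third_party_recipients: list[str] = field(default_factory=list)
    retention_period_days: int | None = None   # None = undocumented
    written_consent_obtained: bool = False
    ai_scoring_features: list[str] = field(default_factory=list)

inventory = [
    MonitoringSystem(
        name="biometric timeclock",
        vendor="ExampleVendor (hypothetical)",
        data_captured=["fingerprint"],
        creates_biometric_template=True,
        third_party_recipients=["payroll processor"],
        retention_period_days=None,        # gap: no documented schedule
        written_consent_obtained=False,    # gap: section 15(b) risk
    ),
]

# Systems that create templates but lack consent or a retention
# schedule are the first remediation targets.
gaps = [s for s in inventory
        if s.creates_biometric_template
        and (not s.written_consent_obtained
             or s.retention_period_days is None)]
```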
Next comes governance. Existing BIPA programs should be refreshed to reflect the 2024 amendments, with explicit coverage of any system that captures fingerprints, facial geometry, or voiceprints and clear, electronic consent mechanisms that meet the statute’s written release standard. Firm guidance on the amendment language is a useful checklist. Retention schedules and destruction practices should be documented for each category of biometric and related template data, and notice documents should describe any AI features that draw on those signals.
Contracting is the third pillar. Many litigated systems are white-label products embedded in payroll or security suites with limited transparency about data flows. Employers can insist on representations about where templates are stored, which entities receive them, and how long they are kept, along with indemnities, audit rights, and cooperation clauses for responding to statutory access or deletion requests. Vendors that cannot answer basic questions about their AI models and biometric handling are increasingly difficult to justify to courts and regulators.
Finally, counsel should line up their internal policies against federal guidance. A single crosswalk that shows how current monitoring practices comport with FTC biometric standards, EEOC Title VII guidance on AI, DOL worker well-being principles, and CFPB FCRA expectations can become the backbone of both compliance documentation and workforce communication. That exercise often uncovers low-value monitoring that can be turned off entirely, reducing legal risk and operational noise at the same time.
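A crosswalk can be as simple as a mapping from each monitoring practice to the guidance it implicates. The sketch below uses hypothetical practice names; the framework labels track the documents cited in this article.

```python
# Illustrative crosswalk from monitoring practices to the federal and
# state frameworks each one implicates. Practice names are hypothetical
# examples; framework labels track the documents cited above.

CROSSWALK = {
    "fingerprint timekeeping": [
        "BIPA, 740 ILCS 14 (consent, retention, disclosure)",
        "FTC biometric policy statement (May 2023)",
    ],
    "AI-scored hiring assessments": [
        "EEOC Title VII technical assistance (May 2023)",
    ],
    "vendor productivity scores used in promotion": [
        "CFPB Circular 2024-06 (FCRA consumer reports)",
        "DOL AI and worker well-being principles (Oct. 2024)",
    ],
    "wearable health monitoring": [
        "EEOC wearables fact sheet (Dec. 2024, ADA)",
    ],
}

for practice, frameworks in CROSSWALK.items():
    print(f"{practice}:")
    for framework in frameworks:
        print(f"  - {framework}")
```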
The story that began with fingerprint timeclocks now includes high-resolution analytics on how workers move, speak, type, and respond under pressure. The Government Accountability Office’s report on digital surveillance of workers, published Sept. 2, 2025 and publicly released Nov. 24, 2025, describes increased productivity in some settings and increased stress, injury risk, and job insecurity in others as employers rely more heavily on automated monitoring and scoring. It also notes a patchwork of federal oversight that depends on which agency’s jurisdiction is triggered.
Illinois remains the jurisdiction where those trends resolve into the sharpest private litigation risk, even after the 2024 amendments reduced the most extreme damages scenarios. For employers, BIPA compliance is no longer a narrow Illinois problem. It is a preview of how biometric-specific laws, general privacy rules, and AI governance frameworks are converging around a simple question: how far can AI-driven workplace surveillance go before the law decides that the costs to workers outweigh the efficiency gains for firms?
Sources
- Biometric Update: “Securing Insurance Coverage for BIPA Class Actions a Growing Challenge,” by David Oberly (Oct. 16, 2024)
- Center for Democracy & Technology: “EU AI Act Brief, Part 4: AI at Work” by Laura Lazaro Cabrera and Magdalena Maier (April 14, 2025)
- Consumer Financial Protection Bureau: “Consumer Financial Protection Circular 2024-06: Background Dossiers and Algorithmic Scores for Hiring, Promotion, and Other Employment Decisions” (Oct. 24, 2024)
- Epstein Becker Green: “Biometric Backlash: The Rising Wave of Litigation Under BIPA and Beyond” by Maurice Wells (April 21, 2025)
- EU Artificial Intelligence Act: “Article 5: Prohibited AI Practices” (Feb. 2, 2025)
- Faegre Drinker: “Illinois Governor Signs Law That Limits Damages Recoverable Under the Biometric Information Privacy Act” (Aug. 9, 2024)
- Federal Trade Commission: “Policy Statement on Biometric Information and Section 5 of the FTC Act” (May 18, 2023)
- Government Accountability Office: “Digital Surveillance: Potential Effects on Workers and Roles of Federal Agencies” (Published Sept. 2, 2025; released Nov. 24, 2025)
- Illinois General Assembly: “Biometric Information Privacy Act, 740 ILCS 14” (2024 amendment)
- Illinois General Assembly: “Public Act 103-0769, An Act Concerning Civil Law” (Aug. 2, 2024)
- Illinois State Bar Association: Tony’s Finer Foods Enterprises, Inc. v. Certain Underwriters at Lloyd’s, London (Sept. 10, 2024)
- Illinois Supreme Court: Cothron v. White Castle System, Inc., 2023 IL 128004 (Feb. 17, 2023)
- Illinois Supreme Court: Tims v. Black Horse Carriers, Inc., 2023 IL 127801 (Feb. 2, 2023)
- Lewis Silkin: “Understanding the EU AI Act’s Prohibited Practices: Key Workplace and Advertising Takeaways” (Feb. 17, 2025)
- Littler: “EEOC Fact Sheet on Wearable Technologies Indicates Growing Concern over Employee Monitoring,” by Zoe M. Argento, Brad J. Kelley and Sean P. O’Brien (Jan. 2, 2025)
- National Law Review: “Illinois Supreme Court Sets Five-Year Statute of Limitations for All BIPA Claims,” by David J. Oberly, Christina Lamoureux and Kristin L. Bryan, Squire Patton Boggs (US) LLP (Feb. 2, 2023)
- Robbins Geller Rudman & Dowd: “In re Facebook Biometric Info. Privacy Litig.” (Landmark $650 Million Settlement)
- Rogers v. BNSF Railway Company Settlement: “BNSF BIPA Class Action Settlement” ($75 Million Settlement)
- Ropes & Gray: “A White Castle Post-Mortem: The Prospect of ‘Annihilative’ Damages Under BIPA,” by Nicholas M. Berg, Matthew R. Cin and Kris Kenn (Dec. 19, 2023)
- Top Class Actions: “Judge Set to Approve $9.4M Settlement Over White Castle’s Biometric Timekeeping Practices,” by Brigette Honaker (Aug. 9, 2024)
- U.S. Department of Labor: “Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers” (Oct. 16, 2024)
- U.S. Equal Employment Opportunity Commission: “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII” (May 18, 2023)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.

Jon Dykstra, LL.B., MBA, is a legal AI strategist and founder of Jurvantis.ai. He is a former practicing attorney who specializes in researching and writing about AI in law and its implementation for law firms. He helps lawyers navigate the rapid evolution of artificial intelligence in legal practice through essays, tool evaluation, strategic consulting, and full-scale A-to-Z custom implementation.
