Colorado Compels Insurers to Audit AI Underwriting for Algorithmic Discrimination

Insurers spent years treating artificial intelligence as an efficiency play for underwriting and pricing. Colorado has turned it into a fairness experiment. A 2021 anti-discrimination statute for “big data” in insurance now sits alongside the country’s first comprehensive AI law, with overlapping duties for governance, testing, and documentation. Together, they treat insurers as test cases for how far regulators can push companies to measure and mitigate disparate impact in automated decisions.

Colorado Tests Big Data Insurance Underwriting

Colorado’s starting point is Senate Bill 21-169, a 2021 law that restricts insurers’ use of “external consumer data and information sources” and predictive models that result in unfair discrimination in insurance practices. The statute targets not only traditional rating and underwriting, but a broad set of activities that includes marketing, claims, and utilization management across multiple lines of business, with a particular focus on life insurance.

“External consumer data and information sources” is defined very broadly. The statute covers credit information, social media and online behavior, purchasing habits, home ownership, education and occupation, civil judgments and court records, and other lifestyle indicators that supplement traditional actuarial factors.

To implement SB 21-169, the Colorado Division of Insurance adopted a governance and risk management framework for life insurers that use external data, algorithms, and predictive models. The final rule, codified in 3 CCR 702-10, requires life carriers to build a governance framework, catalog models, evaluate data sources, and identify and remediate unfairly discriminatory outcomes, with specific responsibilities for oversight of third-party vendors.

The regulation became effective on Nov. 14, 2023, and required life insurers to submit a progress report by June 1, 2024, followed by an annual compliance attestation beginning Dec. 1, 2024, and annually thereafter. Consultancies and law firms that advise carriers describe SB 21-169 as a first-in-the-nation law that forces insurers to inventory their models, understand how external data is used, and document how they prevent unfair discrimination. This emphasis on outcome testing and governance is the foundation on which Colorado is now layering a broader AI regime.

Division Expands Framework to Health and Auto Insurance

On Aug. 20, 2025, Colorado’s Division of Insurance approved an amended regulation extending the governance and risk management framework beyond life insurance to health insurance and private passenger auto insurance. The amended regulation, which took effect on Oct. 15, 2025, requires insurers in these lines to establish the same type of risk-based governance structures, cross-functional oversight committees, and testing protocols that life insurers have operated under since 2023.

Jason Lapham, deputy commissioner for property and casualty at the Division of Insurance, indicated that the initial regulations for health and auto insurance would mirror the life insurance framework in 3 CCR 702-10, focusing on governance and risk management before moving to quantitative testing requirements. Colorado officials stated that no additional insurance lines would be addressed until these three streams are finalized.

AI Act Tightens Insurer Accountability

On May 17, 2024, Colorado enacted Senate Bill 24-205, the Colorado Artificial Intelligence Act, which creates a statewide framework for “high-risk” AI systems. High-risk AI is defined as a system that makes, or is a substantial factor in making, “consequential decisions” in areas such as education, employment, lending, health care, housing, insurance, and legal services.

The Act prohibits developers and deployers from using high-risk AI systems in a manner that results in algorithmic discrimination. It creates a duty of reasonable care and sets out how organizations can obtain a rebuttable presumption that they met that duty. For deployers, that presumption rests on building and operating a documented risk management program, conducting impact assessments, providing notices to affected consumers, and maintaining records that regulators can review.

Insurers are squarely inside this framework. Consequential decisions under the Act include “the decision to provide or deny education, employment, lending, government services, health care, housing, insurance, or legal services,” and algorithmic discrimination can include disparate impact based on protected characteristics, even when those attributes are not explicitly coded into the model.

The timing is now part of the story. On Aug. 28, 2025, Governor Jared Polis signed SB 25B-004, a short amendment that delayed the main operative dates for SB 24-205 from Feb. 1, 2026 to June 30, 2026. The amendment does not change the core obligations, but it gives developers and deployers additional months to build compliance programs. The delay followed a special legislative session in which lawmakers were unable to reach agreement on substantive amendments to the Act.

Model Bulletins Standardize AI Governance

Colorado’s laws do not operate in isolation. The National Association of Insurance Commissioners adopted a Model Bulletin on the Use of Artificial Intelligence Systems by Insurers that expects carriers to maintain an “AI System Program” with governance, risk management, internal audit, and documentation designed to avoid unfair discrimination and other adverse consumer outcomes.

The bulletin pushes insurers toward an enterprise view of AI. It calls for model inventories, clear roles and responsibilities, procedures for vendor oversight, and regular monitoring and testing of AI systems that affect consumers. Several states have begun to adopt their own bulletins based on this model, signaling that expectations for AI governance in insurance are converging even before formal laws are harmonized.

By late 2025, more than 20 state insurance regulators had adopted or proposed guidance modeled on the NAIC AI bulletin, often through their own circular letters, bulletins, or data calls. For insurers, that means Colorado’s framework does not stand alone but sits inside an emerging national baseline in which regulators expect enterprise AI governance, documented model inventories, and demonstrable efforts to prevent unfair discrimination in algorithmic underwriting and pricing.

New York’s Department of Financial Services provides a prominent example. Its Circular Letter No. 7 on the use of artificial intelligence systems and external consumer data in insurance underwriting and pricing sets out expectations for documentation, data controls, model validation, and testing for unfair or unlawful discrimination. The circular, issued on July 11, 2024, requires insurers to conduct proxy assessments to evaluate whether external consumer data sources correlate with protected class status and may result in unfair discrimination.

For Colorado carriers, the practical effect is cumulative. SB 21-169 demands specific governance and testing programs for life insurance. SB 24-205 extends algorithmic discrimination obligations to high-risk AI across insurance lines. NAIC and state bulletins create an emerging standard for how examiners will judge those programs. Insurers are expected to live in this combined regulatory space, not treat each instrument in isolation.

Risk-Based Pricing Meets Proxy Bias

Insurance law has long distinguished between permitted risk classification and unfair discrimination. The core idea is that insurers may differentiate based on characteristics that are predictive of loss, but not on traits that violate statutory or constitutional norms. The Casualty Actuarial Society describes this as the tension between actuarially sound pricing and constraints that reflect social and legal judgments about what counts as unacceptable discrimination.

SB 21-169 brings this tension into the age of AI and big data. It does not ban external consumer data or predictive models outright. Instead, it prohibits their use when they result in unfair discrimination against individuals based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. The focus is on outcomes and patterns in decisions, not only on whether a protected characteristic appears as an explicit variable.

External data sources frequently include variables that correlate with protected traits. Credit information, consumer purchasing patterns, social media behavior, and geographic proxies can track race or income at a fine grain. Colorado’s Division of Insurance and outside commentators have noted that this creates a need for outcome-based testing, since the predictive power of a model may arise in significant part from correlations with membership in a protected class.

As a result, insurers that deploy AI underwriting systems in Colorado have to solve a harder question than whether a model is accurate. They must be able to show that the use of external consumer data and algorithms does not unfairly discriminate, even when the pathways from inputs to outputs are complex. That is where disparate-impact thinking enters the picture, even if the statute does not use that phrase explicitly.
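To make the proxy concern concrete, the sketch below screens a single external variable for association with an imputed protected characteristic using Cramér’s V on a contingency table. This is a minimal illustration in Python, not a regulator-endorsed methodology; the column names and data are invented, and a real program would run such screens across every external data element and pair them with outcome testing.

```python
# Minimal proxy screen: does an external rating variable track a
# protected characteristic? Column names and data are hypothetical.
import numpy as np
import pandas as pd

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V for two categoricals (0 = independent, 1 = perfectly associated)."""
    table = pd.crosstab(x, y).to_numpy().astype(float)
    n = table.sum()
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k))) if k > 0 else 0.0

# Hypothetical validation extract with an imputed race/ethnicity label.
df = pd.DataFrame({
    "credit_tier": ["A", "A", "B", "C", "C", "C", "B", "A"],
    "imputed_race": ["w", "w", "b", "b", "h", "w", "h", "w"],
})
v = cramers_v(df["credit_tier"], df["imputed_race"])
print(f"Cramér's V between credit tier and imputed race: {v:.2f}")
# A high association flags the variable for deeper outcome testing;
# it does not by itself establish unfair discrimination.
```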

Testing Regulations in Development

Colorado has drafted a separate quantitative testing regulation for life insurers that would establish specific methodologies for detecting unfair discrimination in insurance practices resulting from the use of external consumer data and predictive models. The draft regulation, published in September 2023, proposes Bayesian Improved First Name Surname Geocoding (BIFSG), a statistical imputation methodology, to help identify potential racial and ethnic disparities in both application approvals and premium rates.
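The draft does not prescribe code, but the logic of BIFSG is straightforward Bayesian updating: a race and ethnicity prior derived from the applicant’s surname is refined by first-name and geography likelihoods. The sketch below illustrates that calculation; all probability tables here are invented for the example, whereas real implementations draw on U.S. Census surname, first-name, and block-group tables.

```python
# Illustrative BIFSG posterior: P(race | surname, first name, geography)
# is proportional to P(race | surname) * P(first | race) * P(geo | race).
# Every probability below is invented for illustration only.

RACES = ("white", "black", "hispanic", "api")

P_RACE_GIVEN_SURNAME = {
    "garcia": {"white": 0.05, "black": 0.01, "hispanic": 0.92, "api": 0.02},
}
P_FIRSTNAME_GIVEN_RACE = {
    "maria": {"white": 0.002, "black": 0.001, "hispanic": 0.015, "api": 0.001},
}
P_GEO_GIVEN_RACE = {  # share of each group living in this (hypothetical) block group
    "080310041001": {"white": 0.0001, "black": 0.0002, "hispanic": 0.0009, "api": 0.0001},
}

def bifsg(surname: str, first: str, geo: str) -> dict[str, float]:
    prior = P_RACE_GIVEN_SURNAME[surname.lower()]
    joint = {
        r: prior[r] * P_FIRSTNAME_GIVEN_RACE[first.lower()][r] * P_GEO_GIVEN_RACE[geo][r]
        for r in RACES
    }
    total = sum(joint.values())  # normalize so the posterior sums to 1
    return {r: p / total for r, p in joint.items()}

posterior = bifsg("Garcia", "Maria", "080310041001")
print({r: round(p, 3) for r, p in posterior.items()})
```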

As of December 2025, the quantitative testing regulation has not been finalized. Stakeholder comments from the American Academy of Actuaries, the American Property Casualty Insurance Association, and the National Association of Mutual Insurance Companies raised questions about methodology, data availability, and the feasibility of testing requirements across different lines of insurance.

The Division of Insurance has indicated that similar testing frameworks are expected for health and auto insurance following the completion of the life insurance testing regulation. Lapham has also said Colorado does not plan to extend AI regulations to additional insurance lines until work on life, health, and auto insurance is complete.

Federal Guidance Shapes Insurer AI Playbooks

Insurance is regulated primarily at the state level, but federal guidance on AI in credit and consumer finance is influencing how carriers think about underwriting models, adverse decisions, and explanations. The Consumer Financial Protection Bureau has issued multiple circulars that address the use of complex algorithms in credit underwriting and surveillance-driven data sources, with a consistent message that existing fair lending laws and adverse action requirements fully apply.

In Circular 2022-03, the CFPB advised that creditors must provide specific and accurate reasons for adverse decisions, even when they rely on complex models supplied by third parties. It rejected the notion that black-box systems justify generic disclosures or that creditors can avoid explanation by pointing to complexity.

In a subsequent 2023 circular, the bureau reiterated that adverse action notice obligations apply equally when decisions are based on data harvested through consumer surveillance and advanced analytics. Commentators have tied these positions to the broader federal debate about disparate impact in AI, noting that lenders cannot use opacity as a shield against fair lending responsibilities.

While these circulars are directed at creditors, not insurers, they provide a template that insurance regulators can borrow. If a carrier uses AI systems and external data to decline coverage, materially change terms, or assign risk tiers, state unfair practices statutes and emerging AI laws can pull in similar expectations about documentation and explanation. For in-house counsel, the message is that model governance should be designed with potential adverse decision scrutiny in mind.
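As an illustration of what “specific and accurate reasons” can look like in practice, the sketch below applies a common points-below-reference method to a simple linear scoring model: it ranks the features whose contributions fall furthest below those of a favorable reference applicant. The weights and feature names are hypothetical, and complex models typically require attribution techniques beyond this linear case.

```python
# Sketch of a "points below reference" reason-code method for a linear
# scoring model. Weights, features, and values are hypothetical.

WEIGHTS = {"credit_tier_score": 0.8, "prior_claims": -0.6, "years_insured": 0.3}
REFERENCE = {"credit_tier_score": 1.0, "prior_claims": 0.0, "years_insured": 10.0}

def reason_codes(applicant: dict[str, float], top_n: int = 2) -> list[str]:
    # Shortfall of this applicant's contribution versus the reference applicant.
    gaps = {f: WEIGHTS[f] * (REFERENCE[f] - applicant[f]) for f in WEIGHTS}
    worst = sorted(gaps, key=gaps.get, reverse=True)[:top_n]
    return [f"{f}: {gaps[f]:+.2f} points below reference" for f in worst]

# An adverse decision would be accompanied by its largest score drivers.
print(reason_codes({"credit_tier_score": 0.2, "prior_claims": 3.0, "years_insured": 1.0}))
```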

Building a Defensible AI Underwriting Program

Colorado’s combination of SB 21-169, SB 24-205, and implementing regulations leaves insurers with a clear implication. AI underwriting is no longer just a technical choice. It is a regulated activity that will be measured against formal expectations for risk management, fairness testing, and documentation. A defensible program begins with a comprehensive inventory of models, external data sources, and use cases across the enterprise.

Model inventories cannot simply list internal tools. The Colorado life insurance regulation and NAIC’s model bulletin stress that carriers remain responsible for algorithms and predictive models supplied by third parties, including vendors and affiliates. Governance frameworks therefore need clear policies for vendor selection, contractual requirements for documentation and testing, and ongoing oversight of external systems that influence insurance decisions.
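A minimal inventory record might capture the fields these frameworks emphasize: business purpose, line of business, external data sources, vendor, governance owner, and testing status. The Python sketch below is one possible shape, with field names of our own choosing rather than anything mandated by the Colorado regulation or the NAIC bulletin.

```python
# One possible model-inventory record; all field names and values are
# hypothetical examples, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class ModelInventoryRecord:
    model_id: str
    business_purpose: str
    line_of_business: str                # e.g., "life", "health", "private passenger auto"
    external_data_sources: list[str] = field(default_factory=list)
    vendor: str | None = None            # None for internally built models
    governance_owner: str = ""
    last_fairness_test: str = ""         # date of most recent outcome test
    remediation_notes: str = ""

record = ModelInventoryRecord(
    model_id="uw-life-001",
    business_purpose="accelerated life underwriting triage",
    line_of_business="life",
    external_data_sources=["credit attributes", "motor vehicle records"],
    vendor="Example Analytics LLC",       # hypothetical third-party vendor
    governance_owner="Chief Underwriting Officer",
    last_fairness_test="2025-06-01",
)
print(record.model_id, record.vendor)
```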

Testing expectations are evolving toward outcome analysis. Guidance from the Division of Insurance, NAIC, and consultants describes programs in which insurers evaluate model outputs for patterns of disparate outcomes across protected classes or reasonable proxies, document the methodology, and remediate where justified. Life insurance rules in Colorado already require governance and risk management frameworks designed to identify and correct unfair discrimination with respect to race, and further testing regulations are in development.
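One simple outcome test compares approval rates across groups using an adverse impact ratio: the approval rate of each group divided by the rate of the most-favored group. The sketch below illustrates the arithmetic on invented data with imputed race labels; the 0.8 screening threshold is a convention borrowed from employment law, not a Colorado requirement.

```python
# Minimal outcome test: adverse impact ratio across imputed race groups.
# Data and group labels are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "imputed_race": ["w", "w", "w", "b", "b", "b", "h", "h"],
    "approved":     [1,   1,   1,   1,   0,   0,   1,   0],
})

rates = df.groupby("imputed_race")["approved"].mean()
reference = rates.max()                      # most-favored group as the benchmark
air = (rates / reference).rename("adverse_impact_ratio")
print(air)

# The 0.8 threshold is a common screening convention, not a legal standard.
flagged = air[air < 0.8].index.tolist()
print("groups flagged for review:", flagged)
```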

Documentation is the connective tissue between internal governance and external scrutiny. Commentators recommend that insurers maintain records that tie each AI system and external data source to a clear business purpose, describe key drivers and mitigation measures in accessible language, and map their controls to frameworks such as the NIST AI Risk Management Framework. Those records may be critical for asserting the rebuttable presumption of reasonable care under SB 24-205 if the deployment of a high-risk AI system is later challenged as discriminatory.
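Mapping controls to a framework can be as simple as a maintained crosswalk. The sketch below ties hypothetical controls to the four NIST AI RMF functions (GOVERN, MAP, MEASURE, MANAGE); the controls and the mapping itself are illustrative, not drawn from the framework or any regulation.

```python
# Hypothetical crosswalk from internal controls to the four NIST AI RMF
# functions. Only the function names come from the framework itself.
NIST_AI_RMF_CROSSWALK = {
    "board-approved AI governance policy": "GOVERN",
    "enterprise model and external-data inventory": "MAP",
    "annual disparate-outcome testing of underwriting models": "MEASURE",
    "documented remediation workflow for flagged models": "MANAGE",
}

for control, function in NIST_AI_RMF_CROSSWALK.items():
    print(f"{function:>7}: {control}")
```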

How AI Underwriting Disputes May Emerge

Formal AI-related enforcement in insurance is still in its early stages, yet the legal pathways are visible in existing laws and guidance. For Colorado carriers, one route involves market conduct examinations and Division of Insurance reviews focused on model governance and testing under SB 21-169 and its life insurance regulation. Those exams can ask for documentation of external consumer data sources, governance frameworks, testing methodologies, and remediation steps.

Another route involves the Attorney General’s new authority under SB 24-205. Once the Colorado AI Act takes effect on June 30, 2026, the statute will give the Attorney General power to investigate and enforce algorithmic discrimination provisions against developers and deployers of high-risk AI systems, including insurers. The Colorado Attorney General has launched a dedicated rulemaking process for the state’s Anti-Discrimination in AI Law and holds rulemaking authority over documentation requirements, risk-management programs, impact assessments, and the standards for rebuttable presumptions and affirmative defenses.

Private litigation can arrive indirectly. Policyholders may bring claims under state unfair practices or consumer protection statutes, arguing that AI systems or external data sources produced discriminatory outcomes. Plaintiffs and regulators can seek internal model inventories, testing reports, and governance records in discovery, and compare them against public statements and regulatory expectations. That prospect gives in-house counsel a concrete incentive to ensure that AI governance programs are not just aspirational.

Using the Delay Window Effectively

Colorado’s decision to postpone the operative dates of the AI Act to June 30, 2026 does not signal retreat from regulation. It simply creates a defined window in which insurers can align their SB 21-169 programs, NAIC-driven AI governance structures, and SB 24-205 obligations into a coherent whole. Commentaries on the delay emphasize that the short extension does not alter the substance of the Act, but it does provide time to build or refine documentation, testing, and oversight.

For in-house and outside counsel, that window is an opportunity to move AI underwriting from a technology project to a regulated function that receives the same level of attention as solvency oversight or privacy compliance. Carriers that can explain how their models work, why external consumer data is needed, how outcomes are tested for unfair discrimination, and how remediation decisions are made will be better positioned when regulators and courts inevitably begin to ask harder questions about AI-driven insurance underwriting.

This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.

See also: Lost in the Cloud: The Long-Term Risks of Storing AI-Driven Court Records
