On May 8, 2024, Colorado’s legislature enacted “An Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (SB205), a state law that comprehensively regulates the use of certain “Artificial Intelligence (AI)” systems.[1] The law is aimed at addressing AI bias, establishing a requirement of human oversight throughout the life cycle of AI systems, and requiring significant documentation around the use of AI. This blog post covers to whom the law applies, effective dates and penalties, important definitions, and initial steps companies should consider taking to prepare for complying with the law.

To whom does SB205 apply?

SB205 applies to any person doing business in Colorado who develops an “AI system” or deploys a “high-risk AI system” (each is discussed further below).[2] The law defines “deploy” as “use,”[3] meaning that SB205 applies to any company using a high-risk AI system, whether or not that system is consumer-facing. Developing an AI system as defined in the law also includes actions that “intentionally and substantially modify” an existing AI system.[4]

How is the law enforced?

SB205 explicitly excludes a private right of action, leaving enforcement solely with the Colorado Attorney General.[5] If the Attorney General brings an enforcement action relating to a high-risk AI system, a company benefits from a rebuttable presumption that it used “reasonable care” under the law if it complied with the section setting forth its obligations (§1702 for a developer, §1703 for a deployer), along with any additional requirements the Attorney General may promulgate.[6] For example, a developer facing an enforcement action related to the development of a high-risk AI system that could demonstrate it had the processes and documentation required by Section 6-1-1702 in place may benefit from a rebuttable presumption that it exercised reasonable care to protect consumers from risks of algorithmic discrimination. The law also provides companies with an affirmative defense against actions by the Attorney General if the company discovers the violation and takes corrective action, and also maintains a compliance program that meets certain criteria.[7]

How does the law work? Key definitions

SB205 contains key definitions that determine what specific steps companies must take to be in compliance with the law. Companies must be aware of what constitutes “algorithmic discrimination,” be able to assess whether their AI systems are “high risk,” and determine whether they are a developer, a deployer, or both.

“Algorithmic Discrimination” is defined as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors any individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”[8]

Further, the law’s main obligations attach to different AI systems based on their capabilities and uses. “High-Risk AI System” means “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.”[9] The law also defines “consequential decision” as “any decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, (a) education enrollment or an education opportunity, (b) employment or an employment opportunity, (c) a financial or lending service, (d) an essential government service, (e) health-care services, (f) housing, (g) insurance, or (h) a legal service.”[10] Note that the definition is subject to a series of exclusions, including use of AI in critical cybersecurity and information technology functions (e.g., firewalls, networking, spam filtering) or in providing information to consumers, provided the usage does not serve as a substantial factor in making a consequential decision relating to a consumer.[11]

Finally, companies will need to distinguish whether they are developers, deployers, or both:

  • Developers are “a person doing business in [Colorado] that develops, or intentionally and substantially modifies, an artificial intelligence system.”[12]
  • Deployers are “any person doing business in [Colorado] that deploys a high-risk artificial intelligence system.”[13] As mentioned above, “deploys” means “use.”[14]

Whether a company meets the criteria of either or both will be context-dependent and will influence both statutory and contractual considerations.

Five initial considerations to prepare for SB205’s Feb. 1, 2026, effective date

SB205’s provisions take effect Feb. 1, 2026.[15] By that date, all companies must implement a notice within consumer-facing AI systems, whether high-risk or not, alerting consumers that they are interacting with AI, unless that fact would be “obvious” to a reasonable consumer.[16]

If you or your customers do business in the state of Colorado, there are five key actions you should consider taking to prepare before February 2026:

  1. Determine whether you are a developer, deployer, or both. This may depend on the various types of and ways that your company uses AI.
  2. Determine whether you have a high-risk AI system as defined by the law. Because most of SB205’s substantive provisions apply only to high-risk systems, knowing whether your AI systems are covered will be crucial. You should also consider future use cases for AI systems that are not yet high-risk but may become high-risk depending on how they are deployed.
  3. Review SB205’s notice requirements. As mentioned above, certain consumer-facing AI systems must contain a notice within the system to the consumer that AI is present, effective Feb. 1, 2026, with limited exception.[17] In addition, there are multiple other required notices, some of which must be publicly available.[18]
  4. Review SB205’s impact assessment requirements. The law requires impact assessments in particular contexts that differ somewhat from data processing impact assessments that companies may already be conducting to comply with privacy laws.[19]
  5. Determine whether you need to implement a risk-management policy and program. SB205 requires deployers using high-risk AI systems to implement risk-management policies and programs pursuant to the law’s requirements.[20] Additionally, any company wishing to benefit from the affirmative defense provided by SB205 will need to have a satisfactory compliance program in place.[21]

[1] See S.B. 24-205, 74th Gen. Assemb., Reg. Sess. (Colo. 2024). Other states have regulated specific uses of AI or associated technologies; for example, California regulates interactions with bots, and Colorado already gives consumers opt-out rights from profiling. At the time of this writing, the bill has not yet been signed by Colorado’s governor.

[2] S.B. 24-205, Secs. 6-1-1701(6) & (7).

[3] S.B. 24-205, Sec. 6-1-1701(5).

[4] See S.B. 24-205, Secs. 6-1-1701(7) & (10)(a).

[5] S.B. 24-205, Sec. 6-1-1706(6).

[6] See, e.g., S.B. 24-205, Secs. 6-1-1702(1) & 6-1-1703(1).

[7] S.B. 24-205, Sec. 6-1-1706(3)(a).

[8] S.B. 24-205, Sec. 6-1-1701(1)(a). The definition also clarifies that “algorithmic discrimination” does not include uses related to testing AI systems for discrimination, “expanding applicant, customer, or participant” pools to increase diversity, or acts or omissions of private clubs as covered by 42 U.S.C. 2000a(e). Id.

[9] S.B. 24-205, Sec. 6-1-1701(9)(a).

[10] S.B. 24-205, Sec. 6-1-1701(3).

[11] S.B. 24-205, Sec. 6-1-1701(9)(b).

[12] S.B. 24-205, Sec. 6-1-1701(7). The law also defines “intentional and substantial modification.” S.B. 24-205, Sec. 6-1-1701(10).

[13] S.B. 24-205, Sec. 6-1-1701(6).

[14] S.B. 24-205, Sec. 6-1-1701(5).

[15] See generally S.B. 24-205.

[16] S.B. 24-205, Sec. 6-1-1704(2).

[17] S.B. 24-205, Sec. 6-1-1704(2).

[18] See S.B. 24-205, Secs. 6-1-1702(5), 6-1-1703(4), (5) & (7), and 6-1-1704.

[19] See, e.g., S.B. 24-205, Sec. 6-1-1703(3)(a) (making impact assessments a requirement for deployers). Note that the law also implies scenarios in which a developer would also conduct impact assessments. S.B. 24-205, Sec. 6-1-1702(3)(a).

[20] S.B. 24-205, Sec. 6-1-1703.

[21] S.B. 24-205, Sec. 6-1-1706(3)(b).