2024-2025 Global AI Trends Guide
On May 17, 2024, Colorado Governor Jared Polis signed the Colorado Artificial Intelligence (AI) Act (CAIA), the first broadly scoped U.S. AI law. Similar to the EU AI Act, the CAIA takes a risk-based approach and focuses on high-risk AI systems. It requires developers and deployers of such systems to use reasonable care to avoid algorithmic discrimination and to disclose specified information to stakeholders. Deployers must also conduct impact assessments, implement risk management plans, and provide consumers with a mechanism to appeal adverse decisions. The Colorado Attorney General has exclusive authority to enforce the CAIA and to adopt implementing rules. Developers and deployers can establish a rebuttable presumption that they used reasonable care to avoid algorithmic discrimination by complying with their respective obligations under the CAIA. The CAIA takes effect on February 1, 2026.
The CAIA’s core provisions impose obligations on developers and deployers of high-risk AI systems to avoid engaging in discrimination, disclose certain information to various stakeholders, and (for deployers) implement compliance and safety regimes.
The CAIA defines an “artificial intelligence system” as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.” The CAIA applies only to a “high-risk AI system,” which is “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.” A “substantial factor” is an AI-generated factor that helps make or can alter the outcome of a consequential decision, and includes “any use of an artificial intelligence system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make a consequential decision concerning the consumer.” A “consequential decision” is any decision that “has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of:” education, employment, financing or lending, essential government services, healthcare services, housing, insurance, or a legal service. “Consequential decision” is similar to the definition of “Decisions that Produce Legal or Similarly Significant Effects Concerning a Consumer” under the Colorado Privacy Act (profiling in furtherance of which is subject to an opt-out right under that law), except that that definition also includes “criminal justice,” and “essential goods or services.”
A “developer” is a person doing business in Colorado who develops or intentionally and substantially modifies an artificial intelligence system, including general-purpose and high-risk systems. A “deployer” is a person doing business in Colorado that uses a high-risk AI system.
The CAIA requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk AI system. “Algorithmic discrimination” is any condition in which the use of an artificial intelligence system results in unlawful differential treatment or impact that disfavors an individual or group of individuals based on a protected characteristic or classification. Developers and deployers can establish a rebuttable presumption that they have used reasonable care to avoid algorithmic discrimination by complying with the CAIA’s requirements and any implementing rules promulgated by the Colorado Attorney General.
Developers of high-risk AI systems must:
Disclose to deployers (1) a statement disclosing specified information about the high-risk AI system, including training data, data governance practices, known harms, and safeguards, and (2) information and documentation necessary to complete an impact assessment of the high-risk AI system;
Disclose to the public a statement (1) summarizing the types of high-risk AI systems that the developer has developed or intentionally and substantially modified, and (2) describing how the developer manages known or reasonably foreseeable risks of algorithmic discrimination in connection with each of these high-risk AI systems; and
Disclose to the Colorado Attorney General and known deployers any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended use of a high-risk AI system within 90 days after discovery or receipt of a credible report from the deployer that the system may have caused algorithmic discrimination.
Deployers of high-risk AI systems must:
Implement a risk management policy and program that specifies principles, processes, and personnel used to identify and mitigate algorithmic discrimination, and update it regularly throughout the product lifecycle;
Complete an impact assessment annually and within 90 days of any intentional or substantial modification;
Audit annually each deployed high-risk AI system to ensure that the system is not causing algorithmic discrimination;
Disclose to consumers subject to use of a high-risk AI system information about the high-risk AI system, including the purpose and functioning of the system and the nature of the consequential decision, before deploying the system to make decisions about consumers;
Inform consumers of their rights under the Colorado Privacy Act to opt out of the processing of personal data for profiling for decisions that produce legal or similarly significant effects concerning the consumer;
Notify consumers about whom a decision was made of (1) the reasons for the decision, the involvement of the high-risk AI system, and the type and source of the data processed; (2) the consumer’s right to correct inaccurate personal data processed by the high-risk AI system to make the consequential decision; and (3) the consumer’s right to appeal the decision for human review, if technically feasible;
Disclose to the public a statement summarizing the types of high-risk AI systems the deployer currently deploys, how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination related to these high-risk AI systems, and the nature, source, and extent of information collected and used by the deployer; and
Disclose to the Colorado Attorney General the discovery of algorithmic discrimination within 90 days of discovery.
Small employer exemption. Deployers with fewer than 50 full-time employees are exempt from the risk management program, impact assessment, and public statement obligations, but they remain subject to all other requirements, including the duty to exercise reasonable care.
Impact assessment interoperability safe harbor. If a deployer completes an impact assessment to comply with another relevant law or regulation, such impact assessment may be used to satisfy the CAIA’s impact assessment requirements.
Any person doing business in Colorado who deploys or makes available an AI system intended to interact with consumers must disclose to each consumer who interacts with the AI system that the consumer is interacting with an AI system, unless it would otherwise be obvious to a reasonable person. This provision applies to AI systems generally, not just high-risk AI systems.
The Colorado Attorney General has exclusive authority to enforce the CAIA, and the CAIA does not provide a private right of action. That said, the CAIA also amends Colorado’s unfair or deceptive trade practices law, C.R.S. 6-1-105, by defining violations of the CAIA in the course of business as a deceptive trade practice. Developers and deployers must respond within 90 days to any demand from the Colorado Attorney General for information, including risk management policies, impact assessments, or other required notices or documentation.
If the Colorado Attorney General commences an enforcement action under the CAIA, a developer or deployer can raise as an affirmative defense that it discovered and cured a violation through (1) feedback that the developer, deployer, or other persons encouraged others to provide; (2) adversarial testing or red teaming (as defined by the National Institute of Standards and Technology (NIST)); or (3) an internal review process, as long as the developer or deployer otherwise complies with NIST’s Artificial Intelligence Risk Management Framework, the International Organization for Standardization’s (ISO) ISO/IEC 42001 standard, another nationally or internationally recognized risk management framework, or a framework designated by the Colorado Attorney General.
The CAIA also grants the Colorado Attorney General the authority to issue regulations regarding the CAIA requirements on developer documentation; notice and disclosure requirements; risk management policies and programs; impact assessments; rebuttable presumptions; and affirmative defenses.
Organizations that develop or deploy AI systems will need to consider whether a given system might be classified as “high risk.” Both developers and deployers of high-risk AI systems will also need to weigh the similarities and differences between the CAIA and the EU AI Act to help ensure that their compliance programs are appropriately scoped.
The CAIA is the first broadly scoped AI law enacted in the United States. Our global, cross-practice AI teams are closely watching legislative and regulatory developments and can help companies navigate these evolving issues.
Authored by Mark Brennan, Katy Milner, Ryan Thompson, Sophie Baum, and Ambia Harper.