Jun 2, 2025

Colorado AI Act (SB 205): What Businesses Need to Know Before 2026

Author
Casey Bleeker, CEO & Co-Founder
Casey Bleeker testifying to Colorado Judicial Committee on SB 205

A few weeks ago, I had the opportunity to be featured in The Wall Street Journal discussing what is happening with AI regulation at the federal level.

While the federal landscape is constantly changing, it is still critical to stay aware of developments at the state and local level that will have significant impacts on how your organization adopts AI, including requirements to govern AI use across your workforce in order to avoid significant civil and criminal penalties.

One such jurisdiction we’ve been deeply involved with is my home state, the State of Colorado. Last year, I worked closely with other technology leaders within the state to try to influence Colorado’s AI regulatory act, known as Senate Bill (SB) 205.

About Colorado Senate Bill 205

Colorado SB 205, the state’s landmark artificial intelligence law, was signed into law by Governor Jared Polis on May 17, 2024. The bill requires companies that leverage AI (which is every company with users who have Internet access) to implement controls that protect Colorado citizens. Companies are expected to protect consumers from “reasonably foreseeable risks” that AI will make consumer-impacting decisions in discriminatory or biased ways. This means every business operating within the State of Colorado must disclose to all consumers the ways in which AI is leveraged by their employees, and the protections they have implemented to mitigate risks in their use of AI as an organization.

While an amendment to SB 205 was proposed in the most recent legislative session (SB 318) to attempt to reduce the compliance burden on companies operating within Colorado, it was killed in committee. With Colorado’s legislature reconvening in January 2026, it is unlikely SB 205 will be amended before the current enforcement date of February 1, 2026, so companies must be ready to comply by that date.

What this means for companies operating in Colorado

Under SB 205, any consumer risks that are expected to potentially arise from an organization’s AI use must be disclosed to consumers and to the Colorado Attorney General prior to deployment.  Companies must comply with the provisions of SB 205 by February 1, 2026 or face enforcement action from the Colorado Attorney General’s office.

Luckily, many Colorado technology organizations successfully lobbied together to include provisions in SB 205 that establish a rebuttable presumption of compliance for organizations that implement appropriate AI controls. By implementing risk management programs with appropriate governance controls, organizations receive a presumption of compliance that significantly reduces their compliance burden and enforcement exposure.

My thoughts on SB 205

I testified before the Colorado Senate when SB 205 was being debated in committee. I felt my obligation was, first and foremost, to ensure the bill did not create undue burden that would hamper innovation within the State of Colorado or block the adoption of critical technologies that allow Colorado companies to be competitive in the age of AI.

I identified two major challenges with SB 205 that will impact organizations operating within Colorado. Unfortunately, neither was resolved before the bill passed:

  1. The definition of an “Artificial Intelligence System” is concerningly broad. It is defined as any system that generates an output based on an input, which could describe an Excel spreadsheet formula, an algebraic equation, or a toaster. Without a clear definition of what counts as “AI,” companies do not know where compliance with SB 205 properly applies, which increases regulatory burden and legal compliance risk.
  2. The definition of a “High Risk Artificial Intelligence System” is based upon how an “Artificial Intelligence System” is used. How a tool is used is often determined by the consumer or end user of that tool, or, in this bill, the “deployer.” That means employers are now responsible for governing how their employees leverage any AI-powered tool, from ChatGPT to Grammarly, whether or not it was intended to be used as a “High Risk” AI solution.

You can listen to my full testimony here:

Preparing for AI compliance requirements like SB 205 and more

While we all hope that SB 205’s requirements are eased in the 2026 legislative session, it is unlikely that will happen before the current enforcement date of February 1, 2026. 37 states have enacted AI regulations with similar requirements, in addition to the EU AI Act and many regulations throughout the APAC region. Orrick has an incredible resource in its AI Law Center, where it actively tracks these regulations.

To remain compliant with SB 205 and many of these emerging regulations, organizations should identify how they intend to gain visibility into employees’ use of AI solutions, and ensure that consumers’ protected-class data is not exposed to AI models, whether ChatGPT, DeepSeek, Claude, or internally deployed models on platforms such as AWS Bedrock, OpenAI on Azure, etc.
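For teams sketching out what that kind of control could look like in practice, here is a minimal, hypothetical example of a pre-send check that logs each employee AI request for visibility and redacts obviously sensitive values before a prompt reaches any model. The pattern list, function name, and redaction behavior are illustrative assumptions only; they are not SurePath AI’s implementation and not a legal definition of protected-class data under SB 205.

# Hypothetical sketch: a naive pre-send check placed in front of any AI provider
# (public or internally hosted) to log employee AI use and redact obviously
# sensitive values. Patterns are illustrative, not a legal standard.
import re
import logging

logging.basicConfig(level=logging.INFO)

# Example patterns an organization might choose to keep out of AI prompts.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def review_prompt(user: str, prompt: str) -> str:
    """Log the request for visibility, then redact any flagged values before the
    prompt is forwarded to a model. A stricter policy could block instead."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    logging.info("AI request by %s; flagged: %s", user, findings or "none")
    redacted = prompt
    for name in findings:
        redacted = SENSITIVE_PATTERNS[name].sub(f"[REDACTED {name.upper()}]", redacted)
    return redacted

# Example: the SSN and email are replaced before the prompt leaves the organization.
print(review_prompt("jdoe", "Summarize: customer SSN 123-45-6789, contact jane@example.com"))

Whether a real deployment redacts, blocks, or routes such requests for review is a policy decision; the point is simply that visibility and data controls sit in the request path before any model sees the prompt.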

We didn’t build SurePath AI with the intention of enforcing overly burdensome legislation, and we actively advocated against the challenges SB 205 introduces. We built SurePath AI to secure your most sensitive data from third parties and to help you safely adopt AI broadly across your organization. Yet as AI regulations continue to emerge, we can help. SurePath AI makes it simple to gain visibility into and transform your use of AI, so your organization can remain compliant with regulations like SB 205 without decreasing the adoption of AI solutions that are so critical to remaining competitive in today’s market.

Disclaimer: This post does not serve as legal advice. To discuss the potential legal implications of SB 205 or other AI regulations specific to your business, please confer with your own legal counsel. We are also happy to refer you to the experts at industry leading firms we partner with around AI governance issues.