Colorado AI Law Catapults AG Weiser to National Policy Influence

May 22, 2024, 9:00 AM UTC

Call it the Denver Effect.

Colorado is pushing the boundaries of AI regulation in the US, similar to how the European Union created a framework for data privacy protections that spread across the globe—a phenomenon known as the Brussels Effect.

A lot comes down to Colorado Attorney General Phil Weiser (D) and how he will craft regulations implementing the state’s first-of-its-kind law (S.B. 205) targeting discrimination by high-risk AI systems. He’ll face pressure from tech companies, industry groups, consumer advocates, and fellow public officials over the upcoming rulemaking process.

“The Colorado attorney general has an incredibly important role to play here in setting the rules of the road for the use of high-risk AI in Colorado,” said Grace Gedye, a policy analyst at Consumer Reports. “Whatever the Colorado attorney general does in implementing this law will have reverberations outside of Colorado.”

That will likely make Weiser a national figure on AI policy across a range of matters covered by the law, including education, employment, finance, government services, health care, housing, insurance, and legal services.

How the Law Works

The legislation, signed last Friday by Gov. Jared Polis (D), immediately became the most ambitious and far-reaching AI law in the nation.

It takes effect on Feb. 1, 2026, and will require AI developers and those using the tools to take steps to prevent algorithmic discrimination by “high-risk” systems, with many of the details left to the rulemaking process.

Developers must conduct impact assessments, provide information on training data to their customers, and create a public inventory of their AI products.

Those using the technology—called “deployers” in the law, which borrows some terminology from the EU AI Act—face requirements of their own. These include using “reasonable care to protect consumers” from biased decision-making. They also must have risk management plans and conduct annual impact assessments detailing how they are using the technology.

People affected by a “consequential decision” made with a deployer’s system would have the right to demand information on how the system reached its determination.

Work Ahead for Weiser

The law does not give Colorado residents a right to sue; instead, it relies on the attorney general for enforcement.

Guidance will “put more meat on the bones” of the law on matters including risk management policies and programs, rebuttable presumptions, and affirmative defenses, said Cassandra Gaedt-Scheckter, a partner at Gibson, Dunn & Crutcher in Palo Alto, Calif.

“Novel requirements are likely going to need the most explanation, such as the documentation and requirements for developers,” she added.

Rulemaking will likely have to tackle some “significant questions” that remain in the law, according to James McPhillips, a Washington-based partner in Clifford Chance’s Global Tech Group. He pointed to a requirement for AI developers to disclose any “known or reasonably foreseeable risk” of algorithmic discrimination.

“What constitutes a ‘known or reasonably foreseeable risk,’ and what type of disclosure is required?” McPhillips said. “Notably, trade secrets are protected from disclosure, but what happens if certain trade secrets are at the core of the cause of the algorithmic discrimination?”

Data Privacy Model

Weiser, a former deputy assistant attorney general in the US Justice Department’s Antitrust Division, was elected Colorado attorney general in 2018 and reelected in 2022 to a term-limited final term that runs through the end of 2026. He has said it is too early to know which issues his office will target in guidance implementing the law.

“There’s going to be a thoroughgoing process to help us become smarter,” he said Monday in an interview. “We’re at the very beginning of the very beginning.”

His rulemaking approach will resemble the one he used to write guidance for a 2021 privacy law, setting up listening sessions and soliciting stakeholder feedback, Weiser added. That process concluded last year.

“We’re going to look for wisdom wherever we can find it,” he said of the AI law.

Weiser also said he would work to educate businesses and consumers about the law. “Our goal is compliance,” he added.

Colorado state Senate Majority Leader Robert Rodriguez (D), who sponsored the bill in the upper chamber, said Weiser’s experience on tech issues and his role in implementing Colorado’s privacy law were key reasons the legislation gives the attorney general such a prominent rulemaking role on AI.

Other Parties in Conversation

Parties ranging from ordinary people to the Federal Trade Commission could influence the implementation of the law and how Weiser enforces it.

Colorado Rep. Brianna Titone (D), who sponsored the legislation in the state House, said public input would include a task force established by a separate law (H.B. 1468) to guide the state on AI policy.

“It’s not only going to be up to the attorney general,” Titone said of implementing the new law. “It’s going to be a combination of people in the governor’s office and the Legislature, the task force, and technology groups and partners in the state.”

Weiser could also coordinate his efforts with the FTC, given the commission’s own work confronting algorithmic discrimination and deceptive trade practices, Tatiana Rice, deputy director for US legislation at the Future of Privacy Forum, said in an interview.

Courts could also have a say, said McPhillips.

“Laws inherently carry the risk of ambiguity, and we are likely to see some of this ambiguity tackled by the courts once the law goes into effect,” he said.

Setting a National Standard?

For now, a company designing its AI governance policies in the US would do well to look to Colorado’s law, several lawyers said.

“The nature of the requirements is such that it would be next to impossible to comply as to only Colorado use-cases if the AI system is used nationally,” said Rachel Marmor, a partner at Holland & Knight in Boston who focuses on data strategy and privacy.

For example, the Colorado law requires companies to complete impact assessments of their high-risk AI systems. The assessments will likely include discussion of the company’s national practices. If the company recognizes and takes steps to mitigate a bias risk only in Colorado, other states or the Federal Trade Commission could view its action as an unfair trade practice, she said.

But companies should watch where other states land before modeling their approach too closely on Colorado’s law, McPhillips warned.

“For businesses that operate across multiple states, operating a uniform approach that aligns with the most developed and stringent AI laws could simplify compliance efforts,” McPhillips said. “The big question, however, is how to identify a uniform approach. While Colorado’s new law could present the first opportunity at uniformity, other state legislatures appear to be hot on the heels of Colorado.”

Similar measures failed to pass legislatures in Connecticut and Washington this year, though bills addressing algorithmic bias could move in California before its session ends in August.

Industry Pushback

Industry groups are already pushing lawmakers to amend the law to make it more business-friendly, after Polis signed the bill “with reservations” and urged the Legislature to revisit the measure in the two years before it takes effect.

Polis said in his signing letter that he was concerned about the compliance requirements and that a patchwork of state-level AI regulations could hamper innovation. He also urged federal action on AI policy.

Some industry groups, such as the Consumer Technology Association, hope the Colorado Legislature or Congress will act to make it easier for companies to comply with AI regulations in Colorado and beyond.

“There are significant issues with what Colorado enacted and there’s certainly time to get better policy in place,” Doug Johnson, vice president of emerging technology policy at the group, said in an interview.

Weiser said he will focus on making the Colorado law as easy to follow as possible.

“We recognize in Colorado that having a bunch of different state rules—a patchwork quilt, if you will—is suboptimal to having federal rules that could be enforced by state attorneys general and others,” he said. “Right now in both privacy and in AI, we don’t have federal rules or any federal system, we have different state systems, which means we at the state level need to work hard to make sure our approaches are all interoperable.”

To contact the reporters on this story: Zach Williams at zwilliams@bloombergindustry.com; Isabel Gottlieb in New York at igottlieb@bloombergindustry.com

To contact the editors responsible for this story: Gregory Henderson at ghenderson@bloombergindustry.com; David Jolly at djolly@bloombergindustry.com
