States Target AI Hiring Tools as Federal Freeze Attempt Fails

July 9, 2025, 9:05 AM UTC

Businesses using AI-powered tools in personnel decisions must navigate a small but growing hodgepodge of state and local regulations, after a proposed moratorium on those laws fell short in Congress.

Statewide restrictions governing companies’ use of artificial intelligence are set to take effect in California on Oct. 1 and Colorado on Feb. 1, 2026. They add to a handful of mostly narrower existing laws, such as those in New York City and Illinois.

The measures focused on automation in employment decisions make up one part of the state AI law universe, which stretches from election-related deepfakes to digital replicas of performers’ voices. The US House passed a 10-year moratorium blocking states from regulating AI technology, but the US Senate stripped it from Republicans’ budget bill that President Donald Trump signed July 4.

The failed attempt at reserving AI regulation to the federal government is likely to spur more state legislatures to restrict employers’ use of the tools, said Melanie L. Ronen, attorney at Stradley Ronon Stevens & Young LLP in California.

“We were seeing a growing interest in state regulations even under the Biden administration, where there was some movement toward implementing” federal standards, she said. “I only see that increasing in the absence of any movement by the federal government.”

The Trump administration’s retreat from disparate impact theory in prosecuting discrimination cases could compound this trend, as some policymakers look to explicitly authorize unintentional discrimination claims at the state level to help target AI-powered bias, Ronen said.

A bipartisan mix of governors and state lawmakers opposed the moratorium before senators stripped it from the budget bill. Some state legislators said Congress would violate the Tenth Amendment by enacting a sweeping preemption without passing federal standards.

“Imposing a broad moratorium on all state action while Congress fails to act in this area is irresponsible and deprives consumers of reasonable protections,” 40 state attorneys general wrote to congressional leaders.

The outcry indicates an interest from state officials in regulating aspects of AI, although it’s not clear how widely they’ll target employment decision-making tools specifically.

Growing Patchwork

Colorado’s SB 205 imposed the most expansive requirements thus far on AI technology developers and employers. Connecticut, Massachusetts, New York, and Washington state lawmakers considered similar bills. While details vary, the measures generally require public disclosures to job applicants and consumers being evaluated by AI along with bias assessments of the tools.

The moratorium debate “definitely made state legislators take notice of the issue more,” and proved “just how intransigent” the tech industry’s opposition to AI regulation is, said Matt Scherer, senior policy counsel at the Center for Democracy & Technology.

The net result is “a more welcoming climate for AI regulatory proposals going forward,” among state policymakers, he said.

Companies including Alphabet Inc., Meta Platforms Inc., Microsoft Corp., and OpenAI Inc. and industry associations have lobbied against state legislative proposals. The industry likewise advocated for the federal moratorium.

The Colorado attorney general’s office is expected to offer more details on the state’s requirements through regulations or guidance, after the legislature and Gov. Jared Polis (D) failed to agree on revising or delaying the law. The AG’s office declined to comment.

California’s new civil rights regulations, which recently won final approval, clarify that automated decision tools can cause illegal discrimination, including disparate or “adverse impact”; require employers to keep records on these decisions for four years; and specify that certain games and tests that reveal applicants’ disabilities count as illegal medical inquiries.

Texas enacted a sweeping AI measure last month, but with fewer private-sector business obligations than Colorado’s. Texas’ law (HB 149), effective in 2026, bans intentionally discriminatory AI uses, including in employment decisions, but specifies disparate impact alone doesn’t equal bias.

Utah and Minnesota each enacted laws with AI-related disclosure or opt-out requirements. Minnesota’s data privacy law taking effect July 31 requires that businesses let consumers opt out of automated processing of personal data that influences significant decisions including employment and housing.

Compliance’s ‘Invisible Hand’

The moratorium’s failure will disappoint some employers that hoped to escape the growing number of state-level AI laws, said Mark Girouard, attorney with Nilan Johnson Lewis PA in Minnesota.

“It’s going to mean a patchwork of AI regulations that employers are going to have to deal with,” he said.

But it’s too early to know how much employers will feel the practical effects of state measures. The most expansive, Colorado’s law, hasn’t taken effect, and narrower statutes, such as Illinois’ limits on AI-evaluated video job interviews, have yielded few if any investigations or penalties.

New York City’s law requiring bias audits of automated employment decision-making tools was so narrowly crafted—covering only those that largely or fully replace human decision-making—that most employers and tech developers determined it didn’t apply to them.

“From what we’ve seen so far, there hasn’t been that much enforcement at the state level anyway,” said Alice H. Wang, an attorney at Littler Mendelson PC in California. “They’re still an invisible hand that’s pushing companies and vendors to be in compliance.”

The Colorado law sets out common themes many state proposals have mirrored, such as transparency notices and bias assessments, she said.

Employers can likely satisfy many states’ laws by following Colorado’s requirements across the US, Girouard said, but portions might be too burdensome to apply universally. For example, Colorado will require businesses to give job applicants—and consumers applying for housing, credit, and other services—a chance to appeal AI-assisted decision tools’ rejections and request human review.

“It’s going to be difficult, considering the speed at which recruiting needs to happen,” Girouard said.

While AI-specific measures bring compliance challenges, the bigger concern for employers remains liability risk under federal and state anti-discrimination laws covering all employment decisions, including those assisted by AI, Ronen said.

“The focus continues to be to ensure that when AI is used, it’s not operating in a discriminatory manner,” she said, “regardless of the nuanced legislation.”

To contact the reporter on this story: Chris Marr in Atlanta at cmarr@bloombergindustry.com

To contact the editors responsible for this story: Rebekah Mintzer at rmintzer@bloombergindustry.com; Alex Ruoff at aruoff@bloombergindustry.com
