A Trump order denouncing disparate impact bias claims. EEOC’s moves to shift the agency’s workplace discrimination enforcement. Similar Justice Department policy changes. And even the new Texas artificial intelligence law taking effect Jan. 1.
At each turn this year, employment lawyers have encountered another policy pronouncement offering businesses a false sense of security about assessing their AI-powered hiring tools for disparate impact, otherwise known as unintentional discrimination.
Their constant refrain over the past year, and heading into 2026: The law hasn’t changed, courts still recognize disparate impact claims, and employers can still be sued over them.
“There’s been so much change since January,” said Eric J. Felsberg, a management-side attorney at Jackson Lewis PC. “A lot of employers, and I’ve heard this from them, have said we don’t have to worry about this anymore. But not quite, it’s still the law.”
Some states have pushed back on the Trump administration by clarifying that they recognize disparate impact claims. As 2025 draws to a close, New Jersey adopted new civil rights regulations and New York amended its human rights law.
Disparate impact theory imposes liability on companies using neutral selection procedures that negatively affect workers based on protected traits such as race and sex. The US Supreme Court recognized such claims as valid, and Congress explicitly added disparate impact to Title VII of the 1964 Civil Rights Act in 1991. Many states also recognize the claims.
Conservative legal advocates have long criticized the theory as penalizing businesses for unintentional statistical disparities and pressuring them to implement affirmative action-type hiring policies.
“Disparities do not (and should not legally) imply discrimination per se,” attorney Jonathan Berry wrote in the sprawling Project 2025 document prepared in anticipation of Trump’s return to the White House. Berry, who now serves as the solicitor of labor, got his wish for an executive order barring pursuit of disparate impact claims by federal agencies, but not the congressional action he urged.
Workplace AI bias litigation is in its infancy, with employment lawyers watching a handful of bellwether cases. Disparate impact is the likely path for these cases—though intentional bias, or disparate treatment, claims can’t be ruled out.
That makes the Equal Employment Opportunity Commission’s decision to drop disparate impact investigations consequential: the agency is now less likely to pursue AI bias cases.
Still, revised federal “enforcement priorities don’t mean the government has conveyed any kind of legal immunity on companies that are implementing the tools,” said Nathaniel Glasser, an attorney at Epstein Becker & Green PC.
Mixed Messages
States have taken varied approaches to disparate impact and AI regulation, sending mixed messages to employers on navigating legal risks.
California passed civil rights regulations effective in October and privacy regulations set to go live Jan. 1 that pressure employers to audit AI tools for bias. A Colorado law with more explicit audit and transparency requirements will take effect June 30, if lawmakers don’t revise or delay it again.
New Jersey’s new rules, which affirm that the state’s anti-bias law covers disparate impact, also offer examples of unintentional bias that can result from AI tools.
Texas, on the other hand, passed an AI discrimination law imposing limited compliance duties while including language that attorneys say also risks giving employers a false sense of security.
The Texas law bans using AI with intent to discriminate, “but it expressly provides that a ‘disparate impact’ is not sufficient to demonstrate intent,” said Robert Brown, a partner at Latham & Watkins LLP.
But federal and Texas civil rights laws still recognize disparate impact claims for workplace bias, he added.
Further complicating matters for employers is Trump’s Dec. 11 executive order seeking to limit states’ authority to regulate AI, Felsberg said.
Employers might be thinking, “I don’t have to worry about the state laws because Trump got rid of them,” he said, but that’s a false assumption.
The order calls for the Justice Department to challenge state AI laws as overly burdening interstate commerce, violating employers’ First Amendment rights, or running afoul of other federal laws—likely suing one state at a time to see which measures can be blocked in federal court.
“We’re going to have to wait and see, but right now the waters are very muddy,” Felsberg said.
Not ‘Off the Hook’
Despite federal policy changes, large employers using AI in hiring are diligently evaluating bias risk, said Danielle Ochs, a shareholder with Ogletree Deakins in San Francisco.
There’s some geographic variation, though, said Jenn Betts, office managing shareholder for Ogletree in Pittsburgh. Businesses whose operations are limited to regions without AI bias laws, such as the South, are less motivated, she said.
“It can be more difficult to get their internal stakeholders to buy into” investing in those assessments, Betts said.
Historically, employers have been expected to routinely assess their tools or methods of job candidate evaluation, under the Uniform Guidelines on Employee Selection Procedures that the EEOC adopted in 1978, Felsberg said. If they detect statistical disparities, they should hire an outside consultant to validate the tool as job-related and consistent with business necessity.
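For context, the Guidelines’ traditional rule of thumb for spotting such disparities is the “four-fifths rule”: a selection rate for any group that falls below 80% of the highest group’s rate is generally treated as evidence of adverse impact. A minimal sketch of that arithmetic, using purely illustrative group labels and numbers, might look like this:

```python
# Minimal sketch of the "four-fifths rule" disparity check described in the
# Uniform Guidelines on Employee Selection Procedures (29 CFR 1607.4(D)).
# Group names and counts below are illustrative, not from any real audit.

def selection_rates(applicants: dict[str, int], hires: dict[str, int]) -> dict[str, float]:
    """Selection rate per group: hires divided by applicants."""
    return {group: hires[group] / applicants[group] for group in applicants}

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest group's selection rate to the highest group's."""
    return min(rates.values()) / max(rates.values())

applicants = {"group_a": 200, "group_b": 150}
hires = {"group_a": 60, "group_b": 27}

rates = selection_rates(applicants, hires)   # group_a: 0.30, group_b: 0.18
ratio = adverse_impact_ratio(rates)          # 0.18 / 0.30 = 0.60

print(f"Selection rates: {rates}")
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths threshold used as a rule of thumb
    print("Below 0.8: generally treated as evidence of adverse impact.")
```

A ratio below 0.8 is a screening signal, not automatic liability; under the Guidelines it triggers the kind of validation step Felsberg describes.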
With AI tools, he said, the best practice is to validate before deploying them rather than waiting for signs of disparate impact.
That can be expensive and time-consuming, he said, and it isn’t clear whether those historical assessment models will satisfy audit expectations of Colorado, California, and New York City AI laws.
Employers should expect more workers will bring AI-related bias claims into court as the public becomes more familiar with the technology and how businesses are using it, said Shelby Leighton, senior attorney at Public Justice.
The same goes for agency enforcement, even if that doesn’t include the EEOC for now, she added.
“In a lot of states, there will continue to be robust investigations,” said Leighton, who helped file a bias complaint this year with Colorado’s civil rights agency against Intuit and video interview technology provider HireVue. “It would be a mistake for employers to think they’re off the hook.”
