How to Use AI for Annual Job Reviews While Reducing Bias Risks

December 29, 2025, 10:15 AM UTC

Businesses that allow managers to use artificial intelligence tools for workers’ performance reviews should include human oversight among the safeguards they adopt to reduce the risk of job discrimination lawsuits.

More companies are exploring AI software, like algorithmic analytics and generative AI-assisted writing, to craft employment reviews. JPMorgan Chase & Co. managers, for example, use an internal chatbot to assist with evaluations, but follow guidelines warning that the technology isn’t a substitute for human judgment and can’t be used to assign performance scores or make promotion decisions.

As with the adoption of AI in hiring, attorneys warn that incorporating those tools into performance management without guardrails opens the door to discrimination claims. Those risks are heightened after recent US Supreme Court precedent made it easier for workers to bring lawsuits alleging bias.

“Performance evaluations sit at the center of so many other employment decisions,” impacting promotions, compensation, workforce reductions, and bonuses, said Lauren Hicks, a management-side attorney at Ogletree Deakins.

If the input into the performance evaluation is flawed, then every subsequent employment decision “inherits that bias,” she said.

Discrimination Risks

Using analytical tools to feed data into performance evaluations isn’t necessarily new, but large language models and chatbots can now generate “fully fleshed out” reviews in “relatively fluid English prose,” said Matt Scherer, senior policy counsel for workers’ rights and technology at the Center for Democracy and Technology.

Risks emerge when these tools aren’t simply used to gather information and expedite a manager’s review process, but instead to offload human judgment over a worker’s performance to a machine, he said.

An AI tool may not disregard biased feedback or unfair criticism the way a human manager could.

“Part of the reason it’s worrying to offload the actual evaluation and drawing conclusions task to machines is that they don’t have that ability to exercise judgment and distinguish between what is fair and what is not, what is biased and what is not, or even what is relevant and what is not,” Scherer said.

Issues with AI can also be rooted in the data sets the systems are trained on, attorneys said.

If a company’s historical reviews for workers reflect gendered language, or harsher scoring for employees of a particular race, the AI can absorb that and may replicate it more consistently than a human would, Hicks said.

“AI may or may not create bias on its own, but either way it can scale it significantly,” Hicks said.

Human Presence

Companies can lower risks posed by AI in performance reviews by maintaining a human presence throughout the process, attorneys said.

Employers should begin with human-generated content and use AI to augment it, rather than giving an AI large sets of unorganized notes and having the system generate a full review from scratch, said Fisher Phillips partner David Walton.

It’s not entirely clear whether companies are using AI as a tool for managers to incorporate into performance reviews, or whether they’re leaning more heavily on the technology to make judgments.

If a company is investing in AI performance review tools, though, it’s likely buying systems to offload part of the managerial workload, Scherer said.

Businesses should also audit their AI systems to ensure they don’t have a disparate impact on workers based on race, sex, or other protected characteristics, attorneys said.

If a certain feature of the tool, or the weight assigned to criteria used to evaluate workers, could cause a disparate impact, it should either be adjusted or eliminated altogether, Walton said.
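As a rough illustration of what such an audit might involve, the sketch below applies the EEOC’s “four-fifths” rule of thumb to hypothetical review outcomes. The data, group labels, and outcome definition are all invented for the example; real audits would use larger samples, statistical significance testing, and counsel’s guidance, and this is not drawn from any vendor’s tooling.

```python
from collections import defaultdict

# Hypothetical audit sketch: apply the EEOC "four-fifths" rule to
# AI-assisted review outcomes. All data below is invented for illustration.

# Each record: (protected-group label, whether the worker received a
# favorable outcome, e.g., a review score at or above a promotion cutoff).
reviews = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Compute each group's favorable-outcome rate."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        favorable[group] += ok
    return {g: favorable[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Divide each group's rate by the highest group's rate.

    Under the four-fifths rule of thumb, a ratio below 0.8 flags a
    potential disparate impact worth investigating.
    """
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

rates = selection_rates(reviews)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "POTENTIAL DISPARATE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In this toy data set, group_b’s favorable-outcome rate is a third of group_a’s, so the 0.8 threshold flags it; in practice, such a flag would prompt review of the tool’s features and criteria weights rather than serve as a legal conclusion on its own.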

Although there are risks, companies should still embrace the use of AI tools with protections in place, Walton said. He likened it to the introduction of cars, which initially didn’t have sophisticated brakes.

“Companies do need to use AI but need to put guardrails around it,” he said. “If you don’t use AI, you’re going to fall behind the competition.”

To contact the reporter on this story: Rebecca Klar in Washington at rklar@bloombergindustry.com

To contact the editors responsible for this story: Jay-Anne B. Casuga at jcasuga@bloomberglaw.com; Alex Ruoff at aruoff@bloombergindustry.com
