- Automated tools increasingly taking hold
- Congress, states moving to curb pervasiveness
The rise of artificial intelligence in tracking and evaluating workers for employment decisions is prompting some lawmakers to act to curb its power in the workplace.
The widespread use of these tools—78% of employers in a 2021 ExpressVPN survey reported using the technology—has drawn increasing scrutiny on the Hill and in a growing number of state legislatures. Federal lawmakers introduced measures last month to regulate AI in the workplace, while states like California, New York, and Washington have moved to limit the way large warehouse operators such as Amazon monitor and enforce productivity quotas.
“People think of AI and they get scared. They think Terminator. They think Skynet,” said David Walton, chair of Fisher & Phillips LLP’s artificial intelligence team. “It’s a sexy thing to say that you’re going to try to protect people from it, and so that’s why it’s garnering more attention and you’re starting to see these laws now.”
‘Dystopian Work Environment’
Employers have already been tracking workers—a trend that grew amid the Covid-19 pandemic—with plenty of legal leeway. The 2023 AI boom, however, has sparked calls for more guardrails on employers' reliance on "robots" to make job decisions.
New York City passed a first-in-the-nation law in July tackling AI’s role in hiring and promotions by requiring companies to conduct an independent bias audit of their automated tools. AI observers say it’s a blueprint for additional efforts at the state and federal level.
Since its passage, lawmakers in Albany have introduced legislation that would restrict how much monitoring employers can engage in, and ban AI-only employment decision-making.
Bills pending in the District of Columbia and New Jersey legislatures also aim to regulate employers’ use of algorithms that could perpetuate discrimination in hiring and firing. In the Massachusetts statehouse, a bill titled “an act preventing a dystopian work environment” targets those tools in addition to setting limits on companies’ electronic monitoring of employees. Each of those bills is awaiting committee review.
At the federal level, Pennsylvania Democrat Sen. Bob Casey's Stop Spying Bosses Act and No Robot Bosses Act would require employers to disclose their monitoring activities and prohibit data collection during employees' off-duty time. The bills, both introduced in Congress for the first time, would also mandate that a human review every employment decision.
DOL Privacy Division
Both bills would establish a Privacy and Technology Division at the US Labor Department, as well. Currently, the Equal Employment Opportunity Commission has been the primary agency in charge of curbing AI employment discrimination, and the new DOL division would enforce the surveillance guardrails for workplace issues in general.
“This is a great step,” said Merve Hickok, president and research director at the Center for AI & Digital Policy. “But it still requires some improvements in challenging the underlying assumptions. It accepts that worker surveillance works, that these tools work in the first place and that it’s just a matter of having disclosures, and period of testing, and training, etc.”
While a path to passage for Casey's legislation is narrow, elements of his bills could potentially be incorporated into Senate Majority Leader Chuck Schumer's broader push for AI legislation.
While it's good that lawmakers are taking AI seriously, this behind-the-curtains approach is a "concern" for Hickok, who said she's more optimistic about the states, which are moving more quickly than Congress.
“The states and even some local jurisdictions like New York City do not have patience anymore to wait for the federal government,” she said.
Worker Monitoring
Software that tracks workers' phone screens and cameras is widespread, particularly at app-based companies, said Hickok. These tools take random screen and camera shots during the day and can expose workers performing personal activities like using the toilet or checking their bank accounts, she added.
AI software company RemoteDesk, for example, offers webcam monitoring software for call centers and law firms to track a worker’s behavior in front of the computer.
Many workers don’t know they are so closely watched by their employers, and most oppose AI surveillance in the workplace, according to an April Pew Research survey. But as long as workers are using the employer’s property—a laptop, VPN, or an app—they can be monitored, said Walton from Fisher & Phillips.
Employers should be more transparent about their monitoring practices and make clear that privacy isn't a given in the workplace, even when they aren't legally required to do so, he said.
“The most important thing is to defeat an expectation of privacy,” Walton added.
When it comes to existing guardrails, the National Labor Relations Board restricts how much employers can monitor workers, particularly around union organizing activity. But it isn't black and white, Walton noted: what's prohibited isn't necessarily the monitoring itself, but rather the action an employer takes based on it.
“You can’t monitor for the purpose of finding out information about organizing activity. If you happen to find that information as part of other monitoring, well, then you better not act on it,” he said.
AI is also used to set work schedules and productivity goals. Companies like Prodoscore use algorithms to score workers’ output based on their activity in office programs, chat tools, phone systems, and more.
Those AI-generated targets leave no room for nuance, however, said Amanda Ballantyne, director of the AFL-CIO Technology Institute.
“There’s a real loss of value of the expertise that workers bring to work of how to do their jobs when these systems are imposed. So it’s de-skilling, but it’s also really undervaluing what’s human about work,” Ballantyne said.
Benefits of AI
But it’s not all doom and gloom for other AI observers.
Sen. Bill Cassidy (R-La.) noted in a white paper that while AI monitoring could lead to deteriorating working conditions and discrimination in hiring, it also has the potential to make the workplace better for many people. Lip-reading and facial recognition tools, for example, can increase workplace access for disabled employees, he said.
While the lack of transparency from employers on their monitoring activities is “concerning,” the use of automated tools in human resources departments could actually make things fairer, said Sam Ransbotham, a professor of business analytics at the Boston College Carroll School of Management.
“In the last two millennia, we’ve had humans in charge of human resources and honestly, we’re not doing a great job,” said Ransbotham, who calls himself an “AI optimist.”
Ransbotham said it was humans, not machines, who marginalized populations like women and workers of color, and that with time these tools can be shaped to be fairer and to leave more room for nuance and human intuition.
“I think that we ought to give these machines a pass for at least the first couple of decisions they make,” he said.
—with assistance from Chris Marr