Artificial intelligence's impact on the workplace is starting to draw the attention of congressional lawmakers at a time when some employment attorneys are sounding alarms about the need for legislation.
The House Education and Labor Committee plans to hold hearings on machine learning’s impact on workers and their jobs after Congress returns from recess in September.
However, while a hearing is usually a precursor to legislation, employers using AI-based tools and tech companies developing those programs probably don't need to worry about new bills anytime soon. For now, the focus on Capitol Hill remains on understanding how artificial intelligence could affect workers.
The rise of algorithms and machine learning technology is already changing the way people work. It’s also raising concerns about the widespread elimination of jobs and questions about privacy, inherent bias, and how businesses make decisions that lawmakers are still trying to wrap their heads around.
“Congress is not seriously moving the ball. There is no national call to action,” said Michael Lotito, co-chair of Littler Mendelson’s Workplace Policy Institute. He is also the co-founder of the firm’s future of work group.
Lawmakers may be reluctant to get behind specific legislation, but the ball has started rolling already with different proposals making the rounds in Washington. That includes a proposal from one Silicon Valley attorney who says it is time for lawmakers to act. Meanwhile, a proposal for regulating the use of artificial intelligence in lending decisions could also serve as a blueprint for tackling potential algorithm bias on the job.
Employment Attorney Shops AI Bill
“AI is going to impact everyone’s life. And if that’s not serious enough to properly legislate, I don’t know what is,” said Newman, the Silicon Valley employment attorney.
So far, he’s had little success getting lawmakers and businesses on board with a proposal to address those issues. Newman is shopping draft legislation that proposes far-reaching regulations on AI and how it’s implemented in the U.S.
The legislation proposes the creation of an AI regulatory agency that would have the authority to block certain “misuses” of technology. The proposal would also establish a worker retraining program for those whose jobs were displaced, funded by a 4 percent AI tax. That could be a tough sell for the business community.
Lotito bashed the idea for a tax on employers, saying it wouldn’t get business support and, if implemented, would “create panic” among companies.
“The tax will scare people away,” he said. Any proposal to monitor AI should leave room for companies and employers to innovate, he said.
Will Mitchell, who as former Rep. Nolan’s legislative director worked with Newman on drafting the proposal, said there’s still lingering hesitation about rocking the boat and that the agenda on the Hill is packed with other priorities.
“Nobody wants to stick their neck out there and raise the ire of big tech companies or the AI startups doing creative work,” Mitchell, now a staffer for Rep. Angie Craig (D-Minn.), told Bloomberg Law.
HUD Weighs In
The Department of Housing and Urban Development, meanwhile, is continuing to look at AI bias.
HUD is pursuing rulemaking that would give mortgage lenders that use algorithms in their credit decisions potentially greater protection from fair-lending lawsuits. The focus on algorithms is part of a broader department proposal to redefine how the agency would use disparate impact to enforce the Fair Housing Act.
The regulation would give lenders a shield from discrimination claims if they can show that data used to shape an algorithm isn’t a substitute for a protected characteristic like race or sex, or that the algorithm is created and maintained by “a recognized third party.” Lenders could also ward off discrimination claims by showing that “a neutral third party” has analyzed the algorithm and given it a stamp of approval.
Meanwhile, the Equal Employment Opportunity Commission, which lawmakers have asked to address possible bias in the use of AI on the job, has said it has no guidance or other policy effort in the works. The agency has instead been focusing on sexual harassment and retaliation litigation.
Nonprofits and industry groups have taken on the role of researching the topic.
‘Forcing a Conversation’
Newman’s proposal is designed as a conversation starter in Congress. But it also reflects a growing level of “hysteria” surrounding technology that many people don’t fully understand, said Veena Dubal, an associate professor at the University of California Hastings College of Law.
People fear what they don’t understand about the technology and its potential, and they react to what they imagine it could do to the workforce, she said.
“I think it indicates there’s a lack of information in this area and a little bit of hysteria,” Dubal said. “It’s not that I think what he has written will play no role in regulating the potentially problematic role of AI. What he is doing is forcing a conversation.”
Dubal would prefer to see a broader discussion and hearings to help lawmakers and the general public understand the technology and its potential for workplace disruption before Congress enacts legislation.
“We have to be so careful about building laws and regulations about something we do not understand,” she said.
But Newman says technological advances won’t wait for lawmakers to get up to speed.
“We don’t need more studies. We need solutions,” he said.