Bloomberg Law
Nov. 14, 2022, 2:00 AMUpdated: April 12, 2023, 6:13 PM

ANALYSIS: Algorithmic Bias Is No Longer Under Regulators’ Radar (2)

Rachel DuFault
Legal Content Specialist

Regulators nationwide are teed up in 2023 to tackle the new frontier of tech bias in the workplace: algorithmic discrimination.

In 2022, the Equal Employment Opportunity Commission laid the foundation by rolling out guidance on workplace algorithmic bias. The issue is also becoming a priority for state and local lawmakers, as they draft and enforce new restrictions on discriminatory use of artificial intelligence in workplaces. And a model of industry self-regulation for the use of AI in recruiting and hiring is in the works through a collaborative effort across the non-profit, legal, and employment sectors.

Employment law practitioners can bring their “A” game to this nascent field in 2023 by familiarizing themselves with all three approaches, and applying this knowledge to help employers avoid, or help plaintiffs bring, workplace algorithmic bias claims.

General Rules

Let’s start by generally defining the issue. Algorithmic bias occurs in workplaces when employers use algorithmic- or AI-powered systems or tools that may intentionally or unintentionally exclude or otherwise discriminate against job applicants or employees who belong to a protected category under federal, state, or local anti-discrimination laws or regulations. One example would be an employer’s use of an applicant tracking system that uses algorithms to filter through job applications, resulting in the exclusion of applicants based on their gender, race, or age.
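To make the applicant-tracking example concrete, here is a minimal sketch (with hypothetical applicants and field names) of how a facially neutral screening rule can act as a proxy for a protected trait. The rule below never references age, yet filtering on graduation year screens out every older applicant:

```python
# Hypothetical illustration: a resume filter that never mentions age
# can still exclude older applicants by screening on graduation year.

applicants = [
    {"name": "A", "grad_year": 2018, "age": 27},
    {"name": "B", "grad_year": 2001, "age": 45},
    {"name": "C", "grad_year": 2019, "age": 26},
    {"name": "D", "grad_year": 1995, "age": 52},
]

# Facially neutral rule: only "recent graduates" pass the screen.
passed = [a for a in applicants if a["grad_year"] >= 2015]

# Every applicant over 40 is screened out, even though the rule
# never looked at age -- graduation year served as a proxy for it.
print([a["name"] for a in passed])  # ['A', 'C']
```

This is the unintentional-discrimination pattern the EEOC's guidance warns about: the bias lives in the rule's correlation with a protected category, not in any explicit reference to it.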

Just how prevalent is use of AI and algorithms in workplaces? The numbers may vary, but EEOC Chair Charlotte Burrows stated in February that a large majority of employers are using AI in various workplace functions, with more than 80% of all employers and more than 90% of Fortune 500 companies using such technology.

Gear Up for Federal Guidance (and Expect More)

No federal laws or regulations specifically target the use of AI in employment decisions. But the EEOC broke new ground in May, releasing technical assistance guidance on algorithmic bias in hiring, medical examinations, and other workplace decision-making processes.

The guidance, which is built upon the Americans with Disabilities Act’s protections against disability discrimination, generally cautions employers against: (1) using screening and testing tech solutions that may discriminate based on disability because their algorithms don’t account for disabilities; and (2) failing to provide reasonable accommodations to applicants and employees in using algorithmic decision-making tools. (In tandem, the Department of Justice released similar, more focused guidance targeted for state and local governments as employers.)

Since releasing the technical assistance, the EEOC hasn’t retreated from addressing workplace algorithmic bias. It remains focused on Burrows’s Initiative on Artificial Intelligence and Algorithmic Fairness, and is further exploring how algorithms and AI may discriminate in hiring practices and other employment decisions. Additionally, the Biden administration released a Blueprint for an AI Bill of Rights in October 2022, which includes strategies for employers to avoid algorithmic discrimination.

While neither the EEOC’s technical guidance nor Biden’s AI Bill of Rights carries the force of law, they may serve as catalysts in 2023 for enforcement actions against algorithmic bias, such as administrative complaints, litigation, or additional agency enforcement guidance. It’s unclear, given current political headwinds, if they’ll spur Congressional action; a Senate bill introduced in February that addresses this issue is still in committee.

Prepare for Uptick in State and Local Laws, Regulations

Although Illinois and Maryland kicked off state regulation of AI in workplaces in 2020, those laws were limited to the use of video or facial recognition technology in interviewing. But more comprehensive AI workplace regulation is on the horizon, as states and localities are taking a deeper dive into regulating algorithmic bias and enforcing restrictions on the use of AI-powered tech solutions.

2023 is already off to a strong start. Beginning July 5, 2023, New York City employers will face civil penalties from $500 to $1,500 if they use any automated employment decision tool in their hiring or promotion decisions that hasn’t been through a proper bias audit or that fails to comply with notice requirements. Finalized regulations, which were published on April 6, refine the scope of such tools and bias audits.
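For a sense of what a bias audit measures, the NYC rules center on an "impact ratio": each category's selection rate divided by the selection rate of the most-selected category. A minimal sketch, using hypothetical numbers (not from any real audit):

```python
# Sketch of the impact-ratio computation at the heart of a bias audit
# of an automated employment decision tool. All figures are hypothetical.

selected = {"men": 60, "women": 30}    # applicants the tool advanced
total    = {"men": 100, "women": 100}  # applicants in each category

# Selection rate per category.
rates = {g: selected[g] / total[g] for g in total}

# Impact ratio: each category's rate relative to the highest rate.
best = max(rates.values())
impact = {g: rates[g] / best for g in rates}

print(impact)  # {'men': 1.0, 'women': 0.5}
```

A low ratio for a category (here, 0.5 for women) is the kind of disparity an audit is meant to surface before the tool is used in hiring or promotion decisions.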

California and District of Columbia employers may be next in 2023 to comply with new algorithm and AI workplace-bias restrictions. In August, the California Civil Rights Council approved an edited version of proposed regulations that govern nondiscriminatory use of automated-decision systems in employment. And in September, the District of Columbia moved forward with a public hearing on legislation that, in part, prohibits algorithm-related employment decisions based on protected class status and allows employers to be sued for violations.

With New York City, California, and the District of Columbia mapping out regulation of this field, don’t be surprised if more states and localities join them on the journey.

Watch for Industry Self-Regulation

Without concrete federal and state mandates about workplace algorithmic bias in place, there have been attempts at industry self-regulation to establish voluntary protections against such bias.

The Center for Industry Self-Regulation, which operates under the BBB National Programs Charitable Foundation, has partnered with the law firm Epstein Becker & Green and several companies to develop industry standards for the use of AI in recruiting and hiring. Separately, in December 2021, a group of Fortune 500 companies adopted measures to help HR prevent workplace algorithmic bias when assessing HR tech vendors.

Such efforts may reflect some conscientiousness on the part of employers, but will they achieve the true desired result? Critically, can companies and self-regulation partners prove that workplace algorithmic bias self-regulations are enough to roll back current regulations or prevent further involvement by federal, state, or local governmental entities?

Despite these initial regulatory attempts to rein in workplace algorithmic bias, federal, state, and local regulation may not be enough to prevent it completely. Authorities must keep pace with evolving technology, which may render laws and guidance outdated. These regulators will need to be nimble, build legal flexibility into their rules, and issue updates as the technology changes.

Maybe industry self-regulation plans, which might be more adaptable to technological changes, could provide key insights into how such regulation might effectively work. Regardless, practitioners should continually monitor these three regulatory approaches to workplace algorithmic bias if they want to maintain control in this emerging legal field.

Access additional analyses from our Bloomberg Law 2023 series here, covering trends in Litigation, Transactional, ESG & Employment, Technology, and the Future of the Legal Industry.

Bloomberg Law subscribers can find related content on our Workplace Nondiscrimination Toolkit resource.


(Updated enforcement date of NYC's law in 11th paragraph. Changed link in last sentence of 11th paragraph, added publish date for NYC's law.)
