Whether employers are currently operating as normal, teleworking, or planning for the future, the Covid-19 experience may lead them to turn to the growing array of workplace artificial intelligence (AI) tools to help streamline recruiting and hiring while maintaining social distancing best practices.
Employers should be aware, however, that using such AI tools brings various regulatory challenges, regardless of their utility in these trying times.
AI’s Growing Influence on Employment
AI has been exerting an ever-growing influence on companies’ employment decision-making for some time. AI tools long used to market services and products to customers (e.g., algorithms for personalized pop-up ads) are making increasing inroads into the employment arena. These include tools that mine data from an applicant’s social media and internet presence to infer personal attributes, and tools that evaluate an applicant’s responses during a video interview to inform employment decisions.
Employers considering AI recruitment and selection tools during the Covid-19 crisis, which some experts expect to last for months after the curve has “flattened,” should be mindful of the potential for misuse and discriminatory impact raised by these technologies.
While federal and state laws prohibit discrimination in the recruiting and selection process, as well as throughout employment, until recently there has been minimal legal guidance on, or regulation of, employers’ use of AI. This regulatory landscape is changing, as states and municipalities have started to take steps to regulate AI’s use, particularly in recruitment and hiring.
Illinois Leads State Efforts
In 2019, for example, Illinois became the first state to enact protections governing the use of AI in hiring. The Illinois Artificial Intelligence Video Interview Act requires an employer that asks a job applicant to record a video interview and uses AI to analyze the recording to notify the applicant of the AI’s use and of the characteristics the AI uses to evaluate applicants. The Illinois act also requires employers to destroy applicant data within 30 days of receiving a request from the applicant.
Maryland, following Illinois’s lead, is close to enacting a law (HB 1202) prohibiting the use of facial recognition technologies during pre-employment interviews without the applicant’s consent. The Maryland bill passed both chambers of the legislature March 18, but the legislature has since adjourned, likely due to the Covid-19 pandemic.
The Maryland bill would apply only to AI tools that employ facial recognition services, i.e., “technology that analyzes facial features and is used for recognition or persistent tracking of individuals in still or video images.”
The measure would prohibit employers from using facial recognition services in interviews without a written consent and waiver stating the applicant’s name, the date of the interview, that the applicant consents to the use of facial recognition during the interview, and that the applicant has read the waiver. If enacted, the law will take effect Oct. 1. California and New York City, meanwhile, are each considering legislation that would restrict the sale and use of employment-related AI.
California’s proposed law, the Talent Equity for Competitive Hiring (TECH) Act (SB 1241), is more extensive; if enacted, it would apply to all AI technology used in selection procedures. The bill, which is aimed at addressing discrimination concerns, would create a presumption that an employer’s hiring or promotion decision based on, among other things, use of “assessment technology” is not discriminatory if it meets specified criteria.
Specifically, AI would be considered compliant with anti-discrimination rules if: (1) prior to deployment, it is tested and found not likely to have an adverse impact on the basis of gender, race, or ethnicity; (2) the outcomes are reviewed annually and show no adverse impact or an increase in diversity at the workplace; and (3) the use is discontinued if a post-deployment review indicates an adverse impact. The bill is currently in committee.
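The bills discussed here do not prescribe a particular statistical test for adverse impact, but one widely used benchmark is the EEOC’s “four-fifths rule”: a group’s selection rate below 80% of the highest group’s rate is commonly treated as evidence of adverse impact. The sketch below is a minimal illustration of that rule using hypothetical data, not a test mandated by any of these bills.

```python
# Illustrative sketch of the EEOC "four-fifths rule," one common benchmark
# for adverse impact. The bills discussed in this article do not mandate
# this particular test; the data below is hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def adverse_impact_ratios(groups: dict) -> dict:
    """Compare each group's selection rate to the highest group's rate.

    `groups` maps a group label to (selected, applicants) counts.
    A ratio below 0.8 is commonly treated as evidence of adverse impact.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical applicant pools (labels and counts are invented):
data = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = adverse_impact_ratios(data)
flagged = [g for g, r in ratios.items() if r < 0.8]
```

In this example, group_b’s selection rate (30%) is 62.5% of group_a’s (48%), below the four-fifths threshold, so a pre-deployment review of the kind the TECH Act contemplates would flag it for further scrutiny.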
New York City, for its part, introduced a bill (Intro No. 1894-2020) Feb. 27 that would govern the sale and use of any automated employment decision tool. Under the bill, companies that sell AI tools that automate employment decision-making must first audit them for bias and provide an annual bias audit to the purchaser.
The bill would require employers using such AI to notify candidates, within 30 days of use, that AI technology was used to assess their candidacy for employment, and to disclose the specific job qualifications and characteristics the AI technology assessed. Violators would be subject to a $500 penalty for a first violation and up to $1,500 for each subsequent violation.
Trend Will Accelerate
These laws and proposals mark the beginning of a trend that is likely to accelerate. The Covid-19 pandemic seems likely to increase employers’ use of video interview tools and other online candidate assessments, and with them AI-assisted decision-making.
Companies should monitor these developments in the jurisdictions where they do business and ensure they comply with the laws that apply to them. Even if the current proposals are not enacted, or do not apply where a company currently operates, best practices dictate that companies using AI regularly audit the technology to ensure it does not create an adverse impact on any protected class and, absent legitimate business reasons not to, provide notice to candidates about its use.
In addition, companies selling or using employment-related AI should confer with counsel who can help navigate the nuances of each law and advise on the gaps that remain.
This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.
Author Information
Adam S. Forman is a member of the firm in Epstein Becker & Green’s Employment, Labor & Workforce Management practice. He focuses his practice on labor and employment issues related to technology in the workplace, such as artificial intelligence, social media, and privacy. In addition, he represents employers in labor relations and employment litigation matters and frequently conducts workforce training on a variety of labor and employment issues.
Nathaniel M. Glasser is a member of the firm in Epstein Becker & Green’s Employment, Labor & Workforce Management practice, where he co-leads the artificial intelligence strategic industry group. His practice focuses on the representation of employers in employee relations and human resources compliance, as well as litigating claims of harassment, discrimination, whistleblowing, and wage-hour violations.
Christopher Lech is an associate in Epstein Becker & Green’s Employment, Labor & Workforce Management, Litigation & Business Disputes, and Employee Benefits & Executive Compensation practices.