Employers using artificial intelligence to screen, rank, or evaluate candidates are facing a regulatory mismatch: Federal civil rights rules haven’t changed, but state and local restrictions are multiplying.
Executive orders have signaled a more permissive federal approach to AI regulation, but the tools themselves still carry a compliance burden. Employers should read the landscape as evolving, not receding.
Organizations can’t wait for rules to settle. They need to understand how AI tools operate, evaluate potential risks, and put governance in place to stay compliant while harnessing the technology’s benefits.
Federal baseline remains unchanged. Title VII of the Civil Rights Act, the Americans with Disabilities Act, and the Age Discrimination in Employment Act apply regardless of whether decisions are made by a hiring manager or informed by an algorithm. If an AI-assisted tool produces a disparate impact on a protected group, the legal analysis is the same as for any other selection mechanism.
For employers considering—or already using—AI to screen resumes, rank candidates, or inform hiring and promotion decisions, recent federal policy developments have contributed to some confusion. Over the past year, executive actions have characterized disparate impact theory as inconsistent with the Constitution and directed federal agencies to take a more limited approach to AI oversight.
The Equal Employment Opportunity Commission has scaled back certain disparate impact investigations, and a separate executive order issued last year signaled a willingness to challenge state AI regulations viewed as overly burdensome.
Even as federal agencies adjust enforcement priorities, disparate impact remains a viable theory under federal civil rights law, and claims may proceed through private litigation or state enforcement channels. Per the Department of Labor, employers can’t rely on automated systems to satisfy obligations that still require human oversight and accountability.
Employers should note that AI tools used for scheduling, productivity monitoring, or leave administration remain subject to existing wage-and-hour and leave requirements. Algorithmic errors at scale can quickly turn into systemic violations, even when the employer had no intent to violate the law. Employers must ensure AI-driven scheduling and timekeeping systems don’t shave compensable time, misclassify hours, or deny leave requests in ways that create Fair Labor Standards Act or Family and Medical Leave Act liability.
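To make that risk concrete, consider a purely hypothetical audit sketch. All punch data and field names below are invented; the point is only that comparing raw punches against what an automated system recorded can quantify systematic under-crediting of time worked.

```python
from datetime import datetime

FMT = "%H:%M"

def worked_minutes(start: str, end: str) -> float:
    """Minutes between two same-day HH:MM timestamps."""
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    return delta.total_seconds() / 60

def shaved_minutes(shifts: list[dict]) -> float:
    """Net compensable minutes recorded below actual punch times.

    Each shift is {'raw_in', 'raw_out', 'rec_in', 'rec_out'}; a positive
    total means the system is under-crediting time worked overall.
    """
    total = 0.0
    for s in shifts:
        actual = worked_minutes(s["raw_in"], s["raw_out"])
        recorded = worked_minutes(s["rec_in"], s["rec_out"])
        total += actual - recorded
    return total

# Hypothetical punches: rounding trims nine minutes off this shift.
shifts = [{"raw_in": "08:53", "raw_out": "17:02",
           "rec_in": "09:00", "rec_out": "17:00"}]
print(f"net minutes under-credited: {shaved_minutes(shifts):.0f}")
```

Run across a full workforce, even small per-shift losses like this compound into exactly the kind of systemic exposure described above.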
State regulations create a patchwork. Recent federal actions may signal a lighter touch when it comes to AI, but they don’t preempt state law.
Until a court invalidates a specific statute, state and local requirements remain fully enforceable. A single AI-driven tool may be subject to multiple, and sometimes inconsistent, legal standards, particularly for employers operating across multiple jurisdictions.
Some of these requirements are already in force. New York City’s Local Law 144 requires bias audits and public disclosures for certain automated employment decision tools. Illinois mandates notice to applicants and employees when AI is used in hiring, along with certain testing requirements. California has expanded its civil rights framework to address AI-driven employment decisions and requires extended record retention.
Other states are proceeding more cautiously. Colorado, once positioned as the leading model for comprehensive state AI regulation, has delayed its AI Act's effective date to June 2026 and is considering whether to repeal or significantly revise portions of it.
Employers face not only a growing patchwork of state requirements but also a regulatory environment in which states are testing how far they can, or should, go in regulating AI absent federal restrictions. Employers should assess AI tools against the most demanding applicable state requirements, then implement controls such as bias testing, clear documentation, and defined governance protocols.
Courts are holding employers accountable. Litigation isn't waiting for the regulatory landscape to settle. A class action filed in January against an AI hiring platform alleges that certain applicant scoring and profiling practices may implicate the Fair Credit Reporting Act and analogous state laws, a reminder that compliance considerations extend beyond traditional discrimination frameworks.
Early cases addressing AI-driven employment decisions also suggest that existing liability frameworks will apply. In Mobley v. Workday Inc., for example, a federal court allowed disparate impact claims to proceed against a vendor whose software was alleged to screen out applicants based on race, age, and disability. The court rejected the idea that automated decision-making systems should be treated differently from human decision-making and emphasized that doing so would risk undermining established anti-discrimination protections.
One consistent point: Employers remain responsible for employment decisions, even when those decisions are informed by third-party technology. The use of a vendor doesn’t shift legal accountability.
A framework for what comes next. Waiting for regulatory clarity isn’t a neutral strategy. A comprehensive federal framework for AI in employment may take years to emerge, yet employers that delay adopting AI workforce tools risk missing out on key operational benefits. The question becomes not whether to deploy these tools but how to do so responsibly.
The organizations navigating this environment most effectively treat AI as an extension of existing employment practices. That starts with understanding where AI is already embedded in workforce processes, including screening, evaluation, scheduling, and compensation, and making those uses visible to legal and human resources stakeholders.
Employers should evaluate how these tools function, whether they produce disparate outcomes, and how decisions informed by them are documented and reviewed. In practice, that may include pre-deployment testing, periodic audits, and clear escalation paths when outcomes raise concerns.
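By way of illustration only, one common first-pass screen for disparate outcomes is the four-fifths (80%) rule from the EEOC's Uniform Guidelines: compare each group's selection rate to that of the highest-selected group and flag ratios below 0.8. The sketch below uses hypothetical group labels and counts; the rule is a rough heuristic rather than a legal standard, and regimes such as NYC Local Law 144 prescribe their own impact ratio methodologies.

```python
# Minimal sketch of the EEOC four-fifths (80%) rule as a first-pass
# disparate impact screen. All group names and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_screen(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    groups maps a label to (selected, applicants). Returns each group's
    impact ratio; ratios below 0.8 are conventionally flagged for review.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

# Hypothetical audit data: (selected, applicants) per group.
audit = {"group_a": (48, 100), "group_b": (30, 100)}
for group, ratio in four_fifths_screen(audit).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Even where a screen like this comes back clean, documented methodology and periodic re-testing matter, since statutes that mandate audits typically specify their own calculations and disclosure formats.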
Vendor relationships are critical in this context. When AI tools are integrated into broader platforms, responsibility for how they operate isn’t always clearly defined. Employers should ensure that contracts address data use, audit rights, and accountability for compliance, especially for tools influencing hiring or other high-stakes decisions.
Ultimately, employers are responsible for the decisions they make. AI may change how those decisions are informed, but it doesn’t change the obligation to ensure they are lawful, explainable, and consistently applied.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Benjamin Woodard is a labor, employment, and benefits partner at Stinson.
