Artificial intelligence and automation are embedded in hiring workflows across industries. In the US, more than 95% of employers conduct pre-employment background checks, with an increasing share relying on automated systems to process that information at scale.
A lawsuit filed in California in January against AI recruiting platform Eightfold puts a quiet employer risk front and center: When software scores, ranks, or filters candidates using consumer report-related information, federal consumer protection law may already be in play.
At the heart of that risk is the Fair Credit Reporting Act (FCRA): When AI tools ingest and score candidate data to influence hiring decisions, they may be functioning as consumer reporting agencies under federal law, triggering disclosure, accuracy, and adverse action obligations most employers have never considered.
Most employers don’t think of hiring workflows as legal decisions. They’re filling roles. Tools help them move faster: Resumes are parsed automatically, background reports are flagged, candidates are scored, ranked, or screened out before anyone meets them. For most teams, this is now routine.
That operational convenience is exactly why the risk is growing. When decisions happen at scale, with speed and limited human review, compliance failures stop being isolated mistakes: They become systems. And system failures are what class action lawsuits are built on.
The Legal Framework Hasn’t Changed

Hiring decisions have always carried legal consequences, whether they’re made by a human or by AI. Under the Fair Credit Reporting Act, the core point is simple: If consumer report data influences an employment decision, FCRA obligations can attach, whether the decision is made by a recruiter, a rule engine, or a machine learning model. Plaintiffs in the Eightfold case are effectively testing that proposition in the context of AI-generated scoring and profiling used in hiring workflows.
Kistler et al. v. Eightfold AI Inc. alleges that the platform scraped personal data on over a billion workers, assigned each applicant a scored ranking, and filtered out lower-ranked candidates before any human reviewed their application, all without the disclosures the FCRA requires. Eightfold denies the allegations, and the case remains pending.
From a risk perspective, the question isn’t whether to use AI. It’s whether your tool assembles or evaluates consumer report-related information for employment purposes and produces an output that affects eligibility. If yes, you’re in FCRA territory faster than most organizations realize.
When the AI Hiring Tool Starts to Look Like a Consumer Reporting Workflow

Many platforms do more than organize applicants. They ingest and combine data that can include employment history, education, online profile information, identity attributes, public records, and other third-party sourced inputs, then convert it into outputs such as match scores, rankings, flags, recommendations, or predicted fit. That functional step matters: Under the FCRA, what a system does controls the analysis far more than what its vendor calls it. The Eightfold complaint, for example, frames candidate scoring and assessment outputs as “consumer reports” used for employment purposes, triggering rights that job applicants never received.
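To make that functional test concrete, here is the pattern in miniature: third-party data assembled into a profile, collapsed into a score, and used to gate eligibility before any human review. This is an illustrative Python sketch, not Eightfold’s system or any vendor’s actual code; every name, field, and threshold in it is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateProfile:
    """Assembled from third-party sources -- the step the FCRA cares about."""
    name: str
    employment_history: list[dict] = field(default_factory=list)  # aggregated third-party data
    education: list[str] = field(default_factory=list)
    public_records: list[str] = field(default_factory=list)

def match_score(profile: CandidateProfile, role_keywords: set[str]) -> float:
    """Hypothetical scoring: collapse the assembled data into one number."""
    skills = {s for job in profile.employment_history for s in job.get("skills", [])}
    return len(skills & role_keywords) / max(len(role_keywords), 1)

def screen(profile: CandidateProfile, role_keywords: set[str], cutoff: float = 0.5) -> bool:
    """The eligibility-affecting output: anyone below the cutoff is filtered
    out before a recruiter ever sees the application."""
    return match_score(profile, role_keywords) >= cutoff
```

If an output like screen() decides who a recruiter ever sees, the product label (“talent intelligence,” “match engine”) does little analytical work; the function is what a court will examine.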
You don’t need to accept every allegation to take the lesson: Plaintiffs’ lawyers are now mapping familiar FCRA theories onto modern hiring stacks, and courts will be asked to decide whether these outputs are, in substance, regulated consumer reporting.
Accuracy Breaks First, and Scale Makes It Worse

The FCRA requires employers and their vendors to follow reasonable procedures to assure maximum possible accuracy of consumer reports. Automation doesn’t relax that standard. If anything, operating at scale undermines it.
FCRA risk is often presented as a paperwork problem. In truth, it’s an accuracy and process problem.
At scale, common errors compound: mixed files (someone else’s record attributed to the applicant), stale information that should no longer be reported, incomplete context, and overly aggressive matching. AI can amplify these issues because it produces “clean” outputs that feel authoritative: a score, a ranking, a recommendation. The cleaner the output, the harder it is for employers to notice what went wrong upstream.
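The mixed-file failure mode is easy to reproduce: a loose similarity rule will happily attach a stranger’s record to an applicant, and the downstream score betrays nothing. Below is a minimal sketch using Python’s standard difflib; the 0.8 threshold and the absence of any date-of-birth or identifier cross-check are illustrative assumptions, not any vendor’s documented behavior.

```python
from difflib import SequenceMatcher

def loose_match(applicant_name: str, record_name: str, threshold: float = 0.8) -> bool:
    """Overly aggressive matching: name similarity alone, with no
    date-of-birth, SSN, or address cross-check."""
    ratio = SequenceMatcher(None, applicant_name.lower(), record_name.lower()).ratio()
    return ratio >= threshold

# Applicant "John A. Smith" vs. a public record for "John B. Smith":
print(loose_match("John A. Smith", "John B. Smith"))  # True -- a mixed file
```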
And speed makes it even harder to fix. If a tool screens someone out instantly, the system can create the very harm the FCRA is designed to prevent: An adverse decision based on information the individual never saw and couldn’t correct.
The takeaway for employers is straightforward, even if the compliance work is not: AI hiring tools don’t exist outside the law, and the legal framework that governs consumer reporting was built precisely for situations where data about individuals is assembled and used to make decisions about them.
Before deploying any tool that scores, ranks, or filters candidates using third-party data, employers should conduct a vendor review to assess whether that tool’s outputs could qualify as consumer reports under the FCRA. If the answer is yes, or even maybe, standard FCRA obligations apply: written disclosure to applicants, authorization, and a clear adverse action process that gives candidates the chance to see and dispute the information used against them.
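In engineering terms, that translates into a hard gate in front of any eligibility-affecting output. The sketch below is a hypothetical control under simplified assumptions: a flat candidate record, invented field names, and a two-state outcome, where a real adverse action workflow involves more steps.

```python
from enum import Enum, auto

class Decision(Enum):
    PROCEED = auto()
    HOLD_PRE_ADVERSE_ACTION = auto()

def fcra_gate(candidate: dict, score: float, cutoff: float) -> Decision:
    """Illustrative control: no consumer report-derived scoring without the
    FCRA prerequisites, and no instant rejection on a low score."""
    # Disclosure and authorization must precede any use of report data.
    if not (candidate.get("disclosure_given") and candidate.get("authorized")):
        raise RuntimeError("Missing FCRA disclosure/authorization; refusing to score")
    # A below-cutoff score starts the adverse action process -- pre-adverse
    # notice with a copy of the report, then a dispute window -- rather than
    # silently dropping the candidate.
    if score < cutoff:
        return Decision.HOLD_PRE_ADVERSE_ACTION
    return Decision.PROCEED
```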
Vendor contracts should address accuracy obligations explicitly, and compliance teams should pressure-test what happens when the system gets something wrong and how quickly it can be corrected. The Eightfold case is an early signal, not an isolated event, and the employers best positioned to avoid the next lawsuit are the ones who treat this as an operational question to solve now, not a legal problem to manage later.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Daniel Cohen is founding partner of Consumer Attorneys PLLC, representing consumers in cases involving background check and credit report errors, identity theft, and other consumer reporting disputes.