AI Hiring Tools Heighten Bias Risks for Autistic Job Applicants

Aug. 22, 2025, 9:15 AM UTC

AI-enabled interview tools and algorithmic personality tests risk discriminating against autistic job seekers because the technology weighs criteria that can disadvantage neurodivergent candidates.

These technologies raise potential Americans with Disabilities Act violations, from video and audio screenings that gauge applicants’ character traits via their eye contact and voices to automated analyzers that downgrade resumes over disability-related group memberships or awards.

AI amplifies concerns about built-in bias at a time when awareness of workplace accommodations for neurodivergence is growing and autism-related charges filed with the Equal Employment Opportunity Commission are rising. The EEOC under the Trump administration simultaneously removed its AI guidance, leaving employers with less direction on how to mitigate tech-based bias.

“The ways the AI hiring tools tend to discriminate aren’t by inventing new modes of discrimination but applying the same pattern to technologies that are able to discriminate, at times at scale,” said Ly Xīnzhèn M. Zhǎngsūn Brown, director of public policy at National Disability Institute.

Employers often adopt these features because they are marketed as reducing bias in hiring, but “the reality is, if bias exists it’s going to be baked in,” they said.

Ninety-nine percent of hiring managers said they use AI in some capacity in their hiring process, according to a recent Insight Global survey.

“We are seeing these tools across the board, most large and medium sized employers are now using some form of automated employment decision tools,” said Olga Akselrod, senior counsel in the ACLU’s Racial Justice Program.

The American Civil Liberties Union filed EEOC class-wide charges in late 2023 against Aon, which sells AI-enabled hiring technology, alleging its candidate assessments discriminated based on race and disability.

The ACLU brought the charge on behalf of a biracial autistic job applicant and others who were required to take Aon assessments.

The charge involved Aon’s ADEPT-15, an algorithmic personality test, and gridChallenge, a gamified cognitive assessment, Akselrod said.

Such personality assessments purport to measure general traits such as positivity or emotional awareness, Akselrod said. These traits are not clearly job related and are directly linked to core aspects of the medical understanding of autism and other mental health disabilities, such as depression or anxiety, she added.

A spokesperson for Aon declined to comment.

Testing Traits

Employers need to “test drive” AI platforms and see questions applicants will be asked, tasks they’ll be given, and potential outcomes, said Katharine Weber, a labor attorney at Jackson Lewis P.C.

“Then they need to study those and compare them to what the ADA would permit,” Weber said.

Under the ADA, pre-employment testing needs to be “job related” and “consistent with business necessity.”

Tests that don’t meet this standard are open to disparate impact claims. The legal theory targets the discriminatory systemic effects of facially neutral employer policies, though the Trump administration has discouraged its use in federal enforcement.
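One common way to make the disparate impact theory concrete is the “four-fifths” rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if one group’s selection rate falls below 80% of the highest group’s rate, the screen is generally flagged for adverse impact. A minimal sketch of that arithmetic, with invented selection counts:

```python
# The "four-fifths" rule of thumb for adverse impact, from the EEOC's
# Uniform Guidelines on Employee Selection Procedures. All counts are
# hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who pass the screen."""
    return selected / applicants

disabled_rate = selection_rate(selected=12, applicants=60)     # 0.20
nondisabled_rate = selection_rate(selected=45, applicants=90)  # 0.50

impact_ratio = disabled_rate / nondisabled_rate                # 0.40

# A ratio below 0.80 (four-fifths) is generally treated as evidence of
# adverse impact, shifting scrutiny to job-relatedness and necessity.
if impact_ratio < 0.8:
    print(f"Impact ratio {impact_ratio:.2f}: potential adverse impact")
```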

Pre-employment tests must focus on whether an applicant can perform the “essential functions” of the job, Weber said. The ADA considers a qualified individual to be anyone who can perform those functions with or without a reasonable accommodation.

“You get into issues if you’re using an AI tool and the AI tool has embedded in it screening questions that just aren’t necessary for the job,” Weber said.

Some hiring technologies include features to measure applicants’ biometric data, like vocal cadence or eye contact, during video or audio interviews. That can result in bias against disabled people, including autistic and blind individuals, said Ariana Aboulafia, project lead of Disability Rights in Technology Policy at the Center for Democracy and Technology.

AI-enabled resume screeners can also potentially discriminate based on disability by zeroing in on key terms.

Screeners ranked resumes lower if they included disability-related awards or memberships, a 2024 University of Washington study found.

The study found that resumes listing autism-related awards and memberships were the least likely to be ranked first by the screeners, compared with otherwise identical resumes without them.
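The study’s paired design can be approximated with a simple counterfactual audit: run many trials in which two resumes that are identical except for one disability-related line are ranked together, and track how often the variant comes out ahead. A minimal sketch, where screener_rank is a hypothetical placeholder for a vendor’s tool rather than the study’s actual models or materials:

```python
# Minimal sketch of a paired-resume ("counterfactual") audit. screener_rank
# is a hypothetical placeholder for a vendor's ranking model; only the
# audit logic itself is illustrated here.
import random

def screener_rank(resumes: list[str]) -> list[str]:
    # Placeholder: a real audit would submit the resumes to the actual
    # screening tool and read back its ranking. Here we rank at random.
    return sorted(resumes, key=lambda _: random.random())

def paired_audit(base: str, award_line: str, fillers: list[str],
                 trials: int = 200) -> float:
    """Fraction of trials in which the award-bearing variant outranks its
    otherwise-identical twin. An unbiased screener should hover near 0.5;
    a rate well below 0.5 mirrors the downgrading the study describes."""
    variant = base + "\n" + award_line
    wins = 0
    for _ in range(trials):
        ranking = screener_rank([base, variant] + fillers)
        if ranking.index(variant) < ranking.index(base):
            wins += 1
    return wins / trials

if __name__ == "__main__":
    base = "Jane Doe: five years of software QA experience"
    award = "Recipient, Autism Advocacy Leadership Award"  # invented example
    fillers = [f"Candidate {i}: comparable QA resume" for i in range(8)]
    rate = paired_audit(base, award, fillers)
    print(f"Variant outranked its twin in {rate:.0%} of trials")
```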

Employers have to consider questions used to screen resumes and make sure they aren’t “going to shut the door to a whole section of the potential workforce,” Weber said.

Transparency Concerns

A core hurdle for AI hiring tools is the lack of transparency around their use.

Without federal AI disclosure requirements for employers, applicants with conditions like autism may not know if or when to request an ADA accommodation during the hiring process, Aboulafia said.

The ADA puts strict limits on prospective employers’ ability to make disability-related inquiries of candidates before a job offer.

The EEOC’s guidance outlining how AI tools can violate the ADA and Title VII of the 1964 Civil Rights Act was removed after a Trump administration executive order revoking Biden-era AI policies.

An agency spokesperson said the EEOC is “committed to preventing and remedying employment discrimination, including reviewing the potential intersection between AI and the laws the agency enforces,” while focusing on “robust compliance” with executive orders.

Although the documents were taken down, the law did not change, Weber said.

Removal of the guidance, though, signals that fighting this form of discrimination is no longer an agency priority, Zhǎngsūn Brown said.

That leaves companies that want to do the right thing to independently figure out how, and gives others “a free pass to keep engaging the same kinds of back door sneaky discrimination people with disabilities are worried about,” they said.

To reduce bias risks, employers should maintain “significant human oversight” of AI hiring tools, ideally with disabled people in those roles, Aboulafia said.

“There’s a lot that would have to happen in order to get them to the place where they reduce bias instead of perpetuating it,” she said of the tools. “And I don’t think we are there yet, particularly when we are still at the place of lack of basic accessibility.”

(An incorrect AI summary previously at the top of this story was removed.)

To contact the reporter on this story: Rebecca Klar in Washington at rklar@bloombergindustry.com

To contact the editors responsible for this story: Rebekah Mintzer at rmintzer@bloombergindustry.com; Jay-Anne B. Casuga at jcasuga@bloomberglaw.com
