06.09.2022 | Updates

The U.S. Equal Employment Opportunity Commission (EEOC) issued guidance on May 12, 2022, regarding the use of software, algorithms, and artificial intelligence (AI) in assessing job applicants and employees. The EEOC’s guidance discusses how employers’ use of tools that rely on algorithmic decision-making may violate the Americans with Disabilities Act (ADA).

In its guidance, the EEOC identifies different types of software utilizing algorithmic decision-making that employers may use at various stages of the employment process. This software includes résumé scanning programs that prioritize applications using certain keywords, employee monitoring software that rates employees based on their keystrokes or other factors, video interviewing software that evaluates candidates based on facial expressions and speech patterns, and testing software that provides “job fit” scores regarding personality, aptitude, or cognitive skills. The EEOC also explains that an employer may still be responsible under the ADA for using such tools even if they are designed or administered by another entity.
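To make the screening mechanics concrete, the sketch below shows in Python how a keyword-prioritizing résumé scanner of the kind the EEOC describes might work. It is a hypothetical illustration only; the keywords, weights, cutoff, and function names (REQUIRED_KEYWORDS, score_resume, screen) are invented for this sketch and do not come from the guidance or any particular product.

    # Hypothetical illustration: a minimal keyword-based resume screener.
    # The keywords, weights, and cutoff are invented for this sketch.
    REQUIRED_KEYWORDS = {"python": 2.0, "sql": 1.5, "project management": 1.0}
    CUTOFF = 3.0  # applications scoring below this are never reviewed by a human

    def score_resume(text: str) -> float:
        """Sum the weights of the required keywords found in the resume text."""
        text = text.lower()
        return sum(w for kw, w in REQUIRED_KEYWORDS.items() if kw in text)

    def screen(applications: list[dict]) -> list[dict]:
        """Return only applications that clear the cutoff, highest score first."""
        scored = sorted(applications, key=lambda a: score_resume(a["resume"]), reverse=True)
        return [a for a in scored if score_resume(a["resume"]) >= CUTOFF]

Because an application that falls below the cutoff is never seen by a human reviewer, a qualified candidate whose résumé expresses the same experience in unexpected wording can be rejected automatically. That is the "screen-out" problem the guidance addresses next.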

Common Ways AI and Algorithmic Tools May Violate the ADA

The EEOC guidance highlights a number of ways that the use of AI or algorithmic tools may violate the ADA. For example, an algorithmic decision-making tool that “screens out” an individual based on a disability even though that individual could perform the job with reasonable accommodation may violate the ADA. According to the EEOC, screen-out occurs when a qualified applicant or employee loses a job opportunity because a disability prevents them from meeting—or lowers their performance on—a selection criterion.

Screen-out may occur if a person’s disability prevents the algorithmic decision-making tool from measuring what it intends to measure. As an example, the EEOC states that video interviewing software that analyzes applicants’ speech patterns to assess problem-solving ability would not fairly score an applicant who has a speech impediment that causes significant differences in speech patterns. If such an applicant were rejected due to a low score caused by their speech impediment, the applicant may have been improperly screened out.

According to the EEOC, screen-out due to a disability is unlawful if the individual can perform the job’s essential functions, with a reasonable accommodation if one is legally required. For example, some employers assess applicants and employees using “gamified” tests, in which video games measure abilities, personality traits, and other qualities. If an employer requires a certain score on a gamified memory assessment, a blind applicant would be unable to see the screen to play the game. That applicant may nonetheless have a very good memory and be perfectly capable of performing the essential functions of a job that requires one.

Algorithmic decision-making tools may also violate the ADA when an employer fails to provide a “reasonable accommodation that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm.” In its guidance, the EEOC offers the example of a job applicant who has limited manual dexterity due to a disability. Such an applicant may have difficulty taking a knowledge test that requires using a keyboard or trackpad, so the test would not accurately measure that applicant’s knowledge. According to the EEOC, the employer in this scenario would need to provide an accessible version of the test as a reasonable accommodation, such as one that allows oral responses, unless doing so would cause undue hardship.
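As a purely hypothetical sketch of what such an accommodation might look like in code, the snippet below routes an applicant to an oral-response version of the same knowledge test on request. The type and function names (TestSession, start_knowledge_test) and the format labels are assumptions for illustration; whether any particular alternative satisfies the ADA is a fact-specific legal question.

    # Hypothetical illustration: delivering the same knowledge test in an
    # alternative response format when an accommodation is requested.
    from dataclasses import dataclass

    @dataclass
    class TestSession:
        applicant_id: str
        response_format: str  # "keyboard" or "oral"

    def start_knowledge_test(applicant_id: str, accommodation_requested: bool) -> TestSession:
        """Start the test, using an oral-response format if requested."""
        if accommodation_requested:
            # Same questions, different response mode, so the test measures
            # knowledge rather than manual dexterity.
            return TestSession(applicant_id, response_format="oral")
        return TestSession(applicant_id, response_format="keyboard")

The design point is that the alternative format changes only how answers are given, not what the test measures.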

Finally, an employer may run afoul of the ADA if, before extending a conditional offer of employment, it uses an algorithmic decision-making tool that makes disability-related inquiries or constitutes a medical examination; this is true even if the individual has no disability. According to the guidance, an assessment includes “disability-related inquiries” if it asks job applicants or employees questions that are likely to elicit information about a disability or directly asks whether they have a disability. An assessment qualifies as a “medical examination” if it seeks information about an individual’s physical or mental impairments or health.

However, not all algorithmic decision-making tools that ask for health-related information are “disability-related inquiries or medical examinations.” The EEOC states, for example, that a personality test does not make “disability-related inquiries” merely because it asks whether the individual is “described by friends as being ‘generally optimistic,’” even though the answer might somehow relate to certain mental health diagnoses.

Tips to Prevent Discrimination

The EEOC’s guidance recommends several “promising practices” that may reduce the likelihood of an algorithmic decision-making tool or AI violating the ADA:

  • Use tools designed to be accessible to individuals with as many different kinds of disabilities as possible. If using a tool designed by a vendor, confirm whether the vendor developed it with individuals with disabilities in mind.
  • Clearly indicate that reasonable accommodations, including alternative formats and alternative tests, are available to individuals with disabilities and provide clear instructions for requesting reasonable accommodations.
  • Prior to the assessment, provide individuals with as much information about the tool as possible, including the traits or characteristics it measures, the methods by which it measures them, and any disabilities that could lower the assessment results.
  • Ensure that the tools measure only abilities or qualifications that are truly necessary for the job and that they measure those abilities or qualifications directly rather than through scores that are merely correlated, as illustrated in the sketch after this list.
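As a purely hypothetical sketch of that last practice, the snippet below checks a vendor’s list of measured traits against the essential functions of the job and flags any trait with no such tie. All names and the example traits are invented for illustration.

    # Hypothetical illustration: flag measured traits not tied to any
    # essential function of the job. Names are invented for this sketch.
    ESSENTIAL_FUNCTIONS = {
        "memory": "recall client account details",
        "arithmetic": "reconcile daily transactions",
    }
    MEASURED_TRAITS = ["memory", "arithmetic", "optimism"]  # from a vendor's spec sheet

    def unjustified_traits(traits: list[str]) -> list[str]:
        """Return traits that map to no essential job function."""
        return [t for t in traits if t not in ESSENTIAL_FUNCTIONS]

    print(unjustified_traits(MEASURED_TRAITS))  # ['optimism'] -> reassess or remove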

© 2022 Perkins Coie LLP