The EEOC released new technical assistance for employers regarding the use of AI in the hiring process. The guidance focuses on preventing disparate impact discrimination in “employment selection procedures.” The EEOC, like other federal agencies, is increasingly focused on regulating the use of AI in the workplace.
Avoiding disparate impact discrimination: The EEOC’s guidance is mainly focused on preventing disparate impact discrimination in the hiring process. Disparate impact discrimination occurs when an employer’s policy or practice is neutral on its face but nevertheless, in practice, disproportionately and adversely affects a protected class.
The guidance suggests that employers can assess their AI tools for disparate impact by checking whether a tool’s use in the hiring process screens out one group substantially more than another. If that group is a protected class (race, color, religion, sex, or national origin), then the tool’s use is likely unlawfully discriminatory unless it is job related and consistent with business necessity.
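The EEOC’s guidance discusses the “four-fifths rule” as a general rule of thumb for comparing selection rates between groups: a selection rate for one group that is less than four-fifths (80%) of the rate for the highest-scoring group may indicate adverse impact. The sketch below, with invented applicant counts, illustrates that arithmetic; the guidance cautions that the four-fifths rule is only a rule of thumb, not a definitive legal test.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants that a screening tool advances."""
    return selected / applicants

def four_fifths_check(rate_a: float, rate_b: float) -> bool:
    """Rule-of-thumb check: is the lower selection rate at least
    four-fifths (80%) of the higher one?"""
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher >= 0.8

# Hypothetical example: an AI screen advances 48 of 80 applicants in
# one group and 12 of 40 in another (numbers invented for illustration).
rate_a = selection_rate(48, 80)  # 0.60
rate_b = selection_rate(12, 40)  # 0.30
print(four_fifths_check(rate_a, rate_b))  # 0.30 / 0.60 = 0.5, below 0.8
```

Under these hypothetical numbers the ratio falls below 80%, which the guidance would treat as a flag warranting closer scrutiny, not as an automatic finding of discrimination.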
Employer, not vendor, responsible: According to the EEOC’s guidance, “in many cases” the employer is responsible for discriminatory algorithmic decision-making tools even if those tools are designed and/or administered by a third party such as a software vendor. Even if the tool is both designed and administered by a vendor or other third party, if the employer has given that third party the authority to act on its behalf, the employer may be held responsible, according to the EEOC.
Outlook: The guidance is not binding on employers or courts, but it will be used by EEOC investigators in enforcement actions against employers. The EEOC, along with other federal agencies and the White House itself, has made regulation of AI a policy priority. Employers can expect increased guidance and regulation from agencies regarding AI in the workplace, and should prepare accordingly.
Published on: May 19, 2023
Authors: Gregory Hoff
Topics: Employment Law, Technology