Employers’ Use of Artificial Intelligence to Screen Applicants Can Raise Discrimination Alarms


Employers increasingly use artificial intelligence (“AI”) hiring and recruiting software to weed out applicants. These programs represent the early stages of automating the recruiting arm of human resources. AI recruiting software may ultimately displace hiring managers altogether. In fact, Undercover Recruiter predicts AI will replace 16% of HR positions by 2028.

How AI is Used in the Hiring Process

These hiring programs use algorithms to sift through applications submitted to public job postings on sites like monster.com and indeed.com. However, they go much deeper than screening candidates based on experience or educational background. AI can mine candidates’ social media posts to infer their political or social persuasions, and it can access databases containing information on their spending habits or voter registration. In some cases, AI virtual interviewers (a.k.a. recruiter chat bots) replace the human interviewer altogether. These bots ask questions and evaluate candidates’ word choices, speech patterns, and facial expressions using the same biometric and psychometric technology our intelligence services employ to analyze such markers.
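To make the screening step concrete, here is a deliberately simplified Python sketch of how a rules-based filter might rank and cull applicants. Every name, keyword, and threshold below is invented for illustration; real vendor systems are proprietary and far more elaborate.

```python
# Hypothetical sketch of a rules-based applicant screen. All criteria,
# names, and numbers are invented; real vendor systems are proprietary.
from dataclasses import dataclass, field

@dataclass
class Applicant:
    name: str
    years_experience: int
    keywords: set = field(default_factory=set)  # terms mined from the resume

REQUIRED_KEYWORDS = {"python", "sql"}  # assumed job-posting criteria

def score(a: Applicant) -> float:
    """Toy scoring rule: experience plus weighted keyword overlap."""
    return a.years_experience + 2 * len(a.keywords & REQUIRED_KEYWORDS)

applicants = [
    Applicant("A", 5, {"python", "sql"}),
    Applicant("B", 6, {"sql"}),
]

# Rank and keep the top half -- the automated "weeding out" described above.
ranked = sorted(applicants, key=score, reverse=True)
shortlist = ranked[: len(ranked) // 2]
print([a.name for a in shortlist])  # ['A']
```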

But can an artificial intelligence hiring selection program discriminate against applicants in protected classes? It can, in two ways.

How Can AI Recruiting Software Discriminate?

First, artificial intelligence software does not architect itself. Software programmers, project managers, and quality control personnel design these programs, and upper-level human resources personnel and C-suite executives modify and approve them. Along this continuum of development, individuals with biases leave their fingerprints. AI hiring software also unwittingly learns an organization’s previous biases when it analyzes historical hiring data, and those learned biases shape which candidates it eliminates going forward.
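A stripped-down illustration of that learned bias: if past managers favored one school (or zip code, or prior employer) that happens to correlate with a protected class, a model trained on those outcomes reproduces the preference. The records and hire rates below are invented; this is a sketch of the mechanism, not any particular vendor’s model.

```python
# Hypothetical illustration of a model absorbing past hiring bias.
# All schools, records, and rates are invented.
from collections import defaultdict

# Historical records: (school, hired). Suppose past managers favored
# "State U" over "City College" for equally qualified applicants.
history = [
    ("State U", True), ("State U", True), ("State U", False),
    ("City College", True), ("City College", False), ("City College", False),
]

# A naive "model": score each school by its historical hire rate.
counts = defaultdict(lambda: [0, 0])  # school -> [hires, total]
for school, hired in history:
    counts[school][0] += int(hired)
    counts[school][1] += 1

weights = {school: hires / total for school, (hires, total) in counts.items()}

# New, equally qualified candidates inherit the old preference:
for school, w in weights.items():
    print(f"{school}: score {w:.2f}")  # State U 0.67, City College 0.33
```

If attendance at City College correlates with race or national origin, the model penalizes the protected group without ever being shown a protected characteristic.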

Second, even if researchers create and implement artificial intelligence programs in a vacuum free of bias, these programs can still violate discrimination laws. This occurs when the program has an unintended, disparate impact on a protected category. For example, an algorithm that excludes applicants with GEDs might disproportionately screen out minority candidates, regardless of intent.
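Courts and the EEOC often gauge disparate impact with the “four-fifths rule” from the Uniform Guidelines on Employee Selection Procedures: if one group’s selection rate falls below 80% of the highest group’s rate, that is generally regarded as evidence of adverse impact. The applicant counts below are invented simply to show the arithmetic.

```python
# The EEOC "four-fifths" (80%) rule as a first-pass disparate-impact check.
# Applicant and selection counts are invented for illustration.

def selection_rate(selected: int, applied: int) -> float:
    return selected / applied

# Hypothetical outcome of an automated screen that rejects GED holders:
rate_a = selection_rate(selected=50, applied=100)  # majority group: 0.50
rate_b = selection_rate(selected=20, applied=100)  # minority group: 0.20

impact_ratio = rate_b / rate_a  # 0.40
print(f"Impact ratio: {impact_ratio:.2f}")

# A ratio below 0.80 is generally regarded as evidence of adverse impact.
if impact_ratio < 0.8:
    print("Flag: possible disparate impact")
```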

How Can We Fix It?

The obstacle becomes proving that either of these unlawful scenarios occurred. Employers mask the development, tweaking, and purchase of AI software behind attorney-client privilege and work product protections. These systems are tightly guarded secrets, no different from the Kentucky Fried Chicken recipe. Fortunately, we at Van Kampen Law are familiar with these sorts of countermeasures; we encounter them when challenging stack-ranking evaluation systems or mandatory turnover quotas. However, once we obtain the pertinent documents, the person bringing the lawsuit will need to engage an algorithmic hiring expert and depose the scores of individuals involved in the AI hiring program’s development and implementation. Established firms like Van Kampen Law maintain substantial monetary reserves and have access to lines of credit to finance this kind of complex litigation.

If you have been denied a position as part of an AI selection process, reach out to Van Kampen Law so we can assess whether your race, age, sex, national origin, disability, sexual orientation, or litigation history may have been factors in your non-selection.