The Use of Artificial Intelligence in the Hiring Process Could Be Discriminatory

Lawyers don’t appear to be in danger of being replaced by artificial intelligence, but employers are increasingly turning to proprietary AI programs to automate the recruiting arm of their human resources departments. In fact, AI is predicted to replace 16 percent of HR positions across the board by 2028, according to Undercover Recruiter, a blog about recruiting and talent acquisition.

These AI hiring programs offer a suite of services, each more troubling than the last. They use algorithms to sift through the legions of applicants that public job postings on the likes of monster.com and indeed.com can attract.

But they can go much deeper than merely screening candidates on experience or educational background. AI can mine candidates’ social media posts to infer their political or social persuasions, and can even access databases containing information on their spending habits or voter registration.

Human Interviewer vs. AI Virtual Interviewer

Even the human interviewer can be replaced by an AI virtual interviewer, known as a recruiter chatbot, which asks the questions and then evaluates candidates’ word choices, speech patterns, and facial expressions using the same biometric and psychometric technology our intelligence services use.

But can an AI hiring selection program discriminate against applicants in protected classes and thus run afoul of state and federal employment laws? It certainly can, in two ways.

First, AI software doesn’t design itself. These programs are built by software programmers, project managers, and quality control personnel, and are ultimately modified and approved by upper-level HR personnel and even C-suite executives. All along that continuum of development, individuals with implicit or explicit biases can leave their fingerprints. Indeed, AI software can unwittingly learn an organization’s previous biases when it analyzes the organization’s historical hiring data and past preferences.
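To see how a model “learns” an organization’s old biases, consider this minimal sketch in Python. Everything in it is hypothetical: the data is synthetic, the feature names are invented, and the “penalty” applied to one group is a stand-in for whatever bias lived in a company’s past hiring decisions. It is an illustration of the mechanism, not any vendor’s actual system.

```python
# Minimal sketch (entirely synthetic data): a model trained on past
# hiring decisions inherits whatever bias those decisions contained.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

experience = rng.normal(5, 2, n)   # hypothetical: years of experience
group = rng.integers(0, 2, n)      # 0/1 stand-in for a protected class

# Simulated *biased* history: past recruiters hired largely on
# experience but also quietly penalized members of group 1.
hired = (experience - 1.5 * group + rng.normal(0, 1, n)) > 5

# Train on the historical decisions, exactly as a naive hiring
# algorithm would.
X = np.column_stack([experience, group])
model = LogisticRegression().fit(X, hired)

# The trained model faithfully reproduces the historical penalty for
# two otherwise identical candidates.
for g in (0, 1):
    p = model.predict_proba([[5.0, g]])[0, 1]
    print(f"group {g}: predicted P(hire) at 5 yrs experience = {p:.0%}")
```

Note that simply deleting the protected characteristic from the inputs doesn’t cure this: if other features (zip code, school attended, and the like) correlate with group membership, the model can reconstruct the same penalty through those proxies.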

Second, even if AI programs were created and implemented in a perfect vacuum free of bias, they could just as easily violate discrimination laws by having an unintended, disparate impact on particular protected categories of applicants. For example, an algorithm that screens out applicants who hold GEDs rather than traditional diplomas might have a disparate impact on minority candidates, regardless of whether that was the intent.
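Disparate impact is, at bottom, a numbers question, and regulators already have a rough yardstick for it: under the EEOC’s Uniform Guidelines, a selection rate for a protected group that falls below four-fifths (80 percent) of the highest group’s rate is generally regarded as evidence of adverse impact. Here is a back-of-the-envelope version of that check; the applicant counts are made up for illustration.

```python
# EEOC "four-fifths rule" check: a group's selection rate below 80%
# of the highest group's rate is generally evidence of adverse impact.
# The counts below are hypothetical.

applicants = {"group A": 200, "group B": 150}   # applicant pools
selected   = {"group A": 60,  "group B": 24}    # passed the AI screen

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for g, rate in rates.items():
    ratio = rate / highest
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

On these made-up numbers, group B’s impact ratio is about 0.53, well under the 0.8 threshold, and the screen would be flagged even though no one intended to discriminate.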

The rub lies in proving that either of these unlawful scenarios applies. Most likely, AI hiring practices will be challenged on a class-wide basis; my guess is the Northern District of California to start.

Litigation of AI Hiring Programs

Taking down an AI hiring program in litigation will admittedly be a tall order. Employers will no doubt try to shroud the acquisition, development, and tweaking of such software in attorney-client privilege and work product protection. Its development will be a tightly guarded secret, no different from the Kentucky Fried Chicken recipe. And it’s a foregone conclusion that such litigation will spawn dueling algorithmic hiring experts and scores of depositions of the individuals involved in the AI program’s development.

Still, for plaintiff-side employment lawyers, this is definitely a hill worth dying on as we continue the fight to root out discrimination in the workplace. Perhaps, like Arnold Schwarzenegger’s character in Terminator 2, companies will someday have to say “Hasta la vista, baby” to a troublesome bit of artificial intelligence.

Josh Van Kampen is an employment law attorney in Charlotte and the founder and leader of Van Kampen Law.