Compliance Tips for Healthcare Employers Utilizing AI in the Recruitment Process

Updated on May 16, 2021

By Dawn M. Irizarry and Allison Chua

Healthcare is one of the fastest growing industries in the nation, and recruiting top talent can be extremely competitive and time-consuming.  Thus, many employers are increasingly turning to artificial intelligence (“AI”) technology to optimize and streamline the recruitment process.  While AI promises to eliminate human bias and subjectivity, AI tools may nevertheless adopt the preexisting biases of the people who build and utilize them.  To counter this unintended consequence, federal, state, and local authorities have enacted or are considering enacting new legislation to regulate the use of AI in the employment context.  In this article, we review how employers have been using AI tools in the hiring process, the potential pitfalls of those tools, and practical tips that healthcare employers should consider to avoid these risks.

How Employers Use AI In The Recruitment Process

Employers were using automated tools in the recruitment process for many years before the recent proliferation of AI software.  One example is performing simple keyword searches to quickly filter through job applications and identify qualified candidates.  In recent years, however, increasingly sophisticated AI solutions that do more than perform rudimentary text searches have become widely available, promising employers an effective method for predicting a candidate’s likelihood of success with the company while improving workplace diversity.
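
To illustrate the rudimentary end of the spectrum, a simple keyword screen might look something like the following sketch.  The keyword list and applicant records are hypothetical, invented only for illustration:

```python
# Minimal sketch of a rudimentary keyword-based resume screen.
# The required keywords and applicant records are hypothetical examples.

REQUIRED_KEYWORDS = {"registered nurse", "acute care", "bls certification"}

applicants = [
    {"name": "Applicant A",
     "resume": "Registered nurse with 5 years of acute care experience; current BLS certification."},
    {"name": "Applicant B",
     "resume": "Medical assistant with front-office scheduling experience."},
]

def passes_keyword_screen(resume_text: str) -> bool:
    """Return True if every required keyword appears in the resume text."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

qualified = [a["name"] for a in applicants if passes_keyword_screen(a["resume"])]
print(qualified)  # ['Applicant A']
```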

Some AI software utilizes data from current employees to seek out candidates with similar characteristics.  For example, an employer can use the resumes of top-performing employees to program a resume scanner to identify candidates with similar experience.  Other AI tools can search for an applicant’s online presence and use that information to create a profile of the applicant’s values and career goals to determine whether he or she would be a good fit for the company.  Tests and other simulations are also available to measure an applicant’s cognitive and emotional attributes.  Employers can even utilize chatbots that interact with applicants to confirm that they meet job requirements and to answer preliminary questions about the position.
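
As a rough illustration of the similarity-based approach, the sketch below ranks hypothetical candidates by how closely their resumes resemble those of top performers, using TF-IDF cosine similarity (scikit-learn is assumed; all resume text is invented):

```python
# Rough sketch: rank candidates by textual similarity to top performers' resumes.
# Assumes scikit-learn is installed; all resume text here is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

top_performer_resumes = [
    "Charge nurse, 8 years ICU, team leadership, quality improvement projects.",
    "Nurse manager, staffing coordination, Joint Commission survey preparation.",
]

candidate_resumes = {
    "Candidate A": "ICU staff nurse, 6 years, charge experience, quality initiatives.",
    "Candidate B": "Retail pharmacy technician, inventory management.",
}

vectorizer = TfidfVectorizer()
# Fit on all documents so candidates and top performers share one vocabulary.
matrix = vectorizer.fit_transform(top_performer_resumes + list(candidate_resumes.values()))
top_vecs = matrix[: len(top_performer_resumes)]
cand_vecs = matrix[len(top_performer_resumes):]

# Score each candidate by their best similarity to any top performer's resume.
scores = cosine_similarity(cand_vecs, top_vecs).max(axis=1)
for name, score in zip(candidate_resumes, scores):
    print(f"{name}: {score:.2f}")
```

Note the design trade-off: a scanner trained only on incumbents’ resumes will, by construction, favor candidates who resemble the existing workforce, which is exactly how the historical biases discussed below can be perpetuated.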

Employers also use AI tools to streamline the interview process.  Once an employer identifies applicants to be interviewed, chatbots can reach out to the applicants and schedule the interviews.  AI programs can even conduct virtual interviews using preset questions and analyze the applicant’s answers, word choices, facial expressions, body language, and tone.  With these advances in technology, however, come risks and consequences that healthcare employers may not anticipate.

Potential Pitfalls

AI recruiting and hiring tools are generally designed to eliminate bias.  Nevertheless, studies have shown that unintended bias may seep into the algorithmic process and have a disparate impact on underrepresented groups, which can expose employers to claims of discrimination.

For example, AI software that screens applicants based on the resumes of other applicants who were successfully hired by the employer can perpetuate historical workplace biases.  AI tools that look for masculine-biased words in resumes may also unintentionally favor male over female applicants, particularly for hospital leadership roles that have historically been male-dominated positions.  Similarly, AI software that conducts virtual interviews can potentially discriminate against persons with disabilities whose facial expressions, body language, or enunciation deviate from the norm.  Likewise, an AI program that prioritizes applicants who live within the same zip code as the healthcare employer, in an effort to minimize commute times and improve employee retention, may seem like a neutral criterion at first.  In practice, however, it could screen out applicants from protected groups if the facility is not located near areas with higher representation of diverse groups.
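
A minimal sketch of such a geographic screen (the zip codes and applicant data are hypothetical) shows how easily a facially neutral rule can be encoded:

```python
# Sketch of a facially neutral geographic screen; all data is hypothetical.
FACILITY_ZIP_PREFIXES = ("900", "901")  # zip codes near the hypothetical facility

applicants = [
    {"name": "Applicant A", "zip": "90012"},
    {"name": "Applicant B", "zip": "91331"},
]

# Prioritize applicants whose zip code falls near the facility.
prioritized = [a["name"] for a in applicants
               if a["zip"].startswith(FACILITY_ZIP_PREFIXES)]
print(prioritized)  # ['Applicant A']
```

Nothing in this filter references a protected characteristic, yet if the favored zip codes correlate with local demographics, its output can still skew against protected groups.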

Although employers may argue that their AI hiring tools were designed to weed out discriminatory hiring practices, the unintended consequence may be a disparate impact on underrepresented groups, leading to unlawful discrimination and related claims under Title VII of the Civil Rights Act or similar state laws, such as the California Fair Employment and Housing Act.  Unlike disparate treatment claims, which require proof of intentional discrimination by an employer, disparate impact claims involve employment practices that are facially neutral in their treatment of protected groups but nevertheless result in a disproportionate, negative impact on a protected group.  An employer can defend a disparate impact claim by presenting evidence that the hiring practice at issue is job-related and consistent with business necessity.  To counter this defense, a plaintiff need merely point to a less discriminatory practice that meets the employer’s legitimate business needs, which could be as simple as a small change to the search terms or algorithm used during the AI process.

Employers using AI tools may also be subject to claims under the Americans with Disabilities Act (“ADA”).  Among other things, the ADA prohibits employers from inquiring about and/or using a job applicant’s physical disability, mental health, or clinical diagnosis in pre-employment candidate assessments.  Therefore, AI tools that assess an applicant’s facial expressions, body language, and tone may give rise to claims under the ADA (and similar state laws) if they screen out applicants with disabilities, such as deafness, speech disorders, or mental illnesses.

The legal implications stemming from using AI in the recruitment process will only increase in the coming years as lawmakers propose or enact new laws to address AI hiring bias.  One example is Illinois’ Artificial Intelligence Video Interview Act (“AIVIA”), which became effective January 1, 2020.  The AIVIA imposes certain notice and consent requirements on employers using AI programs to analyze video interviews of applicants.  In February 2020, members of the New York City Council also proposed a bill that would require employers to disclose to job applicants whether AI tools were used to assess their candidacy for employment and which job qualifications or characteristics those tools screened for.  In addition, on December 8, 2020, ten U.S. Senators sent a joint letter to the Equal Employment Opportunity Commission (“EEOC”) urging the EEOC to use its investigative and enforcement authority to safeguard against discrimination resulting from the use of AI during the recruitment process.  Consequently, employers should be prepared for increased enforcement and regulatory oversight as reliance on AI during the recruitment and hiring process continues to grow.

Best Practices

To mitigate the risk of discrimination claims related to the use of AI in the recruitment process, healthcare employers should do the following:

  • Disclose the use of AI tools to applicants, explain how the AI works, and obtain the applicant’s consent to be evaluated by the AI program.  
  • Perform the necessary due diligence of the AI software to find out how the software has been used in the past and how it safeguards against implicit bias during the recruitment process.  Determine what information is input into the software, what characteristics are being assessed by the software and how the software makes decisions.  
  • Evaluate the qualifications of any third-party software vendors.  Determine whether and how the vendor developed the software to comply with federal and state employment laws.  
  • Conduct audits of the results of the software’s algorithms on a regular basis.  Analyze whether the results produced by the software show a disproportionate, negative impact on a protected group (see the audit sketch after this list).  Ensure that the software can be corrected, and correct it promptly if an adverse impact on protected groups appears.
  • Ensure that assessment tests and other simulations are accessible to applicants with disabilities.  Provide an alternative non-AI method for applicants with disabilities to continue through the hiring process.
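
As a rough illustration of what such an audit could look like, the sketch below applies the EEOC’s four-fifths rule of thumb to hypothetical screening results, comparing each group’s selection rate against the highest group’s rate.  The counts and group labels are invented for illustration:

```python
# Illustrative adverse-impact audit using the four-fifths (80%) rule of thumb
# from the EEOC's Uniform Guidelines. All counts below are hypothetical.

screening_results = {
    # group: (applicants screened, applicants passed)
    "Group A": (200, 120),
    "Group B": (150, 60),
}

selection_rates = {
    group: passed / screened
    for group, (screened, passed) in screening_results.items()
}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    ratio = rate / highest_rate
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A ratio below 0.8 does not establish liability by itself, but it is a common signal that the screening criterion deserves closer scrutiny and possible correction.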

Machines are only as good as the controls that are put into place.  Healthcare employers who are using or considering adopting AI tools in the recruitment process should consult with experienced counsel to ensure that the algorithms they use do not run afoul of federal, state, or local employment laws and to minimize potential legal liability.

About the Authors:

Dawn M. Irizarry is a Partner and Chair of the Firm’s Healthcare Practice Group at CDF Labor Law LLP, a California-based labor, employment and immigration law firm with offices throughout the state.  Dawn has focused her practice on counseling and defending businesses in labor and employment matters for almost 20 years.  In particular, Dawn has defended many healthcare institutions against claims of sexual harassment, unlawful discrimination, hostile work environment, retaliation, wrongful discharge, defamation, failure to accommodate and other employment-related disputes before federal and state courts.  Dawn can be reached via email at [email protected].

Allison Chua is an attorney at CDF Labor Law LLP.  Allison’s practice focuses on defending employers with workforces in California against a range of alleged claims, including employment discrimination, harassment, retaliation, and wage and hour class actions.  Prior to joining CDF, Allison served as General Counsel for a large staffing and human resources company, where she advised senior management on business and employment matters, investigated employee claims of discrimination and harassment, and ensured compliance with state and federal employment laws.  Allison can be reached via email at [email protected].
