Facial Recognition in Hiring: Occupational Segregation on Speed

Written by Geneva Lasprogata Sedgwick - Faculty Fellow and Associate Professor of Business Law
April 22, 2021

Artificial intelligence (AI) has gifted us a new technology: facial recognition. Increasingly, corporations in the U.S. are using this AI in their hiring processes to shorten timelines and decrease the associated human resources costs. Firms also contend that this AI promotes diversity in hiring by neutralizing human bias. On the surface, this looks like a legitimate business interest in efficiency and even an admirable commitment to diversity, equity, and inclusion. When we dive deeper, however, we see that there are real concerns about discrimination, privacy, and autonomy.


Employer Surveillance and Facial Recognition

For decades, employers have been using surveillance techniques in the workplace. Such techniques include monitoring employee email and internet usage, using biometric data such as fingerprints and eye scans to identify employees in real time, and collecting and analyzing prospective employees' DNA.

The reasons employers apply surveillance tools include rational objectives such as workplace security, workplace productivity, protection of employees from internal bullying and harassment, and protection of intellectual property assets. U.S. law sanctions some employer surveillance tools, confirming, for example, that employers have a logical and often imperative reason for protecting the workplace from risks like employee trade secret theft and misuse of technology to sexually harass coworkers. But U.S. law sometimes draws a line where the surveillance becomes excessive relative to an employer's legitimate objectives. Surveillance tactics that collect biometric data and draw inferences about you on that basis raise exactly these concerns.

Consider here the relatively recent collection by employers of applicants' genetic data during pre-employment health exams for the purpose of predicting an applicant's suitability for the job. Prior to the passage of U.S. federal law protecting employee genetic privacy and nondiscrimination, hiring managers could analyze DNA profiles to decide whether to hire, typically casting aside those with markers such as addiction, certain cancers, and even anger-management issues. At that time, my co-authors and I published an article in the American Business Law Journal analyzing the legislation before Congress and comparing it to the expansive rights of European employees under the EU Privacy Directive then in force. We argued for the need to have both privacy and nondiscrimination as policy mandates in any law regulating access to and use of our unique genetic data. And while agreeing with most of the proposed U.S. law, we did propose amendments to enhance privacy and nondiscrimination protections for job applicants and employees in the U.S., drawing on insights from the European approach.

Congress passed the final version of the Genetic Information Nondiscrimination Act (GINA) in 2008, which makes it unlawful for an employer to request, require, or purchase genetic information with respect to an employee. This includes genetic tests that do not reveal medical information. As surprising as it may be to acknowledge that these markers exist and can be identified in us, it is even more disturbing to consider that a corporate hiring professional would presume to know enough about science to predict an employee's fate.

One of the more recent surveillance technologies applied by corporate America is facial recognition, which now has become a tool used during the interview process to screen applicants and draw inferences about the traits and qualifications of a prospective employee.

This technology has come under scrutiny from digital activists who want to call our collective attention to the dangers of coded bias. Facial recognition is a flawed AI tool whose bias mostly impacts women and people of color, whose faces are recognized less accurately, leading to various personal harms associated with misidentification. Joy Buolamwini, an MIT researcher and founder of the advocacy organization the Algorithmic Justice League, is challenging both governmental and private commercial use of facial recognition as biased and dangerous in its current design. Ms. Buolamwini's original TED talk is a favorite share in our Seattle University courses on law and ethics in data analytics.

Facial recognition is a biometric tool typically associated with identifying individuals. The term "facial recognition," however, is used loosely and should be distinguished from facial analysis. For example, facial recognition may refer to a system that matches similar faces by searching a database. Facial analysis, on the other hand, can be used to predict features of a person such as age, gender, or emotion. Either way, the tools rely on machine learning and the input of massive numbers of data points into proprietary algorithms that have proven to be unreliable, particularly in policing applications where the technologies incorrectly identify individuals by race and gender. As a result, companies like IBM and Microsoft have stopped selling their facial recognition technologies to governments, at least until there is clear federal regulation on their use.
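To make that distinction concrete, here is a minimal Python sketch, not drawn from any vendor's product: both tasks operate on numeric face "embeddings," with recognition matching a face against a database of known identities and analysis predicting an attribute from the same embedding. The embed_face helper and the attribute weights are hypothetical placeholders for the proprietary models real systems use.

    import numpy as np

    # Hypothetical stand-in for a model that turns a face image into a numeric
    # vector (embedding); real systems use proprietary deep networks.
    def embed_face(image):
        rng = np.random.default_rng(abs(hash(image)) % (2**32))
        return rng.normal(size=128)

    def cosine_similarity(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Facial *recognition*: match a probe face against stored embeddings of known
    # identities (database maps identity -> embedding) and return the best match.
    def recognize(probe_image, database, threshold=0.8):
        probe = embed_face(probe_image)
        best_id, best_score = None, -1.0
        for identity, stored in database.items():
            score = cosine_similarity(probe, stored)
            if score > best_score:
                best_id, best_score = identity, score
        return best_id if best_score >= threshold else None

    # Facial *analysis*: predict an attribute (age band, emotion, etc.) from the
    # same embedding; the weights here are untrained placeholders.
    def analyze(probe_image, attribute_weights, labels):
        scores = attribute_weights @ embed_face(probe_image)
        return labels[int(np.argmax(scores))]

    # Example usage with made-up image identifiers:
    db = {"alice": embed_face("alice.jpg"), "bob": embed_face("bob.jpg")}
    print(recognize("alice.jpg", db))  # -> "alice"

The point of the sketch is only that the two operations answer different questions: "who is this?" versus "what kind of person is this?" It is the second question that hiring tools ask.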

But the concern here centers on how a company uses technology for facial analysis. Like genetic testing before it, facial recognition is being used in the hiring process to select the most "appropriate" candidates for open positions. Individually unique biometric data is being used once again by hiring professionals (not scientists) to predict individual competency, likability, and firm fit. What law exists to protect against infringement of the applicant's rights? According to the law firm SheppardMullin:

In the hiring process, facial recognition technologies assist employers by analyzing images or videos of job applicants’ faces (e.g., brow raising, eye widening, smiling, etc.) and use of language and verbal skills (e.g., passive or active voice, speed, tone, etc.) to infer characteristics about them that correlate to job performance that can then be ranked against other applicants. This process exposes employers to potential risk under antidiscrimination laws because, according to proponents, if the data used to train these technologies is based on similar data of successful candidates and there was a prior history of biased hiring that led to a homogeneity in hired candidates, then a bias towards that group is introduced into the technology and reflected in the selections made.


Is federal antidiscrimination law enough to protect a job applicant’s human rights to privacy and equality? Does existing nondiscrimination law apply to protect these human rights, or do we need a new law, a GINA equivalent perhaps, to safeguard both?


In Re HireVue and Facial Recognition

HireVue is a leading provider of AI-based pre-employment screenings. The Utah-based company markets its recruiting tools as eliminating bias in the hiring process. However, we know that hiring algorithms are likely to be biased by default: they are designed by humans and trained on data from "top performers," an approach that can perpetuate past hiring biases.

HireVue says it has more than 700 business customers globally, including Unilever, GE, Delta, Hilton, Staples, Oracle, Carnival, Ikea, and Anheuser-Busch. Its clients seem eager to adopt a tool that streamlines an often time-consuming and sometimes demanding hiring process. Goals of efficiency and the promotion of diversity are arguably both important human resource objectives. However, having a legitimate business purpose does not make this technology per se lawful or ethical, particularly given the allegations of bias and discrimination.

HireVue conducts video-based and game-based pre-employment assessments of job candidates on behalf of employers. These assessments utilize facial recognition technology not to recognize the identity of the individual job applicant, but rather to analyze features that allegedly reveal the "cognitive ability", "psychological traits", "emotional intelligence", and "social aptitudes" of that applicant.

According to HireVue, the science is sound. The company claims its assessment tools are, in fact, not facial recognition technology. That depends on whether it defines the term narrowly, as AI that identifies individuals based on their faceprints, or more expansively, in alignment with the U.S. Federal Trade Commission's (FTC) definition, which includes: "technologies that merely detect basic human facial geometry; technologies that analyze facial geometry to predict demographic characteristics, expression, or emotions; and technologies that measure unique facial biometrics."

HireVue claims that its technology is based on "psychometric assessments" that measure traits and competencies associated with performance at work. According to the company, "decades of research" indicate that its technology reliably predicts, with less bias, cognitive ability, competency tied to performance, and the presence of personality traits that improve work with others.

In 2019, the Electronic Privacy Information Center (EPIC) filed a complaint with the FTC against HireVue. The petition challenged HireVue's application of the technology as an unfair and deceptive trade practice, claiming that its AI tools were unproven, invasive, and prone to bias. In January of this year, facing ongoing litigation and public criticism of the technology, HireVue announced that it will stop relying on "facial analysis" to assess job candidates. It will, however, continue to analyze biometric data from job applicants, including speech, intonation, and behavior, ostensibly to predict our future possibilities at work.


Occupational Segregation

Occupational segregation is, essentially, the sorting of people into jobs based on perceptions about their demographics. Human bias, subconscious or conscious, sets up heuristics that people use to form opinions and make judgments about whom to hire. This type of bias reflects stereotypes that influence who is selected for a particular job despite objectively verifiable facts, like an applicant's skill set, education, and relevant experience. This may explain why more women are hired as kindergarten teachers and more men are hired as school principals. While studies of this bias seem to focus more on gender, recent research also examines the impact of occupational segregation on race. For example, one recent study revealed that in the U.S. women are represented more in occupations characterized by "high warmth" and "low competency"; Asian people are more represented in occupations characterized as "high competency"; and Black and Hispanic workers are more represented in occupations characterized as "low competency".

Human bias that influences hiring decisions, resulting in occupational segregation, harms women and people of color, notably in compensation, treatment, and professional opportunities over time. This bias is problematic on its own, but what happens when AI replicates or reinforces it?

AI is a suite of computer-enabled technologies, including machine learning and deep learning, that "learn" from data fed into an algorithm (a mathematical model) designed by humans. The opportunity for bias exists both in algorithmic design and in the data selected for use, which can itself be biased, as in the case of facial recognition technology trained on data from "top performers" in a job category. AI used in this way mimics the heuristics that humans use in making hiring decisions. The AI mirrors our own stereotypes and biases, reinforcing the occupational segregation loop. So, as Charlotte Alexander has argued, when a company utilizes AI technology like facial recognition to streamline the hiring process, and the hiring manager relies on the recommendation of the AI, the resulting decision will be informed by both the bias of the hiring manager and the bias of the AI. This is what I call occupational segregation on speed.
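To see how training on "top performer" data can launder past bias into a model, consider this minimal, self-contained Python simulation. The data are synthetic and the model is a plain logistic regression, not any vendor's system; the point is only the mechanism: when historical "top performer" labels partly reflect group membership rather than skill, a screening model trained on those labels will score equally skilled candidates from different groups differently.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic historical hiring data.
    # group: 1 = historically favored group, 0 = everyone else.
    group = rng.integers(0, 2, size=n)
    skill = rng.normal(size=n)  # true, job-relevant ability

    # Biased historical labels: "top performer" depended on skill AND on group,
    # because past managers rated the favored group more generously.
    top_performer = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

    # Train a screening model on those labels, using group as a feature
    # (or any proxy for it, such as features extracted from a face or a voice).
    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, top_performer)

    # Score two equally skilled applicants who differ only in group membership.
    applicants = np.array([[0.5, 1],   # favored group
                           [0.5, 0]])  # other group
    print(model.predict_proba(applicants)[:, 1])  # favored applicant scores higher

Dropping the group column does not necessarily fix this: any feature correlated with group membership, and facial or vocal features often are, lets the model recover the same pattern.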


Why Do We Care?

We are presented with a new AI technology in the form of facial recognition that some believe offers promise for corporations seeking both cost savings associated with hiring and effective tools to promote diversity in their workforce. It is clear, however, that what is positioned as neutral technology is in fact biased, causing harm to women and people of color. A recent Amazon debacle illustrates this well. The company believed its hiring algorithm was neutral but learned it was biased against women in hiring for software developer and other technical positions. Even after Amazon attempted to fix the algorithm, the problem persisted, and the company eventually discontinued its use.

So, why do we care? I believe that we should care about the use of facial recognition AI in hiring for three reasons.

First, this is a misuse of AI that perpetuates bias and inequity in the workplace.

Second, the use of facial analysis highlights our expectation that firms should choose ethics over efficiency. The importance of this ethical point is magnified by the fact that the law is typically far behind technology in responding to human rights issues of privacy, equality, and nondiscrimination that beg for regulation.

And finally, because facial analysis is an instance of the use of AI to make predictions about our identity—who we are and who we will become—it robs us of our autonomy and what I like to refer to as our very human right to self-design.


But what do you think? I would like to know. Please feel free to share your thoughts with me via email at gsedgwick@seattleu.edu.