DL Seminar | The Paradox of Automation as Anti-Bias Intervention
  • Digital Life Initiative


Updated: May 16, 2019

Post by Iris Zhang | MS Student, Cornell Tech

Talk by Ifeoma Ajunwa | School of Industrial and Labor Relations, Cornell University

Illustration by Gary Zamchick | DLI Chronicler

A common misperception is that removing humans from decision-making would eliminate human bias. Is that true? Not really. Professor Ifeoma Ajunwa, an Assistant Professor of Labor and Employment Law at Cornell's Law School and School of Industrial and Labor Relations whose research focuses on the ethical governance of workplace technologies, came to unveil the paradox of automation as an anti-bias intervention.


Human beings are biased. Like many others, Dr. Ajunwa once believed that automated systems could serve as effective anti-bias interventions, removing barriers for marginalized populations when it comes to hiring. However, after hearing the stories of a few formerly incarcerated students, she realized this is not true. The students said they actually preferred to meet a person before interviewing, because e-hiring systems would automatically discard their résumés. With such a system in place, they would never get a chance to interview, let alone a job. A human interviewer, on the other hand, might actually listen to their stories and make an exception.


Sadly, the story of the formerly incarcerated students is not unique. Similar incidents regularly surface in the news. For example, Amazon recently scrapped a secret recruiting tool that showed bias against women. The experimental system gave candidates a score from one to five stars, much like products listed on Amazon's website. Because the model was trained on ten years of résumés submitted to the company, most of them from men, it did not rate candidates in a gender-neutral way. Similarly, Facebook has let job advertisers target only men, so many job ads were simply not visible to women. Facebook has reportedly also allowed job advertisers to target younger audiences, even though the federal Age Discrimination in Employment Act of 1967 clearly prohibits hiring bias against people 40 or older. Imagine a newspaper ad stating that only people younger than 30 need apply; how would the public react? It is blatantly unlawful. Bias hidden behind technology is still bias.
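To see how this can happen, consider a toy sketch (entirely fabricated data, not Amazon's actual system): a scorer that learns keyword weights from past hiring decisions will faithfully reproduce whatever skew those decisions contained.

```python
# Toy illustration of bias replication: a naive scoring model "trained"
# on historically biased hiring outcomes reproduces that bias.
# All resumes and keywords below are fabricated for this sketch.

from collections import defaultdict

# Historical (resume_keywords, hired?) pairs; past hiring skewed male.
history = [
    ({"python", "chess club"}, True),
    ({"python", "football"}, True),
    ({"java", "football"}, True),
    ({"python", "women's chess club"}, False),
    ({"java", "women's soccer"}, False),
]

# "Training": record each keyword's observed hire rate.
hires = defaultdict(int)
totals = defaultdict(int)
for keywords, hired in history:
    for kw in keywords:
        totals[kw] += 1
        hires[kw] += hired

def score(keywords):
    """Average hire rate of a candidate's keywords (0.5 for unseen ones)."""
    rates = [hires[kw] / totals[kw] if totals[kw] else 0.5 for kw in keywords]
    return sum(rates) / len(rates)

# Two equally skilled candidates; only one gendered keyword differs.
print(score({"python", "chess club"}))          # high: matches past hires
print(score({"python", "women's chess club"}))  # low: penalized keyword
```

The model never sees gender directly; it penalizes the word "women's" only because past human decisions did.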


In these instances, automated decision-making not only fails to reduce bias but serves to replicate and amplify it. These problematic features of the algorithmic capture of hiring are at odds with the fundamental principle of equal opportunity employment. So what is new here? What elements of the social world does new technology make salient that went relatively unnoticed before?


In an ideal world, automated decision-making would be better than human decision-making. The problem, however, is that human hands are still in it. Laws govern human decision-making to prevent bias, but not automated decision-making; technology enables practices that would otherwise be prohibited. Why is that the case? When human bias hides behind these technologies, smoking-gun evidence is difficult to produce. Even if the defendant offers pretextual reasons, the plaintiff must prove that the defendant was motivated by animus against a protected group. We need increased scrutiny of the unregulated power hiding behind automated decision-making systems. There is a dire need for a new legal framework.


Dr. Ajunwa argues for the need to establish a fiduciary duty in the context of automated hiring. Such a framework would need to address the practical difficulties of proof, and it would need to work alongside an audit requirement. When an audit detects bias, human intervention would be required to double-check the hiring criteria and find out where the problem lies. Automated decision-making systems should be here to reduce bias, not to replicate, amplify, and hide our discrimination. To rectify the problems behind algorithmic decision-making, we need our laws and ethics to keep pace with our technologies.
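One concrete form such an audit could take is the EEOC's "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures: a selection rate for any group below 80% of the most-favored group's rate is treated as evidence of adverse impact. The sketch below applies that test to hypothetical pass-through rates from an automated screener; the numbers are invented for illustration.

```python
# Minimal audit check based on the EEOC four-fifths rule: flag any group
# whose selection rate falls below 80% of the highest group's rate.
# The audit data below is hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_violations(outcomes, threshold=0.8):
    """Return the set of groups whose rate triggers the adverse-impact flag."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g for g, r in rates.items() if r < threshold * best}

# Hypothetical pass-through counts from an automated resume screener.
audit = {"men": (60, 100), "women": (30, 100)}
print(four_fifths_violations(audit))  # women's rate 0.30 < 0.8 * 0.60
```

A flag from a check like this would not prove discrimination on its own, but it is exactly the kind of trigger for mandatory human review that the audit requirement contemplates.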
