Kathleen Creel
Northeastern University
Picking on the Same Person: The Ethics of Algorithmic Monoculture
Abstract
Human mistakes are inevitable, but fortunately heterogeneous. Not so with machine decision-making. Using the same machine learning model for high-stakes decisions in many settings amplifies the strengths, weaknesses, biases, and idiosyncrasies of the original model. When the same person re-encounters the same model, or models trained on the same dataset, again and again, she might be wrongly rejected every time. Algorithmic monoculture could thus lead to consistent ill-treatment of individual people by homogenizing the decision outcomes they experience. This talk will formalize a measure of outcome homogenization, describe experiments on US census data demonstrating that sharing training data consistently homogenizes outcomes, and then make a contractualist argument for why, and in what circumstances, outcome homogenization due to reliance on algorithmic sorting is wrong.
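To make the idea of outcome homogenization concrete, the sketch below shows one plausible way to quantify it: compare how often the same person is rejected by every deployed model against what independent decision-makers would produce. This is an illustrative assumption for readers unfamiliar with the concept, not the formalization presented in the talk; all names and the simulated data are hypothetical.

    # Illustrative sketch (assumed setup, not the talk's actual measure):
    # quantify outcome homogenization across several deployed classifiers.
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_models = 10_000, 3

    # Hypothetical reject/accept decisions (True = rejected) from three models.
    # A shared signal stands in for models trained on the same dataset,
    # which makes their errors correlated rather than independent.
    shared_signal = rng.random(n_people)
    decisions = np.stack(
        [(shared_signal + 0.1 * rng.random(n_people)) > 0.8 for _ in range(n_models)]
    )

    # Observed rate of "systemic rejection": the same person rejected by every model.
    systemic_rate = decisions.all(axis=0).mean()

    # Expected rate if each model's rejections were statistically independent.
    independent_rate = np.prod(decisions.mean(axis=1))

    # A ratio above 1 indicates outcomes are more homogenized than
    # independent decision-makers would produce.
    homogenization_ratio = systemic_rate / independent_rate
    print(f"systemic rejection rate: {systemic_rate:.3f}")
    print(f"expected under independence: {independent_rate:.3f}")
    print(f"homogenization ratio: {homogenization_ratio:.2f}")

Under this toy setup the ratio comes out well above 1, because the correlated models tend to reject the same individuals; fully independent decision-makers would drive the ratio toward 1.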
About
Dr. Katie Creel is an assistant professor of philosophy and computer science at Northeastern University, holding joint appointments in the College of Social Sciences and Humanities and Khoury College of Computer Sciences. Her research explores the moral, political, and epistemic implications of machine learning as it is used in automated decision-making and in science.
Before joining Northeastern, Creel was the Embedded EthiCS fellow in the Center for Ethics in Society and the Institute for Human-Centered Artificial Intelligence at Stanford University. In this role, she worked with the Stanford Computer Science department to embed ethics in the core curriculum. Creel worked as a software engineer at MIT Lincoln Laboratory and subsequently earned her doctorate in history and philosophy of science at the University of Pittsburgh. She holds a master's in philosophy from Simon Fraser University and a bachelor's in computer science and philosophy from Williams College.