Zachary Chase Lipton

Carnegie Mellon University

Fairness & Interpretability in Machine Learning and the Dangers of Solutionism

Abstract

Supervised learning algorithms are increasingly operationalized in real-world decision-making systems. Unfortunately, the nature and desiderata of real-world tasks rarely fit neatly into the supervised learning contract: real data deviates from the training distribution, training targets are often weak surrogates for real-world desiderata, prediction error is seldom the right utility function, and while the framework ignores interventions, predictions typically drive decisions. While the deep questions concerning the ethics of AI necessarily address the processes that generate our data and the impacts that automated decisions will have, neither ML tools nor proposed ML-based solutions tackle these problems head-on. This talk explores the consequences of deploying ML-based technology in the real world and the limitations of recent remedies (so-called fair and interpretable algorithms) for mitigating societal harms, and contemplates the meta-question: when should (today's) ML systems be off the table altogether?

About

Zack Lipton is an Assistant Professor at Carnegie Mellon University (CMU), jointly appointed in the Tepper School of Business and the Machine Learning Department. Additionally, he is affiliated with the Heinz School of Public Policy. His research spans core ML methods and theory, their applications in healthcare and natural language processing, and critical concerns, both about the mode of inquiry itself and about the impact of the technology it produces on social systems. He completed his PhD at the loveliest of universities (in UCSD's Artificial Intelligence Group), and if he had a time machine, he would go back, take two years longer to graduate, and actually learn to surf.
