What We Owe to Decision-subjects: Beyond Transparency and Explanation in Automated Decision-Making
John Basl

Northeastern University

Abstract

In this paper, we defend what we call the Interpretability Thesis, which states that, in many contexts, decision-makers are morally obligated to avoid basing their decisions about how to treat decision-subjects on the outputs of non-interpretable ("black box") algorithmic decision systems. Others have defended this thesis, typically by arguing that we have duties of transparency to decision-subjects which require us to make certain information available to them. This approach to defending the interpretability thesis has been met with skepticism, however: skeptics question the grounds of these duties of transparency and worry that they hold algorithmic decisions to higher standards than human decision systems, which also fail to meet duties of transparency. We provide an alternative defense of the interpretability thesis grounded in a different set of duties to decision-subjects. We argue that decision-makers have duties of due consideration to decision-subjects. These are duties governing how decision-makers form beliefs about decision-subjects, which decision rules they may permissibly deploy, and which capacities they must exercise in making decisions. After articulating and defending these duties of due consideration, we argue that black box systems often serve as an obstacle to satisfying them, and that skeptical responses are unjustified.

About

John is an associate professor of philosophy whose primary areas of research include moral philosophy and applied ethics, especially the ethics of emerging technologies such as AI and synthetic biology. He teaches courses in moral philosophy, ethics of technology, and ethics in scientific research.
