Recent advances in data analysis have led to the development of an abundance of technologies to support human decision-making in many societal domains. Such applications, often labeled artificial intelligence, employ machine learning and other types of statistical data analysis for classification, prediction and decision support. Due to their widespread utilization, they affect most of us on a daily basis, albeit in different ways. As countless cases have demonstrated, data-based systems are prone to biases and may further entrench or even increase inequalities and discrimination by transforming biased evidence into seemingly neutral numbers. As a result, the question arises whether and under what conditions we can or should trust such systems. In my talk I will first turn to the question of whether we can sensibly talk about trust in AI systems. Proposing a socio-technical view on AI, I will argue that we can trust AI systems if we conceive of them as systems consisting of networks of technologies and human actors, but that we should trust them if and only if they are trustworthy. I will conclude my talk by outlining some epistemic and ethical requirements for trustworthy systems, along with two caveats.
Judith Simon is Full Professor for Ethics in Information Technologies at the Universität Hamburg. She is interested in ethical, epistemological and political questions arising in the context of digital technologies, in particular with regard to big data and artificial intelligence. Judith Simon is a member of the German Ethics Council as well as various other committees of scientific policy advice, and was a member of the Data Ethics Commission of the German Federal Government (2018–2019). Her Routledge Handbook of Trust and Philosophy was published in June 2020.