Are We All Miasmists Now? Parallels Between Recommender Systems and the History of Public Health
Thomas Krendl Gilbert

Cornell Tech

Abstract

Attention capitalism has generated design processes and product development decisions that prioritize platform growth over all other considerations. To the extent limits have been placed on these incentives, interventions have primarily taken the form of content moderation. While moderation is important for what we call “acute harms,” societal-scale harms – such as negative effects on mental health and social trust – require new forms of institutional transparency and scientific investigation, which we group under the name accountability infrastructure.

This is not a new problem. In fact, the history of public health offers many conceptual lessons and implementation approaches for accountability infrastructure. Channeling these insights, we reinterpret the societal harms generated by technology platforms as a public health problem. To that end, we present a novel mechanism design framework and practical measurement methods for that framework. The proposed approach is iterative, built into the product design process, and applicable to either internally motivated interventions (i.e., self-regulation by companies) or externally motivated ones (i.e., government regulation).

In doing this, we aim to help shape a research agenda of mechanism design principles for problem areas on which there is broad consensus and a firm base of support. We offer constructive examples and discussion of potential implementation methods related to these topics, as well as several new data illustrations of the potential effects of exposure to online content.

About

Thomas Krendl Gilbert received an interdisciplinary Ph.D. in Machine Ethics and Epistemology from UC Berkeley. With prior training in philosophy, sociology, and political theory, Thomas designed his degree program to investigate the ethical and political predicaments that emerge when artificial intelligence reshapes the context of organizational decision-making. His recent work investigates how specific algorithmic learning procedures (such as reinforcement learning) reframe classical ethical questions and recall the foundations of democratic political philosophy, namely the significance of popular sovereignty and dissent for resolving normative uncertainty and modeling human preferences. This work has concrete implications for the design of AI systems that are fair to distinct subpopulations, safe when enmeshed with institutional practices, and accountable to public concerns, including medium-term applications such as automated vehicles.
