University of Haifa
Social media platforms are responsible for mediating public discourse online. Increasingly, platforms use artificial intelligence and machine learning (AI) to perform content moderation (e.g., matching users and content, adjudicating conflicting claims, or detecting unwarranted content). In a digital ecosystem governed by AI, we currently lack sufficient safeguards against the blocking of legitimate content. We also lack a space for negotiating meaning and for deliberating the legitimacy of particular speech. In this presentation, Elkin-Koren proposes to address AI-based content moderation by introducing an adversarial procedure. Algorithmic content moderation often optimizes for a single goal, such as removing copyright-infringing materials as defined by right holders, or blocking hate speech. Meanwhile, other public-interest values, such as fair use or free speech, are often neglected. Contesting Algorithms introduces an adversarial design that reflects conflicting interests and thereby offers a check on dominant removal systems. The presentation will introduce the strategy of Contesting Algorithms, discuss its promises and limitations, and demonstrate how regulatory measures could promote the development and implementation of this strategy in online content moderation.