DL Seminar | Contesting Algorithms
By Prasenjit Roy and Yuxi Sun | MS Students, Cornell Tech
At the Digital Life Initiative Seminar on October 24, 2019, Professor Niva Elkin-Koren from the Faculty of Law at the University of Haifa presented her work on Contesting Algorithms. She draws on theories of adversarial interactions - a key concept in both computer science and law - to propose solutions for opaque and ungovernable content moderation systems.
In the lecture, Elkin-Koren introduced the strategy of Contesting Algorithms for AI-based content moderation. She demonstrated how regulatory measures can be used to promote the development and implementation of this strategy on online content moderation platforms. At present, content moderation algorithms optimize for a single goal - for example, removing copyright-infringing material from the platform. The drawback is that other public-interest values, such as fair use and free speech, are often ignored. Contesting algorithms introduce an adversarial design, reflecting conflicting interests and providing a check on dominant removal systems.
The Difference Between Current and Past Scenarios
The replacement of human-led moderation by more sophisticated technical methods such as AI marks a significant shift in how content and information are moderated. Moderation, historically under the purview of governments and institutions, is now in the hands of large, highly valued corporations with the power to control the way we interact with our fellow human beings. These companies also tailor content to the needs of each individual they serve. The question of access to information therefore becomes important to address: previously, almost all citizens in a country had access to the same information in the public domain.
Moderation by profit-driven companies changes these dynamics. The disconnect between corporate goals and society's information needs makes the question of content moderation much more pressing. Who should determine what stays on these platforms and what is removed? When content uploaded to major platforms encourages healthy debate, the results can be valuable: such debates and discussions have led to major people-driven movements and revolts all over the world. While these platforms have the power to make the world a more inclusive place, a series of questions remains:
What content is removed?
What happens to the content that is removed?
Why is it removed?
Who removes it?
What is the impact of content removal?
It has been reported that 99.5% of all terrorist propaganda is removed and, more importantly, that 38% of the removed content is never seen even once by a human. This raises the question of how, and on what basis, this content is removed from the public sphere.
Filtering and Oversight
The AI techniques that moderate and personalize content on large platforms are almost entirely opaque. They are prized by platforms as trade secrets - a source of competitive advantage. This is concerning, given that filtering AI needs to align both private and public interests: it is moderating a public good. Moreover, restraints are needed to ensure the rule of law. This raises the question: why aren't the moderators being moderated?
In addition, the complexity of AI moderation tools means that even if their operation were made visible, it would not be easily understood. These tools aim to enforce specific rules, such as copyright law, but the ways in which those laws are encoded in the system are not fully intelligible to people. AI systems also remove content ex ante (before it is seen by others) rather than ex post (after it has been posted), as in human-led moderation. These factors combine to produce a lack of oversight, as the technical and social tradeoffs are concealed by opacity and moderation speed.
Elkin-Koren proposed a solution that she calls a public, or common, AI. For every post removed by a privately operated AI system - for copyright infringement or threatening content, say - the public AI would evaluate the same content against measures of the public good, such as fair use or free speech. This "adversarial" system would be trained on observational data and legal case decisions, and would allow us to learn what platforms are removing and why. The proposal calls for changes to both the technical and regulatory aspects of current systems to make them fairer and more accountable.
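The adversarial loop described above can be sketched in a few lines of code. This is only an illustrative toy, not part of Elkin-Koren's proposal: the two scoring functions, thresholds, and keyword checks are all hypothetical stand-ins for what would, in practice, be trained models (the platform's private removal classifier and a public-interest model trained on case law).

```python
# Hypothetical sketch of the adversarial review loop: a platform's private
# removal model and a public-interest model score the same post, and
# contested removals are escalated to human review. All names, keyword
# checks, and thresholds are illustrative assumptions, not a specification.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def platform_removal_score(post: Post) -> float:
    """Stand-in for the platform's private model (e.g. a copyright match)."""
    return 0.9 if "infringing" in post.text else 0.1


def public_interest_score(post: Post) -> float:
    """Stand-in for the public AI, trained on e.g. fair-use case decisions."""
    return 0.8 if "commentary" in post.text else 0.2


def review(post: Post, remove_threshold: float = 0.5,
           public_threshold: float = 0.5) -> str:
    removal = platform_removal_score(post)
    public = public_interest_score(post)
    if removal < remove_threshold:
        return "keep"            # platform would not remove it anyway
    if public >= public_threshold:
        return "human_review"    # adversarial check: removal is contested
    return "remove"              # both models agree removal is warranted


post = Post("p1", "infringing clip used as commentary")
print(review(post))  # prints "human_review" - a contested removal
```

The key design point is the middle branch: the public AI cannot itself force content to stay up, but it can flag removals that conflict with public-interest values, creating the record and the human oversight that current one-sided systems lack.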
Regulatory Oversight. Current regulatory strategies for AI content moderation include transparency (making the removal process viewable), auditing (keeping records of removals), and due process protections that set formal standards for removal. To implement public AI, institutional structures would need to be reevaluated. For example, safe harbor laws could allow user content - even potentially harmful content - to remain, provided that it passes the public AI's review. Human review would still be needed to provide oversight of what is removed.
Technical Oversight. Technical solutions such as system reconfigurations and opportunities for subversion can create checks on corporate opacity. Public AI makes moderation a public good, open to public scrutiny, adding a social layer to the systems that moderate our discourse. Using social values as an input to the evaluation rubric would help maintain the current social fabric and information flows without disproportionately affecting any particular segment of society.
Challenges to Public AI
The proposed public AI system raises several challenges. One is developing the incentives and funding tools required to implement it. Elkin-Koren proposes that the system be supported by taxes on both the public and corporations, as is the model for other public utilities. Reinforcing the system's legitimacy - ensuring that it provides genuine social value - is also a challenge. Using court decisions as inputs to the public AI model may help provide the needed legitimacy.