Severin Engelmann

Cornell Tech

Effectively Countering Hate Speech on X


Effectively reducing hate speech on social media is a defining challenge of the digital age. Hate speech inflicts non-trivial harm on individuals and groups based on their ethnicity, gender, religion, and other characteristics, and its lasting effects silence and marginalize vulnerable communities. When people see hate speech on social media, they sometimes counter it publicly by condemning the transgression itself and/or by showing solidarity with the victim. In an in-the-wild study on X (formerly Twitter), we controlled user accounts to deliberately counter racist slurs and investigated whether transgressors would change their behavior following our intervention. I will present the results of this research in the first part of the talk. Moving from the descriptive to the normative, I then argue that “ideal” counterspeech should follow three directives: first, counterspeech should be tailored to transgressor groups; second, it should be empathy-based; and third, it should be performed by the social media operator.


Severin studies the ethical legitimacy of classification in powerful socio-technical systems such as the Chinese Social Credit System, social media profiling, and facial analysis AI. With a background in Philosophy of Technology and Computer Science, his research combines conceptual analysis with quantitative and qualitative methods.

Prior to his Postdoctoral Research Fellowship at the Digital Life Initiative at Cornell Tech, Severin completed his Ph.D. at the School of Computation, Information and Technology at the Technical University of Munich (TUM). He also designed and taught courses in digital ethics at TUM’s Department of Computer Science.

Severin was a visiting scholar at the School of Information at UC Berkeley and the Max Planck Institute for Research on Collective Goods. His work has been published in cross-disciplinary Computer Science conferences such as Fairness, Accountability, and Transparency (FAccT) and Artificial Intelligence, Ethics, and Society (AIES), and it has been covered by media outlets such as TechCrunch.
