Are social media connecting us or tearing society apart? How do trolls, fake news, memes, and online misinformation affect politics today? Recent scandals surrounding Facebook founder Mark Zuckerberg, concerns about election rigging, and troll activism have reinforced a sense of unease. We have learned that the Internet is neither objective nor neutral: it shows different content depending on a user profile created by AI, based simply on the websites you visit, the posts you like, and how long you look at a picture. How do social media radicalize their users? What is their role in the recent resurgence of right-wing nationalism? How should we react to extreme or hateful speech online? To obtain a more holistic picture, we spoke with Prof. Dr. Sahana Udupa, Professor of Media Anthropology at Ludwig-Maximilians-Universität Munich, whose aim is to promote a global comparative study of extreme online language, honed by ethnographic and historical sensitivity.
"Vitriol and aggressive expressions begin to surreptitiously slide into the normal"
L.I.S.A.: As a professor at the LMU, your work covers, among other topics, digital politics, online extreme speech and hate speech, and media policy. Why do you consider the concept of extreme speech more adequate than that of hate speech in the context of social media?
Prof. Udupa: My first impulse to propose the concept of extreme speech came from the realization that there was very little ethnographic knowledge about the actors, and their lived worlds, who compose, perpetuate, and normalize vitriolic speech acts online. On the one hand, the discourse around "hate speech" came with strong normative-regulatory baggage, and the motivation in most cases was to identify and remove such speech. On the other hand, discussions about Internet speech hovered around the binary of free speech versus hate speech, structuring the debate in terms of taking sides with regulating hate speech or advocating for free speech. In most instances, this choice was posed as a zero-sum game. I felt there was a pressing need to bring anthropological perspectives on new media, exemplified by brilliant ethnographies such as Gabriella Coleman's study of the Anonymous group[1], to debates around online hate speech. It promised to open up an analytical field that can track and understand the thick lived contexts within which vitriol and aggressive expressions begin to surreptitiously slide into the normal, entering the lives of online users in mundane, if emotionally charged, ways, and thereby reconfiguring what is considered mainstream and legitimate in cultures of political discourse. I was then carrying out the first phase of fieldwork among online right-wing Hindu nationalists in India. It showed me how approaches that make easy villains out of right-wing actors would lead to limited, polemical analyses, leaving out the nuances of the instigations and conditions of life that inform their right-wing politics online. This ethnographic attention came with the obligation to extend the same principles of trust and respect towards our research interlocutors even when they harbored less than ideal or outright abhorrent political views. These are admittedly difficult choices.
The objective is not to condone and support their views but to excavate the lived and historical conditions that surround online vitriol through ethnographic fieldwork that requires empathy as practical morality. To bring this ethnographic sensibility beyond the regulatory language of hate speech was the first motivation for developing the concept of extreme speech. Of course, the term itself existed before we used it this way, but it was able to signal the intention to understand speech acts that stretch the boundaries of legitimate speech along the twin axes of truth/falsity and civility/incivility, without taking a priori normative positions that pathologize such speech. When I exchanged these ideas with Matti Pohjonen, we started thinking not only about bringing ethnographic attention to online practices and the meanings that people attach to their online actions, but also about making a clear departure from notions of online "extremism" that saw online speech as a form of "risk" or "threat". Through his research associations with online extremism projects, Pohjonen was able to stress the importance of departing from this securitization discourse.
What finally pressed me to incessantly chase this question, seek new research partners, develop publication plans, hold conferences, and so on, was the very political climate we had come to inhabit. Xenophobic and nationalist expressions are expanding online and offline, and those who advocate for inclusive societies are dragged into prolonged online wars that deride them with a slew of seemingly jocular putdowns, name-calling, shaming, and offensive comments. I am now analyzing one such troll war on Twitter, in which right-wing nationalist voices hounded a critical scholar with labels such as "race baiter" as well as ad hominem attacks and body shaming in the context of her work on white supremacy in the UK. This most recent incident is strikingly similar to the troll attack against a liberal journalist that I observed in 2013, when I had just started my research on online Hindu nationalists in India. It was impossible for me to turn away from this topic, for intellectual reasons, but also for how it shaped my experiences of researching the Hindu right in India as a privileged scholar residing "abroad" and living in Germany at a time when Pegida hosted public rallies. The role of digital media in the expansion of xenophobic populist politics has now emerged as a pressing question. I should confess that it is strenuous to research this topic, but each day, I feel more determined to advance this work with colleagues from different institutes around the world and with my students at LMU. The concept of extreme speech is now being developed through a range of studies we have solicited and collected, including in the forthcoming book Digital Hate: The Global Conjuncture of Extreme Speech[2], which I have edited with Iginio Gagliardone and Peter Hervik.