The Problems of Platform Protections
Yesterday, I celebrated the national Martin Luther King Jr. holiday in the United States, on the heels of the President of the United States doubling down on his racist agenda with abhorrent comments against people of color, immigrants, and those who don’t reflect his vision of America: an America where nazis and white supremacists are legitimated through more than just his retweets on Twitter. When we discuss Dr. King’s legacy, we spend considerable time talking about his commitments to ending poverty and economic oppression, which is fundamentally tied to racial and gender oppression. We recall that Dr. King was murdered for supporting sanitation workers who wanted fair wages and labor rights, and that harassment, stalking, trolling, and even murder can be the consequences of standing up to white supremacy and speaking truth to power.
In Trump’s America, not unlike the America in which Dr. King struggled, white supremacists are certainly not disavowed for their hate speech and acts. The news headlines in the U.S. each week remind us that Black political thought – from Dr. King to Black Lives Matter – is central to our contemporary struggle for social, economic, and political reparation after three centuries of unresolved enslavement and occupation in the Americas. Trump’s hate speech has been central to the emboldening of neo-nazi hate speech online, and the implications of this phenomenon are intensifying for vulnerable populations.
In the spirit of trying to address these entanglements of hate speech and the digital, Dr. Diana Ascher and I wrote a book chapter this month about potential modes of remedy for people who are targeted and trolled by white supremacists and sympathizers (a significant part of President Trump’s base). She and I had a recent experience with neo-nazi hate group members who actively engage in social media trolling, which led us to think about the implications of hate speech and the ways in which digital media platforms protect the anonymity of their speakers while leaving us, the targets, fully exposed. These protections for speakers of hate, and for those who actively troll and harass while hiding behind pseudonyms, are an important new dimension of our recent work. Platform protections, coupled with the constant punishment of people who try to flag hateful content directed at them, are structural problems. Among these problems is the difficulty that targets of hate speech have in knowing the origin and severity of the threats made against them.
In the analog past, white racist organizations like the Ku Klux Klan (KKK) wore robes and hoods to assume a state of pseudonymity. In the digital, various information practices, like stripping metadata from digital photos or masking IP addresses, have become widespread. These techniques, we argue, have emboldened neo-nazis in their sense of righteousness and largely desensitized the public to hate speech, as it is constantly decoupled from “real” speakers or figures. We argue that the protections afforded by platform pseudonymity, among other algorithmic and content moderation flagging systems, exacerbate the precariousness of the most vulnerable, who often cannot know who is targeting them, or from what geographic location.
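To give a sense of how low the barrier to these practices is, here is a minimal sketch, assuming the Pillow imaging library and a hypothetical file named photo.jpg, of how identifying EXIF metadata (camera model, timestamps, GPS coordinates) can be stripped from an image in a few lines before it is posted:

```python
# Minimal sketch: re-save an image with pixel data only, discarding EXIF
# and other metadata. Assumes the Pillow library (pip install Pillow) and
# a hypothetical input file, photo.jpg.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy only the pixels into a fresh image, so no metadata carries over."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)  # new image has no EXIF block
        clean.putdata(list(img.getdata()))     # copy raw pixel values only
        clean.save(dst_path)

strip_metadata("photo.jpg", "photo_clean.jpg")
```

The point is not the particulars of any one tool, but that erasing the traces that might identify a speaker takes almost no effort or expertise.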
Part of the protection afforded to platforms is a matter of legal decisions in the U.S. and beyond. Legislation limiting speech on social media platforms is often unclear, although we are starting to see an increase in criminalizing perpetrators of revenge porn and other malicious acts carried out on the internet. The broad latitude afforded to those making Constitutional claims to “free speech” often leaves victims of misogynist, anti-Semitic, racist, heterosexist, and other forms of hate speech with limited protections. Adding to the complexity, the federal Communications Decency Act (CDA) shields internet technology companies from liability for hateful content that users post to their platforms, often allowing them to avoid legal responsibility altogether.
In this context of companies eschewing responsibility for content, we engaged in a project tied to real-life trolling and de-anonymized members of a neo-nazi hate group on Twitter to explore the tensions surrounding free speech. We sought to see whether de-anonymizing speakers of hate would allow victims of hate speech to increase their level of safety and protection offline from potential harassers, trolls, or even true threats. What we learned in our experiment is that we could trace speakers of hate online and assess the level of threat we felt from our trolls once we knew who they were.
This has led me to think about the heightened efforts by technology companies to identify bots and fake accounts that operate under pseudonyms, and the role such accounts play in the project of American democracy. The reliability of public information, for example, played a major role in cultivating political opinion during the 2016 presidential election. Platforms, as many of us have been studying for years, bolster some voices and increasingly silence others.
In Trump’s America, just as in Dr. King’s America, there was (and is) no shame in racist, sexist, homophobic, and misogynist speech in public, and that speech was (and is) embodied. Ascher and I argue that the pseudonymity of platforms like Twitter can help people of similar opinion find one another and even reinforce their sense of community. In the midst of this, however, the inundation of hate speech on platforms, and the protection those platforms offer to speakers of hate, are part of a dangerous acculturation and desensitization to harmful words and threats.
In essence, anonymity for speakers of hate and trolls on social media platforms means that white supremacists can use pseudonymity to their advantage. They are often largely unconcerned with being exposed because they operate online in a cloaked manner, as Jessie Daniels taught us many years ago in her amazing book, Cyber Racism. This means that vulnerable members of society who are targeted with hate speech or misogyny must depend on the assumed protections of anonymity, even when those protections are unwanted. Indeed, their “real life” identities as academics, activists, labor organizers, feminists, civil rights organizers, environmentalists, or any concerned members of society working on a host of issues related to justice may be pertinent and valuable to their work and commitments. In some cases, pseudonymity might be wholly inappropriate or unwelcome in the critical work of human and civil rights.
This transposition shifts the chilling effect from neo-nazis and other hate groups (who often worked hard to protect their identities offline) onto historically unprotected, marginalized people. The cloaked protections for trolls and for those who would deny civil and human rights to others, particularly speakers of hate online, mean that platforms have left their targets more accessible. These are some of the problems with platform protections: their effects are not neutral, and they often work in service of bolstering some of the most harmful and hateful content – content not without consequences.