2. The Problems of Profiting from Internet Pollution
Published: 08.01.2018
in the series Digital Infrastructures of Race and Gender

In recent months, the role that humans play in organizing and filtering the information that flows through the internet has come under increasing scrutiny. Companies are trying to keep child pornography, “extremist” content, disinformation, hoaxes, and a variety of unsavory posts off of their platforms while continuing to keep other kinds of content flowing. They must keep the content flowing because that is the business model: Content captures attention and generates data. They sell that attention, enriched by that data. But what, then, to do with the pollution that accompanies the human generation of content? How do you deal with the objectionable, disgusting, pornographic, illegal, or otherwise verboten content? –Alexis C. Madrigal, The Basic Grossness of Humans, The Atlantic, December 15, 2017

At the end of 2017, I attended and participated in an international conference on internet content moderation, All Things in Moderation, at the University of California, Los Angeles, organized by my long-time research collaborator, Dr. Sarah T. Roberts, an authority on commercial content moderation. This conference was the first of its kind, bringing in stakeholders for public conversations that reflected the concerns of industry, activists, content moderation workers, journalists, academics, and policy makers. In today’s blog post, I want to talk about the ethical dimensions of regulating the internet and digital media platforms, whether by content moderation, algorithms and automated decision-making systems, or by public policy.

Much of my work has been concerned with thinking about those who are most likely to be harmed on large, multinational digital media platforms, particularly the ways that women of color are sexually objectified and pornified in commercial search engines like Google. Consistently, I call for more voices from the margins to move toward the center of conversations about technology design, and in this spirit, I had the amazing experience of co-moderating a panel with Sarah T. Roberts and two former and current content moderators: Rasalyn Bowden, an early content supervisor at Myspace, and Rochelle LaPlante, a worker on Amazon Mechanical Turk. LaPlante is also the cofounder of MTurkCrowd.com, a platform designed to organize digital laborers.

Some of the most powerful things Bowden and LaPlante shared concerned the exploitative labor conditions they experienced as content moderators, which Alexis Madrigal reported for The Atlantic:

“When I left Myspace, I didn’t shake hands for like three years because I figured out that people were disgusting. And I just could not touch people,” Bowden said. “Most normal people in the world are just fucking weirdos. I was disgusted by humanity when I left there. So many of my peers, same thing. We all left with horrible views of humanity” (as reported by Madrigal).

LaPlante, who currently moderates content for several different types of clients (see Roberts’ taxonomy of commercial content moderation), has tracked the limits of “digital piece work,” in which moderators are paid as little as two cents per image they scan, as thousands of images are sent for review on Mechanical Turk. As LaPlante and Bowden shared their strategies for emotionally and psychologically surviving the abhorrent content they have reviewed, including crossing their eyes or scanning the edges of an image to keep from seeing and internalizing the violence, it became increasingly apparent that the value systems through which they had to make decisions were not simply their own personal frames of reference for what constitutes harmful content.

The panel left many of us thinking about the tremendous lack of clarity around the values companies use to develop their content moderation policies and procedures. I recall the many conversations I’ve had over the years with Dr. Roberts about the limits of speech and expression, and upon whose value systems these notions rest. While Google claims the “right” to conduct its business practices any way it wants, including representing women of color through pornography, we have little recourse in the United States to battle the harms of damaging content.

Indeed, as I have written and talked for many years about the consequences of outsourcing public information needs to the private sector, the fundamental issue is the ethical frameworks that guide our relationships to one another, and the way that power operates online to further marginalize and oppress groups that are in a perpetual struggle for social, political, and economic liberation. As Madrigal rightly reports, digital media platforms are engaged in generating traffic, and that traffic generates data (through surveillance and tracking of users through software and hardware), which can be sold to advertisers for the purposes of selling products, services, and ideas.

This type of platform capitalism is defining the landscape of commercial internet culture and practice. In a recent interview with Logic, a magazine about technology, Sarah and I discussed what’s at stake when profit-at-all-costs is the driving imperative in the media and tech industries:

“In the end, these companies are beholden to their shareholders and they’re committed to maximizing profit before all else... But we need more than just maximizing profit as the value system in our society. So, engineers may not be malicious, of course. But I don’t think they have the requisite education in the humanities and social sciences to incorporate other frameworks into their work. And we see the outcomes of that. I think things will only spiral out of control, and we will increasingly see automated decision-making systems and other forms of artificial intelligence emerge as a civil and human rights issue that we cannot ignore.”

People who are vulnerable to neoliberal economic and social policies are even less protected, and are often subjected to the “human pollution,” as Madrigal notes, by technology industries that face little to no regulation in the United States. Whether it’s content moderators who are exposed to the most vile, degrading, and disgusting content users upload to large digital media platforms like Facebook, YouTube, Pinterest, and Tumblr, or algorithms and so-called “artificial intelligence” used to sort those with the most capital to the front page of search results, the ethical frameworks of justice, fairness, equity, and transparency remain divorced from the business models and cultures of technology companies.
