Imagine ending up in jail with no understanding of what the charges against you are, no idea what legal process you will be facing, and no idea what happens if you appeal your conviction, or even how to appeal. It sounds like a Kafka novel, but that’s how social media platforms operate today. It has to stop.
It’s become clear that platforms can’t be “lawless.” Last week, after a temporary suspension, Twitter permanently banned conservative commentator Alex Jones. He had already been kicked off of Facebook, and the last few weeks have been full of cautionary statements about censorship on social media platforms. These concerns aren’t unjustified, especially when content is posted by a marginalized group, or serves as important evidence of human rights abuses. But all of this must also be seen in light of Facebook’s ban on military officials and others responsible for instigating extreme ethnic violence against the Rohingya, issued the same day as a damning report from the United Nations fact-finding mission to Myanmar. The UN explicitly criticized Facebook’s role in the violence, calling it a “beast,” and the role of content posted on Facebook in facilitating violence is clear. For example, in one incident in 2014, “false rumors spread over Facebook and other social media that a Muslim man raped a Burmese woman sparked violent riots in Mandalay.”
In fact, recent months have illustrated perfectly that the distinction between online and offline is rapidly disappearing, and it’s time for all of us to come to terms with that. Society is in the dangerous position of having reality mediated by the decisions of for-profit companies in Silicon Valley. There’s no easy solution to this problem. But it seems like common sense that the world needs to understand what’s actually happening. Technology platforms need to explain exactly how they make decisions so that we can start making informed decisions about what happens next.
Offline/online
Last month, the New York Times shared the results of a study that found that “social media has not only become a fertile soil for the spread of hateful ideas but also motivates real-life action.” The German study looked at links between anti-refugee violence and engagement with Facebook pages for the far-right (and anti-immigrant) party Alternative für Deutschland (AfD). The study found that increased engagement with AfD pages correlated with increased hate crimes, even after accounting for factors such as a community’s existing propensity for right-wing violence. The study does not “claim that social media itself causes crimes against refugees out of thin air,” but rather “that social media can act as a propagating mechanism for the flare-up of hateful sentiments.” The study complements Susan Benesch’s work on “dangerous speech,” defined as “any form of expression (speech, text, or images) that can increase the risk that its audience will condone or participate in violence against members of another group.”
While this study is illuminating, there are also often more direct links between violence and online content. Some examples:
- Sri Lanka: As in Myanmar, people in Sri Lanka rely heavily on Facebook for news. Earlier this year, the government banned the site after social media rumors led to “mobs descend[ing] on several towns, burning mosques, Muslim-owned shops, and homes.” The New York Times reports that “A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing.”
- Death by swatting: “Swatting” is the practice of sending police or other emergency services to someone’s home as a way of harassing them. It is a common result of “doxxing,” the practice of posting private information like someone’s home address online. While it can be an inconvenience, it can also be deadly. In 2017, Andrew Finch of Wichita, Kansas, was shot and killed by police after Tyler Barriss called the police to falsely report a murder and an in-progress hostage situation. Barriss made the call over an online gaming dispute. It was not the first high-profile swatting in the US, and swatting happens elsewhere too.
- PizzaGate: In 2016, a conspiracy theory on extremist right-wing websites claimed that a restaurant in Washington, DC, was being used as a front for child trafficking. These claims led to death threats, harassment, and ultimately a physical attack when “Edgar Maddison Welch … allegedly walked into the restaurant and pointed a gun in the direction of a restaurant employee.”
- Extremist Hindus: Direct calls for violence and death threats against Muslims, journalists, activists, and interfaith couples have proliferated online in India alongside offline violence, including lynchings of Muslims. In one case earlier this year, after an extremist published the names and personal details of Hindu-Muslim interfaith couples, one of the targets reported that men showed up outside his house, referenced the post, threatened him, and roughed him up.
Preserving evidence, preventing violence
In the case of Myanmar, Facebook has started, in carefully worded blog posts and changes to its community standards, to admit its role in offline violence and human rights abuses. But as public pressure like the UN’s critique continues, it’s important that social media platforms ensure that they aren’t deleting evidence of those same human rights abuses, as we have seen with videos of the Syrian conflict on YouTube.
There are hopeful signs. In its blog post announcing the removal of Myanmar accounts, Facebook notes: “We are preserving data, including content, on the accounts and Pages we have removed.” This should be standard practice. There is a huge open source investigations community relying on material like these videos, including the International Criminal Court and the United Nations. Social media platforms must work with investigators to ensure that this vital media can aid in justice processes.
The slippery slope
Of course, evidence preservation isn’t usually the first issue to come to mind when platforms remove content. The common argument against removing groups and individuals from social media platforms is that once a company starts, it won’t stop. It’s the “censorship is a slippery slope” argument: when you start removing objectionable speech, you’ve opened a Pandora’s box of censorship where “good” objectionable speech (in recent articles, Black Lives Matter has been used as an example) is censored along with “bad” objectionable speech, like websites advocating a new Aryan nation through violence.
The problem with this argument, as Jillian York of the Electronic Frontier Foundation pointed out recently, is that “If there is a slippery slope of platform censorship, it didn’t start with Infowars [Alex Jones]. It started with the Moroccan atheists, the trans models, the drag performers, the indigenous women…” Alex Jones is hardly the first person to be unceremoniously kicked off a platform. It is marginalized people who have been silenced. There is no slippery slope. There’s a cliff, and the most vulnerable people have already been thrown off of it.
Existing ideas about free expression simply don’t line up with the reality of what is already happening, and they aren’t helpful when it comes to thinking of a solution.
It’s true that the removal of content from Facebook, YouTube, and Twitter often has the same effect as state censorship: messages are simply silenced, even if they aren’t illegal. These platforms aren’t simply take-it-or-leave-it tools in a “free market.” In many places, Facebook IS the Internet. But unlike legal decisions about when and how people can protest or whether nudity is protected speech, the decisions these companies make are not publicly accountable.
Fortunately for us, at this time they are still claiming publicly that they are accountable.
So here’s a challenge for Facebook, YouTube, Twitter, and the like: tell us what you’re doing.
That doesn’t mean a report on Community Standards enforcement that groups basic information on takedowns into broad categories (wink, wink, Facebook). It means ensuring due process and real transparency. Fortunately for these companies, there are a variety of very specific recommendations available to them, including the 2018 Santa Clara Principles on Transparency and Accountability in Content Moderation, the guiding principles in a 2011 report from the Berkman Center and the Center for Democracy and Technology, and United Nations Special Rapporteur on Freedom of Expression David Kaye’s 2018 report on content regulation.
What would it take to even begin to understand what companies are doing? Platforms must:
- Specify what rules they applied and how they applied them in every instance of content and account takedown, including high-profile takedowns like that of Alex Jones;
- Share old terms of service in a searchable archive, rather than simply noting that they have changed (currently, Facebook’s old terms of service are not provided to the public);
- Be transparent about their relationships with governments. This means companies should track and publish information about the number of requests they receive from governments to remove content under terms of service and what action was taken on those requests. They should also be transparent about general conversations about policy with government officials;
- Publish transparency reports on content regulation that include very specific numbers, broken down by country, type of request, details on how long processing took, and more (see the illustrative sketch after this list);
- Share specific details about enforcement, such as the stage at which content moderation algorithms are applied and which rules trigger a temporary suspension versus a permanent ban;
- Share specific details about content moderators, such as where they work, working conditions, and steps taken to ensure they are culturally competent; and
- Give every user easy access to human-moderated appeals. We know that celebrities and high-profile people sometimes get their accounts restored, as do activists who manage to get press coverage, but appeals should not depend on visibility, and the reasoning behind decisions made in those appeals should be made clear to users, just as courts typically must explain their decisions at least to the participants in a case and sometimes to the general public.
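To make the transparency-report recommendation above concrete, here is a purely illustrative sketch, in Python, of what a single machine-readable entry in such a report might contain. Every field name here is hypothetical; none is drawn from any platform’s actual reporting format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TakedownRecord:
    """Hypothetical entry in a machine-readable content-moderation transparency report."""
    country: str                   # country the content or takedown request originated from
    rule_applied: str              # specific community standard or terms-of-service clause invoked
    request_source: str            # e.g. "government request", "user flag", "automated detection"
    action_taken: str              # e.g. "removal", "temporary suspension", "permanent ban", "no action"
    days_to_process: float         # time from report to decision
    appealed: bool                 # whether the affected user appealed
    appeal_outcome: Optional[str]  # e.g. "restored", "upheld", or None if no appeal was filed

# Example record; a published report would aggregate many of these so that
# researchers could break the numbers down by country, rule, and request type.
example = TakedownRecord(
    country="LK",
    rule_applied="hate speech / incitement to violence",
    request_source="government request",
    action_taken="removal",
    days_to_process=4.5,
    appealed=True,
    appeal_outcome="upheld",
)
```

Even this minimal level of structure would let outside researchers ask the questions the recommendations above call for.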
Once we understand how companies are making the decisions they are making, it’s likely we’re going to see a hard truth: companies are making decisions to maximize profit, and if making decisions that are bad for human rights doesn’t affect their bottom line, they’re likely to keep making those decisions. Maybe not. Maybe we’ll discover something more complicated. Regardless, it’s the first step in making decisions about accountability mechanisms, legal liability, regulation, and more. We have to stop arguing about abstract ideas and take that step.
This work is licensed under a Creative Commons Attribution 4.0 International License.