This post was updated at 14:40 CET to note the result of the 6 December 2018 meeting of the Home Affairs Council.

There’s a reason why, when courts function properly, they offer more due process than corporations do when making decisions about free expression. Deciding what speech can take place in public forums in democratic societies is not an easy task. While standards range from the permissive jurisprudence of the First Amendment to the broader prohibitions against hate speech found in German law, at least these standards can be litigated, discussed, and understood. So why is the European Commission trying to push through a regulatory proposal that would not only force private corporations to regulate broad swathes of expression, but would actually require the use of opaque filters and artificial-intelligence-driven algorithms to do so? The only answer is that this proposal, set to be considered today, is political, and it must not pass.

The proposal

The European Commission has proposed a regulation on the “dissemination of terrorist content online,” based on a process that started in September 2017. The Commission adopted a recommendation on “illegal content” more broadly in March 2018, and proposed a specific regulation on “terrorist content” in September of this year. A recent draft introduced slight revisions to the September version, but not in a way that addresses the grave concerns raised by civil society and business alike. The most recent version can be found here. The regulation is being discussed today in the Justice and Home Affairs Council, and supporters are trying to push a “general approach,” which means less opportunity for discussion. Update from Politico on the meeting: the majority of Ministers in the Home Affairs Council approved a “general approach,” but “countries including Finland, the Czech Republic and Denmark said they could not support the general approach. Others like Slovakia said they needed more time and more work at the technical level.”

The ostensible goal of this regulation is to force “hosting service providers” to remove “terrorist content” within one hour of receiving a removal order from any EU member state’s “competent authority,” and to use proactive measures, “including automated means,” to detect such content and prevent its reappearance. The quotation marks all signify places in the regulation where definitions are unclear or poorly worded. The proposal would create specific obligations for hosting service providers to remove content, to report on removals, and to coordinate their required “proactive measures” against terrorist content with authorities on an ongoing basis. Member states designate their own competent authorities and set their own penalties, which can reach “up to 4% of the hosting service provider’s global turnover of the last business year” if the state concludes that the hosting service provider has systematically failed to comply with the regulation. Furthermore, terrorist content is defined by each member state, and can vary from country to country. For example, in Spain a single law prohibits the glorification of terrorism alongside “humiliating victims of terrorism,” conflating two very different forms of expression.

The problems

Much has already been written about the dangers of this regulation. Unfortunately, WITNESS has first-hand experience with how this proposal could go very, very badly, both inside and outside of the European Union, through dangerous copycat legislation and the overuse of automated content moderation.

We agree with our allies, including the signatories to a letter sent to the Ministers on December 4, that the regulation is poorly written, unnecessary, and overbroad, and that it appears to be getting pushed through as a political ploy ahead of the 2019 European Parliament elections. For good general background information and legal analysis, check out these articles:

Based on our experience, we have two concerns about how this regulation will be harmful.

Automated content moderation  

First and foremost, this proposal encourages increased use of machine-learning algorithms to identify terrorist content, as well as increased automatic takedowns through shared databases of material that has been deemed terrorist content. This architecture already exists, and it is shrouded in secrecy and prone to error.

Through the Global Internet Forum to Counter Terrorism (GIFCT), many major companies already share a database of extremist content that violates their Terms of Service (which means it might not even violate the law). This database helps companies take down content before it is ever seen. Unfortunately, the GIFCT provides almost no public information about this database, including information about quality checks or reassessment. Errors in this database will be propagated across all of GIFCT’s members if they are not corrected.
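To make the error-propagation concern concrete, here is a minimal sketch of how a shared takedown database works in principle. The GIFCT database reportedly stores “hashes” (digital fingerprints) of flagged images and videos, but its actual matching technology, review process, and data are not public, so everything below is hypothetical: the function names are invented, and a plain cryptographic hash stands in for whatever proprietary perceptual hashing the member companies use.

```python
import hashlib

# Hypothetical illustration of a shared takedown database.
# The real GIFCT database reportedly stores perceptual hashes of flagged
# images and videos; its matching technology and review process are not public.

shared_hash_db = set()  # fingerprints contributed by any member platform

def fingerprint(content: bytes) -> str:
    """Stand-in for a proprietary perceptual hash; here, a plain SHA-256."""
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """One member flags a file; every member now matches against it."""
    shared_hash_db.add(fingerprint(content))

def should_block(upload: bytes) -> bool:
    """Each member checks new uploads against the shared database."""
    return fingerprint(upload) in shared_hash_db

# A single mistaken contribution propagates to every member platform:
human_rights_video = b"...footage documenting an airstrike..."
contribute(human_rights_video)           # wrongly labelled as terrorist content once
print(should_block(human_rights_video))  # True on every platform until the entry is audited
```

In a setup like this, one mistaken contribution by any member is enough to block the same material everywhere, unless someone reviews and removes the entry, which is exactly the kind of quality check the GIFCT tells us nothing about.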

Facebook and YouTube, both GIFCT founding members, already use machine-learning algorithms to detect and remove so-called extremist content. We have been dealing with the fallout of YouTube’s extremist content algorithm since August 2017, and alongside our partners at the Syrian Archive, we have observed the removal of hundreds of thousands of channels and videos that document human rights abuses in Syria.

What we know about those algorithms is almost nothing. We don’t have the most basic assurances of algorithmic accountability or transparency, such as accuracy, explainability, fairness, and auditability. Platforms use machine-learning algorithms that are proprietary and shielded from any review, so groups like WITNESS have to rely on patterns and hearsay to get any idea of how they’re working. Unless they’re specifically designed to be “interpretable,” these algorithms can’t be understood by humans, since they learn and change over time, and we can’t even get access to the training data or the basic assumptions driving them. There has never been any third-party audit of such proprietary technology, although we would strongly support one if companies were open to it.
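For illustration, this is the kind of basic accuracy check a third-party auditor could run if platforms released even a labeled sample of automated removals. Everything here is invented for the sketch: the sample, the labels, and the field names; no platform currently provides this data.

```python
from dataclasses import dataclass

# Hypothetical sketch of a basic third-party accuracy audit. The sample,
# labels, and field names are invented; no platform currently shares this data.

@dataclass
class RemovedItem:
    item_id: str
    reviewer_label: str  # e.g. "terrorist_content" or "human_rights_documentation"

def false_positive_rate(sample: list[RemovedItem]) -> float:
    """Share of automated removals that independent reviewers judged NOT to be terrorist content."""
    wrongly_removed = sum(1 for item in sample if item.reviewer_label != "terrorist_content")
    return wrongly_removed / len(sample)

# Made-up example data:
sample = [
    RemovedItem("vid-001", "terrorist_content"),
    RemovedItem("vid-002", "human_rights_documentation"),
    RemovedItem("vid-003", "human_rights_documentation"),
]
print(f"False positive rate: {false_positive_rate(sample):.0%}")  # prints 67%
```

Even a simple measurement like this is impossible today, because neither the removal decisions nor the material itself is made available for independent review.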

When it comes to accuracy, we have seen how even existing systems are already having disastrous effects on freedom of expression and on the documentation and research of human rights abuses. As noted, the Syrian Archive has seen hundreds of thousands of videos go missing, and whole channels shut down. We have reviewed many of these videos personally. They come from groups recognized by journalists and by human rights bodies like the United Nations. When the videos do depict extremist activity, the uploaders often make clear that they are trying to show the world the human rights abuses taking place in Syria. But many of these videos simply have no link to extremism at all: they show demonstrations, or the aftermath of a bombing.

The Syrian Archive isn’t the only group relying on such open-source material to investigate human rights abuses. Others include the International Criminal Court, Human Rights Watch, the United Nations, and even prosecutors in Sweden and Germany working to prosecute perpetrators of war crimes and terrorism in Syria. These investigators have no alert system with the platforms that would let them review removed material and preserve evidence, which could later prove crucial, before it is lost forever.

We have no doubt that this regulation would strongly incentivize companies to open the floodgates on automated content moderation.

A bad precedent for the world

WITNESS works at a global scale, and we have seen how policies made in democratic societies can be used to repress human rights in the wrong hands, or simply fail when they are copied into other settings. At its most basic level, this regulation takes decisions about what speech is legal or illegal away from courts and lawmakers, and places those decisions with corporations and poorly defined “competent authorities.” It encourages platforms to build machinery that could easily be used in undemocratic societies to silence critics. And it encourages the idea that fighting “terrorist content” can justify any legislative excess, an idea already embraced by Russia, Egypt, and many other countries with terrible human rights records.

The role of the European Union in setting norms that can affect the human rights of billions is only growing larger. What’s more, no legal system has caught up with the Internet, and every new piece of legislation regulating it matters. Legislative responses to problems raised by the Internet can and do spread globally, even when they pose a threat to free expression, like the many legal responses to misinformation popping up around the world. That’s why we’re deeply concerned about the bad precedent that would be set globally should this regulation pass.

This is not hypothetical. Germany’s NetzDG law, which creates a dangerous system for rapid removal of “illegal content” on social media, has already been cited as a positive example by many countries, including Russia, a country with an obviously atrocious record when it comes to free expression. In fact, Reporters Without Borders says that a dangerous Russian law passed in 2017 “is a copy-and-paste of Germany’s hate speech law.” Even within Germany’s democratic system, NetzDG has already been used to silence tweets that parodied hateful speech from a far-right Alternative für Deutschland politician.

Governments around the world already abuse platforms’ terms of service to silence critics. Imagine how expertly governments that already abuse Facebook’s terms of service to silence dissent, such as Cambodia’s, will abuse legislation like the terrorist content proposal in their own countries.

What’s next?

Although this regulation is being pushed hard, it is not likely to pass before the European Parliament goes on its winter break on December 13. WITNESS will monitor the regulation and will share the above analysis with Ministers and Members of Parliament. We will continue to support the efforts of our allies, such as this open letter from La Quadrature du Net. For the most up-to-date news, follow us on Twitter: @witnessorg

6 December 2018

This work is licensed under a Creative Commons Attribution 4.0 International License.

 
