Today, Professor David Kaye presents his newest Report to the 38th session of the United Nations Human Rights Council. Professor Kaye is the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, and this Report addresses the increasingly urgent human rights implications of content regulation on platforms like Facebook. The Report incorporates the serious concerns raised in written Submissions to the Special Rapporteur from civil society, including WITNESS’ Submission, and the Special Rapporteur’s recommendations for companies reinforce why WITNESS’ tech advocacy work has been ahead of its time and is particularly important now.

The 20-page Report, available in multiple languages on the UN website, is a must-read for anyone trying to understand how human rights principles can and should be applied to content regulation. While the Report addresses both States and companies, WITNESS’ Submission focused on the role of technology platforms, and the biggest takeaway for us is the Report’s set of clear recommendations to companies:

1. “Companies should recognize that the authoritative global standard for ensuring freedom of expression on their platforms is human rights law, not the varying laws of States or their own private interests, and they should re-evaluate their content standards accordingly.”

We called for greater responsibility in our own Submission, noting that companies should “push back on laws or [content moderation] requests that violate human rights. . . even in places where it might affect their financial bottom line.” This Report provides an excellent basis for companies to start doing so, emphasizing that companies should have public commitments to “‘resolve any legal ambiguity in favour of respect for freedom of expression, privacy, and other human rights.’”

The Report minces no words when it says:

“Companies committed to implementing human rights standards throughout their operations — and not merely when it aligns with their interests — will stand on firmer ground when they seek to hold States accountable to the same standards. Furthermore, when companies align their terms of service more closely with human rights law, States will find it harder to exploit them to censor content.”

2. “The companies must embark on radically different approaches to transparency at all stages of their operations, from rule-making to implementation and development of ‘case law’ framing the interpretation of private rules.”

The Report calls for better and broader input from civil society and users, and more transparency on how tools and policies work. We found this recommendation particularly striking. It is a stark reminder about why WITNESS’ tech advocacy program exists, and why we focus on the priorities raised by our own partners as well as their real-world experiences with the impact of company decisions:

“Companies too often appear to introduce products and rule modifications without conducting human rights due diligence or evaluating the impact in real cases. They should at least seek comment on their impact assessments from interested users and experts in settings that guarantee the confidentiality of such assessments, if necessary. They should also clearly communicate to the public the rules and processes that produced them.” 

We couldn’t agree more. Last November, when The Intercept asked us about videos documenting human rights violations in Syria that were being deleted en masse by automated content moderation tools on YouTube, we told them: “Huge companies need to recognize that every change they make … will have an effect on human rights users. Instead of working to fix issues after policies and tools have already been instituted, it just makes sense to reach out to stakeholders.” And in our Submission, we called for companies to “Invest in meaningful, regular outreach to human rights defenders and targeted outreach in key situations” like the Rohingya genocide, so that potential issues can be avoided and current ones can be dealt with more quickly. We also called for companies to “include real information about content regulation in transparency reports” and to “audit their machine learning processes.” We were pleased to see these recommendations in the Report.

In July, when we initially got wind of the Syrian video issue, we said that we were “working with YouTube to understand and remedy these removals, and we’re glad they’re open to trying to fix this problem – but they never should have happened in the first place.” Talking to us would have helped, but we believe that companies can’t just strengthen their existing relationships with groups like WITNESS; they must also expand, particularly beyond Western European/Anglophone civil society organizations, to groups like the newly emerging coalition of activists calling for “parity, transparency, and accountability from Facebook in the Global South.”

We also noted in our Submission the “rapidly approaching danger that human rights defenders who are risking their lives to capture human rights abuses on the ground will be dismissed as ‘fake news.’” We were pleased to see the Report note: “Because blunt forms of action. . . risk serious interference with freedom of expression, companies should carefully craft any policies dealing with disinformation.”

Other standout recommendations included calls for “transparency initiatives that explain the impact of automation, human moderation and user or trusted flagging on terms of service actions,” as well as transparency when it comes to the relationships between States and platforms. As we wrote in our Submission, “some platforms acknowledge direct relationships with repressive States,” and those States even brag about their success in getting content taken down. The Report makes it clear that “[U]sers can only make informed decisions about whether and how to engage on social media if interactions between companies and States are meaningfully transparent. Best practices on how to provide such transparency should be developed.”

3. “Given their impact on the public sphere, companies must open themselves up to public accountability.”

The Report calls for companies to bring “minimum levels of consistency, transparency and accountability to commercial content moderation,” and it offers clear recommendations: providing appeals on actions taken on content, “institut[ing] robust remediation programmes, which may range from reinstatement and acknowledgment to settlements related to reputational or other harms,” assessing the impact of new tools and policies on human rights, and seeking public comment on such impact assessments. The Report also recommends that automated content moderation technology be “rigorously audited and developed with broad user and civil society input.”

As the eyes of domestic and international policymakers turn to technology companies, in particular social media platforms, the threat of dangerous or reactionary regulation is all too clear. But companies haven’t stepped up to the plate either. This Report provides a number of clear steps that can help alleviate these issues, and we hope to see companies take them. Regardless, WITNESS and other civil society organizations will continue to advocate for human rights at tech companies.

-19 June 2018

This work is licensed under a Creative Commons Attribution 4.0 International License.
