The importance of accurate information during this pandemic is clear, but knowledge about the novel coronavirus is rapidly evolving. This is also an unprecedented opportunity to study how online information flows ultimately affect health outcomes, and to evaluate the macro- and micro-level consequences of relying on automation to moderate content in a complex and evolving information environment. But such studies rely on information that your companies control, including information you are automatically blocking and removing from your services. It is essential that platforms preserve this data so that it can be made available to researchers and journalists and included in your transparency reports.

That’s the core message of a letter that WITNESS, the Committee to Protect Journalists, and the Center for Democracy and Technology sent to nine companies today. We joined voices with 75 signatories, including civil society organizations from around the world and individuals from a broad range of academic institutions.

We understand the need to combat dangerous information online, especially during this pandemic. Companies are pushing through myriad approaches to misinformation and disinformation while increasing their use of artificial intelligence to remove content. But we need to be able to assess these approaches thoroughly, and we need to ensure that valuable data, both the content itself and information about how that content spread, is not lost.

WITNESS was one of the first organizations to point out the tangible human rights costs of shifting to artificial intelligence (automation) for content moderation: since 2017 we have seen countless wrongful removals of human rights documentation from Syria in the context of so-called “extremist” content. We pointed out the dangers of using automation in the context of COVID-19 on March 19, and we began tracking company responses to mis- and disinformation that same month.

We are not alone in pointing out that human rights are at risk during the COVID-19 pandemic. UN Special Rapporteurs and experts “remind[ed] States that any emergency responses to the coronavirus must be proportionate, necessary and non-discriminatory.” The UN High Commissioner for Human Rights, Michelle Bachelet, said, “Human dignity and rights need to be front and centre in that effort, not an afterthought.” But just as states must preserve human rights, so must companies.

This letter addresses the related issues of understanding the spread of the virus itself and of mis- and disinformation about it, and, more broadly, of understanding how well automation is (or isn’t) working. That’s key, because when automation doesn’t work, human rights are at risk.

First, posts on social media are undoubtedly spreading dis- and misinformation at a rapid pace, but like the myriad videos from Syria that have been improperly deleted by machine-learning algorithms, they also constitute an important historical record. Social media posts and searches, leveraged in a privacy-protecting way, can provide “vital clues” about how COVID-19 is spreading, as well as information about mis- and disinformation. What’s more, they are not just an epidemiological record; they are also a journalistic and historical record of an unprecedented moment in which governments are using the pandemic to crack down on dissent. As WITNESS has pointed out, people are leveraging video and social media to tell their own stories, bypassing mainstream media. This medium is particularly important for the communities most likely to be hit hard by both the virus and government abuses: those that are already marginalized.

Second, this is an important moment to assess content moderation in action at a large scale. We worry that once this pandemic is over, platforms will decide that if automation worked well enough during a crisis, it is good enough for everyday use. Despite the best intentions of the engineers and policy staff building systems to remove harmful information such as “drink bleach every day to keep COVID-19 away,” these systems could become normalized if companies decide that the cost savings are worth the risk of improper takedowns, and even bad press. That is why we believe it is important to review these takedowns once the world, and tech companies, are in less of a crisis-response mode. Doing so will also ensure the preservation of essential human rights content documenting the abuses the United Nations has warned of. The public also deserves to know how automation plays out, in the form of detailed transparency reports.

Finally, we want to be clear: we are not supporting the wholesale capture and storage of people’s personal information. It is incumbent on companies, working hand in hand with civil society, to preserve this data without putting people’s privacy at risk. WITNESS can’t speak for any other civil society organization, but we are more than happy to take part in designing policies and consulting on technical requirements to make this happen. We know that many people at these companies are working incredibly long hours to do their part in this pandemic, and we are happy to support that herculean effort.

But first, we need confirmation from companies that they are taking human rights seriously and not deleting essential data.
