Updated with further information that Facebook published on 19 March 2020 at 19:22 Pacific Time, after this post first went live.

Most of us are spending a lot of time on social media right now. Social media has been instrumental in spreading misinformation about COVID-19, but it has also provided lifelines for people across the globe, in the form of fundraisers and offers of help for at-risk groups who can’t go grocery shopping or pick up medicine for themselves. It has also convinced some to take this pandemic seriously. In a particularly moving example, in a video with over 5 million views on YouTube, Italians recorded “a message to themselves of 10 days ago.”

If you’ve been on Facebook in particular this week, you’ve likely seen people complaining that their posts are being removed as spam.[1] These takedowns would be a problem in the best of times, but right now, they could be deadly. Facebook’s only comment on this has been a tweet from VP of Integrity Guy Rosen saying that a bug in its anti-spam software was to blame, and a follow-up saying the bug has been fixed. Anecdotally, complaints of removals do seem to have decreased. But we at WITNESS are concerned that this is just the beginning.

This week, YouTube, Twitter, and Facebook all announced that they would be increasing their reliance on automated content moderation. That means using more artificial intelligence and removing humans from the process. One reason for this is that content moderators, who are often not direct employees, cannot work remotely, at least in the case of YouTube and Facebook. That deserves an entire separate blog post: it was wrong to force content moderators to keep coming in as this pandemic was well under way, but it’s also wrong that these workers are employed by companies like Accenture, contracted by tech giants, and that content moderation processes are so opaque that their work is inherently shrouded in secrecy. They should be direct employees, with the pay and benefits that come with working for tech giants, and they should be able to work remotely.

As platforms increase their reliance on AI at this critical moment, WITNESS is watching all major platforms closely to see whether removals of human rights content increase. WITNESS has long been advocating around this problem, along with our partners at Syrian Archive. Automated content moderation has not been working well, and there’s no reason to expect it will suddenly start working now. If you are seeing large swathes of content removed, please reach out to us. We hope that documenting examples of improper removals can help improve the AI each company is using. In the long run, we hope it becomes clear that simply shifting to automated moderation isn’t the silver bullet companies have made it out to be.

That being said, YouTube’s and Twitter’s announcements were relatively clear about what increased use of AI means, and, at the very least, neither company is treating these automated takedowns as if they were reliable. They aren’t. YouTube noted that “automated systems will start removing some content without human review,” and the company’s blog post went further to explain:

As we do this, users and creators may see increased video removals, including some videos that may not violate policies. We won’t issue strikes on this content except in cases where we have high confidence that it’s violative. If creators think that their content was removed in error, they can appeal the decision and our teams will take a look. However, note that our workforce precautions will also result in delayed appeal reviews.

Twitter’s announcement was similar, though the company committed only to not permanently suspending accounts on the basis of automation alone:

Increasing our use of machine learning and automation to take a wide range of actions on potentially abusive and manipulative content. We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes. As a result, we will not permanently suspend any accounts based solely on our automated enforcement systems.

Facebook, unfortunately, was even less forthcoming. This is particularly upsetting because right now, more than ever, Facebook IS the fabled “public square”: it is where people are coming together virtually to share information, but also to comfort each other and even to take action and organize. Facebook said:

Given the rapidly evolving public health concerns, we are taking additional steps to protect our teams and will be working with our partners over the course of this week to send all contract workers who perform content review home, until further notice… With fewer people available for human review we’ll continue to prioritize imminent harm and increase our reliance on proactive detection in other areas to remove violating content. We don’t expect this to impact people using our platform in any noticeable way. That said, there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result.

Well, Facebook… there are limitations. Tell us more. It’s obvious that misinformation is a priority, and given the threat to people’s lives, that’s understandable. But shutting down fundraisers and deleting factual articles can also cost lives. Tell us that you won’t be shutting down accounts. Tell us what you are doing to mitigate the threats this virus poses to people’s lives, beyond just unleashing a very zealous spam filter on all of us. Reassure the billions of people relying on you that you are, in fact, doing your best.

Update: Facebook told us more! Check out the update. Importantly, Facebook has admitted that “we expect to make more mistakes, and reviews will take longer than normal,” has noted that some contract reviewers will work from home, and has said that review of certain categories of content, including “terrorist content,” will be handled by full-time employees. Finally, the update notes that “we’ll give people the option to tell us that they disagree with our decision and we’ll monitor that feedback to improve our accuracy, but we likely won’t review content a second time.”

And Twitter and YouTube: though Facebook is making you look good, we’re keeping an eye on you too. Update us as the situation develops. Make this the moment when you FINALLY allow civil society and experts like Syrian Archive and the Berkeley Human Rights Center to feed into your algorithms and other content moderation processes. Reconsider your structures: who is a valued employee, and who is a contractor? Like all the other things COVID-19 is making us reconsider, maybe now is the time to hit reset.

Please reach out to WITNESS if you are a human rights defender and your content has been taken down starting this week. We will do our best to respond. We may ask whether we can use your information in our advocacy efforts. 

[1] Just a few examples, gathered in 30 minutes:

19 March 2020
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
