Using end-to-end encrypted services like WhatsApp to share information can save human rights defenders’ lives. But misinformation can put those same human rights defenders, as well as marginalized people around the globe, in imminent physical danger. In this post, we are sharing some ideas for how WhatsApp can combat misinformation while preserving end-to-end encryption. Please see today’s other blog post, “Harm Reduction for WhatsApp,” to read basic tips for using WhatsApp more safely, and some ideas for making WhatsApp security better.
The Encryption Conundrum
The use of WhatsApp to spread misinformation has been in the news more and more lately. The only people who can read the contents of WhatsApp messages are the sender and receivers, because the app uses end-to-end encryption. WhatsApp does have access to subscriber info such as name and contacts (more on that in our Harm Reduction post). Because of this, messages on WhatsApp are not subject to content moderation in the same way misinformation on Facebook or other public platforms might be. But that is also one of the reasons WhatsApp is widely used by activists around the globe, who depend on its end-to-end encryption to organize and share human rights related material amongst themselves and with journalists. Although the messaging app Signal may be more secure, in many countries around the world mobile data packages include WhatsApp for free or at low cost. Aside from the overwhelming popularity of WhatsApp, this is another reason it can be difficult for activists to move to Signal.
Perhaps spurred on by its accessibility, WhatsApp is also increasingly used as a source of news; a new report from the Reuters Institute for the Study of Journalism, which consulted “over 74,000 online news consumers in 37 countries” found that “WhatsApp use for news has almost tripled since 2014.”
Unfortunately, we are seeing that WhatsApp is being misused to spread incredibly dangerous content. Misinformation spread on WhatsApp groups in Brazil appears to have sealed the win of extremist Jair Bolsonaro in last month’s Presidential elections. WhatsApp-spread misinformation has incited violent attacks, and even murders, in India, Mexico, and elsewhere.
Weakening WhatsApp’s encryption may seem like a solution to misinformation on WhatsApp. But we cannot stress enough what a bad idea this is. Draconian government crackdowns on human rights defenders are only increasing worldwide, making WhatsApp’s end-to-end encryption even more important than before. Amongst other dangers faced by human rights defenders, governments are surveilling social media and using “fake news” laws to throw human rights defenders in jail for livestreaming, sharing videos, and posting content oppositional to authoritarian governments.
We are concerned that calls from civil society and governments for WhatsApp to fight misinformation may not take the accessibility of—and need for—WhatsApp encryption seriously. We are also heartened to see practical suggestions that don’t require breaking that encryption. In particular, we agree with the pragmatic suggestions put forth in a New York Times op-ed during the Brazilian elections: restricting the number of times a message can be forwarded to five, limiting how many people a user can send a message to at one time (currently WhatsApp allows users to send a single message to up to 256 contacts at once), and limiting the size of new groups. Researchers made these suggestions for the election, but WhatsApp should consider making them permanent, or at least be prepared to institute them during events like the Brazilian election in the future.
During the Brazilian elections, the fact-checking organization Comprova, a coalition of 24 news organizations, helped fight misinformation by reviewing material sent to it via email and WhatsApp. This followed on the heels of Verificado in Mexico and First Draft’s CrossCheck initiative in France. Fact-checking organizations like Africa Check now provide an institutional WhatsApp account to which users can forward WhatsApp messages for review directly. Many of these fact-checking organizations are funded by Google News, and some by Facebook.
WhatsApp should provide increased financial support—with very clear independence—to organizations that are carrying out fact-checking, and not just during elections. While there are always concerns about influence when large companies provide such funding, funding existing organizations is preferable to having fact-checking done with the same lack of transparency that plagues content moderation at Facebook now. WhatsApp should also meet with these groups to determine how it can make their work easier. First Draft noted that Comprova’s work was made easier by messaging features of WhatsApp’s new business API. Fact-checking groups likely have other suggestions for features that could increase their effectiveness—although they must ensure that they are not impinging on users’ privacy.
When it comes to educating users about misinformation, it seems intuitive that WhatsApp should provide media literacy education and general warnings about misinformation directly in the service as a regular WhatsApp message. WhatsApp’s campaigns in India and Brazil have relied on traditional media, and sometimes sponsored Facebook ads—but these are not likely to be as effective as an in-app notification that users have to interact with.
Lastly, there is room for creative solutions, like a database of debunked messages that allows reverse image search, or easier ways to block WhatsApp direct marketing services. Coming up with creative solutions that are actually feasible requires WhatsApp to make its staff, in particular engineers, available to explain what is possible and how the platform and new features actually function. This also means looking at potential misuses for any solution. For example, while the Business API may be helping fact-checkers, if not properly managed it too could become a vector for misinformation. It’s important to think about how the API could be misused now, to get ahead of that issue.
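To make the "database of debunked messages with reverse image search" idea concrete, here is a minimal illustrative sketch—not WhatsApp's actual system or any fact-checker's real tooling—of how perceptual hashing can match a slightly altered copy of an already-debunked image. Real systems would use image libraries (e.g. Pillow with a library like ImageHash); here a plain list of pixel rows stands in for a grayscale image so the example is self-contained, and the "hoax #1" label is a made-up placeholder.

```python
def average_hash(pixels):
    """Perceptual hash of a tiny grayscale image (list of rows of 0-255
    ints): each bit records whether that pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical database: hashes of images fact-checkers already debunked.
debunked = {average_hash([[10, 200], [200, 10]]): "hoax #1"}

def lookup(pixels, max_distance=1):
    """Return the debunked label for a near-duplicate image, if any."""
    h = average_hash(pixels)
    for known_hash, label in debunked.items():
        if hamming(h, known_hash) <= max_distance:
            return label
    return None

# A slightly altered copy (e.g., recompressed or re-screenshotted)
# still hashes the same, so it matches the debunked entry.
print(lookup([[12, 198], [201, 9]]))  # hoax #1
```

The design point is that perceptual hashes tolerate small pixel-level changes, so recompressed or lightly edited copies of a debunked image can still be flagged—without anyone needing to read message contents, since the matching could run against material users voluntarily forward to fact-checkers.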
Similarly, if WhatsApp makes it easier to deploy chatbots, the platform should consider ensuring that conversations with chatbots are clearly marked as such. Messages sent from businesses, such as the messages used to spread misinformation during the Brazil election, should be prohibited unless the business is registered with WhatsApp, and WhatsApp should ensure that it can detect mass marketing campaigns. That way, if there is another large misinformation campaign, companies that participate can be banned from the platform. All of these solutions require some analysis of pros, cons, and feasibility—which will be nearly impossible without some help from WhatsApp.
Tell us what you think
WITNESS has a long history of working directly with companies to provide pragmatic suggestions for difficult problems, something clearly needed now more than ever. We hope WhatsApp will take our suggestions seriously. But we think you probably have great ideas too. We’d love to hear from WITNESS supporters. Check out our “Harm Reduction for WhatsApp” post and share your ideas to make WhatsApp safer, too! Reach out to us online or tweet with the hashtag #WhatsUpWhatsApp. We’ll be keeping an eye out for more suggestions.
13 November 2018
This work is licensed under a Creative Commons Attribution 4.0 International License.