May 2020: Sam Gregory, Dia Kayyali, Corin Faife.

Last updated: May 29, 2020

In this blog post we give an overview of key trends in platform responses to misinformation, disinformation and harmful speech during COVID-19, and develop a set of criteria through which the wide range of possible responses can be assessed, using a framework based on human rights and our experience working with marginalized communities and human rights defenders globally. We then apply this framework to highlight areas in which the response has been strong and should be expanded into other information domains, along with key gaps and concerns that should be addressed as circumstances evolve. This leads us to a set of recommendations for what companies should do now, what they should continue to do in the future, and what they should stop doing altogether.

Part 1: Framing the problem and response
– How COVID mis/disinformation compares to other topics
– Sorting the good from the bad
Part 2: Key platform trends in COVID-19 response
– Subtractive measures: Information removal or restriction
– Additive measures: Boosting trusted information
– Financial measures: Constraint or support
Part 3: Lessons for the future
– What we should preserve
– Gaps that remain
– Underdeveloped areas
– Challenges

Part 1: Framing the problem and response


The global COVID-19 (“coronavirus”) pandemic has brought the world into an unprecedented state of crisis, the effects of which will be felt for years—if not decades—to come. We have been closely monitoring the human rights impact, which has been characterized by numerous instances of repression, intimidation, surveillance overreach and direct physical violence around the world, as well as broader issues of access to health, shelter and livelihood.

At the same time, like many other organizations, we are also following the rapid shifts that are taking place in the information space. For years, WITNESS’ tech advocacy program has engaged with social media platforms on issues of content moderation, while a newer strand of our advocacy work has homed in on responses to digital mis- and disinformation. Both of these topics have now become critically relevant, as social platforms struggle to contain an overwhelming volume of misinformation about COVID-19 (the so-called “infodemic”) at a time when the stakes are uncommonly high.

While the pressure from governments and publics to remove false or misleading information is great, platforms (and governments) should not buy into the logic of trading safety for fundamental rights like freedom of expression and freedom of assembly. Instead, as required by international human rights law, we should aim to ensure that, when attempting to control the pandemic, all actors restrict rights only in ways that are necessary, proportionate and non-discriminatory. Businesses are subject to the UN Guiding Principles on Business and Human Rights, while states are obliged, even in a state of emergency or public health crisis, to restrict rights only under specific constraints. These constraints, outlined in the Siracusa Principles (pdf), justify restrictions on rights only when they have a legal basis, are strictly necessary, based on scientific evidence and neither arbitrary nor discriminatory in application, of limited duration, respectful of human dignity, subject to review, and proportionate to achieve the objective.

This is not a question of balancing: Instead we should strive to be clear about the harms from different approaches, honest about cases where solutions for one set of problems come at the expense of other rights, and strongly question any response that is clearly overreaching in scope, under-explained in its necessity or legitimacy, or disproportionately harms a particular group.

Platforms’ responses to the flood of harmful or false information related to the pandemic also highlight existing disparities in how they prioritize resources for different regions of the world. We are especially vigilant of the dangers of largely US-based companies creating and deploying globally applicable rules without understanding local contexts, local languages and cultural differences, and of their giving way to political pressures, which can mean that mis- and disinformation outside of the US is treated with less seriousness. For example, the anti-Muslim #coronajihad hashtag was allowed to trend in India just weeks after Muslims had been killed in pogroms, while other misleading coronavirus claims were removed.

This moment also provides a unique laboratory in which to learn about what platforms are actually capable of, while assessing the dangers to human rights presented by rapid deployment of automation, stricter rules for content, and government pressure. Although action is needed in a crisis, restricting this to necessary, legitimate and proportionate action (consistent with businesses’ responsibility to respect international human rights law under the UN Guiding Principles) is the key: It’s important to take a step back where possible in order to evaluate present and future impacts and understand how the response can be improved.

After this, we can consider how our general response to mis/disinformation can be refined within the new possibility space. This is a key moment to look at how platforms are reacting and to gather the best possible data on what has happened, which is why WITNESS was a joint signatory of a letter urging tech companies to preserve evidence of takedowns during the pandemic.

How COVID mis/disinformation compares to other topics

Although platforms have employed a wide range of strategies to fight coronavirus mis- and disinformation, it would be wrong to imply that these strategies are suitable for fighting all false, dangerous or manipulated information. To understand why, we can enumerate some of the ways that the information landscape around the coronavirus outbreak is distinct:

  • The public health emergency of the global pandemic, as in other emergency contexts, allows for some curtailment of particular human rights under certain circumstances, provided that this is a proportionate and time-bound response. A statement from the UN High Commissioner for Human Rights frames this further, reminding member states that any emergency measures must be non-discriminatory, motivated by legitimate public health goals, and communicated clearly to the public and relevant treaty bodies when fundamental rights are being limited.
  • There is now a globalized context linking bad information to direct physical harm, rather than the localized or national contexts in which information has most often been weaponized on platforms in recent years.
  • In a health crisis the definition of “authoritative information” is clearer and less subject to partisan dispute (albeit with exceptions). As a result, it’s easier for platforms to decide which information to elevate, and which to downrank. 

At the same time, these points come with caveats: There is even more scientific consensus around climate change than around COVID-19 treatment, and greater harm in the long run, yet climate misinformation has not triggered such strict platform policies; nor did the immediate harm inflicted on New Delhi’s Muslim population in recent ethnic violence, or the genocide of the Rohingya in Myanmar, trigger such a response. Recognizing the link between mis/disinformation, dangerous speech and harm is critical.

Further, in the present context of coronavirus, the information coming from “authoritative sources” has changed over the course of the pandemic, on the one hand giving rise to concerns that legitimate debate may have been suppressed, and on the other creating challenges in explaining that scientific understanding does change over time. In many parts of the world, notably the US and Brazil, official government sources have themselves been purveyors of misinformation.

In summary, these are different circumstances from localized, politicized mis- and disinformation, where the incentives for platforms to act are fewer and the political risks may be greater. Indeed, outside of the “Global North,” platforms have not dedicated sufficient resources to assessing linguistic, cultural and political context, which makes responding appropriately far more difficult.

Sorting the good from the bad

In light of the unique nature of the pandemic and the rapid shifting of the information space, how do we sort out what is good and bad about these responses, and what do we need to take into account to do so?

One of the most crucial tests is whether responses strike a balance between human rights, freedom of expression, and the range of harms that could result from both action and inaction. As mentioned above, responses must be shown to not only be necessary, but also proportionate and non-discriminatory.

Besides this, we’re considering how broadly platforms are implementing new policies (are they global or country-specific?) and, if global, whether these measures are appropriate for such uniform deployment.

In addition, we’re examining how well global platforms take into account the realities of local contexts and implementation. For example, in the past we have shared activists’ concerns that Twitter and Facebook have not dedicated sufficient resources to Africa, leaving them ill prepared to interpret colloquialisms, social and geopolitical context and cultural nuances when moderating content. We can also look at how well platforms have done with translation or processing of languages with non-Latin characters, and more generally the degree to which less widely spoken languages are supported.

We’re also basing our assessment on how well platforms have implemented content moderation in the past, and what changes they’ve made to improve if they didn’t do well. This ties in to a recognition of the risks of automated content review, and a corresponding need for transparency about the results, as mentioned earlier.

Lastly, we’ll consider the dangers of misuse by governments amidst the backdrop of an explosion of information control and “fake news” laws. Attempts to reduce misinformation about the coronavirus cannot be an excuse to suppress criticism and debate, nor to crack down on opposition groups; and equally, we should be wary of any measures that increase surveillance powers without criteria for an eventual rollback.

Part 2: Key platform trends in COVID-19 response


Outlined in broad strokes, most of the key trends in how social media platforms have responded to COVID-19 represent an acceleration of pre-existing trends rather than a rupture with existing policies and processes.

First, even before the pandemic, in the past few years we had begun to see a shift away from a laissez-faire attitude to misinformation, towards a recognition that platforms have at least some obligation to signal the presence of false or misleading content in situations where viewers may not be able to distinguish it for themselves. Facebook has incorporated third-party fact-checking into its platform, and both Facebook and Twitter have implemented policies on deceptive or misleading media (analyzed by WITNESS elsewhere), while Google-owned YouTube prohibits manipulated media as part of its deceptive practices policy and has also unveiled a more narrowly focused policy banning election-related disinformation. All three platforms framed their policies around a central criterion of likeliness to cause harm, and, given the clear physical and societal harms engendered by coronavirus misinformation, all three have moved much more aggressively to flag and/or remove misleading content related to the virus at scale.

Second, and in conjunction with the first point, platforms have significantly increased the level of automation used in content moderation. For a range of reasons, some related to outsourcing and others to psychological well-being, the content moderators used by Facebook and YouTube (the biggest employers in this field) cannot easily do their jobs from home. This has meant that as work-from-home orders were implemented in various countries, the number of human moderators reviewing flagged content has decreased at the same time as content removal in general has been ramped up. To make up for the shortfall, platforms are relying more heavily on automated content moderation, with the result that, based on existing experience with human rights content, we should anticipate a greater number of posts being incorrectly flagged and more content that does not contravene standards being removed. To better understand this, we have been advocating for data on takedowns to be preserved: There are clear human rights implications to automated content removal, such as the erasing of evidence of war crimes, highlighted by WITNESS and the Syrian Archive in a New York Times op-ed on YouTube and the Syrian civil war. As we argued in a previous blog post, some of this is unfortunate but inevitable, so it is important that platforms are transparent about what their automated systems can and can’t do, and how they are trying to minimize false positives.

Third, at the intersection between platform policy and government legislation are a number of new laws that have been passed in various countries specifically to criminalize the act of spreading misinformation about the coronavirus. While these laws would usually prove contentious, history shows us that it is far easier to obtain consent for “security” measures in a climate of heightened public fear.

Charting platform response so far

Various organizations have been tracking week-by-week changes rolled out by the platforms so far, and our intention is not to duplicate this work. (For good examples see living documents created by First Draft and Disinfo.eu.) Still, it’s helpful to recap some of what has been done in order to identify major themes, since this will form the basis of our evaluation and recommendations later. For this reason our analysis here is organized by action rather than by platform, and aims to be comprehensive in terms of strategies covered while not capturing each individual instance.

Subtractive measures: Information removal or restriction

Taking down harmful information

Across the board, platforms with publicly searchable information have been aggressively removing misinformation about the coronavirus. Twitter has been particularly proactive in deleting tweets, rolling out a policy that prohibits unverified claims that could “lead to widespread panic.” It has also extended its labelling system for misleading media, adding contextual labels that point users to curated factual information and explanation alongside misinformation about COVID-19.

In late January, Facebook announced that it would be removing false claims and conspiracy theories about the coronavirus, and it has gone a step further by notifying some users that they have been exposed to misinformation about the coronavirus, showing them fact-checks debunking false claims that have since been removed.

YouTube has also released a policy on COVID-19 medical misinformation, clarifying that videos are not allowed to contradict WHO or local health authorities’ guidance on COVID-19 prevention. Content that violates the policy is subject to removal, with three violation “strikes” against a channel resulting in the channel being terminated.

Applying rules to global leaders

In general, Facebook and Twitter both employ controversial policies that exempt politicians from their usual rules on posting false information. Where coronavirus is concerned, both platforms have decided that politicians are not exempt: Facebook has been removing political ads containing coronavirus misinformation (but not ads containing other kinds of misinformation); Twitter went as far as deleting tweets from Brazilian president Jair Bolsonaro that encouraged an end to social distancing; and Twitter, Instagram and YouTube all removed further Bolsonaro posts declaring hydroxychloroquine to be an effective treatment for COVID-19.

However, while Twitter has removed tweets by high-profile US figures like Rudy Giuliani for containing similar misinformation, it declined to remove tweets from Donald Trump that referred to hydroxychloroquine as having a “real chance” but stopped short of declaring the drug a cure.

Update: On May 26 Twitter added a fact-check label to a tweet from Trump for the first time, providing context to a claim about mail-in vote fraud. A further tweet was hidden behind a warning notice for appearing to encourage the shooting of rioters in Minneapolis, Minnesota, following the killing of George Floyd by police. Both of these tweets were actioned under pre-existing policies on misleading content and on the glorification of violence by public officials. At the time of publication there are no contextual labels on his COVID-related tweets.

Trump tweet restricted for glorifying violence

Imposing forwarding limits

WhatsApp’s closed messaging groups have proved to be a huge conduit for the spread of misinformation, since content cannot be externally moderated. Instead, the platform has imposed limits on some kinds of message forwarding, so that messages identified as “highly forwarded” can only be sent onward to one chat at a time rather than five.
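To illustrate the mechanism in principle, the sketch below shows how a client could track a per-message forward count and cap the number of chats a “highly forwarded” message can be sent to in one action. This is a minimal conceptual sketch, not WhatsApp’s actual implementation: the threshold, limit values and field names are assumptions based on public descriptions of the feature.

```python
# Conceptual sketch of a forwarding limit; not WhatsApp's real code.
# Assumed values: a message forwarded 5+ times counts as "highly forwarded"
# and may only be forwarded to 1 chat at a time (otherwise up to 5).
from dataclasses import dataclass

HIGHLY_FORWARDED_THRESHOLD = 5
LIMIT_NORMAL = 5
LIMIT_HIGHLY_FORWARDED = 1

@dataclass
class Message:
    text: str
    forward_count: int = 0  # incremented each time the message is forwarded

def max_forward_targets(msg: Message) -> int:
    """How many chats this message may be forwarded to in one action."""
    if msg.forward_count >= HIGHLY_FORWARDED_THRESHOLD:
        return LIMIT_HIGHLY_FORWARDED
    return LIMIT_NORMAL

def forward(msg: Message, target_chats: list[str]) -> list[str]:
    """Forward to at most the allowed number of chats; return the chats actually used."""
    allowed = target_chats[:max_forward_targets(msg)]
    msg.forward_count += 1
    return allowed

# Example: a message that has already spread widely can reach only one chat per action.
viral = Message("Miracle cure!", forward_count=7)
print(forward(viral, ["family", "work", "neighbours"]))  # ['family']
```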

Recognizing limits of automation

As mentioned above, platforms have been relying much more heavily on automated moderation to remove high volumes of misinformation with reduced human capacity. Facebook has previously been forced to admit that automated moderation (such as enforcement of its extremist content policy) can result in overreach. In the present case, Facebook has admitted that “automated systems we set up to prevent the sale of medical masks needed by health workers have inadvertently blocked some efforts to donate supplies.”

ASSESSMENT – Key points to note:

  • Taking down supposedly harmful information needs to be done in a consistent, appealable manner, in line with the Santa Clara Principles
  • Global leaders should not be exempt from rules around sharing manipulated media when it is shared as communication rather than as clearly labelled political campaigning, such as political ads
  • The ‘digital wildfire’ of rapidly spreading false information on messaging platforms is a key concern in most regions where WITNESS operates: introducing limits on large-scale forwarding is a useful brake on the ability to amplify and distribute false information
  • Platforms need to provide data on their use of automation, both to support better research into handling information in a pandemic and to help us understand the implications of more aggressive misinformation policies paired with increased automated decision-making

Additive measures: Boosting trusted information

Elevating authoritative sources

The inverse process to removing inaccurate or harmful information is ensuring that good information receives more visibility. YouTube has tried to do this by providing a direct link to the CDC’s main COVID-19 information page beneath all videos relating to coronavirus (in the US), while other platforms like Twitter, Instagram and Facebook have used search terms related to coronavirus as the entry point to direct people to official sources. 

Many platforms have rolled out some kind of “information hub” combining news about the platform’s response along with links to key resources. Twitter has also directed resources towards accelerating the verification process for medical professionals, who have emerged as an important source of information during the pandemic.

Facebook search for “coronavirus” directs to the COVID-19 Information Centre

Giving free advertising

Facebook, Twitter and Google have all provided free advertising space to the WHO and national health authorities, giving them a channel through which to reach a large number of platform users with the message of their choice.

Curating information on the platforms

Being media-driven by nature, YouTube has had success in curating playlists of relevant coronavirus information and presenting them directly on the homepage. Some of these playlists contain news updates and the latest health guidance, while others provide tangential content like quarantine cooking suggestions or home workout routines. 

YouTube’s “stay home” panel curates videos relevant to quarantine into playlists

Whereas hashtags on Twitter and Instagram would usually provide user-driven curation, here both of these platforms have had to take the opposite approach, intervening to redirect hashtags to official health information rather than allowing them to bring together all tagged content, including potential misinfo.

Increasing fact-checking

Various studies have shown that presenting fact-check labels along with headlines on social media can help slow the spread of misinformation. To control the spread of misinformation about the coronavirus, Facebook has increased the reach of third party fact-checkers on the platform in a number of ways: adding new fact-checking partners, adding a section for fact-checked articles in the COVID-19 information hub, and supporting fact-checkers with grants, as detailed in a blog post. For the first time, Facebook has also taken the step of notifying users when they have interacted with dangerous misinformation that has subsequently been taken down, in a move that has been well received by the fact-checking community.

ASSESSMENT – Key points to note:

  • Providing curated or authoritative information about public health measures is a good idea.  However, providing authoritative information is challenging outside of contexts with fixed, public information (e.g. scientifically validated medical approaches or dates of elections). 
  • Front-end efforts to provide good authoritative information also relate to back-end efforts at the algorithmic level to recommend good sources. We need to ensure that these do not structurally bias against civic journalism sources in favour of official sources – for example by consistently down-ranking alternative media sources that may be more reliable in many contexts.
  • Although in many circumstances platforms should not themselves provide ‘authoritative’ information on key issues, they can do better at using some of the approaches deployed during COVID-19 to give users guidance on key skills and literacies, like how to assess a post for truth or falsehood.

Financial measures: Constraint or support

Product/ad prohibition

Another pillar of COVID-19 response has been stricter regulations about what products can and can’t be advertised on a given platform—though policies have evolved in key respects as the crisis has continued.

For example, amid shortages of PPE for health workers, Amazon banned the sale of N95 masks to the general public, diverting all supplies towards hospitals and government agencies. And to protect against price gouging and other predatory listings, Facebook banned all ads for medical masks, hand sanitizer, and disinfectant wipes.

Google initially imposed a blanket ban on ads related to COVID-19, but later relaxed the policy to allow ads from government entities, hospitals, and other health providers. YouTube made a similar adjustment to its video monetization policies: ads were initially turned off for videos connected to coronavirus, but were later reinstated after creators complained that there was no longer an incentive to make informative content.

Blocking market access

Apple and Google control access to the iOS and Android markets through their respective app stores, and both companies have been removing coronavirus-related apps that don’t meet quality standards. Apple has blocked all coronavirus apps except those made by “recognized entities” such as governments or medical institutions, and Google has also removed apps en masse. Although it’s still possible to install banned apps by other means, far fewer users will take the time to go through unofficial channels.

Direct monetary support

Facebook and Google have both announced funding programs to address coronavirus misinformation by supporting journalism. Facebook has pledged $100 million to support the news industry, mostly through grants to local newsrooms, whereas Google has chosen to focus on independent fact-checkers with $6.5 million funding.

ASSESSMENT – Key points to note:

  • Funding for media is a concrete step that the platforms must take, particularly given their own role in cutting into media revenues and reach. Our key concerns relate to ensuring that a diverse range of global media are supported, including media in the Global South and community media. Our research into new forms of digital mis/disinformation has also identified key areas where journalism funders, including platforms, need to invest: critical OSINT (“open source investigation”), media verification and digital forensics skills needed to deal with increasingly sophisticated media manipulation.

Part 3: Lessons for the future


On social media platforms, as in so many other areas, the COVID-19 pandemic has created the conditions for the rapid development and testing of systems that existed in some form but had not been deployed at scale. Not all of these experiments have produced desirable outcomes, but platforms and human rights organizations are in a position to learn from both the good and the bad as we decide which measures to preserve and which to either improve or dispense with.

What we should preserve

Better information management from platforms

Where platforms have previously allowed social media users to navigate information streams independently, the pandemic has demonstrated the value of being more proactive in helping users find better information, and recognize it as such.

This goes beyond simply providing verification checkmarks (which confirm a user’s identity but say nothing about the quality of their information or their expertise on a topic) towards highlighting sources’ domain expertise, e.g. communications from the CDC or WHO where epidemiology is concerned, and foregrounding this information at the time it is most needed, such as in search results.

When questions of health and/or harm are concerned, platforms should also maintain a willingness to push valuable information or key guidance to users at times when context suggests they will be receptive to it. This could be a sidebar next to thematically related content, a pop up window on a website, or even a push notification from an app if the information is critical and time-sensitive.

Information panel added below all YouTube videos on coronavirus

Curating information has also proven useful for surfacing more science-based, reliable content: On topics with recognized authoritative sources, platforms should not shy away from presenting collections of good quality sources, or playlists of informative content. However, in many cases, platforms like YouTube are doing this curation to address the failings of their own algorithms in surfacing misinformation over good information. Any approach applied to a broader range of mis/disinformation must take into account the known structural deficiencies of algorithmic recommendation systems and the parallel problem of shadow banning of marginalized human rights content. It must also ensure that “authoritativeness” does not get weaponized against critical media voices.

Equally, information management is a two-way street: Besides highlighting and curating information, during this time platforms have made more points of contact available to the public, such as call-in numbers on WhatsApp.

Transparency, data sharing, better engagement and resourcing to work with vulnerable communities, and a clear right to appeal where automation is used

As we have argued before, when platforms use automated moderation, they must be transparent about its limitations. This goes further than admitting that the system has flaws, as Facebook has already done: Platforms need to share detailed data about content removal and enable independent observers to assess whether the impacts are evenly distributed, and to suggest corrections where they are not. It’s time for platforms to make third-party oversight, transparency and data sharing a standard part of operating procedure, to ensure that automation is used in a way that respects human rights moving forward. As with all content moderation, there also needs to be a clear, accessible way for users to appeal erroneous or biased decisions.

Principled, clear handling of public figures

In cracking down on inaccurate claims made by Jair Bolsonaro, Twitter, Facebook and Google showed that they were prepared to hold influential politicians to the same standards applied to everyday people. Now that this precedent exists, there should be no valid reason to allow public figures to contravene these rules, especially when clear harms could result from their behavior.

More proactive moderation to avoid offline harm

Having seen how quickly platforms can act in a public health crisis and what resources they are able to marshal, we have also seen a precedent set that they can do more to challenge, reduce the spread of, or remove information that has a direct link to offline harm. We should be able to expect sufficient resources to go into understanding local contexts and languages and, when necessary, into more proactive moderation that still respects free expression. This is particularly important when marginalized communities are being targeted: while it is too late for those who were killed in the recent anti-Muslim Delhi riots, or in the many other instances around the world where people have been harmed as a result of dangerous speech, there can be no further excuse for inactivity.

Gaps that remain

The points made above all suggest positive developments in responding to misinformation, but we also see many shortcomings that need to be identified and addressed. Broadly speaking these can be sorted into structural failures, gaps and oversights; underdeveloped areas in need of support; and challenges without a clear-cut solution, for which continued research is needed.

Structural failures, gaps and oversights

Fundamentally, many of the challenges platforms face in handling misinformation, disinformation and dangerous speech relate to their underlying models of monetization, attention management and recommendation. However, to better address critiques of these problems in a global human rights context, we must also address the following:

Not resourcing globally

A first, fundamental point to make is that US-based platforms have historically left large gaps or uneven distributions in the allocation of their resources globally, focusing disproportionately on North America and Western Europe, with lesser attention to the rest of the world.

Not listening globally

Closely linked to a failure to resource globally is a continued need to listen closely to civil society and media voices outside of North America and Western Europe, many of whom are organizing in coalitions such as the Next Billions Coalition.

Not adequately preparing for adverse human rights impacts

Although we have recently seen Facebook release additional human rights impact assessments for its operations in Sri Lanka, Indonesia and Cambodia, we need to see even greater attention, in advance, to the adverse human rights consequences of platforms and products in vulnerable societies. The effectiveness of these human rights impact assessments will be directly correlated with the degree to which companies resource and listen globally to a diverse range of impacted communities. Companies have also not done human rights impact assessments on the effects of their algorithmic decision-making.

Data sharing and transparency

Misinformation is a networked problem that cannot be addressed in silos, and threat intelligence sharing between platforms will have a net positive effect on the health of the ecosystem overall, as long as information sharing does not result in ‘content cartels’ that arbitrarily take down content across multiple platforms. Beyond intra-industry collaboration, however, data should be shared with other relevant actors (researchers, journalists, activists) to the extent possible, to enable better analysis of what is happening in-platform and cross-platform, and of the potential adverse consequences for human rights.

Underdeveloped areas

In-platform tool provision

Unfortunately, the tools that exist for analyzing and verifying media are often not easily accessible from within the platforms where misinformation is encountered. In our expert meetings there has been a consistent demand for better tools to address shallowfakes, integrated directly into social media apps: e.g. the ability to reverse image/video search, check for prior uploads of a photo or video, and see the original version of a piece of media side-by-side with a manipulated or miscontextualized video.
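Several of these requested tools rest on the same building block: perceptual hashing, which lets a client check whether an image closely matches a previously seen original even after resizing or recompression. The sketch below is a minimal illustration using the open-source Python libraries Pillow and imagehash; the file names and distance threshold are assumptions, and no particular platform necessarily implements its matching this way.

```python
# Illustrative sketch: flag a shared image that closely matches a known original,
# using perceptual hashing (robust to resizing/recompression, unlike exact file hashes).
# Requires: pip install pillow imagehash. File names below are hypothetical.
from PIL import Image
import imagehash

def is_likely_reuse(candidate_path: str, original_path: str, max_distance: int = 8) -> bool:
    """Return True if the candidate image is perceptually close to the original."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    original_hash = imagehash.phash(Image.open(original_path))
    # Hamming distance between 64-bit perceptual hashes; a small distance means near-duplicate.
    return (candidate_hash - original_hash) <= max_distance

if __name__ == "__main__":
    if is_likely_reuse("shared_in_chat.jpg", "archived_original.jpg"):
        print("Possible re-used or miscontextualized image; show the original for comparison.")
```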

Media literacy guidance appropriate to modern contexts, combined with support for community leaders

In workshops led by WITNESS around the world (e.g. Brazil), activists have listed media literacy training as a key requirement for combating misinformation in their communities. In the face of the COVID-19 pandemic, platforms have recognized that they have a responsibility to steer users towards reputable health information, and it is not unreasonable to ask that they make similar efforts to support users in navigating other forms of information too. Allocating funds towards appropriate media literacy initiatives, and towards training and supporting the community-based civic activists and leaders who can help others navigate harmful information, has the potential to create large benefits as the digital sphere becomes a key component of public participation around the world.

Journalistic media forensics capacity

As information experts, journalists have a key role to play in addressing misinformation, but the journalism industry is facing unprecedented resource pressures. Most journalists still lack skills in media verification, and this will only get worse as media manipulation becomes more sophisticated in tandem with an overall decline in journalistic capacity. To address this trend, it is vital to find new sources of funding for the development of media forensics skills and capacity, covering both existing forms of media manipulation such as shallowfakes and emerging forms such as deepfakes. Bearing in mind capacity and resourcing constraints, this could take the form of supporting dedicated expertise rosters and regional hubs of support, as well as ensuring that all tools are built with journalistic workflows and needs in mind.

Inclusive solutions development

WITNESS is a member of various initiatives to prepare for emerging forms of mis- and disinformation such as deepfakes. These initiatives are developing innovative solutions to a range of pressing problems, from manipulated media detection to enhanced content authentication and tracking—and as a member, WITNESS is advocating for such solutions to be developed, tested and deployed with vulnerable and/or marginalized communities in mind.

For more on this topic see our blog posts on deepfake detection and building content authenticity infrastructure.

Challenges

Opacity of closed group sharing

Private messaging apps like WhatsApp, Telegram, WeChat, Viber and others pose a challenge for countering the spread of misinformation because of their opacity to researchers and, depending on encryption, even to the app companies themselves. Organizations like First Draft News have produced some guides for researching closed messaging apps, but more research is needed into how to slow the spread of misinformation without compromising the legitimate privacy needs of users who may rely on end-to-end encryption. As tools like reverse image search and similarity search are introduced on platforms to make it easier to see if an image or video has been miscontextualized or re-used in misinformation, these need to be made available in messaging apps too.

Growth of restrictive “fake news” legislation

Around the world, and especially in non-democratic countries, “fake news” legislation is growing at lightning speed. Under the guise of stopping the spread of misinformation, laws such as those enacted in Singapore, Nigeria and South Africa (plus more listed in the COVID-19 Civic Freedom Tracker) vest power in governments to become the arbiters of what is true and false, and create a legal avenue for them to sanction or silence critical voices. This development presents a serious threat to freedom of expression of which the human rights community must be aware.

False positive rate of automated moderation

As noted previously, automated moderation has its limits, and this is particularly true for issues less clear-cut than health misinformation. Although automation is the go-to solution for platforms wishing to deploy cost-effective solutions at scale, research has shown that automated solutions can lead to takedowns of critical public-interest content that is erroneously tagged as inappropriate. This is even more the case in global contexts, as well as in situations involving non-Roman scripts, and greater resource allocation is needed to handle these both automatically and through human moderation.

Difficulty in defining harm

Finally: Defining the exact harms and links of cause and effect involved in misinformation will often be hard, but we must still undertake to do so on issues that are not black-and-white. For topics like climate change, the “politicization” of the issue cannot be an excuse for inaction, when the harms—though distributed—are enormous in scale. For others, like the speech of political candidates, the inability to draw a line in all cases between campaigning and misinformation is not an excuse for allowing every kind of false statement.

In a time of crisis, managing a rapidly changing information landscape presents great challenges for platforms, and navigating it presents challenges for the users themselves. However, we have also seen many moments of potential improvement and chances to strengthen the ecosystem, and hope to see the best parts of the response carried forward into the future.
