In 2021, critical issues at the intersection of social media, accountability, and human rights are finally at the center of global public discussion. It took the attempted insurrection at the US Capitol for social media platforms to finally suspend former President Donald Trump's accounts, forcing a discussion about online content, hate, and violence. Against the backdrop of the ongoing COVID-19 pandemic and rampant political mis/disinformation, governments are considering further measures to address truth and falsehood, alongside broader legislative and regulatory efforts around online platforms.

For a decade, since our Cameras Everywhere report in 2011, WITNESS has focused on how internet platforms mediate trust and evidence online, and how they amplify or suppress marginalized voices. This systemic work is grounded in our ongoing support for the use of video and technology for human rights, and in efforts to integrate grassroots, global voices into advocacy to tech companies. Here we outline critical questions relevant to our work globally, and recommend actions moving forward.

TL;DR

  • Ensure platform accountability is grounded in global realities and human rights
  • Hold accountable leaders who incite violence on social media and get away with it
  • Create workable ‘evidence lockers’ for critical online content
  • Ground content moderation reform in human rights
  • Defend witnessing
  • Fight mis/disinformation in the right way: “There is truth and there are lies, lies told for power and for profit.”

Ensure platform accountability is grounded in global realities and human rights

Human rights activists globally (e.g. in Myanmar, Brazil, Sri Lanka, Ethiopia, the Philippines, Hungary, and India) have pointed out the failure of platforms to devote resources, respond, and act when social media is used to incite violence, amplify crises, or stir hatred in their countries, often in campaigns coordinated, commercialized, and directed by governments. In the US, activists have called Facebook and other platforms to account for failing to address racialized hate and misinformation. For WITNESS's partners, these problems are a daily lived reality: the unwelcome takedown of a critical evidentiary or revelatory video, an online attack from an unknown source, a "rumor" that sparks real-world violence, or a coordinated smear campaign such as the one that targeted the monk Luon Sovath in Cambodia.

Although some progress has been made, in the overwhelming majority of contexts activists point to an under-resourcing of the corporate staff responsible for reacting to crises, neglect of civil society's calls for action, an over-cozy closeness between platforms and political power-holders, and a lack of the contextual understanding needed to make informed decisions about potential harms.

Leading digital rights activist Nighat Dad of Digital Rights Foundation Pakistan captured the frustration felt by so many advocates globally in her tweets following January 6th:  

“…It took white men running these tech giants to see capitol incident to take “drastic” step but would this be the same in our countries? Most global south digital rights activists are caught between a rock & a hard place, on one hand their work is threatened in their own countries, and on the other hand, they have to call truth to power to platforms that don’t see themselves accountable to users in their countries. Platforms often apply different standards/policies to global south countries, while at the same they often capitulate to oppressive govts when their bottom line is in jeopardy.”

What's needed: Any effective platform accountability approach, whether aimed at structural reform or at reducing current harms, must center human rights and look globally at how platforms have simultaneously failed rights defenders and marginalized and minority communities while protecting powerful leaders. Advocacy must demand that platforms pour more resources into global efforts, recognizing that the euphemistically named 'emerging markets' are real societies, home to the majority of the world's population. And we must continue to push back against illegitimate or human rights-compromising demands made by governments.

Moving beyond the status quo toward reform and renewal: legislators, civil society, and the media have already put significant pressure on platforms to respond to hate and globalized mis/disinformation. But in this context, when voices from power and privilege are the loudest, we must intentionally center the experiences, expertise, and voices of marginalized communities and critical activists, particularly from the Global South (see many suggestions here and here), as well as informed academic research that empirically assesses the impacts of platform moderation.

We must ensure that the urgent next steps on platform accountability are led by human rights principles (as articulated by the former Special Rapporteur on freedom of expression, and outlined in this guide from Access Now), with accountability and transparency for users and researchers as articulated in the Santa Clara Principles.

Hold accountable leaders who incite violence on social media and get away with it

More often than not, world leaders who incite violence and hatred online get away with it. My former colleague Dia Kayyali highlights this trenchantly in a recent article, "If Trump Can Be Banned, What About Other World Leaders Who Incite Violence? Twitter and Facebook should stop treating the US as exceptional," noting situations involving leaders in Brazil, India, and the Philippines (see also the New York Times and Chinmayi Arun, among others).

Now, as Trump's suspension goes to Facebook's Oversight Board for review, it is essential to consider whether this will be a one-off, US-centric exception or a precedent-setting decision grounded in international human rights standards. At its core, this is a question of ensuring that greater power comes with greater responsibility when speaking on a social media platform. Leaders should be subject to equal or greater scrutiny when they push boundaries on platforms, not less (as has been the policy on many platforms until recently). At the same time, we must demand transparency on how decisions are made, for leaders and ordinary users alike, and hew to human rights principles of legitimacy, proportionality, and specificity rather than over-broad, inconsistent deplatforming.

What's needed: Platforms must not give leaders a free pass on promoting hate speech or mis/disinformation, as they so often did before the belated action on President Trump. With power comes responsibility, and freedom of speech does not guarantee freedom of reach. Moving forward, we must demand that these approaches be applied consistently and with an understanding of contexts outside the US, rather than reinforcing the trend of US exceptionalism that has come to define platform accountability.

Create workable ‘evidence lockers’ for critical online content

Videos shot by participants and journalists at the attempted insurrection at the Capitol show acts of violence and individuals within the crowd directing actions. The widespread dissemination of these videos sheds light on how social media content can also serve as a source of evidence and a resource for accountability, both in the moment and in the future. Within our work on strengthening video as evidence, WITNESS has engaged extensively with the issue of videos shot by the perpetrators of crimes (e.g. documentation by ISIS) and their value for accountability. These videos are now starting to play a key role in international criminal justice cases. Yet often they are lost: quickly removed by platforms for violating their terms of service, or taken down later by uploaders who realize the footage implicates them or others.

WITNESS and the Human Rights Center at Berkeley have advocated for a number of years for 'evidence lockers' that protect critical public interest and human rights content when it is removed by platforms, dating back to our own experience running the Human Rights Channel on YouTube from 2012 and watching human rights content disappear day by day. Recent research from Human Rights Watch, grounded in the ongoing experience of groups like Mnemonic, further speaks to the need.

Our perspective on evidence lockers is informed by our experience supporting local communities and activists to archive important human rights content: we need to decide what content these lockers hold, who has access to it (respecting privacy and security without excluding global stakeholders beyond the big human rights and justice players), and how, and for how long, data will be retained. Ethics and privacy need to be central.

We have also seen how documentation captured by eyewitnesses, activists, and perpetrators of violence (often graphic, disturbing scenes, such as the aftermath of a devastating air raid on a civilian space) is rapidly lost from online platforms. This happens both because of legitimate rules around extremist and graphic content, and because of arbitrary, discriminatory, or overbroad application of those rules by humans and automated systems alike. In 2017, the archiving organization Mnemonic saw hundreds of thousands of videos shot by frontline witnesses, documenters, and journalists in the Syrian conflict disappear overnight from YouTube after an automated sweep falsely labeled them "terrorist content," despite their capturing likely evidence of war crimes. In response, it is crucial to address the growing use of artificial intelligence-based and automated moderation as it intersects with increasing government regulation, and its impact on vulnerable users and critical public-interest content.
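
To illustrate the failure mode, here is a deliberately simplified sketch in Python. The feature names, scores, and threshold are our own hypothetical stand-ins, not any platform's actual system; the point is that an automated sweep scores pixels, not purpose:

```python
def violence_score(video_features: dict) -> float:
    """Stand-in for an ML classifier's confidence that content is 'extremist'."""
    return 0.97 if video_features.get("graphic_violence") else 0.10

def auto_moderate(video_features: dict) -> str:
    """Automated sweeps remove above a threshold; no human asks who
    filmed the footage, or why, before it disappears."""
    return "REMOVE" if violence_score(video_features) > 0.9 else "KEEP"

# Identical imagery, identical score, identical fate, regardless of purpose:
perpetrator_propaganda = {"graphic_violence": True, "uploader": "perpetrator"}
war_crimes_documentation = {"graphic_violence": True, "uploader": "frontline witness"}

assert auto_moderate(perpetrator_propaganda) == "REMOVE"
assert auto_moderate(war_crimes_documentation) == "REMOVE"  # evidence lost too
```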

What's needed: Civil society and government must make a concerted effort to ensure that documentation of future human rights crises shared to online platforms can be saved within 'evidence lockers' (secure archives or sharing mechanisms that preserve important human rights content, even if the public does not see it) and made available for legitimate use. This requires critical thinking on privacy, access, and security, informed by diverse communities and an inclusive process for developing these lockers. The data from evidence lockers can also help us ensure that AI-based content moderation is accountable to human rights principles and correctly deployed. WITNESS will be sharing more on its proposals for such evidence lockers in the coming weeks.
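
In the meantime, to make the design questions above concrete, here is a minimal sketch of what a single record in such a locker might capture. It is purely illustrative: the field names, roles, and retention mechanics are our assumptions, not a WITNESS specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from hashlib import sha256

@dataclass
class LockerRecord:
    content: bytes            # the preserved media (encrypted at rest in any real system)
    source_platform: str      # e.g. "youtube": where the content was removed from
    removal_reason: str       # the platform's stated takedown rationale
    collected_at: datetime    # when the copy was preserved
    retention: timedelta      # how long to hold before review: a policy choice, not a default
    access_roles: set = field(  # who may view: a contested, context-specific decision
        default_factory=lambda: {"court", "un_mechanism", "accredited_researcher"}
    )

    @property
    def content_hash(self) -> str:
        """Stable hash of the original bytes, supporting chain of custody."""
        return sha256(self.content).hexdigest()

    def may_access(self, role: str) -> bool:
        """Deny by default: preservation does not mean publication."""
        return role in self.access_roles
```

Even this toy version shows why ethics and privacy must be central: every field above encodes a governance decision about whose harm, and whose access, counts.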

Ground content moderation reform in human rights

The Biden administration and Congress in the US, along with legislators in the UK (Online Harms Act) and across Europe (Digital Services Act, TERREG), have signaled an intention to increase platform regulation, or are already in the process of doing so. WITNESS supports new regulation or reform where it is centered on fundamental human rights principles, balancing privacy, freedom of expression, and assembly against legitimate concerns about hate speech, harmful mis/disinformation, and incitement to on- and offline violence. One key area of focus is content moderation: the choices platforms make about which content (and users) are allowed on their sites. Based on previous policy responses and the tendency of lawmakers to focus on their own jurisdictions, we fear that future regulations will myopically center the US and Europe, leading to adverse consequences for global populations that have already been harmed and excluded (as well as for vulnerable populations in legislators' home jurisdictions).

Policy responses must take into account how previous attempts to force rapid takedowns of broad categories such as 'terrorist and violent extremist' content have swept up critical human rights evidence and the accounts of legitimate human rights defenders. Impending reform efforts are likely to be broader still than the current TERREG proposals in the EU on terrorist and violent extremist content, carrying greater potential both to enhance rights and, conversely, to create adverse consequences. There is also a growing proliferation of cross-company efforts, such as the Global Internet Forum to Counter Terrorism (GIFCT), to share examples of violating content and potentially remove it across all platforms. These 'content cartels' have important implications for how necessary removals, as well as mistakes and abuses of the system, ripple across the whole social media ecosystem; a simplified sketch of this dynamic follows below. We also see clear risks of 'legislative opportunism' by non-democratic governments, which exert power to crush dissent using supposed 'fake news' and 'national security' laws.
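
To make that ripple effect concrete, consider this deliberately simplified sketch. It is our illustration, not GIFCT's actual implementation: real systems use perceptual hashes that match near-duplicate media, while an exact SHA-256 stands in here to keep the sketch short.

```python
from hashlib import sha256

# The industry-shared database of 'violating' content fingerprints.
shared_hash_db: set = set()

def fingerprint(media: bytes) -> str:
    return sha256(media).hexdigest()

def flag_across_industry(media: bytes) -> None:
    """One platform's decision, right or wrong, enters the shared list."""
    shared_hash_db.add(fingerprint(media))

def upload_allowed(media: bytes) -> bool:
    """Every member platform now blocks the same content automatically."""
    return fingerprint(media) not in shared_hash_db

# A single erroneous flag, e.g. human rights footage mislabeled as
# extremist content, is then suppressed across the whole ecosystem:
witness_footage = b"...video bytes..."
flag_across_industry(witness_footage)
assert not upload_allowed(witness_footage)
```

The design choice to share removals without sharing appeals is exactly what makes a single mistake ecosystem-wide.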

What's needed: Legislation and regulation must be built on human rights principles and with a clear understanding of trade-offs. Legislators must listen to human rights defenders and marginalized communities, domestically and globally, to avoid worsening an already challenging situation, and democratic governments should ensure their measures cannot easily be co-opted or emulated to suppress dissent and free speech, or to compromise privacy.

Defend witnessing

At the heart of WITNESS's work is a commitment to ensuring that a greater number of people are able to speak truth to power, show the reality of human rights violations, and challenge lies through their activism, eyewitnessing, and civic journalism. The promise of a more diverse set of people able to expose these realities is evident in the work of WITNESS's global partners, allies, and networks, and of many others who tap into the power of smartphone witnessing on a daily basis. People use video to expose war crimes, demonstrate patterns of abuse by police and other state actors, ally with OSINT investigators who scour online sources to expose wrongdoing (just as groups and individuals have tried to do around the Capitol riot, though with a focus on stronger safeguards), or advocate on long-standing injustices such as indigenous land rights.

Calls for racial justice in the US, #EndSARS in Nigeria, and other movements globally have emphasized the power of smartphone witnessing to galvanize movements and expose long-standing harms to broader communities. Critical to this diversified capacity to hold power to account is the right to record, recognized at the UN in 2018. Now more than ever, we need to promote the right to witness and the right to record.

Human rights defenders do, and often must, use social media and digital tools to find, share, and advocate around injustice. However, bad actors use these same platforms to target, surveil, attack, and harass marginalized groups, and they are doing so in increasingly effective ways. In today's world, social media and online platforms are used just as often to spread misinformation and hate, or to promote anti-rights agendas, as they are to advance rights and justice. Women, BIPOC, LGBTQI+ people, and other marginalized groups and individuals are disproportionately and intersectionally targeted and harmed.

What's needed: We must promote the skills and capacities of a wide range of activists, civic journalists, and human rights defenders to continue holding governments and perpetrators to account for rights violations using video, social media, and technology, and we must defend the right to record.

Meanwhile, we must center and address the security risks inherent in such actions, and the imbalance of power between governments and platforms on the one hand and human rights defenders on the other. WITNESS's commitment to systemic advocacy around the issues our grassroots allies and partners encounter day to day is informed by this clear understanding. Our concerns about platform accountability further reflect the unequal playing field in which eyewitnesses, civic journalists, and rights defenders operate.

Fight mis/disinformation in the right way: “There is truth and there are lies, lies told for power and for profit.”

In his inaugural address, President Biden promised a commitment to truth and facts. As we reform and improve social media and confront hate and mis/disinformation, we must avoid throwing out the baby with the bathwater: preserving the very necessary, increased diversity and voice of the digital revolution while recognizing the weaponized, unequal space that the online world has become.

Over the past three years, WITNESS has focused extensively on responses to new waves of mis/disinformation, particularly the evolution of strategies for creating and using manipulated audiovisual media. This includes both 'shallowfakes' (e.g. mis-contextualized and mis-captioned videos and photos that drive violence) and the emerging issue of 'deepfakes' that target women, force them out of the public sphere, and compromise the capacity to use video as evidence.

WITNESS organized a series of global convenings (Brazil, Sub-Saharan Africa, Southeast Asia, USA) centering the lived and professional experience of activists, journalists, and civic actors. These convenings emphasized that mis/disinformation is a global problem, requiring solutions grounded in the experience of the majority world and of vulnerable populations, not just the preoccupations of the political class and media in the US and Europe. Among the critical issues we heard: misinformation and disinformation are racialized and heavily gendered, and increased media manipulation undermines the credibility of legitimate grassroots video as evidence. Participants often assumed that both policy and technical solutions would be ill-suited to their contexts and realities, and that they would lack access to them, or the resources and capacity to use them. They explicitly tied misinformation and disinformation trends to the actions of their own governments and to the related problems of closing civil society space, surveillance, and criminalization.

We caution strongly against technological solutions that lead to inadvertent but predictable harms. The bias within tech companies will be to seek 'scalable' technology solutions to societal problems. As noted above, this applies to attempts to control 'terrorist and violent extremist content' that sweep up human rights content and target minority voices through arbitrary and rapid timelines for removal, cross-company coordination on takedowns, and automated methods. The same logic applies in the techno-determinist world of mis/disinformation responses. We will continue to advocate for a nuanced understanding of the trade-offs in emerging infrastructure that responds to mis/disinformation, for a recognition that technology solutions often address symptoms rather than underlying problems, and for the understanding that single solutions to complex societal problems do not work.

One example of where it is critical to highlight trade-offs and non-negotiables is the push to develop 'authenticity infrastructure' such as the Adobe-led Content Authenticity Initiative. These approaches to better tracking the origins, manipulations, and edits of online media can powerfully serve people asserting truth and fact and fighting shallowfakes such as mis-captioned media, but they can also inadvertently threaten the physical safety and credibility of those who cannot, or choose not to, use these tools.
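
A toy sketch of the underlying provenance idea makes that double edge visible. This is our illustration only, not the Content Authenticity Initiative's actual design: real systems rely on public-key signatures and signed manifests, while a shared-key HMAC stands in here for brevity.

```python
import hmac
from hashlib import sha256

# Stand-in for a real asymmetric keypair; real designs use public-key
# signatures so anyone can verify without holding a secret.
SECRET_KEY = b"device-or-user-signing-key"

def sign_provenance(media: bytes, edit_log: list) -> str:
    """Bind the media bytes and their edit history into one signature."""
    record = media + b"|" + "|".join(edit_log).encode()
    return hmac.new(SECRET_KEY, record, sha256).hexdigest()

def verify_provenance(media: bytes, edit_log: list, signature: str) -> bool:
    return hmac.compare_digest(sign_provenance(media, edit_log), signature)

video = b"...raw footage..."
log = ["captured 2021-01-06T14:02Z", "trimmed 0:00-0:12"]
sig = sign_provenance(video, log)

assert verify_provenance(video, log, sig)           # an intact chain verifies
assert not verify_provenance(video, log[:1], sig)   # an undisclosed edit breaks it
# Footage from a witness who cannot, or chooses not to, sign carries no
# signature at all, and risks being distrusted by default.
```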

What's needed: WITNESS has sometimes used the language of 'seeing is believing'. Although this remains powerfully true in some contexts, where seeing rights violations and grassroots realities with our own eyes can spark us to action, it is also heavily contested. Yet we must fight back against rhetoric claiming that you cannot believe anything you see. One of the most pernicious effects of the "it's all faked, it's a deepfake" rhetoric is that it enables those in power at the expense of genuine grassroots truth. We must scrutinize technical solutions to ensure they actually respond to underlying power dynamics, do not disenfranchise critical voices, and are genuinely accessible and relevant in global contexts.

Looking ahead to 2021 and beyond, we must defend the truth of experienced reality at both a grassroots and systems level and fight commercialized and politicized incentives that discriminate against that truth and promote lies. We must focus on how activists and journalists can enhance the trustworthiness of their content, ethically respond to the weaponized strategies of commercialized and partisan lies, and engage with both allies and enemies on the narrative front of emotions and persuasion.
