December 9, 2019

The technology behind deepfakes is the same wherever you are in the world. But as WITNESS continues with a series of workshops on deepfake preparedness globally—first in Brazil and now in South Africa, with a workshop in Malaysia still to come—one thing we’re learning is that the perceived threats from this technology vary greatly from place to place. 

In this respect, one dimension that has emerged as a significant influence on threat perception is the level of trust in government in a given country, and more specifically the degree to which a government will tolerate dissent, i.e., respect the right of its citizens to criticize both policies and politicians without fear of reprisal. Having first uncovered this axis of concern during threat modeling in Brazil, WITNESS staff encountered similar attitudes at a workshop conducted in Pretoria, South Africa, in November.

Here, social movement organizers in particular expressed concerns informed by their experience of confrontations with police and government representatives, and didn't hesitate to interpret the (as yet) theoretical threat from deepfakes in light of the pushback they faced while advocating for the rights of marginalized groups.

Incitement to violence

One big contrast with the discussion in the US stood out: there was genuine worry over the potential of deepfakes to incite violence, not merely to spread misinformation.

Sometimes these worries focused on the potential for rumors to spark mob violence in areas with political or ethnic tensions: it's not hard to imagine a faked video portraying a member of a particular group committing a crime, falling victim to one, or making disparaging remarks about ideological opponents, any of which could lead to clashes in the streets. In fact, we don't have to imagine: something like this has already happened in South Africa, where fake videos circulated during the spate of xenophobic attacks targeting Nigerian businesses, stoking tensions by claiming that Nigerians had been deported or killed.

When participants from grassroots social movements imagined the harms resulting from deepfakes, another scenario that came to mind was doctored footage being used as cover for state violence by the military or police: making it appear, for example, that a person shot dead by police had been carrying a weapon, or otherwise changing the narrative to exonerate the aggressors.

This too has already occurred in South Africa: after the Marikana massacre in 2012, in which 34 striking miners were shot dead by police, investigators found that police had planted weapons near the miners' bodies before photographs of the scene were taken. In light of such events, fears of falling prey to a framing or cover-up are hardly misplaced.

In the US it's not uncommon to hear deepfakes framed in the context of international subterfuge: as a new weapon in the arsenal of Russian intelligence services, for example, fueling partisan rivalry between Republicans and Democrats and destabilizing the American public's understanding of truth. In our Pretoria workshop this framing was almost entirely absent. For citizens of smaller, less geopolitically significant countries, nation-state attacks were not on the radar, or rather were deprioritized in the face of other threats. Instead, the offensive intelligence capabilities of the state were seen as far more likely to be trained on internal voices of opposition, and citizens engaged in political criticism felt that they had more to fear from their own governments than from foreign actors.

It's worth noting in passing some elements that were common across contexts: at the individual level, participants feared that female activists, especially women of color, would be targeted with sexually explicit faked videos, and that plausible rumors would be used to attack the credibility of voices critical of institutional power. These threats are real and serious, but as they have been covered elsewhere we'll defer a more detailed discussion to a later post.

Maintaining civic space

Activists, journalists, and computer scientists who came to the Pretoria workshop all shared anxieties over the ease with which misinformation can be introduced into online spaces. At the same time, most were wary of pushing for greater restrictions on speech and discourse when many African countries are already increasing censorship through direct or indirect means.

One attendee from Uganda lamented the "OTT tax," a fee charged to all users of social media in the country that has driven millions of poorer Ugandans out of online spaces. Another mentioned Nigeria's increasingly pro-censorship government, which, under the guise of restricting fake news, has proposed legislation that would fine individual internet users for spreading false or harmful information. (Naturally, the government would be the arbiter of what falls into this category.) And in South Africa, new laws to criminalize hate speech, a lofty goal given additional weight by the shadow of apartheid, have been criticized as ill-defined and thus liable to silence the speech of marginalized groups.

More broadly, these restrictions on speech can be tied to the "closing of civic space," a phenomenon identified as a global trend since 2014. This closing is characterized by national governments moving not only to curtail free expression but also to obstruct the work of journalists and to block civil society actors (charities, NGOs, etc.) from operating effectively, or at all. Hence the need, in addressing the spread of misinformation, for policy frameworks that mitigate harms without closing down debate or further contributing to the preconditions for authoritarianism.

Given that most of the major social media platforms are based in the US, it's easy for American values, both positive and negative, to have an outsize impact on the way these companies make policy. But the bulk of these platforms' users are distributed around the world, meaning that decisions made in a boardroom in San Francisco are felt much further afield, even as those decisions tend to prioritize the needs of Silicon Valley venture capitalists first.

In convening workshops outside of the US and Europe and relaying the outcomes to platforms and legislators based in the US, WITNESS is ensuring that the voices in this conversation are more reflective of the global user base. We're using these inputs as a guiding factor in our policy prescriptions on deepfakes, and we hope you'll help by sharing the results.

In the coming weeks we'll add more blog posts to this series, giving an in-depth look at perceived threats, possible solutions, and key values to consider when formulating a response. We're also producing an extensive report on the findings of these workshops, to be released in early 2020.
