— January 28, 2020

Earlier this month, Facebook released its policy on manipulated media. As a civil society organization with expert knowledge on the subject, WITNESS was asked to give input on the policy while it was under development, and it’s worth recognizing the positives of Facebook’s pro-active approach in developing a response to this emerging threat. At the same time, the policy has some obvious gaps — gaps that WITNESS program director Sam Gregory has identified in a blog post, and through quotes in various print and video interviews after the policy was announced.

Facebook’s new policy is a useful point of departure for considering another, similar event from the end of last year. In November 2019 Twitter also released a policy on synthetic and manipulated media, this time as a draft put forward for public consultation. Twitter users could give input on the proposal via an online questionnaire that asked direct questions on removal policy, and used a Likert scale to measure sentiment towards position statements, e.g. “Twitter has a responsibility to remove misleading altered media,” “Twitter has the right to alert people who are about to share misleading altered media,” and so on.

Independently, WITNESS presented the same questions and position statements to attendees of our deepfakes workshop in Pretoria, South Africa. In doing so we were able to gauge sentiment towards questions of harm, freedom of expression, and censorship, and also drive conversations that would identify broader concerns with Twitter’s response to disinformation so far.

Though the conversation was driven by a discussion of Twitter’s draft policy, many of the takeaways can be generalized to social media platforms as a whole. To help inform similar policies, below we outline a range of positive steps that platforms should take, as well as pitfalls that they may encounter when coordinating a response to digital dis- and misinformation.


Positive actions for combating deepfakes

Based on our own research and feedback from groups we have worked with, these are some positive steps to address synthetic media content. The list is not exhaustive, but touches on some key points.

Clearly signal the presence of manipulated media

As of now most deepfakes can be spotted by an untrained observer, thanks to unconvincing movements or glitchy visual artefacts that warp the face around the edges. But as the software evolves, manipulated or fabricated videos will likely become seamless enough to fool human observers within the next few years.

This is why it’s important to develop a clear set of signals around manipulated media, as visual cues triggered by detection algorithms may be the only way the general public can spot that something is amiss. But this also creates a problem — will audiences trust platforms to correctly judge which content is true and which is false, if the fake content corresponds with views they already hold?

This can be addressed by giving contextual clues that help audiences judge for themselves, rather than just labelling content. For example, unedited videos could be presented next to manipulated versions for comparison instead of (or in addition to) a simple “true/false” flag. 
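As an illustration, here is a minimal sketch, in Python and with purely hypothetical names and fields, of the kind of contextual payload a platform could attach to a flagged video so that viewers see evidence alongside the verdict:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch: the context a platform could attach to a flagged video,
# so viewers see evidence rather than a bare "false" label.
@dataclass
class ManipulationContext:
    detector_score: float                   # confidence from an automated detector, 0-1
    detection_method: str                   # e.g. "face-warping artefact model"
    original_media_url: Optional[str]       # link to the unedited source video, if known
    fact_check_urls: List[str] = field(default_factory=list)   # independent debunks
    explanation: str = ""                   # plain-language note on what was altered

def render_label(ctx: ManipulationContext) -> str:
    """Build a user-facing notice that foregrounds context, not just a verdict."""
    lines = [f"This video appears to be altered ({ctx.detection_method})."]
    if ctx.original_media_url:
        lines.append(f"Compare with the original footage: {ctx.original_media_url}")
    for url in ctx.fact_check_urls:
        lines.append(f"Independent fact-check: {url}")
    if ctx.explanation:
        lines.append(ctx.explanation)
    return "\n".join(lines)
```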

Ultimately it can be hard to convince a viewing audience that a video is fake if they don’t already have a clear understanding of how and why misleading media is created in the first place. For this reason, we’re also advising social platforms to consider investing in media literacy training in regions where media literacy is low.

Contribute to OSINT and media forensics capability

For Twitter or any other platform there’s a risk in overly centralizing synthetic media detection capacity. Overall, the information ecosystem will be stronger if multiple parties can forensically analyze publicly available media and present their findings to a large audience.

In South Africa, journalists and fact-checkers told us that they were interested in OSINT and media forensics techniques but struggled to find the time and resources to incorporate them into their work. In light of this we’ve recommended that Twitter and others should commit to strengthening overall OSINT and media forensics capacity. This could be done by building tools and plugins that are accessible to third parties, creating dedicated OSINT/forensics teams to work with local and national media outlets, or providing direct funding for OSINT training for partner organizations. 
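To make this concrete, the sketch below shows two of the basic checks that such tooling could put within easier reach of journalists: reading whatever metadata survives in an image file, and comparing perceptual hashes of a suspect frame against a possible original found via reverse image search. It assumes the open-source Pillow and ImageHash libraries, and the file names are placeholders.

```python
# Basic open-source forensics checks; requires Pillow and ImageHash
# (pip install Pillow ImageHash). File names below are placeholders.
from PIL import Image
from PIL.ExifTags import TAGS
import imagehash

def extract_metadata(path: str) -> dict:
    """Read whatever EXIF metadata survives in an image (often stripped on upload)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

def near_duplicate(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Compare perceptual hashes to check whether two images are near-duplicates."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= threshold

print(extract_metadata("suspect_frame.jpg"))
print(near_duplicate("suspect_frame.jpg", "candidate_original.jpg"))
```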

Share data on detected media 

In the last months of 2019, Facebook made a large dataset of synthetic videos available for the purpose of training algorithms to detect deepfakes. But as with any emerging technology, the pace of development is rapid, and there’s no guarantee that algorithms which perform well on Facebook’s dataset will catch videos made with whatever software is current a year from now.

Because of this, it’s important that Twitter, Facebook and other platforms share the deepfake and otherwise manipulated videos they detect in the wild as training data, so that detection algorithms can be continually fine-tuned in a collaborative rather than competitive manner.
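As a rough illustration of what sharing could enable, and not a description of any platform’s actual pipeline, newly contributed in-the-wild examples could be folded into periodic fine-tuning of an existing frame-level detector. The sketch below assumes PyTorch and torchvision, a hypothetical saved checkpoint, and a shared folder of labelled frames laid out as new_examples/real and new_examples/fake.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Load the newly shared real/fake frames (folder layout: new_examples/{real,fake}/*.jpg).
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
dataset = datasets.ImageFolder("new_examples", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Stand-in for an existing frame-level detector: a two-class ResNet with a
# hypothetical checkpoint from the previous training round.
model = models.resnet18(num_classes=2)
model.load_state_dict(torch.load("detector_current.pt"))

optimizer = optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                      # a few passes over the new examples
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "detector_updated.pt")
```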

Stand firm on satire; resist government pressure

WITNESS works with activists in countries that have weaker protections for free speech than the US, and many of these activists are concerned that anti-misinformation legislation could be used to silence legitimate criticism that governments and/or powerful individuals find embarrassing or inconvenient.

This scenario can be pre-empted with explicit provisions for satire and critique: clear definitions of what constitutes satirical content, and unambiguous protection for content that meets those definitions. For example, Facebook makes clear that its new policy on manipulated media “does not extend to content that is parody or satire,” but makes no reference to the standards that will be used to judge what is satire and what is not. As mentioned in the next section, definitions and protections around this issue should be based on existing standards and protections for freedom of expression under human rights law, rather than new and ad-hoc definitions.

Concerns around content removal are rooted in the fact that movement organizers perceive closer ties between social media companies and governments than between companies and activists, as we heard in South Africa and Brazil. Inevitably social platforms will come under pressure to remove content that is critical of governments, and they will need to stand firm and speak out against this kind of coercion. WITNESS echoes the human rights-based principles raised by the Special Rapporteur on Freedom of Opinion and Expression about ensuring that content moderation follows human rights norms and is not over-broad.

Pro-actively identify and protect vulnerable groups

When it comes to manipulated media, some groups are more vulnerable than others. So far deepfakes have been used almost entirely to target women in non-consensual pornography, as a recent study by Deeptrace Labs found.

In South Africa we heard concerns from a sex worker rights advocate that women, especially women of color, would be targeted by deepfakes as the technology became more accessible. The person in question cited “slay queens” — a slang term for women, mostly black, using social media to portray a glamorous self-image — as being likely targets, given that they post a large number of photos and videos of themselves online (perfect input data to generate synthetic video), and often attract the ire of internet users who disapprove of their lifestyle.

When it comes to deepfakes, platform policy shouldn’t assume that all internet users are equally likely to be targeted. Women, LGBTQ+ people, ethnic or religious minorities and other marginalized groups are more susceptible to online harassment of any kind, and platforms should make sure that anti-harassment policies extend to protect them from attacks that arise from synthetic video.


Common mistakes in addressing manipulated media

Developing policies around misinformation and manipulated media presents a range of challenges. There’s always a balance to be struck between freedom of speech and the need to protect vulnerable communities from content that could be harmful, and while some cases are black and white, there are large grey areas in between.

Still, the difficulty of the problem is an argument for assigning more resources to it, and a failure to do so is one of the mistakes we highlight below. In general we’ve found that certain mistakes or shortcomings arise across contexts: this is not by any means a targeted criticism of Twitter, but a list of pain points that recur in critiques made by activists and researchers, and which we hope can be avoided in the future.

Not assigning sufficient staff

African activists were well aware that Twitter did not have a regional office for the continent, and that Facebook only employed a small Africa team. A lack of resources implies that the region is a low priority, while conversely, devoting sufficient resources sends a signal that the company is taking the region seriously (and leads to better implementation too). 

What’s more, local staff bring cultural and linguistic sensitivity that is hard to replicate outside of the region. This sensitivity is critical for making informed decisions on content moderation, especially when evaluating potential harms. Participants in our Brazil and South Africa expert convenings were skeptical that, without this local knowledge, Twitter has the capacity to identify and understand cultural context.

Removing too much video content

Social media platforms have legal obligations to remove some content and face pressure to remove much more. However, violent or graphic content posted to social platforms may contain documentation of war crimes or other human rights abuses, and overzealous removal of such content may erase evidence that could otherwise help to secure justice for victims. This is particularly true when automated processes are used to carry out these removals.

WITNESS program manager Dia Kayyali and Syrian Archive founder Hadi Al Khatib addressed this dilemma in an op-ed for the New York Times, arguing that automated removal procedures should be used sparingly in such cases, and that platforms should both invest in hiring more moderators and make removed content available to researchers. Although automated content removal based on deepfake detection is not yet planned, this is a critical concern to look ahead to.
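One way to reduce this risk, sketched below under our own assumptions rather than as any platform’s actual workflow, is to make preservation a precondition of automated removal: a copy of the content and a cryptographic hash go into a restricted archive that vetted researchers and investigators could later request access to, and only then is the public post taken down.

```python
# Hypothetical archive-before-removal step; paths and fields are illustrative only.
import hashlib
import json
import shutil
import time
from pathlib import Path

ARCHIVE_DIR = Path("restricted_archive")

def archive_then_remove(media_path: str, reason: str) -> dict:
    src = Path(media_path)
    ARCHIVE_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    shutil.copy2(src, ARCHIVE_DIR / src.name)             # preserve the content itself
    record = {
        "original_name": src.name,
        "sha256": digest,                                  # lets researchers verify integrity later
        "removal_reason": reason,
        "removed_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    (ARCHIVE_DIR / f"{src.stem}_record.json").write_text(json.dumps(record, indent=2))
    src.unlink()                                           # removal only happens after archiving
    return record
```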

Also, some journalists and fact-checkers did not want demonstrably false content to be taken down, because removal would deprive them of the chance to debunk it. If a fake video or image disappears entirely there is no clear point of intervention for proposing a counternarrative, and viewers who have already seen it may go on remembering the false content without ever encountering a correction. While it may seem counterintuitive at first, leaving a lie exposed can be the best way for the truth to prevail.

Tailoring policies too narrowly to the US or Europe

Almost all major social media companies are headquartered in the US, but their userbase is global. Pressure from investors and legislators leads companies to be more responsive to US and European interests, and elsewhere there is concern that policies developed for those audiences are generalized around the world when they should not be, and are inadequately resourced even when they are applied. WITNESS also frequently hears from activists around the world concerns that platforms face extra-judicial pressure, as well as attention from local elites within their own staff, to act against content that should not be suppressed.

In a recent example, Facebook-owned Instagram was found to be removing posts in praise of the assassinated Iranian general Qasem Soleimani, allegedly because of a belief that this was necessary to comply with US sanctions on Iran. This approach has been widely criticized, especially given that Instagram was one of the few Western social media companies still operating in the Islamic Republic.

In order to support freedom of expression worldwide, social platforms should recognize their obligation to respect international human rights law.

Hindering fact-checkers

At convenings inside and outside of the US, fact-checking groups told WITNESS that they are frustrated with shifts in platform policy on which data they can access in order to combat misinformation on social platforms. In the case of Twitter, fact-checkers suggested that advance notice of topics which were just beginning to trend would enable them to debunk false claims before they achieved widespread reach — something that is technically possible but has not yet been implemented.
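Purely as an illustration of what fact-checkers described wanting, the sketch below imagines polling such an early-trend feed. The endpoint, parameters and response fields are invented for this example and do not correspond to any existing Twitter API.

```python
import requests

EARLY_TRENDS_URL = "https://api.example.com/early-trends"   # placeholder, not a real endpoint

def fetch_emerging_topics(region: str, min_velocity: float = 2.0) -> list:
    """Return topics whose share volume is accelerating but not yet viral,
    giving fact-checkers a head start on debunking false claims."""
    response = requests.get(
        EARLY_TRENDS_URL,
        params={"region": region, "min_velocity": min_velocity},
        timeout=10,
    )
    response.raise_for_status()
    topics = response.json().get("topics", [])
    return [t for t in topics if t.get("contains_media")]    # prioritise claims with video or images

for topic in fetch_emerging_topics("ZA"):
    print(topic["name"], topic["sample_post_urls"][:3])      # hypothetical response fields
```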

Integrating fact-checking into the user experience is a crucial step in combating misinformation, since research has shown that users find fact-checking tags (e.g. “Rated false by fact-checkers”) to be more persuasive than debunks by news organizations. Given the speed at which news travels on social media, fact-checking organizations have their work cut out already, and should be given extensive support to operate efficiently and effectively.

Ignoring the right to appeal

As we acknowledged before, content moderation is a balance between freedom of expression and the reduction of harms. Since it’s impossible to make the right call on moderation 100% of the time, policies should build in a clear process by which decisions can be appealed in disputed cases.

If automated moderation is to be widely applied — which is a key component of being able to moderate content at scale — it’s important that platforms are responsive to disputes, rather than creating barriers to appeals. Ultimately, appeals should be seen as a core part of moderation, rather than an afterthought.
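The sketch below illustrates one way of treating an appeal as part of the moderation record itself; the names and fields are our own illustrative assumptions, not any platform’s data model.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional

class AppealStatus(Enum):
    NONE = "none"
    PENDING_HUMAN_REVIEW = "pending_human_review"
    UPHELD = "upheld"
    REVERSED = "reversed"

@dataclass
class ModerationDecision:
    content_id: str
    action: str                              # e.g. "label", "downrank", "remove"
    automated: bool                          # automated decisions should be easiest to appeal
    decided_at: datetime
    appeal_status: AppealStatus = AppealStatus.NONE
    respond_by: Optional[datetime] = None    # deadline for the platform to answer an appeal

    def file_appeal(self, response_window_hours: int = 48) -> None:
        """Open an appeal and commit to a human review within a fixed window."""
        self.appeal_status = AppealStatus.PENDING_HUMAN_REVIEW
        self.respond_by = datetime.utcnow() + timedelta(hours=response_window_hours)
```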

Re-inventing concepts that have already been defined

Concepts such as hate speech, libel and incitement to violence already have accepted definitions under international human rights law. But when regulating online speech, platforms often succumb to the temptation to create their own operational definitions, which may differ from conventional usage.

In a landmark report, UN Special Rapporteur on the right to freedom of opinion and expression David Kaye found that governments were circumventing human rights obligations on freedom of expression by making deals directly with social media companies to impose certain restrictions. Kaye called for greater transparency on which standards are being used — and advised that these standards should always be rooted in international human rights law.

Final thoughts

Both Twitter and Facebook are pro-actively developing policies around synthetic and manipulated media, which is unquestionably a positive step. But these policies should be developed cautiously, with an eye to both the harms that arise from moderating too much, and those that arise from moderating too little.

That’s why the process of consultation is key, and should be undertaken with a view to hearing diverse perspectives from around the world, not just the US. When these views are taken into account, we end up with a social media ecosystem that strengthens democracy rather than undermines it, and enables greater representation and protection for those who have been left out of mainstream discourse.

For more information on WITNESS’ work around deepfakes, visit our dedicated lab page on synthetic media.
