Yesterday Facebook released its policy on enforcing against manipulated media, detailing how the company would respond to deepfakes and, to a lesser extent, shallowfakes. On Wednesday 8 January 2020 the US Congress will hold another hearing on digital manipulation, following the first congressional hearing on deepfakes in June 2019.

Over the past two years WITNESS has advocated for better preparation for synthetic media and deepfakes, identifying the best combination of global solutions and pushing for them with the public, media and platforms. Part of that work has been identifying what policies on deepfakes should look like, so below is an initial response to some key questions about Facebook’s implementation.

What are the strengths of the policy? 

Deepfakes are not yet widespread outside of non-consensual sexual videos, but trends in the technology and access to it suggest they will soon be far more available. Facebook is right to be proactive in addressing them, particularly given platforms’ global failures to prepare well for other forms of misinformation and disinformation. Platforms should be proactive in signaling, downranking – and in the worst cases, removing – malicious deepfakes, because users have little experience of manipulation that is invisible to the eye and inaudible to the ear, and because journalists lack ready tools to detect it quickly or reliably.

The devil will be in the details. How will Facebook ensure that it accurately detects deepfakes? How will it make good judgements about when a modification is malicious, or when something malicious is masquerading as satire or parody? How will it communicate what it learns to sceptical consumers? And how will it make sure its decisions are subject to transparency and appeal, given the inevitable mistakes?

When it comes to removing media, we advocate clear, transparent processes whose decisions can be appealed. Facebook’s policy leaves unclear how and when the company will tell users that a manipulation has been identified, and whether those decisions can be appealed, especially if the detection process is largely automated. We know that automated systems often fail when making judgements about speech. Any effort by Facebook to make detection decisions itself also needs to be combined with making better detection resources available globally, so that independent fact-checkers and people around the world can make their own judgements.

It is also important that policy efforts be combined with initiatives to improve detection capacity, such as the Deepfake Detection Challenge; to research how to communicate about manipulated media; and to build journalistic skills and capacity in media forensics. Facebook’s steps in this direction, including its collaboration with the Partnership on AI, are an important start.

What criticisms do you have? 

Preparing well for deepfakes shouldn’t come at the expense of responding to existing, more widespread challenges. Although deepfakes are an emerging threat, the much bigger current problem is “shallowfakes”: media manipulated with simpler techniques, such as editing a video or photo, or mislabelling or misrepresenting it.

In our meetings with civic activists, journalists and fact-checkers around the world, they reiterate that Facebook and WhatsApp haven’t solved existing problems that are already far more widespread. You still can’t easily check whether a video is a ‘shallowfake’: a video that has been lightly edited, or simply renamed and shared with the claim that it shows something else. In the absence of tools to detect the existing massive volume of shallowfakes – for example, a reverse video search out of WhatsApp – deepfake detection is a luxury.
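To make concrete what a shallowfake check might involve, here is a minimal sketch of one possible building block: sampling frames from two clips and comparing perceptual hashes to judge whether they likely share a source. It is purely illustrative, assumes the Python OpenCV, Pillow and imagehash libraries, and the function names and thresholds are our own, not any platform’s actual system.

```python
# Illustrative sketch only (not any platform's real system).
# Assumes: pip install opencv-python pillow imagehash
import cv2
import imagehash
from PIL import Image

def frame_hashes(video_path, every_n_frames=30):
    """Perceptual hashes of frames sampled roughly once per second."""
    hashes = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        index += 1
    cap.release()
    return hashes

def likely_same_source(clip_a, clip_b, max_distance=8, min_match_ratio=0.5):
    """Rough check: do most sampled frames of clip_a visually match clip_b?

    Perceptual hashes survive re-encoding and resizing, so a renamed or
    lightly trimmed copy can still match; the thresholds are arbitrary.
    """
    hashes_a = frame_hashes(clip_a)
    hashes_b = frame_hashes(clip_b)
    if not hashes_a or not hashes_b:
        return False
    matches = sum(
        1 for ha in hashes_a
        if any(ha - hb <= max_distance for hb in hashes_b)
    )
    return matches / len(hashes_a) >= min_match_ratio
```

Even a rough check like this illustrates why platform-scale tooling matters: matching one clip against the billions already circulating requires indexing infrastructure that individual journalists and fact-checkers cannot build themselves.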

Facebook could do more to explain how it will reduce the spread of shallowfake videos that are known to be false and malicious in intent, and how it will help users and journalists to understand and detect them far more easily in the future, for example by providing the original version of a manipulated or misrepresented video in the feed.

What else is relevant to know?

Decisions on deepfakes and shallowfakes must center the people who are most vulnerable to and affected by them globally. In the US we tend to think of politically motivated deepfakes as targeting politicians or other high-profile figures. Elsewhere in the world, grassroots organizers are concerned that deepfakes could be used to silence politically inconvenient movements; in South Africa, for example, activists told us they were worried about facing deepfake threats from their own government, police or military. In our expert meetings in Brazil and South Africa, participants worried about whether the tools being developed, for detection for example, will actually be made available outside the US and beyond major media organizations. They also consistently demand that policies on manipulation not be made with only the US in mind, and that platforms adequately resource support globally and build for the most vulnerable populations.

For this reason WITNESS is advocating for greater protections and easier redress for individuals targeted by synthetic media, alongside a clear appeals process for video removal that would prevent accusations of media manipulation from being used to stifle legitimate political critique.

For more information, read our recommendations on preparing for deepfakes, or our latest report on the burgeoning discussion about how to track what is true and unmanipulated.
