For updated information on WITNESS's work in this area, please visit wit.to/Synthetic-Media-Deepfakes

This is the first in a series of blogs on a new area of focus at WITNESS around the emerging and potential malicious uses of so-called “deepfakes” and other forms of AI-generated “synthetic media”, and how we push back to defend evidence, the truth and freedom of expression. We’ll be sharing expanded elements of the report and further details on the recommendations in separate blogs—the second blog in the series can be found here.

This work kicked off with an expert summit—the full report on that is available here.

This work is embedded in a broader initiative focused on proactive approaches to protecting and upholding marginalized voices and human rights as emerging technologies such as AI intersect with the pressures of disinformation, media manipulation, and rising authoritarianism.

People have started to panic about the increasing possibility of manipulating images, video, and audio, often popularly described as “deepfakes”. In the past decade Hollywood studios have had the capacity to morph faces—from Brad Pitt in “The Curious Case of Benjamin Button” to Princess Leia in “Rogue One: A Star Wars Story”—and companies and consumers have had tools such as Photoshop to digitally alter images and video in subtler ways. Now, however, the barriers to creating and manipulating audio and video in multiple, more sophisticated ways are beginning to fall, requiring less cost and less technical expertise and drawing on widely available cloud computing power. At the same time, increasingly sophisticated manipulation of social media spaces by bad actors has created more opportunities to weaponize these techniques.

This changing landscape creates new challenges to human rights and reliable journalism, potentially including the following categories of disruption:

  • Reality edits that remove elements from, or add elements into, photos and videos in ways that challenge our ability to document reality and preserve the evidentiary value of images, and that enhance the ability of perpetrators to challenge the truth of rights abuses.
  • Credible doppelgangers of real people that enhance the ability to manipulate the public or individuals in order to commit rights abuses or to incite violence or conflict.
  • News remixing that exploits peripheral cues of credibility and the rapid news cycle to disrupt and change public narratives.
  • Plausible deniability for perpetrators to reflexively claim “That’s a deepfake” about incriminating footage or, taken further, to dismiss any contested information as another form of fake news.
  • Floods of falsehood created via computational propaganda and individualized microtargeting, contributing to disrupting the remaining public sphere and to overwhelming fact-finding and verification approaches.

Why WITNESS is engaged

For more than 25 years, WITNESS has enabled human rights defenders, and now increasingly anyone, anywhere, to use video and technology to protect and defend human rights. Our work and the work of our partners demonstrate the value of images in driving more diverse personal storytelling and civic journalism, in driving movements around pervasive human rights violations like police violence, and in providing critical evidence in war crimes trials. We have also seen the ease with which videos and audio, often crudely edited or even simply recycled and re-contextualized, can perpetuate and renew cycles of violence.

WITNESS’ Tech + Advocacy work frequently includes engaging with key social media and video-sharing platforms to develop innovative policy and product responses to challenges facing high-risk users and high public interest content. As the threat of more sophisticated, more personalized audio and video manipulation emerges, we are focused on the critical need to bring together key actors before we are in the eye of the storm, to push back against apocalyptic narratives on this issue, and to identify proactive solutions so that we prepare in a more coordinated way.

What are deepfakes and synthetic media?

The development of new forms of image and audio synthesis is related to the growth of the subfield of machine learning known as deep learning, which uses artificial neural network architectures loosely inspired by the human brain. Generative Adversarial Networks (GANs) are the technology behind deepfakes. Two neural networks compete to produce and to detect high-quality faked images. One is the “generator” (which creates images that look like an original image) and the other is the “discriminator” (which tries to figure out whether an image is real or simulated). They compete in a cat-and-mouse game to make better and better images.
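
To make this cat-and-mouse dynamic concrete, below is a minimal, hedged sketch of a GAN training loop in PyTorch. It learns to imitate a toy 2-D distribution rather than faces; the network sizes, learning rates and toy data are illustrative assumptions, not any production deepfake code.

```python
# Toy GAN: a "generator" learns to produce 2-D points that a "discriminator"
# cannot distinguish from samples of a "real" distribution.
import torch
import torch.nn as nn

def real_data(n):                      # stand-in for real images: a shifted Gaussian blob
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # generator: noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator: sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator: label real samples 1 and generated samples 0.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()        # detach so only D updates here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator: try to make the discriminator output 1 on fakes.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```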

The cost of producing these new forms of synthetic media has decreased significantly in the last few years given increasing amounts of training data, computing power and effective publicly shared approaches and code.

So what should we call these manipulations? The terms to describe these advances in video and audio manipulation are not yet well defined. The current conversation is dominated by the term deepfakes, which refers to the output of software that swaps a face between one person and another, and which was initially deployed in contexts such as nonconsensual pornographic image manipulation. But a broader range of manipulation (and consequent malicious uses) of audio and video is possible and has been called “synthetic media.”

Potential tools susceptible to mal-uses include:

  • Individualized simulated audio: The enhanced ability to simulate an individual’s voice, as developed and made commercially available by providers such as Lyrebird or Baidu Deep Voice.
  • Emerging consumer tools that make it easier to selectively edit, delete or change foreground and background elements in video. Concepts such as Adobe Cloak advance beyond the image editing currently available in tools like Photoshop, Premiere and competitors such as Pixelmator, potentially allowing seamless editing of elements within video.
  • Facial reenactment: This refers to using images of real people as “puppets” and manipulating their faces, expressions and upper body movements. Tools such as Face2Face and Deep Video Portraits allow the transfer of the facial and upper body movements of one person onto the realistic appearance of another real person’s face and upper body.
  • Realistic facial reconstruction and lipsync created around existing audio tracks of a person, as seen for example in the LipSync Obama project.
  • Exchange of one region of a real person’s image, typically the face: Most commonly seen via deepfakes created using tools like FakeApp or FaceSwap, these approaches also relate to technologies used in consumer tools like Snapchat, in which a simulation of one person’s face is imposed over another person’s, or in which a hybrid face is produced.
  • Combinations, such as a deepfake matched with audio (simulated or real) and additional retouching, e.g. the Obama-Jordan Peele video in which the actor-director Jordan Peele made a realistic Obama say words that Peele himself was saying.

An introduction to the “arms race” between the synthesis of synthetic media and detection/forensics

There is an ongoing arms race between manual and automatic synthesis of media, and manual and automatic forensic approaches.  

Manual synthesis is characterized by the explicit modeling of geometry, lighting and physics that we see in Hollywood effects. CGI has been part of the movie industry for 30 years, but it is time-consuming, expensive, and requires domain expertise. Automatic synthesis, on the other hand, involves the implicit synthesis of texture, lighting or head motion, as we have seen for example in LipSync Obama, Deep Video Portraits or, of course, deepfakes. Techniques here often involve a combination of computer vision and computer graphics, and in some cases the use of neural networks. Tools such as LipSync Obama build on a twenty-year research trajectory of exploring how to create 3D face models from existing images. There is a range of positive applications of enhanced ‘synthetic media’, including video and virtual telepresence, VR and AR, and content creation, animation and dubbing. There will also be uses in autonomous systems and in human-computer and human-robot interaction.

Editing software, and both manual and automatic synthesis, can increasingly create perceptually realistic images whose manipulation is not detectable by the naked eye or by simple visual analysis.

Manual forensics performs explicit checks of perspective geometry, lighting, shadows and the ‘physics’ of images, as well as detecting, for example, copying and splicing between images and evidence of the camera model used for a photo. A recent notable example of manual forensics specific to deepfakes is the use of a technique known as Eulerian Video Magnification to reveal the visible pulse rate of real people, which would be absent in a deepfake.
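
As a rough illustration of the pulse idea (and not a full Eulerian Video Magnification implementation), the sketch below simply checks whether the average skin colour of a tracked face crop oscillates at a plausible heart rate. The function name, frame rate and pre-cropped input are assumptions made for the example.

```python
# Much-simplified stand-in for pulse-based forensics: a real, unmanipulated
# face usually carries a weak periodic colour change at the heart rate; a
# synthesized face may not.
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_strength(face_frames: np.ndarray, fps: float = 30.0) -> float:
    """face_frames: array of shape (T, H, W, 3), aligned crops of one face over time."""
    green = face_frames[..., 1].reshape(len(face_frames), -1).mean(axis=1)  # mean green value per frame
    green = green - green.mean()
    # Band-pass 0.7-4.0 Hz (roughly 42-240 beats per minute).
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, green)
    spectrum = np.abs(np.fft.rfft(filtered))
    # Ratio of the strongest in-band peak to total energy; very low values are suspicious.
    return float(spectrum.max() / (spectrum.sum() + 1e-9))
```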

An emerging field is automatic forensics. Approaches explored here include analyzing larger datasets and using machine learning to perform forensic analysis. Recent experimentation includes (a toy sketch of the machine-learning classification approach follows this list):

  • Detection of copying and splicing, or of the use of two different camera models in origin images
  • Detection of a “heat map” of fake pixels in facial images created using FaceSwap
  • Identification of where elements of a fake image originate via image phylogeny
  • Use of neural networks to detect physiological inconsistencies in synthetic media, for example the absence of blinking
  • Use of GANs themselves to detect fake images based on training data of synthetic video images created using existing tools (the FaceForensics database).
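
As a very rough sketch of the machine-learning detection approach referenced above (not the FaceForensics method itself), one could train a small convolutional classifier on labelled face crops. The directory layout, toy network and hyperparameters below are assumptions; real systems use much stronger backbones and datasets.

```python
# Toy automatic-forensics classifier: a small CNN labels face crops as "real"
# or "fake". Assumes a folder of crops arranged for torchvision's ImageFolder
# ("face_crops/train" with one sub-folder per class) -- a hypothetical path.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
train_set = datasets.ImageFolder("face_crops/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                              # two logits: real vs fake
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        criterion(model(images), labels).backward()
        opt.step()
```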

However, most systems are trained on specific databases, and might detect mainly the inconsistencies of specific synthesis techniques, although there is work in progress that addresses these shortcomings. There are also new counter-forensic approaches that use GANs to fight back against forensic analysis – for example, by wiping the forensic traces of multiple cameras and creating an image that appears to have the uniform camera signature of another camera.

Researchers disagree on whether the “arms race” is likely to be won by the forgers or the detectors. Humans are not good at detecting the difference between a real and a fake video (see data in FaceForensics (pdf), which indicated that with low-resolution images humans had approximately 50% accuracy, “which is essentially guessing”), but machines are. Detection is currently easier than forgery, and for every forgery AI there is a powerful detection model. Provided there is sufficient training data showing new types of faked images, audio and video, the use of GANs might be able to keep up in enabling AI-assisted identification of non-visible faking. There might be a time lag, which will be exploited by bad actors, but detection should keep improving.

What is WITNESS doing?

We see the need to:

  • Broaden journalists’, technologists’ and human rights researchers’ understanding of these new technologies.
  • Begin building a common understanding of the threats created by mal-uses of AI-generated imagery, video and audio to public discourse and reliable news and human rights documentation, and map the landscape of innovation in this area.
  • Map the solutions emerging from existing practices in human rights, journalism and technology to deal with mal-uses of faked, simulated and recycled images, audio and video, and their relationship to other forms of mis/dis/mal-information.
  • Develop appropriate pragmatic tactical, normative and technical responses to risk models of fabricated audio and video that can be initiated by companies, independent activists, journalists, academic researchers, open-source technologists and commercial platforms.
  • Push for research and action priorities by key stakeholders.

To initiate this work, on June 11, 2018, WITNESS, in collaboration with First Draft, a project of the Shorenstein Center on Media, Politics and Public Policy at Harvard Kennedy School, brought together thirty leading independent and company-based technologists, machine learning specialists, academic researchers in synthetic media, human rights researchers, and journalists.

Our goal was to have an open discussion under the Chatham House Rule about pragmatic proactive ways to mitigate the threats that widespread use and commercialization of new tools for AI-generated synthetic media such as deepfakes and facial reenactment potentially pose to public trust, reliable journalism and trustworthy human rights documentation.

Our convening report is available here, and we’ll be sharing expanded elements of the report and further details on the recommendations in separate blogs.

What do we recommend as next steps?

Among the recommendations from the convening:

  1. Baseline research and a focused sprint on the optimal ways to track authenticity, integrity, provenance and digital edits of images, audio, and video from capture to sharing to ongoing use. Research should focus on a rights-protecting approach that a) maximizes how many people can access these tools, b) minimizes barriers to entry and potential suppression of free speech without compromising the right to privacy and freedom from surveillance, c) minimizes risk to vulnerable creators and custody-holders, and balances these with d) the potential feasibility of integrating these approaches in the broader context of platforms, social media and search engines. This research needs to reflect platform, independent commercial and open-source activist efforts, consider the use of blockchain and similar technologies, review precedents (e.g. spam and current anti-disinformation efforts) and identify the pros and cons of different approaches as well as the unanticipated risks. WITNESS will lead on supporting this research and sprint (a minimal illustrative sketch of one provenance building block appears after this list).
  2. Detailed threat modeling around synthetic media mal-uses for particular key stakeholders (journalists, human rights defenders, others). Create models based on actors, motivations and attack vectors, resulting in the identification of tailored approaches relevant to specific stakeholders or issues/values at stake.
  3. Public and private dialogue on how platforms, social media sites, and search engines design a shared approach and better coordinate around mal-uses of synthetic media. Much like the public discussions around data use and content moderation, there is a role for third parties in civil society to serve as a public voice on pros/cons of various approaches, as well as to facilitate public discussion and serve as a neutral space for consensus building. WITNESS will support this type of outcomes-oriented discussion.
  4. Platforms, search and social media companies should prioritize development of key tools already identified as critical in the OSINT human rights and journalism community, particularly reverse video search. This is because many of the problems of synthetic media relate to existing challenges around verification and trust in visual media.
  5. More shared learning on how to detect synthetic media that brings together existing practices from manual and automatic forensics analysis with human rights, Open Source Intelligence (OSINT) and journalistic practitioners—potentially via a workshop where they test/learn each other’s methods and work out what to adopt and how to make techniques accessible. WITNESS and First Draft will engage on this.
  6. Prepare for the emergence of synthetic media in real-world situations by working with journalists and human rights defenders to build playbooks for upcoming risk scenarios, so that no one can claim “we didn’t see this coming” and to facilitate greater understanding of the technologies at stake. WITNESS and First Draft will collaborate on this.
  7. Include additional stakeholders who were under-represented in the June 11, 2018 convening but are critical voices, either in an additional meeting or in upcoming activities, including:
    • Global South voices, as well as marginalized communities in the U.S. and Europe.
    • Policy and legal voices at national and international levels.
    • Artists and provocateurs.
  8. Develop additional understanding of relevant research questions and lead research to inform other strategies. First Draft will lead this additional research.
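
To make recommendation 1 slightly more concrete, here is a minimal sketch of one provenance building block such research would weigh: hashing a media file at capture and signing the hash so that later modification is detectable. It is illustrative only; the function names, file paths and key handling are assumptions, the real design questions (including the trade-offs for at-risk creators discussed above) are far more involved, and it relies on the third-party Python "cryptography" package.

```python
# Sketch: hash a media file at capture time and sign the hash, so that any
# later change to the file can be detected and the capture attributed to a key.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def sign_capture(path: str, private_key: Ed25519PrivateKey):
    """Return (sha256 digest, signature) for the media file at `path`."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return digest, private_key.sign(digest)

def verify_capture(path: str, digest: bytes, signature: bytes,
                   public_key: Ed25519PublicKey) -> bool:
    """Check that the file is byte-identical to what was signed at capture."""
    if hashlib.sha256(open(path, "rb").read()).digest() != digest:
        return False                           # the bytes were altered after signing
    try:
        public_key.verify(signature, digest)   # raises if the signature is invalid
        return True
    except InvalidSignature:
        return False
```

In this sketch, a capture app would generate a key once with Ed25519PrivateKey.generate(), sign each recording as it is saved, and publish the public key so that platforms or investigators could later verify integrity.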

For further information on the project please contact Sam Gregory, sam@witness.org.
