Last updated: June 2019 (NOTE: please send feedback on areas/examples we are missing to sam [at] witness.org: this is not a comprehensive list!)

This is a survey of solutions around the emerging and potential malicious uses of so-called “deepfakes” and other forms of AI-generated “synthetic media” and how we push back to defend evidence, the truth and freedom of expression. For more information on other reports and actions in this initiative as well as our key whole-society recommendations visit our dedicated page.

This work is embedded in a broader initiative focused on proactive approaches to protecting and upholding marginal voices and human rights as emerging technologies such as AI intersect with the pressures of disinformation, media manipulation, and rising authoritarianism.

This review updates a previous WITNESS survey compiled in July 2018. A notable framing observation is the growing proliferation of advances in media synthesis – including facial expression manipulation based on limited training data, video in-filling, text-to-video synthesis, realistic text generation simulating individuals, and realistic audio synthesis. There is also emerging commercialization of features such as lip-sync dubbing, realistic avatars and video in-filling.

Solutions development is advancing, though not at the same pace or with the same level of investment. Public rhetoric on this issue continues to be hyperbolic.

Here we survey the range of solution areas that have been suggested, or are currently being pursued, to confront the mal-uses of synthetic media. Our goal is to share a range of approaches that work at a range of scales, draw on a diversity of actors, address specific threat models and include legal, market, norms or code/technology-based approaches.

Twelve things we can do now: WITNESS’ recommendations on priority areas for engagement include:

  1. De-escalate rhetoric and recognize that this is an evolution, not a rupture, of existing problems, and that our words create many of the harms we fear
  2. Recognize existing harms that are manifested in gender-based violence and cyber-bullying
  3. Inclusion and human rights: Demand that responses reflect and are shaped by global and inclusive voices and approaches, and by a shared human rights vision
  4. Global threat models: Identify threat models and desired solutions from a global perspective
  5. Solutions building on existing expertise: Promote cross-disciplinary and multiple solution approaches building on existing expertise in misinformation, fact-checking and OSINT
  6. Understanding and connective tissue: Empower key frontline actors like media and civil liberties groups to better understand the threat and be connected to other stakeholders/experts
  7. Coordination: Identify appropriate coordination mechanisms between civil society, media and platforms around use of synthetic media
  8. Research: Support research into how to communicate ‘invisible-to-the-eye’ video manipulation and simulation to the public
  9. Platform and tool-maker responsibility: Determine what we want and don’t want from platforms and from companies commercializing tools or acting as channels for distribution, including in terms of authentication tools, manipulation detection tools, and content moderation based on what platforms find
  10. Shared detection capacity: Prioritize shared detection systems, and advocate that investment in detection matches investment in synthetic media creation approaches
  11. Public debate on technical infrastructure choices: Understand the pros and cons of who globally will be included, excluded, censored, silenced or empowered by the choices we make on authenticity measures or content moderation
  12. Promote ethical standards on usage in political and civil society campaigning

Potential approaches and pragmatic or partial (and not-so) solutions

Solution areas below are broadly grouped as follows, not in order of importance!

  1. Technical solutions for detection, authentication, provenance, and anonymization
  2. Platform/social media/search engine-based approaches to detection and protection
  3. Approaches within commercialized creation tools
  4. Multi-stakeholder collaboration and inclusion of adversely-affected communities
  5. News and information consumers
  6. Journalist and news organization oriented approaches
  7. Legal, regulatory, and policy solutions

Technical solutions for detection, authentication, provenance and anonymization

Invest in new forms of media forensics

As synthetic media advances, new forms of manual and automatic forensics could be refined and integrated into existing verification tools utilized by journalists and fact-finders, as well as potentially into platform-based approaches. These will include approaches that build on existing understanding of how to detect image manipulation and copy-paste-splice, as well as evolved approaches customized to deepfakes, such as using spectral analysis to spot distinctive characteristics of synthesized speech or using biological indicators to look for inconsistencies in deepfakes. Leading media forensics expert Hany Farid has also proposed creating a so-called ‘soft biometric’ of key public figures such as 2020 US presidential candidates, which would check whether, in a deepfake where audio and lip movements have been simulated, there is the expected correlation between what the person says and how they say it (a characteristic pattern of head movements related to how that known individual says particular words).
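
As a rough illustration of the ‘soft biometric’ idea (a toy sketch, not Farid’s actual method: the per-frame head-pose and mouth-openness features, and the verified reference clips, are assumed to already exist), one could compare the correlation between head motion and mouth activity in a questioned clip against the range observed in verified footage of the same speaker:

```python
# Toy soft-biometric consistency check. Assumes per-frame head-pitch and
# mouth-openness values have already been extracted (e.g. by a facial landmark
# tracker, not shown) for the questioned clip and for verified reference clips.
import numpy as np

def motion_speech_correlation(head_pitch, mouth_openness):
    """Pearson correlation between head movement and mouth activity over a clip."""
    return float(np.corrcoef(head_pitch, mouth_openness)[0, 1])

def consistent_with_profile(clip_corr, reference_corrs, k=3.0):
    """Flag a clip whose correlation falls far outside the speaker's verified range."""
    mu, sigma = np.mean(reference_corrs), np.std(reference_corrs)
    return abs(clip_corr - mu) <= k * sigma

# Example with made-up numbers: three verified clips vs. one questioned clip.
reference = [0.42, 0.51, 0.47]
questioned = motion_speech_correlation(np.random.rand(300), np.random.rand(300))
print("consistent with speaker profile:", consistent_with_profile(questioned, reference))
```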

The US government, via its DARPA MediFor program (as well as via media forensics challenges from NIST), continues to invest in a range of manual and automatic forensics approaches. These include refinements on existing approaches for identifying paste and splice in images based on changes in JPEG/MPEG, and on tracking camera identities and fingerprints (PRNU-based). Some of these approaches overlap with the provenance approaches described below – for example, the eWitness tool leaves designed forensic traces as part of its technology. Other approaches look for physical integrity issues (‘does it break the laws of physics?’), such as inconsistency in lighting, reflection and audio; review the semantic integrity of scenes (‘does it make sense?’); consider audio forensics approaches to identifying forgeries; and identify image provenance and origins (pdf). Many are also looking at the new forms of neural network-based approaches described below.
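
To give a flavor of the camera-fingerprint (PRNU-style) idea in highly simplified form (a toy sketch using a generic Gaussian denoiser, not the actual tooling used in MediFor or the NIST challenges):

```python
# Toy PRNU-style check: estimate a camera's sensor-noise "fingerprint" by
# averaging noise residuals from images known to come from that camera, then
# correlate a questioned image's residual against it. Real pipelines use far
# better denoisers and statistical tests; this only shows the shape of the idea.
# All inputs are assumed to be same-sized grayscale numpy arrays.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=1.5)

def camera_fingerprint(reference_images):
    return np.mean([noise_residual(im) for im in reference_images], axis=0)

def fingerprint_similarity(img, fingerprint):
    a, b = noise_residual(img).ravel(), fingerprint.ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```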

Invest in new forms of deep learning-based detection approaches and focus on shared detection approaches

Detection approaches can use the same generative adversarial network (GAN)-based techniques that are used to create deepfakes and other synthetic media in order to identify fakes. To do this, they generally rely on having training data (examples) of the forgery approach. For example, forensics tools such as FaceForensics++ generate fakes using tools like FakeApp and then utilize these large volumes of fake images as training data for neural nets that do fake-detection. Google has also contributed training data in this area around synthesized audio. Other approaches look at possibilities such as the characteristic image signatures of GAN-generated media (pdf) (similar to the PRNU ‘fingerprints’ of conventional cameras). A key question around these tools is how transferable they are between evolving generation techniques.
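
A minimal sketch of how such detectors are often built is a binary classifier fine-tuned on face crops labeled real vs. fake (the directory layout and the choice of a small pretrained network are assumptions for illustration; FaceForensics++-style pipelines add face extraction, much larger datasets and cross-manipulation evaluation):

```python
# Minimal sketch: fine-tune a small pretrained CNN to classify face crops as
# "real" or "fake". Assumes an (illustrative) directory layout such as
# data/train/real/*.png and data/train/fake/*.png.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real / fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```

The transferability question raised above shows up directly here: a classifier trained on one generation technique often degrades sharply on manipulations it has never seen.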

Outside of programs like the DARPA MediFor partnership, a number of commercial companies and academic institutions are working in the area of GAN-based detection including (and not limited to) DeepTrace Labs, Faculty AI and Rochester Institute of Technology.

Utilize approaches based on tracking the origins of image elements in a fabricated media item or on tracking known fakes

Platforms, as well as independent repositories such as the Internet Archive, also have significant databases of existing images that can form part of detection approaches based on image phylogeny and provenance. These approaches trace the history of image elements and look for the re-use of elements of existing images. Entities such as the Internet Archive are now actively exploring this possibility, and it is already possible to a degree with existing browser plug-ins such as InVID and manually via similarity search for existing images.
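
A simple building block behind this kind of similarity search is a perceptual hash, which can surface near-duplicate source images even after resizing or recompression. A minimal sketch using an average hash (archive-scale systems use more robust descriptors and indexing):

```python
# Minimal near-duplicate lookup with an "average hash": downscale to 8x8
# grayscale, threshold against the mean to get a 64-bit signature, and compare
# signatures by Hamming distance. Small distances suggest a questioned image
# re-uses, or lightly edits, an already-archived image.
from PIL import Image

def average_hash(path, size=8):
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

# Illustrative usage (file paths are placeholders):
# archive = {path: average_hash(path) for path in archived_image_paths}
# query = average_hash("questioned.jpg")
# matches = [path for path, h in archive.items() if hamming(query, h) <= 8]
```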

Other alternatives being discussed by a range of entities look to compile databases (crowd-sourced and otherwise) of known fakes in order to facilitate ongoing fact-checking and debunking, e.g. WeVerify.

Develop in-browser tools and multi-tool detection arrays

Tools developed in programs like MediFor, such as the use of neural networks to spot the absence of blinking in deepfakes (pdf), could be incorporated into key browser extensions or dedicated tools like WeVerify/InVID. There are also projects in progress to develop browser plug-ins specifically for AI-based media manipulation – an example is the under-development Reality Defender tool from the AI Foundation. Other tools may follow the approach of the MediFor project to develop a dashboard showing how a range of tools and approaches assess the media integrity of a given piece of audiovisual media.

Explore tools and approaches for validating, self-authenticating and questioning individual media items and how these might be mainstreamed into commercial capture/sharing tools

There is a growing sense of urgency around developing technical solutions and infrastructures that can provide definitive answers to whether an image, audio recording or video is ‘real’ or, if not, how it has been manipulated, re-purposed or edited. An increasing range of apps and tools seek to provide a more metadata-rich, cryptographically signed, hashed or otherwise verifiable image from the point of capture. These technologies provide technical signals of where, when, and on which device an audiovisual media item was created, and measures of trust that it has not been tampered with at creation or in transit. They include apps for journalists, human rights defenders and civic journalists, as well as commercial tools aimed at a broad range of markets. Companies and apps in this space include but are not limited to Serelay, Amber, TruePic, Eyewitness to Atrocities, ProofMode and eWitness.

[Image: WITNESS mock-ups of what a ‘proof’ or ‘eyewitness’ mode on devices/Camera OS might look like]
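
To make the underlying mechanism concrete, here is a minimal hash-and-sign sketch of the kind of check these controlled-capture tools build on (the Ed25519 key, the metadata fields and the JSON bundling are illustrative assumptions, not the actual schemes used by Serelay, Amber, TruePic, Eyewitness to Atrocities, ProofMode or eWitness):

```python
# Minimal hash-and-sign at point of capture: hash the media bytes, bundle the
# hash with capture metadata, and sign the bundle with a device-held key. A
# holder of the matching public key can later check that neither the file nor
# the metadata changed in transit. Real systems add key management, secure
# hardware, trusted timestamps and handling for legitimate edits.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()

def sign_capture(media_bytes, metadata):
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,   # e.g. capture time, rough location, device model
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return record, device_key.sign(payload)

def verify_capture(media_bytes, record, signature, public_key):
    if hashlib.sha256(media_bytes).hexdigest() != record["sha256"]:
        return False
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except Exception:
        return False
```

In practice the hard questions sit around this core: protecting keys on the device, trusting the clock and location sources, and deciding what happens when media is legitimately edited or transcoded.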

An upcoming report by WITNESS looks at the pros and cons of these approaches, particularly considered in light of the aspirations of many of these solutions to be embedded from chip to sensor to camera to social media platform and sharing. This report will be released in late June 2019 (media: please contact WITNESS if interested in reviewing a pre-release version).

Other nascent approaches look at how to guarantee media integrity in outputs from news organizations – potentially drawing on signing/metadata-based approaches as well as image provenance/phylogeny.

Consider the pros and cons of an immutable authentication trail, particularly for high profile individuals

As suggested by Bobby Chesney and Danielle Citron, the concept of using lifelogging to voluntarily track one’s movements and actions, providing the potential to rebut a deepfake via “a certified alibi credibly proving he or she did not do or say the thing depicted”, might have applications for particular niche or high-profile communities, e.g. celebrities and other public figures, although not without significant collateral damage to privacy and the possibility of facilitating government surveillance. A low-tech version of this approach (i.e. film all events) has been advocated for campaign teams preparing for the 2020 US elections, as an adaptation of existing preparations to pre-empt opposition research.

Platform/social media/search engine-based approaches to detection and protection

Collaborate on shared detection tools that are available to as broad and diverse a set of users as possible

Media forensics and deep learning-based tools could also form part of platforms’, social media networks’ and search engines’ approaches to identifying signs of manipulation. Platforms have access to significant collections of images (which will increasingly include the new forms of synthetic media as encountered ‘in the wild’). Some platforms have begun to release individual training data sets of media created with synthesis techniques. However, since these deep learning-based detection tools require access to training data of new examples, platforms could collaborate on maintaining updated training data sets of new forms of manipulation and synthesis to best facilitate rapid development of responses to new forgery techniques. A key question in this respect will be how to make the resultant shared detection capability available as widely as possible to media organizations, civil society and individual consumers without compromising the ability to detect by providing another source of training data for attackers, without compromising privacy, and while recognizing that there may be commercial competition in some areas.

Assess and debate platform-based approaches (social networks, video-sharing, search, and news) that include many of the above elements as well as content moderation

Platform collaboration could include detection and signaling of detection at upload, at sharing, or at search. Approaches could include opportunities for cross-industry collaboration and a shared approach, as well as a range of individual platform solutions, from bans, to de-indexing or down-ranking, to UI signaling to users, to changes to terms-of-service (as, for example, with bans on deepfakes by sites such as PornHub or Gfycat). As noted above, there is also a critical opportunity to collaborate on shared training data.

Critical policy and technical elements here include how to distinguish malicious deepfakes from other usages for satire, entertainment and creativity; how to distinguish levels of computational manipulation that range from a photo taken with “portrait mode”, to manipulations currently possible of the speed or texture of a video, to facial expression modification, to a fully engineered face transplant; how to reduce false positives; and then how to communicate all this to regular users as well as journalists and fact-finders. As Nick Diakopoulos suggests, in relation to solutions around supporting journalism, if platforms ‘were to make media verification algorithms freely available via APIs, computational journalists could integrate verification signals into their larger workflows’.

Human rights and journalists’ experience with recent platform approaches to content moderation in the context of current pressures around ‘fake news’ and countering violent extremism (for example, Facebook in Myanmar/Burma, and YouTube’s handling of evidentiary content from Syria and its removal of critiques of Nazi and far-right content along with actual extremist content) highlights the need for extreme caution around approaches focused on takedowns of content. WITNESS’ recent submission to the United Nations Special Rapporteur on Freedom of Opinion and Expression highlights many of the issues we have encountered, and his report highlights steps companies should take to protect and promote human rights in this area.

In addition, there remain gaps in the tools available on platforms to enable solutions to other existing verification, trust and provenance problems around recycled, faked and other open-source images, and to make those tools available both to the third-party fact-checkers they work with and to others working independently. One key recommendation from both the expert convening WITNESS sponsored in June 2018 and a recent knowledge exchange focused on leading fact-checkers and journalists was that platforms, search and social media companies should prioritize development of key tools already identified in the OSINT, human rights and journalism community as critical in the fight against so-called ‘shallowfakes’, where videos are lightly altered, a soundtrack is changed, or a video is mis-contextualized as a different event. A key tool in this respect is reverse video search with some capacity for fuzzy-matching. For a detailed list of other tool needs, please see the upcoming convening report-out.
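
To illustrate what reverse video search with fuzzy matching could involve at its simplest (an illustrative sketch, not a description of any platform’s actual system), one can sample frames, hash them perceptually, and score overlap with previously indexed footage:

```python
# Rough sketch of fuzzy video matching: sample roughly one frame per second,
# compute a difference hash (dHash) per sampled frame, and measure how many of
# a questioned video's frame hashes closely match those of an indexed video.
# A high overlap suggests recycled or lightly altered footage.
import cv2

def frame_hashes(video_path, per_second=1):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    step = max(int(fps // per_second), 1)
    hashes, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            small = cv2.resize(gray, (9, 8))                   # 9 wide, 8 tall
            bits = (small[:, 1:] > small[:, :-1]).flatten()    # 64-bit dHash
            hashes.append(tuple(int(b) for b in bits))
        i += 1
    cap.release()
    return hashes

def overlap_score(query_hashes, indexed_hashes, max_distance=10):
    def close(h1, h2):
        return sum(a != b for a, b in zip(h1, h2)) <= max_distance
    hits = sum(any(close(q, c) for c in indexed_hashes) for q in query_hashes)
    return hits / max(len(query_hashes), 1)
```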

Track and identify malicious deepfakes and synthetic media activity via other signals of activity in the info ecosystem

As identified elsewhere, the best way to track deepfakes or other synthetic media may be to focus on real-time tools for identifying enhanced bot activity, detecting darknet organizing, or creating early warning of coordinated state or para-statal action. Recent reporting from the Digital Disinformation Lab at the Institute for the Future and the Oxford Internet Institute, among others, explores the growing pervasiveness of these tactics, but also the signals of this activity that can be observed. A number of platforms pursue this approach in general in their mis- and disinformation work – for example, with Facebook’s emphasis on “coordinated inauthentic behavior”.

Confront shared root causes with other dis/mal/misinformation problems

There are shared root causes with other information disorder problems around how audiences understand and share mis- and disinformation. There are also overlaps with the broader societal conversation around micro-targeting of advertising and personalized content, and around how technologies oriented towards an “attention economy” reward fast-moving content.

Identify, incentivize and reward high-quality information, and root out mis/mal/disinformation

A distributed approach could include analogies and lessons learned from cyber-security, for example, the use of an equivalent to a “bug bounty.”

Protect individuals vulnerable to malicious deepfakes by investing in new forms of adversarial attacks

Adversarial attacks include invisible-to-the-human-eye pixel shifts or visible scrambler-patch objects in images that disrupt computer vision and result in classification failures. Hypothetically, these could be used as a user- or platform-led approach to “polluting” training data around specific individuals in order to prevent bulk re-use of images available on an image search platform (e.g. Google Images) as training data that could be mobilized to create a synthetic image. Siwei Lyu at the University at Albany has been exploring how these might be deployed to counter available sources of training data for deepfakes. Other initiatives such as EqualAIs are exploring how similar tools could be used to impede increasingly pervasive facial recognition and preserve some forms of visual anonymity for vulnerable individuals (another concern of WITNESS’ within our Tech + Advocacy program).
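
For illustration, the sketch below uses the fast gradient sign method (FGSM), one standard way of generating the kind of invisible-to-the-eye perturbation described above (a generic example, not the specific techniques being developed by the researchers and initiatives mentioned):

```python
# Minimal FGSM sketch: nudge an image in the direction that increases a
# classifier's loss, bounded by a small epsilon so the change is hard to see.
# The approaches described above apply related ideas to frustrate face
# recognition or to "pollute" scraped training data.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()

def fgsm_perturb(image, epsilon=0.01):
    """image: float tensor of shape (1, 3, H, W) with values in [0, 1]."""
    image = image.clone().requires_grad_(True)
    logits = model(image)
    label = logits.argmax(dim=1)              # the prediction we want to disrupt
    loss = F.cross_entropy(logits, label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```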

Approaches within commercialized creation tools

Ensure commercial tools provide clear forensic information or watermarking to indicate manipulation

Companies such as Adobe that produce consumer-oriented video, image, and audio manipulation tools have limited incentives to build forensic traceability into the outputs of their products, since those outputs only need to be convincing to the human eye, not to machines. There should be a unified consensus that consumer video and image manipulation should be machine-readable forensically to the maximum extent possible, even if the manipulation is not visible to the naked human eye.

Another approach would look at how to include new forms of watermarking – for example, as suggested by Hany Farid, including an invisible signature in images created using Google’s TensorFlow technology, an open-source library used in much machine learning and deep learning work. Some approaches to authentication at capture, such as the eWitness project, have also been experimenting with embedding watermarks at point of capture that are resistant to compression and other degradation.

Such approaches will not resolve the ‘analog hole’, where a copy is made of a digital media item, but they might provide traces that could usefully signal many synthetic media items forensically.
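
As a toy illustration of what an invisible, machine-readable mark can look like (least-significant-bit embedding; deliberately simple and, unlike the compression-resistant schemes discussed above, it would not survive re-encoding):

```python
# Toy invisible watermark: hide a short bit string in the least significant bit
# of an image's blue channel. Robust schemes designed to survive compression
# work very differently; this only illustrates the idea of a marker that is
# invisible to the eye but readable by a machine.
import numpy as np
from PIL import Image

def embed(in_path, out_path, bits):
    pixels = np.array(Image.open(in_path).convert("RGB"))
    blue = pixels[..., 2].ravel()                       # copy of the blue channel
    blue[:len(bits)] = (blue[:len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
    pixels[..., 2] = blue.reshape(pixels[..., 2].shape)
    Image.fromarray(pixels).save(out_path, format="PNG")  # lossless, or the mark is lost

def extract(path, n_bits):
    pixels = np.array(Image.open(path).convert("RGB"))
    return [int(b) for b in pixels[..., 2].ravel()[:n_bits] & 1]
```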

Ensure commercial synthetic media tools are launched with a complementary detection tool available to the public alongside the creation tools

Adobe recently launched Content-Aware Fill for video, which uses AI to do copy-paste simulation to in-fill video in a scene. The results are easily detectable via forensic analysis in high-resolution images, but not in compressed images or to someone without technical knowledge, and the capability to do this analysis is not provided by Adobe. A useful industry consensus would focus on ensuring that any consumer tool released for synthesis and creation, without any control over who uses it, has an equally accessible tool for allowing analysis and detection of the manipulation.

Multi-stakeholder collaboration, inclusion of adversely-affected communities and ethical oversight

Support ethical codes around the use in political and NGO/non-profit campaigning of deepfakes and other synthetic media that deceptively present a representation of someone as real when it is in fact synthesized

An initial ethics code for political candidates has been promoted by the Transatlantic Commission on Electoral Integrity, a project of the Alliance of Democracies. It includes a commitment to “Avoiding dissemination, doctored audios/videos or images that impersonate other candidates, including deep fake videos.” There are possibilities of developing similar ethical principles in the non-profit campaigning sector.

Ensure communication between key affected communities and the AI industry

Those most affected by mal-uses of synthetic media will be vulnerable societies where misinformation and disinformation are already rife, where levels of trust are low, and where there are few institutions for verification and fact-checking. Many of the incidents of ‘digital wildfire’, where recycled or lightly edited images have spread and incited violence, have recently taken place in the context of closed messaging apps such as WhatsApp in India.

Most recently in the human rights space, there has been mobilization via the Global South Facebook Coalition/Next Billions Coalition to push Facebook to listen more closely to, resource, and act on real-world harms in societies such as Myanmar/Burma and Sri Lanka. These groups, and the likely risks and particular threat paradigms in these societies, need to be at the center of solutions. Similarly, solutions that resolve deepfakes issues in the US but are subject to abuse/co-option in other contexts of vulnerability need to be avoided.

Develop industry and AI self-regulation and ethics training/codes, as well as 3rd party review boards.

As part of the broader discussion of AI and ethics, there could be a stronger emphasis on training on the human rights and dual-use implications of synthetic media tools (for example, drawing on operationalization of the Toronto Principles on AI), both among major platforms and among the growing start-up community in this area. This could include parameters for discussion of AI-derived techniques of synthetic media in research papers, and support for the use of independent, empowered 3rd party review boards with genuine oversight power.

Identify mechanisms for shared information on threats between platforms and other users including media organizations and the public.

Some efforts are looking at how to ensure better communication between media, platforms and civic advocates around emergent AI-driven mis and disinformation threats.

News and information consumers

Invest in media literacy and resilience for news consumers, particularly around seamless audiovisual manipulation

There is increasing funding and support for efforts to promote media literacy around disinformation among educators, foundations and media outlets. These initiatives could further integrate common-sense approaches to spotting individual items of synthetic media (e.g. via visible anomalies such as mouth distortion that are often present in current deepfakes), as well as approaches to assessing credibility more broadly and to supporting people in how to engage with this content. This article by Craig Silverman for BuzzFeed is an example, noting some simple steps that could currently be taken to identify a deepfake: some, such as checking the source, are common to other media literacy and verification heuristics, while advice to “inspect the mouth” and “slow it down” is specific to the current moment in deepfakes detection.

NOTE: An important concern around these approaches is not to promote the current ‘Achilles heel’ of any given model, since within a few months this may be irrelevant guidance yet will stick in consumers’ heads. An example of this was the widely covered news that ‘deepfakes don’t blink‘, which was rapidly proved incorrect as a generalizable rule once creators updated their training data to account for this. WITNESS has so far not invested in developing training materials for journalists and human rights advocates precisely because of the risk of providing information that will be rapidly out-dated.

Invest in research, particularly around how to communicate invisible-to-the-eye and ear audiovisual manipulation

Research on, and responses to, misinformation and disinformation lag significantly in how to deal with visual and audio information and with non-U.S. contexts, and, in relation to deepfakes and synthetic media, in how to communicate the presence of seamless video or audio simulation, particularly to skeptical audiences.

Other work will need to build on the growing body of research on social media and disinformation to engage with a broad constituency of people in society. A recent Harvard Business Review ‘Big Idea’ article, “Truth, Disrupted,” provides a summary of recent research and approaches. More depth is provided in resources such as the Council of Europe’s “Information Disorder” report, the Social Science Research Council report on the state of the field in Social Media & Democracy, the recent scientific literature on “Social Media, Political Polarization, and Political Disinformation” and The Science of Fake News, as well as the ongoing work of Data & Society and First Draft.

Journalist and news organization oriented approaches

Build on existing efforts in civic and journalistic education and tools for practitioners

There is a range of existing journalistic, human rights and OSINT discovery and verification efforts that support practitioners in those fields to better find, verify and present open-source information, including video, audio, and social media content. New approaches to recognizing and debunking deepfakes and synthetic media can be built into the toolkits, browser extensions, and industry training provided by First Draft, the Google News Initiative and similar non-governmental and industry peers, as well as into the efforts of groups like Bellingcat and WITNESS’ Media Lab and Video As Evidence efforts.

Tools and approaches will undoubtedly also be integrated by industry leaders working in social media verification such as Storyful and the BBC.

WITNESS has been engaged in these efforts, including via a series of workshops connecting leading fact-checkers, OSINT researchers and UGC newsgatherers with leading academic experts on media forensics and deepfakes, to understand gaps and needs and how to integrate and collaborate better. WITNESS is releasing a report on necessary steps in this area on June 12th.

Reinforce journalistic knowledge and enhance collaboration around key events by supporting shared understanding, rapid identification and response in the journalism community

In response to misinformation threats, competing journalistic organizations have worked together around elections and other potential crises via initiatives such as Comprova, Crosscheck and Verificado. In preparation for potential deepfake deployment that will try to target “weak links” in the information chain in upcoming elections in the U.S. and elsewhere, coalitions of news organizations working on shared verification can integrate an understanding of deepfakes and synthetic media, threat models and response approaches into their collaboration and planning as well as coordinate with researchers and forensic investigators.

Although there is increasing discussion in public venues and OSINT/journalism spaces of how to detect deepfakes, as yet there has not been any widespread usage of deepfakes and synthetic media, while shallowfakes continue to circulate. Most training programs are, correctly, not yet focused on deepfakes detection. Organizations like WITNESS are holding back on sharing threat models, reflecting concerns that these are premature and counter-productive from an op-sec perspective.

Invest in rigorous approaches to cross-validating multiple visual sources

Approaches pioneered by groups like Situ Research with its Euro-Maidan Ukraine killings reconstruction, Forensic Architecture, Bellingcat and the New York Times Video Investigations Team utilize combinations of multiple cameras documenting an event as well as spatial analysis to create robust accounts for the public record or evidence. These approaches could overlap with improved tools for authenticating and ground-truthing eyewitness video, allowing for one authenticated video to anchor a range of other audiovisual content.
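
One common building block in this kind of multi-camera work, assumed here for illustration rather than drawn from the named groups’ specific workflows, is synchronizing recordings of the same event by cross-correlating their audio tracks:

```python
# Minimal sketch: estimate the time offset between two recordings of the same
# event by cross-correlating their audio. Once aligned, footage from different
# devices can be placed on a common timeline for spatial analysis.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def offset_seconds(wav_a, wav_b):
    rate_a, a = wavfile.read(wav_a)
    rate_b, b = wavfile.read(wav_b)
    assert rate_a == rate_b, "resample first if sample rates differ"
    a = a.astype(np.float64).mean(axis=1) if a.ndim > 1 else a.astype(np.float64)
    b = b.astype(np.float64).mean(axis=1) if b.ndim > 1 else b.astype(np.float64)
    corr = correlate(a, b, mode="full")
    lag = int(np.argmax(corr)) - (len(b) - 1)
    # positive lag: recording B started roughly lag/rate seconds after recording A
    return lag / rate_a
```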

Support internal journalist training and stress-test newsroom processes

A number of news outlets have instituted internal teams to explore how to best prepare for more advanced media forensics needs and new forms of AI-enhanced manipulation of audiovisual media. These include the Wall Street Journal and the Washington Post. Reuters and the BBC have experimented with synthetic media manipulation to assess how their internal practices spot it, and to inform audiences.

Reinforce existing journalistic efforts around protections for journalists attacked with deepfakes, noting the established attack form here

A number of journalistic protection organizations are starting to consider what an appropriate newsroom/editorial training approach would be to support journalists, particularly women journalists, facing deepfakes-based sexual content attacks.

Legal, regulatory and policy solutions

Pursue existing and novel legal, regulatory and policy approaches 

Professors Bobby Chesney and Danielle Citron have recently published “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” a paper outlining an extensive range of primarily US-centric legal, regulatory and policy options that could be considered. Legal options include new narrowly targeted prohibitions on certain intentionally harmful deepfakes, the use of defamation or fraud law, civil liability including the possibility of suing creators or platforms for content (including via potential amendments to CDA Section 230), the utilization of copyright law or the right to publicity, as well as criminal liability. Within the US there might be potential limited roles for the Federal Trade Commission, the Federal Communications Commission and the Federal Elections Commission.

Other areas to consider that have been raised elsewhere include re-thinking image-based sexual abuse legislation, as well as, in certain circumstances and jurisdictions, expanding post-mortem publicity rights or utilizing the right to be forgotten around circulated images. The options that would be available globally and in jurisdictions other than the US remain under-explored, with the exception of a recent survey published by Brainbox Institute that provides a similarly detailed analysis of options for addressing synthetic media under New Zealand law.

Recently a number of US legislative proposals have emerged in relation to these ideas. These include a state-level proposal in the New York State Assembly on the right to privacy and publicity, largely focused on actors; a bill in the California Senate focused on the likeness rights of sex industry performers as well as other non-consensual mis-use of sexual content; and a proposal in the US Senate to ban deepfakes that ‘would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual’. Representative Yvette Clarke has sponsored a series of discussions highlighting the adverse effects of deepfakes on women and communities of color, and has explored relevant legislative possibilities.

Please contact sam [at] witness.org to suggest updates for the next version of this survey.
