April 2020

Misinformation and disinformation are a critical problem for societies worldwide. In response, WITNESS's work addressing new forms of media manipulation such as deepfakes and synthetic media is focused on preparing better, and on advancing a more global, human rights-led approach to these emerging threats. Learning from previous and current mis/disinformation challenges, where critically impacted voices and necessary changes have been neglected until after the fact, we can do better. The need to be proactive has been further reinforced by the explosion of audiovisual misinformation and disinformation globally around COVID-19 (see these recent UNESCO briefs, and WITNESS's recent presentation at the Skoll Virtual Forum).

What do we need in deepfakes detection? 

A core focus of WITNESS's work is to assess the pros and cons of technology choices early on. This is critical to ensure that research, technology development, infrastructure building and policy decisions do not reinforce or exacerbate existing misinformation and disinformation dynamics, and are responsive to a broad range of the most vulnerable global users. We are actively looking at both detection and authenticity/provenance solutions – the two areas most touted as responses to deepfakes (as well as to related, existing media manipulation issues). In this blog we explore key concerns around making detection work for a diverse range of global users, and highlight recent work in this area, including insights from governance of the Deepfakes Detection Challenge.

If you’re interested in authenticity and provenance solutions, read our recent ‘Ticks or It Didn’t Happen’ report and look out for our upcoming ‘Tracing Trust’ series of videos and blogs, which further explore the trade-offs and dilemmas in building authenticity infrastructure.

Critical feedback from our global preparedness meetings

WITNESS has surfaced critical feedback on detection needs in our work with journalists and researchers, including via our convening of leading verification experts and journalists with key deepfakes researchers (report), and our convening of major media and tech companies with the Partnership on AI (PAI) and the BBC. We also led the first series of regional convenings bringing together experts from journalism, fact-checking, human rights, tech companies and communities of civic activism in Brazil, Sub-Saharan Africa and Southeast/South Asia to hear their prioritization of threats and solutions, including detection.

In these meetings we heard critical insights on the need for tools that are accessible to a wide range of media and civic actors rather than a luxury commodity, that are relevant to the real-world harm scenarios communities face today, and that produce results that are easily interpretable and useful. Participants also emphasized the need for related training and resourcing in verification and media forensics, as well as attention to existing 'shallowfake' problems.

With more granularity, expert participants in Brazil from major media, fact-checking, civic movement leadership and human rights highlighted the following:

Detection tools need to be cheap, accessible and explainable for citizens and journalists

Participants, particularly from the journalism and fact-checking world, were concerned that the nature of detection would always put journalists at a disadvantage. They already grapple with the difficulties of finding and debunking false claims, especially within closed networks, let alone new forms of manipulation like deepfakes, for which they don't have detection tools.

 

More and more investment is going into the development of tools for detecting deepfakes using new forms of media forensics and adaptations of the same algorithms used to create the synthetic media. But there are questions of who these tools will be available to, and how existing problems of 'shallowfakes' will also be dealt with. Journalists also reiterate that platforms like YouTube and WhatsApp haven't solved existing problems – you still can't easily check whether an existing video is a 'shallowfake', a video that is simply slightly edited or just renamed and shared claiming it's something else. In the absence of tools to detect the existing massive volume of shallowfakes – for example, a reverse video search out of WhatsApp – deepfakes detection is a luxury.

 

As big companies and platforms like Facebook invest in detection, they need to build tools that are clear, transparent and trustworthy, as well as accessible to many levels of journalists and citizens. The earlier that deepfakes and other falsifications can be spotted, the better.

 

A big part of accessibility is better media forensics tools that are cheap and available to all – and challenging the economic incentives that favor synthesizing falsehoods over detecting them – but this needs to be combined with journalistic capacity in new forms of verification and media forensics.

In our Sub-Saharan Africa expert convening, participants noted:

On the technical side, there was a request for better documentation to outline the range of available algorithmic detection techniques along with their uses and limitations, and to more closely integrate some already existing detection solutions into social platforms. (For example, while visual media spread via private WhatsApp channels cannot be easily searched and debunked by fact-checkers, built-in tools could enable reverse image search functionality directly from the app.)

 

Having seen the range of advanced detection techniques available to computer science researchers, participants were concerned that it would be a long time before such techniques were made available to grassroots or indigenous groups, or even media outlets, and that efforts were not being made to bridge the gap in the technical sophistication needed to implement and interpret them.

 

Media professionals stressed the need for more collaboration and resource sharing in order to respond to the threat effectively and with an efficient use of limited funds. Journalists and fact-checkers identified highly technical fields like media forensics as being an area where resources could be shared between teams and organizations. Building clear channels of communication between actors ahead of time was also highlighted as an area where improvement was needed, and would lead to a more effective response.

And in Southeast and South Asia participants (report forthcoming) noted critical needs including:

  • Making detection systems accessible

  • Building capacity for shared media forensics

  • Creating a database of experts who can help journalists identify synthetic media

Insights from the Deepfakes Detection Challenge / Partnership on AI Steering Committee on Media Integrity

The recent Deepfake Detection Challenge (DFDC) launched by Facebook, Microsoft, Amazon and the Partnership on AI has been the most high-profile public effort to incentivize detection work to date (although other, less publicized company efforts have existed previously, and the DARPA MediFor program has funded researchers for a number of years). The DFDC has now closed and the private leaderboards were recently announced.

In 2019, WITNESS joined the Steering Committee on Media Integrity launched by the Partnership on AI. The Steering Committee focused on how technical challenges must be “situated within the complex dynamics of how information is generated, spread, and weaponized in the 21st century and therefore requires cross-sector and multidisciplinary collaboration”. The Committee's first task was to provide oversight and governance of the Deepfake Detection Challenge, focused on challenge scoring and judging criteria, allocation of resources within the Challenge fund, and the terms for how participants' entries are shared and distributed. As an important note, the Steering Committee did not have input on the dataset construction phase, given the timeline on which it was formed.
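For readers less familiar with how such challenges are scored: the DFDC ranked submissions with a log-loss style metric over per-video predictions of the probability that each video is fake. The snippet below is only an illustrative sketch of that kind of metric, not the official scoring code; it also shows why a single probability score, however well it ranks competing models, is hard for a journalist to interpret or explain on its own.

```python
# Illustrative sketch of a log-loss style metric of the kind used to rank
# deepfake detection models. Not the official DFDC scoring implementation.
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    """y_true: 1 if the video is a deepfake, 0 if real.
    y_pred: the model's predicted probability that the video is fake."""
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    y_true = np.asarray(y_true, dtype=float)
    return float(-np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))

# A confidently wrong prediction is penalized far more than an uncertain one:
print(log_loss([1, 0], [0.9, 0.1]))    # ~0.105 (mostly right, fairly confident)
print(log_loss([1, 0], [0.01, 0.99]))  # ~4.6   (confidently wrong)
```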

In the context of our ongoing global preparedness work on deepfakes, and experience building a solutions-forward discussion that centers global communities already facing related harms, WITNESS prioritized the following issues when we joined the PAI Steering Committee.


  • Accessibility and potential adoptability, particularly outside the US/Europe: How accessible detection methods are to people globally, and how likely any particular detection method is to be adoptable at scale by a diversity of people, are critical questions that have been raised in our dialogues with journalists, media and civil society globally. A recent national-level convening in Brazil reinforced this need and the others outlined below.
  • Explainability of detection approaches: These detection approaches will enter an existing public sphere characterized by challenges to trust in media as well as distrust of algorithmic decision-making that is not explainable. The more black-box an approach is, the less convincing it will be to publics or useful to journalists who must explain their findings to skeptical audiences.
  • Relevance to real-world scenarios likely to be experienced by global publics, particularly outside the Global North, as well as by journalists and fact-checkers (such as manipulated images and videos that are partial fakes, compressed, 'laundered' across social media networks, and that must be evaluated and explained in real time). These concerns were highlighted in depth in the workshop WITNESS held connecting leading deepfakes researchers and leading fact-checkers.

Recently, the Partnership on AI published a report on the lessons learned through the governance of the Deepfake Detection Challenge. The report (pdf) provides critical insights into key considerations in the development of detection challenges, technologies, and tools. We were pleased to see that many of the Steering Committee recommendations on how best to govern and approach detection challenges reflect the concerns we and others on the SteerCo brought to the table, though critical concerns remain about global accessibility to avoid perpetuating existing information inequalities. We are also pleased to see PAI's commitment to resourcing and supporting concrete follow-on to these and other steps identified in previous convening work, including how to communicate manipulation to the public and how to coordinate on detection arrays.
As Claire Leibowicz, the programmatic lead in this area at PAI, notes:

These insights and recommendations highlight the importance of coordination and collaboration among actors in the information ecosystem. Journalists, fact-checkers, policymakers, civil society organizations, and others outside of the largest technology companies who are dealing with the potential malicious use of synthetic media globally need increased access to useful technical detection tools and other resources for evaluating content. At the same time, these tools and resources need to be inaccessible to adversaries working to generate malicious synthetic content that evades detection. Overall, detection models and tools must be grounded in the real-world dynamics of synthetic media detection and an informed understanding of their impact and usefulness.

The Deepfake Detection Challenge – and other commitments by companies, academics and others to shared datasets (such as Google's work in this area, FaceForensics++, a range of academic work such as Siwei Lyu's recent Celeb-DF database and numerous others, and the key efforts in the MediFor and upcoming SemaFor consortia) – are key steps. However, they are only a start as we look toward inclusive deepfakes detection that is globally accessible and embedded in attention to existing shallowfake problems like miscontextualized videos and other widespread trends in misinformation and disinformation.

Further action, collaboration and investments needed

Platforms need to pay urgent attention to existing shallowfakes problems: We consistently hear from rights defenders, journalists and civic activists about the need to focus simultaneously on emerging problems like deepfakes and on the existing, scaled problems of shallowfakes. Shallowfakes are the miscontextualized, lightly edited, and manipulated videos created with existing technology and skills. In our feedback on both policy decisions (e.g. Facebook's deepfakes policy) and on the technical and product steps needed, we stress the importance of advancing responses to shallowfakes. One technical and product need we've heard globally is making it much easier to see when an existing video has been lightly edited or simply miscontextualized and shared claiming it's something else. We've seen this misuse of audiovisual content at volume in the current coronavirus crisis – a survey of fact-checks collected by First Draft found that 59% of the videos were miscontextualized or reconfigured existing content. For example, some videos claimed that scenes of crowds or individuals being suppressed related to the virus and lockdowns despite pre-dating the crisis (seen here in Zimbabwe and China), while another claimed to show a market in Wuhan, where the virus is believed to have originated, but was in fact filmed in Indonesia. The recent 'Evidence in Action' research convening on priorities in visual mis/disinformation prioritized better reverse video search and similarity search as the first areas of focus. Such solutions must make this information easily visible in platforms (as opposed to requiring a workaround like frame grabs run through Google Image search, a technique suited to OSINT verification experts but not accessible to most people) so that consumers of content can easily see a similar original or other versions.
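There is no single standard for reverse video search, but one common building block is perceptual hashing of sampled frames, so that a re-uploaded, re-encoded or lightly edited copy of a video still matches a known original. The sketch below is a minimal illustration of that idea; the sampling interval, hash function and match threshold are assumptions for demonstration, not how WhatsApp, YouTube or any other platform actually implements (or would implement) such a feature.

```python
# Illustrative sketch of frame-level similarity search using perceptual hashes.
# The sampling rate, hash function and distance threshold are assumptions for
# demonstration only, not a description of any platform's real implementation.
import cv2                       # pip install opencv-python
import imagehash                 # pip install imagehash
from PIL import Image

def frame_hashes(video_path, every_n_frames=30):
    """Sample frames from a video and return their perceptual hashes."""
    hashes = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        index += 1
    cap.release()
    return hashes

def likely_same_source(query_path, known_path, max_distance=8):
    """Rough check: does any sampled frame of the query closely match a frame
    of a known original (small Hamming distance between perceptual hashes)?"""
    known = frame_hashes(known_path)
    return any(
        qh - kh <= max_distance
        for qh in frame_hashes(query_path)
        for kh in known
    )

# e.g. likely_same_source("forwarded_clip.mp4", "original_news_footage.mp4")
```

A production system would index hashes for enormous volumes of footage and handle crops, mirroring and overlaid text, which simple perceptual hashing does not; the point is only that the building blocks for surfacing "an earlier version of this video exists" are well understood.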

Ensure integration of detection with media literacy: A revitalized, informed approach to media literacy is critical in the context of deepfakes and shallowfakes. A key concern around deepfakes – particularly given the pace of improvement in their quality – is that 'invisible-to-the-eye manipulation' is challenging to explain to people when encountered in a social media context (rather than a Hollywood movie). How do we explain that you cannot trust your lying eyes? Projects such as the Partnership on AI/First Draft research on labelling manipulated media are investigating how to explain manipulation in a way that is helpful to consumers. Projects grappling with authenticity infrastructure approaches will also have to refine their understanding in this area. In the case of deepfakes – where it will increasingly be hard to tell with the human eye that a piece of media has been synthesized – we will need to work out how technical deepfake detection provides useful signals that can plug into and reinforce media literacy approaches like the SHEEP approach from First Draft. That is: how does telling someone there is evidence of synthetic media manipulation relate to how they apply other media literacy checks on content they are consuming or sharing? Platforms will need to draw on the best research to identify how technical signals can be made available in platforms' UX.

Think SHEEP before you share (First Draft)

Tools must be aligned with actual needs and workflows: Participants in convenings organized by WITNESS highlight a range of needs for detection tools. These include recognizing the current gap between existing forensics approaches and the needs of journalists, investigators, fact-checkers and others who use OSINT processes of verification. In terms of process, these truth-finders must deal with high-quantity, low-quality content (often compressed and 'laundered' via social media), act in real time, and work across platforms, in contexts where a forensic manipulation need not be perfect to be highly effective or to insert doubt. The WITNESS report How do we work together to detect AI-manipulated media? explores these needs in greater detail.

Key frontline investigators need capacity to actually use the tools: Most journalists, human rights investigators and other frontline truth-finders and verification experts have limited training in media forensics, or in making reasoned judgements on the basis of forensic information about manipulation. Given the constraints on media funding, participants in our Southern Africa regional convening suggested innovative approaches like funding more regional resource hubs globally and supporting more pooled resources.

Researchers, investigators and platforms need shared learning, shared detection datasets and shared tools if we are to avoid duplication and prevent forgers and media manipulators from finding “the weakest link”: Within deepfakes detection work it's critical to invest further in a number of areas. One of our key recommendations on deepfakes has been:

Now, as more companies and independent researchers invest in deepfakes synthesis and detection, we need to see a commitment to work together, and together with civil society, on a number of fronts:

  • to share datasets of new forgery techniques, both those developed specifically for detection research and those increasingly encountered 'in the wild',
  • to share understanding of attack models, and
  • to collaborate on shared and interoperable detection arrays and standards.

The ability to detect evolving forms of deepfakes and synthetic media will depend on the ability to update models with training data examples of new forms of manipulation and to rapidly integrate advances in detection into arrays. Similarly, ensuring that detection is available to a wide range of people using a range of evolving underlying detection technologies requires coordination and interoperability on the user-facing side. The upcoming series of PAI-led expert workshops to “coordinate the development, deployment, and use of synthetic and manipulated media detection models/tools” is a promising start in this area, with plans to bring together fact-checkers, journalists, and others on the front lines of mis/disinformation work, alongside researchers studying synthetic media detection, technology company representatives, and others working on information integrity challenges and solutions.
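One way to read "detection arrays" concretely is as an ensemble of independently developed detectors sitting behind a shared, interoperable interface, so that new models can be added as manipulation techniques evolve and user-facing tools do not depend on any single vendor's detector. The interface and aggregation rule below are hypothetical, sketched only to illustrate the coordination problem; they are not a proposal from PAI or WITNESS.

```python
# Hypothetical sketch of a "detection array": several independently built
# detectors exposed behind one shared interface, with a simple aggregation rule.
# The interface, field names, and averaging strategy are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DetectorResult:
    name: str          # which detector produced the score
    score: float       # estimated probability the media is manipulated, in [0, 1]
    explanation: str   # human-readable note for journalists and fact-checkers

# Each detector takes a path to a media file and returns a DetectorResult.
Detector = Callable[[str], DetectorResult]

def run_array(detectors: List[Detector], media_path: str) -> dict:
    """Run every detector in the array and combine their scores."""
    results = [detect(media_path) for detect in detectors]
    if not results:
        return {"results": [], "aggregate_score": None}
    return {
        "results": results,
        # Simple average; a real deployment would weight detectors by their
        # validated performance on the specific manipulation type in question.
        "aggregate_score": sum(r.score for r in results) / len(results),
    }
```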

Ensure synthesis tools are built with detection in mind: Investment in building tools for synthetic media creation vastly outstrips investment in detection. We need to ensure commercial synthesis products are as detection-friendly as possible, and that companies building synthesis products commit equally to the need for detecting malicious usages. The recommendations from Aviv Ovadya and Jess Whittlestone provide a good starting point in this area.

Learn more: To learn more about WITNESS's comprehensive approach to preparing better for deepfakes, see our recent presentation at the virtual Skoll World Forum, visit our dedicated site, and review our 12 Recommendations below.
