In a recent WIRED op-ed I share the story of a deepfake panic in Myanmar. Spoiler: it probably wasn’t a deepfake… but it has important implications for how we think about access to deepfake detection technology and what skills, capacities and escalation options are needed. You can read more about the story here.

“Recently the military coup government in Myanmar added serious allegations of corruption to a set of existing spurious cases against Burmese leader Aung San Suu Kyi. These new charges build on the statements of a prominent detained politician that were first released in a March video that many in Myanmar suspected of being a deepfake.

In the video, the political prisoner’s voice and face appear distorted and unnatural as he makes a detailed claim about providing gold and cash to Aung San Suu Kyi. Social media users and journalists in Myanmar immediately questioned whether the statement was real. This incident illustrates a problem that will only get worse. As real deepfakes get better, the willingness of people to dismiss real footage as a deepfake also increases. What tools and skills will be available to investigate both types of claims, and who will use them?”

For the past three years, WITNESS has led a ‘prepare, don’t panic’ approach to deepfakes and new forms of synthetic media manipulation. One key area we’ve explored is access to deepfake detection tools, accompanied by the necessary expertise and resources to use them. A critical dimension is equity in access to both deepfakes detection tools and the capacity to use them, and how this will be achieved globally for those who need it most.

In this blog post I summarize recommendations and learnings from our work on facilitating appropriate tools, access and related support for deepfakes detection. This included a phase of needs assessment and deep community consultation in 2019-20 with people who have lived and expert experience relevant to the issue, followed by research and consultation activities in 2020-21 by the Partnership on AI and WITNESS exploring how to facilitate that access and support.

What now? Recommendations

Moving forward, it’s urgent that the deepfakes research community, the broader media forensics community, and funders of technology, civil society, mis/disinformation work and journalism address these areas.

  1. Equity in access to deepfakes detection skills and capacity is critical. We need a commitment by funders, journalism educators, and the social media platforms to deepen the media forensics capacity, and the expertise in using detection tools, of journalists, rights defenders, and others globally who are at the forefront of protecting truth and challenging lies.
  2. We need to develop a standing media forensics expert capacity. This should be available to assess high public-interest claims of deepfakes, including both real deepfakes and instances where someone is using the ‘liar’s dividend’ to claim real footage is false. The Myanmar case, a rapidly escalating, contested corruption allegation against a leading politician, is exactly the type of case where this would be warranted. As a next step, we should convene key stakeholders in journalism, media forensics and academia to assess potential models for access to deeper expertise and for escalation of particularly challenging media manipulation cases outside of traditional media forensics and law enforcement contexts. Aviv Ovadya has proposed a model for an International Media Authenticity Council, and WITNESS plans to highlight a range of approaches for escalation.
  3. As identified by the Partnership on AI, we need to develop a robust set of principles and then implement a governance protocol for access to deepfake detection tools, one that sets expectations about who has access to which tools while attending to global needs and dynamics and to the urgency of providing access to critical journalists, civil society leaders and community-level fighters against misinformation globally.
  4. Within deepfakes detection tools and any governance protocol, we must identify ways to support open-access and non-commercial tools, and reconcile this with the adversarial nature of detection and synthesis.
  5. We need companies and researchers investing in media synthesis to commit resources to detection efforts, including providing commercially viable and intellectual-property-respecting input specific to their own techniques that will help support detection. This commitment can form part of emerging codes of conduct in this space, including ones WITNESS is involved in.

How do we achieve greater global equity in detection access? Context from the 2019-20 convenings

A key assumption in WITNESS’s work is that many people are eager to ‘solve’ for new forms of media manipulation. But to do this well we have to listen to the right range of people globally with lived and expert experience. We’ve convened people in the US, Brazil, Sub-Saharan Africa and South/Southeast Asia who have lived, practical and expert experience of related problems in misinformation, disinformation and gender-based violence; connected journalists, companies and researchers; and advocated for the right policies, for technology paired with access, and for new approaches to tracking trust such as authenticity and provenance infrastructure.

One key area we discussed in our convenings was how deepfake detection works, and how we ensure global equity in access to tools and skills. Participants asked:

Firstly, how well do detection solutions respond to needs outside the centers of power, and how relevant are their approaches there?

Secondly, in a world where wide access to these tools trades off against their broader utility and effectiveness, we need to purposefully consider detection equity. How can access be provided equitably, without perpetuating systemic inequalities?

Key observations from the reports from these global convenings are captured below and described in further detail in our ‘What’s Needed in Deepfakes Detection?’ blog.

  • Key observations from South and Southeast Asia expert convening
  • Key observations from Brazil-centered expert convening
  • Key observations from Sub-Saharan Africa expert convening
  • Further key observations from Sub-Saharan Africa expert convening

The background to these concerns can be summarized in the following terms, which have been corroborated in other convenings and research by WITNESS and in work by the Partnership on AI and its Synthetic Media and Manipulated Media discussion cohorts:

  • Relevant: How relevant are detection approaches to real-world problems?
  • Available: How available and accessible are detection approaches to end-users?
  • Useful: How interpretable, explainable and useful are detection results?
  • Complementary: Are tools being developed with media literacy integration in mind, and with attention to how users make sense of results that contradict the ‘evidence of their eyes’?
  • Capacity: Are tools being paired with capacity-building?

We also heard loud and clear that participants expected more attention to current problems. They wanted platforms, technologists and researchers to address existing problems such as mis-contextualized and lightly-edited shallowfakes, for example with better reverse video search and tracking of existing fakes across platforms.
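To make that request concrete: much of this tracking rests on perceptual hashing, which lets a known fake be recognized even after re-encoding, resizing or re-uploading. Below is a minimal sketch of keyframe hashing of the kind that underpins reverse video search; it is not any platform’s actual pipeline, and the file names, sampling rate and distance threshold are illustrative assumptions.

```python
# Minimal sketch: index video keyframes with perceptual hashes so a
# known fake can be recognized even after re-encoding or resizing.
# Requires: pip install opencv-python pillow imagehash
import cv2
import imagehash
from PIL import Image

def keyframe_hashes(video_path, every_n_frames=30):
    """Return a perceptual hash for every Nth frame of a video."""
    hashes = []
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        frame_idx += 1
    cap.release()
    return hashes

def matches_known_fake(candidate_hashes, known_fake_hashes, max_distance=8):
    """Flag a match if enough keyframes fall within a small Hamming
    distance of a previously indexed fake. Thresholds are illustrative."""
    hits = sum(
        1
        for h in candidate_hashes
        if any(h - k <= max_distance for k in known_fake_hashes)
    )
    return hits >= max(1, len(candidate_hashes) // 2)

# Hypothetical usage: compare a newly shared clip against an indexed fake.
known = keyframe_hashes("indexed_fake.mp4")
candidate = keyframe_hashes("newly_shared_clip.mp4")
print("Likely re-upload of a known fake:", matches_known_fake(candidate, known))
```

Because perceptual hashes tolerate small pixel-level changes, this kind of matching survives the compression and cropping that clips typically undergo as they move across platforms, which is exactly what makes it useful for tracking existing fakes.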

There was also a fundamental concern with the liar’s dividend and its implications in undermining trust in video and images, particularly in content from people without power, e.g. citizen journalism. It places additional pressure on journalists, fact-checkers and truth-finders to counter exhausting claims of fakery, and to prove content true (not just prove it false).

Identifying approaches to detection equity, access and skills support: 2021 feedback

In early 2021, WITNESS teamed with the Partnership on AI to conduct focus groups on key questions around detection equity and access. These involved a subset of the journalists, activists, fact-checkers, mis/disinformation specialists and community-based leaders who had participated in previous WITNESS expert convenings for Brazil and across Sub-Saharan Africa.

We framed our discussion in terms of the Deepfake Detection Dilemma identified in the recent Partnership on AI paper that was peer-reviewed and presented at this year’s ACM AIES conference.

The Deepfake Detection Dilemma posits that as tools to detect deepfakes and other synthetic media begin to be developed, civil society organizations and journalists do not have the access that platforms and researchers have. However, wide access to this technology may make it less effective, because it would also provide adversaries with important information they could use to evade detection. Under these circumstances, how can access and support be facilitated equitably, broadly and for diverse populations?

Three core questions, mirrored in the takeaways below, formed the heart of our discussion within focus groups of 12-15 participants drawn from media, fact-checking, community activism and human rights defense, and efforts against misinformation and disinformation: who gets access to detection tools, what access should be provided and how, and how to deepen expertise and escalation options.

Here’s what we heard…

Key Takeaways: Who gets access to deepfake detection tools? 

  • Ensure support for a) journalists and fact-checkers who need these tools in their daily work, b) societally-trusted leaders, and c) community organizations and community-level misinformation medics facing ongoing threats and attacks
  • Identify contextually appropriate intermediaries and escalation options, e.g. industry entities like the International Fact-Checking Network (IFCN), the Brazilian investigative journalism network Abraji, or networks like First Draft and WITNESS
  • Ensure that open source and open access solutions are widely available, as well as non-commercial options
  • Both restricted and open access tools need to be accompanied by relevant media literacy and interpretation skills

Key Takeaways: What access needs to be provided and how? It must be controlled AND open, accounting for exclusion and inadequate intermediaries

  • While the default access assumption that WITNESS and PAI presented, and toward which many participants inclined, focused on controlled access to a detection mechanism via an intermediary organization, there also need to be open-access options, particularly to provide access for the most vulnerable and most under-resourced
  • Calls for open access are legitimate: they relate to concerns about exclusion at the community level and of the most vulnerable, to worries about commercial tool dominance, and to deeply rooted global concerns with supporting free (and open-source) software and open access.
  • Calls for open access, and wariness about intermediary mechanisms, also reflected caution about naive assumptions regarding intermediaries and escalation options (for example, assuming that dominant media houses in particular countries are trusted, or overlooking their linkages to government and commercial power). A critical question in any context is: who are the societally-trusted intermediaries?

Key Takeaway: Deepen and diversify access to expertise and training, and create mechanisms for escalation

  • Without training in interpreting results and in broader media forensics, there is not much value in tool access
  • Intermediaries supporting others will need deeper training
  • What are the escalation mechanisms, and to whom do they lead? There will be a need for escalation mechanisms to qualified media forensics capacity that can respond to the hardest cases rapidly, in a streamlined and credible manner.

Many of these considerations were reinforced in the case from Myanmar that opens this blog. In that case, amateur sleuths and online commentators ran the suspect video through an online deepfake detector and got back a positive result with over 90% stated certainty, indicating a fake. Yet they lacked any knowledge of how deepfake detection tools work, and of the circumstances under which they fail. They had no recourse to expertise or experts to corroborate their findings.
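To see why that number demanded expert interpretation, consider what a headline score like ‘over 90% fake’ usually is: an average of per-frame model probabilities. The sketch below is an illustrative assumption about how such detectors typically aggregate scores, not the API or output of any real tool, and the per-frame values are invented.

```python
# Illustrative sketch of how many deepfake detectors arrive at a
# headline number: average a per-frame P(fake) and compare to a cutoff.
import statistics

def aggregate_verdict(per_frame_scores, threshold=0.5):
    """Average per-frame fake probabilities and compare to a cutoff.
    This is all a headline number like '92% fake' usually means."""
    mean_score = statistics.mean(per_frame_scores)
    return mean_score, mean_score > threshold

# Hypothetical per-frame outputs from a face-forgery classifier.
scores = [0.95, 0.91, 0.88, 0.97, 0.90]
mean_score, flagged = aggregate_verdict(scores)
print(f"Mean P(fake): {mean_score:.0%}, flagged: {flagged}")

# Why this needs expert interpretation:
# - Compression, low light or unusual faces can inflate P(fake) on
#   authentic footage (out-of-distribution input).
# - The classifier only knows the forgery methods it was trained on;
#   a novel method may score low despite being fake.
# - The number explains nothing by itself; provenance checks and
#   forensic review are still required.
```

The point is that the ‘over 90%’ figure in the Myanmar case was a model confidence, not a forensic finding, and the distorted, heavily compressed video was precisely the kind of input on which such confidences are least trustworthy.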

Key takeaways from that recent real-world case reinforce the concerns of participants in our convenings:

  • Detection needs to be relevant, available, accessible, useful, complementary to user comprehension, and matched with capacity and skills
  • Open-access tools without support or contextualization of how to use them are more dangerous than useful
  • A limited public understanding of ‘deepfakes’, including both how deepfakes are synthesized and how they are detected, perpetuates misunderstanding of detection results
  • Lack of journalistic and media skills in media forensics or assessing media manipulation makes it hard for journalists to help discern whether something is faked
  • Civil society and journalists have no way to rapidly reach out to engage expert capacities
  • There is no escalation capacity for complex media forensics
  • All of this takes place in the context of the liar’s dividend, which increases the capacity to claim something is a deepfake and forces others to prove it is true.
