May 2020

TLDR: Coronavirus has dramatically increased the stakes for how we deal with manipulated, fake and deceptive video and audio online, with governments, companies and publics responding to the need to discern truth from falsehood. One solution to misinformation and disinformation is to better track what is authentic, what is manipulated and how. Here we explain what this looks like and explore key questions raised by this move to ‘authenticity infrastructure’, which includes ‘verified capture’ apps like TruePic and ProofMode and initiatives like the Adobe Content Authenticity Initiative. Drawing on key dilemmas from our recent ‘Ticks or It Didn’t Happen’ report, we look at these questions and describe what an optimal future looks like:

  • Whose voices are accidentally or deliberately excluded or chilled? Who needs privacy and anonymity?
  • On whom is the burden of proof increased?
  • What if the ‘ticks’ don’t work or they work too well for media consumers?
  • Who abuses the data and the tools?
  • What pressures are placed on platforms, journalists and media?
  • How do we explain and account for science that is complicated and fallible?

The core question we need to ask now is this: As we move from opt-in and niche authenticity infrastructure to more widespread efforts driven by governments, platforms and public demand… what do we have to do to ensure that authenticated capture or tracking of content authentication and modification at scale doesn’t harm, and in fact enhances, freedom of expression and trust?

——-

WITNESS has been working on this issue for a decade – most recently leading the first globally inclusive meetings on how to handle new forms of manipulation like deepfakes. One approach that is gaining momentum is building more robust ways to track whether images, video and audio have been manipulated, miscontextualized or edited, and when and by whom. At WITNESS we’ve been describing this trend as a move to ‘authenticity infrastructure’.

Our recent report ‘Ticks or It Didn’t Happen: Key Dilemmas in Building Authenticity Infrastructure for Multimedia’ looks at fourteen key trade-offs we need to consider at an early stage rather than try to fix later.

It’s not easy right now being a citizen journalist, human rights defender or even mainstream media in most of the world. Elected leaders claim you are fake news and misuse the pressures of the COVID-19 global pandemic to drive further restrictions on media and free speech, while the media industry itself is facing devastating financial losses. These attacks aren’t just on the most marginal in society. In the expert meetings on preparing for deepfakes in Brazil, South Africa and Southeast Asia this past year, many journalists in mainstream media pointed out how their work is being undermined.

The human rights activists, lawyers, media outlets and journalists around the world who we work with often depend for their lives on the integrity and veracity of the images they share from conflict zones, marginalized communities and other places threatened by human rights violations. They want their media to be trusted and believed. Video evidence and diverse civic journalism matter, and the past decade has enabled ever greater numbers of people to show the realities of their lives and the violations they face. This is what’s at stake as we discuss authenticity infrastructure: the opportunity to enhance the trustworthiness of these critical truth-tellers in our media and society, or, conversely, the risk of getting it wrong and creating further dangers for them. We are now at a critical moment because of emerging trends in authenticity infrastructure.

There are a growing number of actual tools and critical discussions occurring in this area. These include a burgeoning range of tools for verified capture of audiovisual media at source, including commercial tools like TruePic, Amber, Serelay, Starling and others, as well as open-source and public-purpose tools like ProofMode, Tella, Eyewitness to Atrocities and others. There is also an emerging set of approaches to tracking provenance and attribution of mainstream media images, like the Provenance Project. We are also glad to see more attention on the need for better reverse video search and similarity searching, and on embedding this more visibly in platforms to help people know when something is not what it claims to be. Finally, we have the Content Authenticity Initiative from Adobe, the New York Times and Twitter, launched in late January 2020 (at which a version of this blog was presented as a talk), pushing for shared standards and infrastructure.
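For readers less familiar with what ‘tracking provenance’ means in practice, the sketch below is a toy illustration in Python of an edit-history manifest: each time the media is changed, a new entry records the resulting file hash, who made the change and what kind of change it was, linked to the previous entry. It is emphatically not the Content Authenticity Initiative’s specification (which is still being developed); the field names and file names are hypothetical.

```python
# Toy illustration of provenance tracking: an append-only edit history where
# each entry is tied to the previous one by a hash. Not the Content
# Authenticity Initiative's specification -- just the general idea.
import hashlib
import json
import time
from typing import List


def file_sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def add_entry(history: List[dict], media_path: str, actor: str, action: str) -> List[dict]:
    """Append a provenance entry that references the previous entry's hash."""
    prev = history[-1] if history else None
    entry = {
        "media_sha256": file_sha256(media_path),
        "actor": actor,        # e.g. a capture app, a photo editor, a newsroom
        "action": action,      # e.g. "captured", "cropped", "blurred faces"
        "timestamp": int(time.time()),
        "prev_entry_sha256": (
            hashlib.sha256(json.dumps(prev, sort_keys=True).encode()).hexdigest()
            if prev else None
        ),
    }
    return history + [entry]


# Usage sketch (file names are hypothetical):
# history = add_entry([], "original.jpg", "capture-app", "captured")
# history = add_entry(history, "cropped.jpg", "photo-desk", "cropped")
# print(json.dumps(history, indent=2))
```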

The core question we need to ask now is this: As we move from opt-in and niche approaches to more widespread ones driven by governments, platforms and public demand, what do we have to do to ensure that authenticated capture or tracking of content authentication and modification at scale doesn’t harm, and in fact enhances, freedom of expression and trust?

Key dilemmas we’re bringing to discussions like the Content Authenticity Initiative

WITNESS has now joined the Working Group established by Adobe and the other partners in the Content Authenticity Initiative to look at content authenticity questions and develop a white paper on potential standards. We believe it’s critical that human rights groups and groups looking from a global perspective are part of these emerging technology discussions from the very start, and we are glad to see Adobe and others taking this position too. Here are a few of the key dilemmas we are emphasizing in this and related discussions.

Whose voices are accidentally or deliberately excluded or chilled? Who needs privacy and anonymity?

We have to be careful about how we intertwine decisions around trustworthiness with usage of particular technology. People may opt out of using tools because of fear of what it reveals about them, or what it reveals about others around them. This fear is not unfounded. We constantly see people we work with trying to navigate complex daily decisions about visibility and anonymity – about exposing themselves by shooting video, and exposing others by including them in a video. As just one example from WITNESS’s work, people who film police violence in Rio (as ordinary observers) have been hounded from their homes when they are identified. Journalists worldwide live in fear of doxing by extremists and their own governments. Bad actors routinely target and retaliate against people who expose their wrongdoing. From years of human rights work we know the option for anonymity and pseudonymity is critical to protecting free expression. Any system of authenticity infrastructure must assume good-faith reasons for not using a tool, as well as good-faith reasons not to track some kinds of edits (e.g. blurring the faces of vulnerable individuals, or obscuring some location details). Any system must also not assume that a default, persistent real identity is essential to trusting media.

Other times the exclusions will be arbitrary and tech-driven. Remember Pokemon Go? A few years back it stopped working for millions of people in the Global South who used jailbroken or rooted phones. It was a step to preserve the integrity of the game, but it excluded the many people who use rooted phones out of necessity in countries like Myanmar. Pokemon Go also did not work when people had poor GPS or connectivity. The exclusions of Pokemon Go should not be the model for how we intertwine perceived trustworthiness and truth with access to tech.

We need to ask what technical constraints might stop these tools from working where they are needed most. Who might be included and excluded because of basic questions of older or jailbroken tech, battery life, GPS and connectivity? Any approach to a widespread standard must consider how it handles people who generate media on older devices, opt out of using tools, or create media offline. They can’t, by default, be assumed to be less credible going forward.

This is not to say that critical truth-tellers do not want these tools. My team has also spent much of the last decade contributing to building a field focused on enhancing trust in citizen media, so-called “video as evidence”. We’ve also collaborated with the Guardian Project to build open-source verified capture tools and libraries like “ProofMode”, a lightweight app running in the background that allows you to choose to add rich metadata, and to cryptographically sign and hash any photo or video you take on your phone. That image can then be more easily found, verified and trusted by journalists and investigators, with additional confirmation that it comes from you and has not been tampered with. Because that’s the flip side of the risk that many brave journalists and activists take in showing what is wrong: the existential fear that no one is listening or watching. Many of them do want tools that help them be trusted and authenticated – but they want control over when, where and how those tools are used.
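To make that idea concrete, here is a minimal sketch of the hash-and-sign pattern behind verified capture, assuming Python and the widely used cryptography library. It is not ProofMode’s actual code or data format; the function names and metadata fields are illustrative only. The media file is hashed, bundled with whatever metadata the filmer chooses to include, and signed with a key that stays on the device, so anyone holding the matching public key can later confirm the file is unchanged.

```python
# Minimal sketch of "verified capture": hash a media file, bundle the hash
# with opt-in metadata, and sign the bundle. Illustrative only -- not
# ProofMode's implementation.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def make_proof(media_path: str, private_key: ed25519.Ed25519PrivateKey,
               metadata: dict) -> dict:
    """Hash the media file and sign the hash plus capture metadata."""
    with open(media_path, "rb") as f:
        media_hash = hashlib.sha256(f.read()).hexdigest()
    bundle = {
        "media_sha256": media_hash,
        "captured_at": int(time.time()),
        "metadata": metadata,  # e.g. device info, GPS -- only if the user opts in
    }
    payload = json.dumps(bundle, sort_keys=True).encode("utf-8")
    return {"bundle": bundle, "signature": private_key.sign(payload).hex()}


def check_proof(media_path: str, proof: dict,
                public_key: ed25519.Ed25519PublicKey) -> bool:
    """Verify the signature and that the file still matches the signed hash."""
    payload = json.dumps(proof["bundle"], sort_keys=True).encode("utf-8")
    try:
        public_key.verify(bytes.fromhex(proof["signature"]), payload)
    except InvalidSignature:
        return False
    with open(media_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == proof["bundle"]["media_sha256"]


# Usage sketch: the private key never leaves the filmer's device.
# key = ed25519.Ed25519PrivateKey.generate()
# proof = make_proof("clip.mp4", key, {"app": "example-capture"})
# assert check_proof("clip.mp4", proof, key.public_key())
```

The design point that matters for the dilemmas above is that, in a scheme like this, every field in the metadata bundle is optional: the same approach can prove integrity while omitting location or identity entirely.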

On whom is the burden of proof increased?

Next we must ask: where, and on whom, is the burden of proof increased? The so-called CSI effect and ‘reverse CSI effect’ describe how that ubiquitous TV series created expectations of what scientific proof would be produced in trials. In courtrooms, authenticity infrastructure will both help and hinder access to justice, depending on the capacity of judges, prosecutors and defendants. From our experience globally, this is a fragile proposition.

For the media, authenticity infrastructure raises the pressure to get it right, and the costs of maintaining forensic control. For activists, the assumption that you need verified content may de facto exclude those who, as described above, must choose not to, or cannot, use authenticity technologies, creating the so-called ‘ratchet effect’.

What if the ‘ticks’ don’t work or they work too well for media consumers?

So what if the ticks don’t work, or work too well, at the consumption end? We know that simplistic signals, like the ticks used to indicate ‘verified users’, are easily misinterpreted by consumers and can reinforce System 1 thinking that makes us jump to easy conclusions. They also risk creating a spillover effect, in which surrounding content that doesn’t carry a verification or authentication mark receives more suspicion, irrespective of whether it is actually manipulated. We need to plan how we’ll avoid these impacts.

We also know that the primary problem right now is not sophisticated fakery but the tens of thousands of videos circulated with malicious intent worldwide. These shallowfakes are crafted not with sophisticated AI or skillful editing; they are often simply relabeled and re-uploaded, claiming an event in one place has just happened in another. Like a video of a person being burned alive that I’ve seen recycled and re-attributed in Ivory Coast, South Sudan, Kenya, Burma and beyond, each time inciting violence. Or else videos that have simple edits, manipulations or a new audio track. A similar pattern has occurred with COVID-19 misinformation and disinformation. In these cases, the actual content is authentic and un-manipulated; it’s the context that is deceptive – for example with the multiple shares and re-contextualizations of an original video of violence. Content authenticity per se can be a red herring in this context: what matters more is showing similar or previous versions. This is where more readily accessible reverse video and image search or similarity search, presented in platforms (rather than as an OSINT investigator work-around), is critical.
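As a rough illustration of what similarity search involves, the sketch below computes a simple perceptual ‘average hash’ of an image (or a video keyframe) and compares two hashes by Hamming distance; near-duplicate copies that have merely been re-encoded, resized or re-uploaded tend to land within a small distance of each other. Real platform systems are far more sophisticated; this assumes Python with the Pillow library, and the file names and threshold are purely illustrative.

```python
# Sketch of near-duplicate detection with a perceptual "average hash":
# shrink an image (or video keyframe) to 8x8 grayscale, compare each pixel
# to the mean, and match copies by Hamming distance between the bit strings.
# Illustrative only -- production similarity search is far more robust.
from PIL import Image  # pip install Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Return a size*size-bit perceptual hash of the image at `path`."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bit positions where the two hashes differ."""
    return bin(a ^ b).count("1")


# Usage sketch: compare a frame from a newly shared clip against an archive.
# known = average_hash("archive_frame.jpg")
# shared = average_hash("new_upload_frame.jpg")
# if hamming_distance(known, shared) <= 10:  # threshold is illustrative
#     print("Likely a re-upload of previously seen footage; check its original context")
```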

Who abuses the data and the tools?

The fears of people who may choose to opt out of using these tools are justified. The tools being built could be used to surveil people: more data on people, including location, engagement with media, and correlations between different images via shared visible or data traits, can be very dangerous and easily abused. This type of surveillance can of course occur covertly or overtly, by design and by accident.

Already we see legislation around the world that aims to further control speech online, to define what is true and false according to a government declaration, as in Singapore, and to define and constrict who is a journalist or is allowed to share on social media, as in Tanzania. The COVID-19 pandemic has further justified a flood of new regulations along these lines. It does not take an unrealistic leap of imagination to see how this type of technology could be weaponized against journalists and dissidents via the ‘fake news’ laws that now exist in Singapore and a number of other countries. Often these claims will be accompanied by the old false adage that the only people who need privacy are those who have something to hide.

What pressures are placed on platforms, journalists and media?

All of this will place additional pressure on social media companies to integrate measures of their own that support censorship or reflect inadequately applied and inappropriate automation, and on already stretched news outlets, large and small, to authenticate media to satisfy consumers, regulators and perceived needs. Although many news outlets are looking to these types of provenance measures (for example the New York Times), the capacity to integrate them will vary with the budget and size of the outlet, and such measures are undoubtedly preferable as voluntary, not obligatory.

How do we explain and account for science that is complicated and fallible?

Finally, we need to recognize that deciphering media forensics and trust is complicated and fallible, particularly with any new technology. How will we decode and explain new media forensics questions to the public, and allow for the possibility that the infrastructure will get it wrong and people will need to “appeal” false positives and wrong decisions?

——-

So what should we do next? WITNESS is not prescriptive about a single technical approach, and we have joined the Content Authenticity Initiative working group to make sure these dilemmas are addressed.

We need to work towards authenticity infrastructure that maximizes the benefits of these approaches while mitigating harms, and that respects international human rights law. An optimal future is one where we respond to and center the needs of the people globally who may benefit most and be harmed most by these technologies. One where these technologies are a signal of authenticity, not the signal, and where they point towards trust rather than confirm that you should trust something. They have to be explicitly an option for creators to use, and we need to push back against such authenticity infrastructure becoming a de facto or legal obligation in order to be trusted or visible on platforms and elsewhere. Technically, we need an ecosystem of tools for independent verification, with user control over privacy at multiple levels.

If we do this right, we can protect and enhance these voices – journalists, civic activists and small media outlets worldwide. If we do it wrong, we risk making things harder for them, and for many other media-makers and smaller outlets around the world.

A version of this blog was presented as a talk at the launch of the Content Authenticity Initiative (CAI). This work also benefited from the tremendous contribution of WITNESS’s Mozilla Fellow, 2018-19, Gabi Ivens.

Look out for an upcoming video series – ‘Tracing Trust’ – from my colleague Corin explaining how we think about authenticity infrastructure.

 
