Read this article in Spanish here.
Co-authored by Sam Gregory.
TLDR: Building on WITNESS's focus on developing authenticity infrastructure that works globally and supports civic journalism and human rights uses and values, we have been part of the Coalition for Content Provenance and Authenticity (C2PA) specifications development effort. Right now we are focused on identifying and trying to address potential harms from this growing area of technology, and we continue to proactively solicit input into this process.
WITNESS’s work on provenance and authenticity infrastructure
At the beginning of 2021, we at WITNESS laid out our priorities around truth, lies and social media accountability. One area we focused on was the need to defend truth and facts in the face of deceptive content, misinformation and disinformation. As we reform and improve social media, and confront hate and mis/disinformation, we must avoid 'throwing the baby out with the bathwater': we must preserve the much-needed increase in diversity and voices brought by the digital revolution, while recognizing the weaponized, unequal space that is the online world.
WITNESS has been working for more than a decade on how we defend the diverse realities of lived and filmed experience and evidence, while confronting deceptive media. Most recently, we led the first globally-inclusive meetings on how to handle new forms of manipulation like deepfakes. One approach to confronting both deepfakes and shallowfakes that is gaining momentum is building more robust ways to track the origins of images, video and audio: showing whether they have been manipulated, mis-contextualized or edited, and if so, how, when and by whom.
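The core idea behind such tracking can be pictured with a minimal sketch: hash an asset, then sign a claim that records who did what to it and when, so later edits or substitutions become detectable. This is an illustration of the general principle only; real systems such as C2PA use signed manifests with certificate-based identities, and the HMAC secret key and function names below are hypothetical stand-ins.

```python
import hashlib
import hmac
import json

# Illustrative sketch only: a "provenance claim" binds an asset's content
# hash to edit metadata (who, when, what changed). The shared secret key
# is a stand-in for a real certificate-based signature.
SECRET_KEY = b"demo-signing-key"  # hypothetical key, for illustration

def make_claim(asset_bytes, author, action, timestamp):
    """Hash the asset and sign a claim describing its origin or an edit."""
    payload = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "author": author,
        "action": action,       # e.g. "captured", "cropped", "color-corrected"
        "timestamp": timestamp,
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return payload

def verify_claim(asset_bytes, claim):
    """Check the signature, and that the asset still matches its recorded hash."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    serialized = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and hashlib.sha256(asset_bytes).hexdigest() == claim["asset_sha256"])
```

A chain of such claims, one per edit, is what lets a verifier see how, when and by whom a piece of media was changed, rather than just whether it was changed at all.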
At WITNESS we have been describing this trend as a move to 'provenance and authenticity infrastructure', and have built on our experience helping communities create trustworthy information, as well as deal with misinformation, to prioritize a human rights-led, global approach. Collaborating on tools like ProofMode with the Guardian Project has informed our work to ensure this infrastructure works better for human rights, and enables rather than disempowers critical voices. We continued this work with our report 'Ticks or It Didn't Happen: Key Dilemmas in Building Authenticity Infrastructure for Multimedia', which pinpointed fourteen key issues to consider at an early stage of the development of this infrastructure, rather than trying to fix them later. You can find a summary of these key issues in our Tracing Trust blog and video series.
Our experience, and our deliberate intention to influence these initiatives at an early stage from globally driven human rights perspectives and practical experiences, shaped our involvement in the Content Authenticity Initiative, where we successfully pushed for these perspectives to be reflected. Most recently, it has guided our efforts within the Coalition for Content Provenance and Authenticity (C2PA), where technical specifications for tracing the provenance and authenticity of digital assets, such as video and images, are being developed.
From niche to potentially systemic use of infrastructure
The C2PA is led by Adobe, Arm, Intel, Microsoft, BBC, Truepic and Twitter, and it is the most consolidated effort toward more widespread and potentially systemic use of provenance and authenticity infrastructure.
Considering this, the core question we need to ask now is this: As we move from opt-in, niche authenticity infrastructure, such as that pioneered by human rights defenders, to more widespread efforts driven by companies, governments, platforms and public demand, what must we do to ensure that measures at scale for understanding the integrity, provenance and changes of media do not harm human rights and vulnerable communities globally, and in fact protect privacy, sustain trust and enhance freedom of expression as well as other rights?
For us at WITNESS, an optimal vision for how to do this right is one where we respond to and center the needs of people globally who may benefit most, and be harmed most, by these technologies; one where these technologies are signals toward trust, not a confirmation that you should believe something. They must be explicitly optional, and must not require identity as their fundamental basis. Technically, we need an ecosystem of tools for independent verification, with user control over privacy on multiple levels. We need to push back against such infrastructure becoming a de facto or legal obligation for being trusted or visible on platforms and elsewhere, or being weaponized by disingenuous 'fake news' or security law-making.
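The 'optional, not identity-based' principle can be pictured as a provenance record whose identity fields are separable from its content-integrity check, so the person sharing media decides what to disclose. This is a hypothetical sketch of that design principle under simplified assumptions, not the actual C2PA redaction mechanism, and all function names below are invented for illustration.

```python
import hashlib

# Hypothetical sketch of user-controlled disclosure: integrity is checked
# against the asset hash alone, so identity fields can be omitted
# (redacted) by the user without breaking verification.

def make_record(asset_bytes, identity=None):
    """Create a provenance record; identity metadata is strictly opt-in."""
    record = {"asset_sha256": hashlib.sha256(asset_bytes).hexdigest()}
    if identity:
        record["identity"] = identity
    return record

def redact_identity(record):
    """Return a copy safe to share publicly: integrity data only."""
    return {k: v for k, v in record.items() if k != "identity"}

def content_intact(asset_bytes, record):
    """Verify the asset against the record, with or without identity fields."""
    return hashlib.sha256(asset_bytes).hexdigest() == record["asset_sha256"]
```

Because verification never depends on the identity fields, a human rights defender could, in a design like this, prove a video is unaltered while withholding who filmed it.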
In an effort to proactively address these and other human rights concerns at an early stage of the C2PA, WITNESS has not only advocated for key areas in the design of the specifications but also argued for proactive steps to prioritize harms assessment and human rights impact assessment. Most recently, this has involved co-chairing the Threats and Harms Task Force, where we have been leading efforts to understand potential systemic harms and risks to users and the broader society. We want to ensure that a human rights framework is bolstered, and that global issues are heard and reflected in the design of these specifications.
It is an important step to see these principles (and others focused on issues including privacy) affirmed by other members in the Guiding Principles of the C2PA: “C2PA specifications MUST be reviewed with a critical eye toward potential abuse and misuse of the framework”, and “C2PA specifications MUST be reviewed for the ability to be abused and cause unintended harms, threats to human rights, or disproportionate risks to vulnerable groups globally.” We have also seen a significant number of areas of potential concern addressed by the design of the specifications, and championed by other members of the C2PA.
Confronting potential harm early
WITNESS's current focus as Co-Chair of the Threats and Harms Task Force has been a harms and misuses assessment: identifying potential systemic harms to both users and broader society (with particular attention to vulnerable and marginalized groups) that could be caused by the intended use, misuse or abuse of the specifications, and developing a strategy for preventing or mitigating these risks. For WITNESS, this builds on our work observing critical dilemmas in practice, on those identified in our 'Ticks or It Didn't Happen' report, and on research and our global convenings around new forms of media manipulation in Latin America, Sub-Saharan Africa, Asia, the US and Europe.
The assessment process has included external feedback sessions with people with a broad range of lived, practical and technical experiences, coming from different parts of the world and working across areas that include civic media, human rights, misinformation and disinformation, activism, technology advocacy and accountability, and digital rights. These conversations have informed, and continue to inform, the development of the technical specifications, as well as the accompanying documentation, which includes guidance for implementers, guidance on UX, security considerations, and an explainer to sensitize the general public.
The harms assessment has also highlighted the need to monitor the impact of the specifications, and to develop mechanisms that reflect an evolving landscape and address as-yet unidentified and unmitigated threats and harms.
As we head towards the publication of version 1.0 of this standard, WITNESS continues to reach out for feedback on potential harms and misuses, and possible mitigations.
Published on 2nd December 2021.