Takeaways/TL;DR:

  • The Partnership on AI, in collaboration with WITNESS and other key allies, has published the Responsible Practices for Synthetic Media Framework, which offers guidelines for developing, creating, sharing, and publishing synthetic media ethically and responsibly.
  • The Framework is directed towards those building technology and infrastructure for synthetic media, those creating synthetic media, and those distributing or publishing synthetic media. 
  • Building on our work under the ‘Prepare, Don’t Panic’ initiative, including our regional workshops, WITNESS has been part of this process from the start, helping to shape the Framework so that it reflects the threats identified by at-risk communities globally and defines expectations and responsibilities in ways that prevent harm and promote accountability.

As technology advances, synthetic media is becoming increasingly sophisticated and accessible. From generating realistic images with DALL-E 2 to using a TikTok filter to seamlessly add a realistic layer of makeup to your face, artificial intelligence (AI) is transforming the way we create, edit, and share images and videos.

While synthetic media has the potential to be a valuable tool for human rights defenders and civic journalists globally, it is rapidly gaining ground without targeted ethical and regulatory frameworks, at a time when a crisis of trust already permeates the digital sphere. As rampant mis- and disinformation leave us questioning the authenticity of the images and videos we see online, there are serious consequences for communities around the world whose lives may depend on the credibility and trustworthiness of the images and videos they share from conflict zones, marginalized contexts, and other places threatened by human rights violations.

In an effort to introduce systems that can help us grapple with these ethical questions in ways that protect human rights, the Partnership on AI (PAI) has introduced the Responsible Practices for Synthetic Media Framework, which offers considerations and guidelines for developing, creating, sharing, and publishing synthetic media.

Building on our work under the ‘Prepare, Don’t Panic’ initiative, including our workshops in Brazil, Sub-Saharan Africa, Southeast Asia, the US, and other locations and contexts, WITNESS has been part of this process from the start in order to inform and shape the Framework in a way that reflects the threats identified by at-risk communities globally, and to help determine the expectations and responsibilities of stakeholders in a way that counters synthetic media panic and prepares for the changing digital information landscape.

You can read PAI’s Responsible Practices for Synthetic Media Framework here.

Preventing harm at every stage of the synthetic media pipeline

The Framework lays out in detail specific responsible practices for the stakeholders acting at different points of the synthetic media pipeline. This includes those developing the underlying technologies that tool-makers and creators employ, such as neural networks and generative models, as well as those building the interfaces, the platforms hosting the content, other distribution channels such as news media, and those creating or editing the actual content.

This is a reminder that, as these technologies continue to improve, media consumers cannot be expected to identify content as synthetically created or manipulated unless relevant actors enable adequate signals before the content is published and shared. It also serves to emphasize that content creators cannot be the only ones accountable for potential harm, and that each stakeholder holds a piece of the puzzle necessary for the responsible deployment of synthetic media.

One of the mechanisms listed in the Framework that exemplifies this multi-stakeholder approach is disclosure: the act or process of making it known that a piece of media has been created or edited with artificial intelligence. Disclosure can happen in a variety of ways, including viewer-facing mechanisms such as labeling content and less visible ones that rely on provenance-capturing technologies, but each requires every actor in the pipeline to do their part.

For provenance-capturing technologies to work, for example, developers and trainers of AI models could be required to track the sources of their datasets, tool-makers to build functionality that adds provenance information to their outputs, content creators to refrain from maliciously tampering with the metadata, and distribution channels to offer user experiences that let viewers explore and evaluate the source and history of media.
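To make this chain concrete, here is a deliberately simplified sketch in Python of how a provenance manifest might travel with a piece of media. It is an illustration only, not the C2PA specification or any real standard: actual provenance systems embed cryptographically signed manifests with far richer structure, and every name and key in this sketch is hypothetical.

```python
# A minimal, hypothetical sketch of provenance metadata travelling with a
# media file. This is NOT the C2PA specification -- real standards use
# signed, embedded manifests with far richer structure.

import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"demo-key"  # stand-in for a real cryptographic identity


def make_manifest(media_bytes: bytes, tool: str, actions: list[str]) -> dict:
    """Record what was done to a piece of media and by which tool."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "tool": tool,
        "actions": actions,  # e.g. ["generated", "color-corrected"]
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the media still matches its manifest and the signature holds."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    intact = manifest["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return hmac.compare_digest(signature, expected) and intact


if __name__ == "__main__":
    media = b"...synthetic image bytes..."
    manifest = make_manifest(media, tool="hypothetical-generator", actions=["generated"])
    print(verify_manifest(media, manifest))            # True: chain intact
    print(verify_manifest(media + b"edit", manifest))  # False: tampering detected
```

In a real deployment, the signing key would belong to a verifiable identity, the manifest would be embedded in or cryptographically bound to the file itself, and viewers would inspect it through the kind of user experience described above.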

As each of these disclosure mechanisms can come with its own ramifications, including potential threats to human rights or disproportionate risks to vulnerable groups globally, the different actors across the pipeline must not just enable disclosure, but do so after assessing and preparing for misuse, abuse, and unintended harm. Continuing with the example of provenance-capturing technologies: WITNESS, as part of the C2PA, has led a Harms, Misuse and Abuse Assessment to identify potential harms and lay out existing and potential solutions that can help avert and mitigate them as these technologies are deployed.

A Framework for the good, the bad, and the gray areas of synthetic media

Synthetic media opens up creative possibilities for satire, art, and other forms of expression. However, there are no clear-cut lines for distinguishing genuinely satirical or artistic content from harmful or malicious content, or for recognizing when responsible content is labeled as harmful or malicious in order to stifle freedom of expression.

Recognizing the complexity of understanding and regulating this space, the Framework seeks to strike a delicate balance by opening up a space for continuous discussion. It also links to WITNESS and MIT’s Just Joking report to help spur the conversation and promote solutions that can be ‘intent-agnostic’, as may be the case with the use of disclosure mechanisms.

WITNESS has already been expanding on this conversation in our Just Joking Action Plan 2023 by framing the labeling and disclosure of media provenance as a creative opportunity for creators: another tool that can be leveraged to better transmit a message, rather than a stigmatizing indication of mis/disinformation.

The Framework is also a living document that should continually change to reflect developments in the synthetic media landscape and feedback from signatories and other stakeholders. The extent to which it can incorporate ethical considerations that shed light on these gray areas, including how global human rights standards can be applied to satire, will be one of the barometers by which it is measured as it evolves.

An important step towards human rights-oriented guidelines for synthetic media

It is important to highlight that this is not a legal framework — it does not replace the need to develop legislation, internal policies and other enforcement mechanisms that promote accountability and protect and promote human rights. 

There is an urgent need to center within these processes the communities that, while most vulnerable to the negative impacts of synthetic media, are removed from centers of tech decision-making. Doing so can pre-empt the harms of overlooking context-specific implementations, varying levels of synthetic media literacy, and uneven global access to labeling, provenance, and other emerging tools.

While we are still in the early phases of the development and regulation of AI tools for generating and editing audiovisual content, it is critical that the people and communities most at risk from synthetic media participate in establishing targeted and enforceable human rights-based principles. It is these principles that should provide a framework for tackling potential harms, including those arising from satire and other gray areas of synthetic media use. This Framework is just one step in that direction.

For WITNESS, this is also a reflection of much of what we have learned over the years working alongside communities facing similar issues. It is one of the early building blocks delineating the ethical boundaries that should shape these rapidly evolving technologies, and it is a reference point for our continued grassroots-driven, human rights-centered advocacy aimed at technology companies and other influential stakeholders.

If you are interested in exploring other ways in which WITNESS is preparing for synthetic media and fortifying the truth, visit our ‘Prepare, Don’t Panic’ site here, and stay tuned for updates on our continued engagement in this area.

 

February 28, 2023
