Written by Raquel Vazquez Llorente, Jacobo Castellanos, and Nkem Agunwa.

In March 2023, WITNESS hosted over 20 journalists, fact-checkers, technologists, policy advocates, creators, human rights defenders and community activists from six African countries (Uganda, Kenya, Ghana, South Africa, Nigeria and Zambia) in Nairobi, Kenya. In our two-day workshop, we discussed the threats and opportunities that generative AI and synthetic media bring to audiovisual witnessing, identifying and prioritising collective responses that can have a positive impact on the information landscape.

This workshop is part of ongoing efforts at WITNESS to engage with communities that conduct crucial human rights work, and to bring their needs and risks into the shaping of emerging technologies. Our pioneering work convening these critical voices on deepfakes and synthetic media started in 2019 in South Africa, Brazil and South-East Asia, and continued in 2020 in the United States. We have also conducted other online consultations over the years.

An accelerated threat landscape for human rights with echoes across countries 

Despite the hype around deepfakes and synthetic media, shallow fakes (media that has been edited without AI-enabled tools) are still the main source of concern for communities at the frontlines of human rights. The main overarching concern echoed in Nairobi is no different from what we have heard over the past years across continents: threats from synthetic media will disproportionately impact those who are already at risk because of their gender, sexual orientation, profession, ethnicity or membership of a social group.

As in previous workshops, participants worried that synthetic media, or the mere idea of it, could be misused by those in power to dismiss true information by claiming it is false (the so-called liar’s dividend). Participants also worried about the use of synthetic media to “poison the well”, that is, to discredit journalists, activists, and civil society organisations, along with the factual content they disseminate.

These actions would place additional strain on already under-resourced local newsrooms, fact-checkers, and community leaders responsible for verifying digital content. The discussions also revealed concerns that perceived misuse could prompt restrictive laws that curtail the legitimate potential of synthetic media. Such laws might be used to suppress free expression and dissent, threatening civic debate and information sharing.

Over the past few months, software for creating synthetic media or sophisticated manipulation has become more accessible. With the wider availability of generative AI tools, more people have had the opportunity to engage with synthetic media and imagine, or experience, how it could impact their lives.

In Nairobi, participants highlighted how the ability to create higher volumes of deepfake content could bring risks to the integrity of elections and democratic systems. They also prioritised public health misinformation, emphasising how the Covid-19 pandemic showed the level of harm that can come from the spread of mis- and disinformation, and the difficulty of accessing trustworthy information or being able to identify it online.

We also heard how synthetic media amplifies concerns about new forms of gender-based violence and the reinforcement of biases and stereotypes, as well as about its potential to escalate ethnic, religious, and political divisions in different parts of the continent.

Opportunities of synthetic media and generative AI for human rights advocacy

Participants in the workshop discussed how synthetic media could help enable creative content that advances human rights while protecting the privacy of people at risk and preventing harassment or persecution. On several occasions, participants alluded to the use of AI-generated avatars as a way to protect the identity of individuals such as whistleblowers or activists, and to create content that can connect with an audience. One example of this is the Welcome to Chechnya documentary, which protects the identities of the individuals interviewed with AI-based facial replacement techniques.

Even in situations where targeted attacks and other risks are less of a concern, generative AI can be a powerful tool to make certain sectors of society more visible and raise awareness of discrimination, as the Elders Series by Malik Afegbua shows. Additionally, immersive experiences can foster empathy and move people to action, and AI chatbots can help human rights campaigns reach larger audiences.

What’s needed from technology companies, legislators and digital policy advocates

Building on our consultations since 2019, participants in Nairobi prioritised a number of actions to counter the potential threats posed by synthetic media:

  • Media literacy: expanding our understanding for more inclusive policy-making  

Media literacy campaigns should inform the public about what synthetic media is, and what is (and is not) possible with new forms of multimedia manipulation. These initiatives can help prepare the public to view and consume media more critically, without adding to the rhetoric around generative AI. Media literacy should also be a vehicle for empowering individuals and communities to engage with governments, civil society and companies to develop responses and solutions that reflect their needs, circumstances and aspirations. In this regard, media literacy campaigns are critically important and a precursor to effective and inclusive public policy-making.

  • Pipeline responsibility: developing tools and processes for provenance and authenticity that are driven by accessibility and human rights-centred transparency 

Responses and solutions should not place the burden of responsibility on the end-users of synthetic media and generative AI tools, or on consumers of digital content. Instead, these responses should set expectations across the pipeline, including foundation model researchers, tool makers, distribution platforms and other upstream stakeholders such as legislators and regulators. These actors should bear the responsibility of guaranteeing transparency in how a piece of media is created or manipulated as it circulates online, ensuring that media consumers are effectively informed about the nature of the content they are consuming.

More importantly, any solution should include input from global stakeholders, with an eye towards defending human rights and protecting privacy. In particular:

  1. Provenance technologies and watermarking solutions should consider the potential harms these responses can cause to groups such as activists, whistleblowers and others whose security may be imperilled if their identities are disclosed.
  2. Policy templates that are designed alongside impacted groups, incorporating the feedback of local experts and experience, can help newsrooms and archives develop their own media literacy and resiliency against threats of synthetic media.
  3. Detection tools that can help discern whether a piece of content has been AI-generated or digitally manipulated should be accessible to those who can leverage them to meet the needs of marginalised communities, while preventing the tools from being rapidly undermined by wide and unrestricted access.

  • Transnational collaborations: building networks to influence legislative efforts, technical infrastructure and platform policies

Alliances can help civil society organisations ‘punch higher’. Participants noted that, despite targeted advocacy and some efforts to leverage existing networks, they have not been able to influence legislation and policy. Well-organised networks can help digital advocates, communities and activists gain the credibility and resources that are often required to get ‘into the room’.

One specific strategy for these networks to influence such spaces is to fill gaps by producing evidence-led, foundational research, and by communicating the findings effectively, for example via policy briefs. Similarly, regionally led networks would be well placed to monitor the tendency to copy legislation from Europe or the United States without proper consideration of the local context, and could also take note of China’s influence in the African digital space.

Where do we go from here? From threat mitigation to accountability for harms

Across the board, the groups we have consulted over the years agreed that not enough is being done by technology companies and legislators to include them in the design, deployment and regulation of synthetic media tools and the broader generative AI landscape. Participants clearly agreed that any strategy to respond to the threats and opportunities of synthetic media should not rely exclusively on actions taken by end-users. For all the solutions discussed, there is a key role to be played by governments, technology companies, social media platforms and news organisations in developing regulation, policies and other responses that tackle threats without placing the responsibility on content creators or consumers, and that incorporate the expertise of diverse global stakeholders.

This idea is also reflected in calls from different stakeholders, including the United Nations via its Guiding Principles on Business and Human Rights, various civil society organisations and, more recently, emerging regulation from legislative bodies. Companies should be required to conduct comprehensive human rights assessments prior to the deployment of AI models and tools, including those that facilitate the creation of synthetic media or synthetically manipulated media.

Although this has been recognised by leading companies and individuals in this space, and increasingly in emerging regulation and soft law, what is worth underscoring from the workshop is that creating these threat-mitigating mechanisms is not enough. Ultimately, these upstream stakeholders should be accountable for the harm they cause, or fail to prevent, as synthetic media infrastructure and tools are deployed.

How WITNESS is addressing the threats and opportunities of synthetic media and generative AI

WITNESS is engaging with communities that conduct crucial human rights work to bring their needs and risks into the shaping of emerging technologies like generative AI and synthetic media. As we engage in these conversations, here are some examples of the questions we are trying to answer, and the actions we are taking:

  • How do we ensure that tracking at scale of how content is created and modified doesn’t harm freedom of expression, privacy and trust? 

One solution to mis- and disinformation is to track how media is created and manipulated. Authenticity infrastructure has important implications for how trust is assigned to online accounts, and for where the burden of proof is placed to show origin, prove media is untampered, or confirm manipulation. Expectations of provenance and technical markers of authenticity must not be leveraged against vulnerable populations that cannot, or choose not to, use them.
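For readers who want a concrete sense of what a provenance marker involves, the Python sketch below is a purely illustrative, hypothetical example; it is not the C2PA specification (which uses certificate-based signatures and a much richer manifest format). It shows the core idea of binding a hash of the media and its edit history into a signed manifest, and why identity fields should stay optional so that at-risk creators are not forced to disclose who they are.

```python
# Illustrative sketch only: NOT the C2PA specification. Real provenance systems
# use certificate-based signatures; an HMAC with a demo key stands in here.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # placeholder for a real signing credential

def create_manifest(media: bytes, actions: list[str], disclose_creator: bool) -> dict:
    """Bind a hash of the media and its edit history into a signed manifest.

    `disclose_creator` models the human rights concern discussed above:
    identity fields must be optional for at-risk creators.
    """
    manifest = {
        "content_hash": hashlib.sha256(media).hexdigest(),
        "actions": actions,  # e.g. ["captured", "cropped"]
        "creator": "named-creator" if disclose_creator else "anonymous",
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(media: bytes, manifest: dict) -> bool:
    """Check that neither the media nor the manifest has been altered."""
    claims = dict(manifest)
    signature = claims.pop("signature")
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claims["content_hash"] == hashlib.sha256(media).hexdigest())

if __name__ == "__main__":
    media = b"raw video bytes"
    manifest = create_manifest(media, ["captured"], disclose_creator=False)
    print(verify(media, manifest))      # True: intact media and manifest
    print(verify(b"edited", manifest))  # False: any edit breaks the binding
```

Any edit to the media changes its hash and breaks the binding, which is what lets a consumer confirm manipulation; keeping the creator field optional is one way such infrastructure can avoid being leveraged against vulnerable users.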

WITNESS has contributed to the development of emerging authenticity infrastructures with a human rights lens, including leading the threats and harms assessment for the Coalition for Content Provenance and Authenticity (C2PA, founded by Adobe, Arm, BBC, Intel, Microsoft and Truepic). WITNESS is also exploring emerging technical responses to promote transparency in the provenance of media, for instance labelling deepfakes.

  • Who has access to tools for detecting deepfakes and under what terms? 

Several technology companies have attempted to create and share AI-powered tools for detecting synthetic media. While imperfect and susceptible to adversarial dynamics, detection tools can contribute to a healthier online information ecosystem. However, the distribution of these solutions is unequal. Access to the tools, and the capacity to use them, should be provided equitably around the world, especially in contexts and regions where civil society, journalists, and activists are on the frontlines of protecting truth and challenging lies.

WITNESS is currently piloting a Deepfake Rapid Response Force that allows members of the IFCN (International Fact-Checking Network) to escalate cases of suspected deepfakes and get a timely assessment of the authenticity or origin of the content. WITNESS also continues to urge companies and other stakeholders to invest in media forensics and detection equity, for instance by pushing for more accessible reverse video search capabilities.
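As a rough, hypothetical illustration of the reverse search idea (not WITNESS’s pilot or any company’s actual tooling), the sketch below uses the open-source Pillow and imagehash Python packages to match a suspect video frame against an index of known originals; a small perceptual-hash distance suggests the frame is a re-encoded or lightly edited copy of trusted footage.

```python
# Hypothetical reverse-search sketch: matches a suspect frame against known
# originals via perceptual hashing. This is not a deepfake detector.
# Assumes third-party packages: pip install pillow imagehash
import imagehash
from PIL import Image

def build_index(known_frames: dict[str, str]) -> dict[str, imagehash.ImageHash]:
    """Hash a set of trusted reference frames, keyed by source label."""
    return {label: imagehash.phash(Image.open(path))
            for label, path in known_frames.items()}

def reverse_search(suspect_path: str,
                   index: dict[str, imagehash.ImageHash],
                   max_distance: int = 8) -> list[str]:
    """Return sources whose perceptual hash is close to the suspect frame.

    Subtracting two hashes gives their Hamming distance; a small distance
    survives re-encoding and light edits, unlike exact cryptographic hashes.
    """
    suspect = imagehash.phash(Image.open(suspect_path))
    return [label for label, h in index.items() if suspect - h <= max_distance]

# Example (file paths are placeholders):
# index = build_index({"original-broadcast": "frames/original.png"})
# print(reverse_search("frames/suspect.png", index))
```

A match does not prove manipulation on its own, but it points a fact-checker to the original footage for comparison, which is where tools like this can support under-resourced verification teams.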

Become part of the response: a call for researchers, technology companies, legislators, policy makers, activists, journalists, creators and campaigners

WITNESS will continue to consult globally with journalists, fact-checkers, technologists, policy advocates, creators, human rights defenders and community activists about the threats and opportunities that generative AI and synthetic media bring to audiovisual witnessing, and the responses that these communities prioritise to build a stronger information landscape. 

You can also follow the conversation online using #GenAIAfrica and read the full report from the WITNESS workshop in Nairobi, March 2023.

If you want to be part of the conversation, or have suggestions for collaborations, please get in touch with us by emailing Raquel Vazquez Llorente: raquel@witness.org 

Published on 17th May 2023.
