[Read this blog in Spanish]

In August 2023, WITNESS hosted over 25 journalists, fact-checkers, technologists, policy advocates, creators, human rights defenders and community activists from across South America and Mexico in Bogotá, Colombia. In our two-day workshop, we discussed the threats and opportunities that generative AI and synthetic media bring to audiovisual witnessing, identifying and prioritizing collective responses that can have a positive impact on the information landscape.

This workshop is part of ongoing efforts at WITNESS to engage with communities that conduct crucial human rights work, to ensure that the development of emerging technologies reflects their global needs and risks. Our pioneering work convening these critical voices on deepfakes and synthetic media began in 2019 with convenings in South Africa, Brazil and South-East Asia, and continued in 2020 in the United States. In addition to online consultations, we more recently held a similar workshop in Nairobi, Kenya.

Elections, gender-based violence and attacks against civil society remain among the major concerns voiced by participants

The use of synthetic media and generative AI in elections was one of the main concerns echoed during the workshop in Colombia. Participants noted an increasing prevalence of cases and pointed to various examples, such as the English-language synthetic news anchors of a fake media network that supported the Maduro regime, the #yonosoyJanTopic trend in Ecuador, and the deepfake favoring the Mexican opposition presidential candidate, Xóchitl Gálvez.

While some noted that there may be responsible and legitimate ways of using artificial intelligence during elections, participants were wary of how these technologies can amplify the impact of information operations: a form of disinformation characterized by organized, coordinated efforts to manipulate citizens and alter elections.

Gender-based violence is still the most immediate harm from deepfakes and synthetic media, and there is a risk that easier access to better technologies will exacerbate it. During the workshop, participants raised further concerns: how these tools could be used against children and minors, as illustrated by an example one participant presented; how they deter women from participating in the public sphere; how they can be used to gaslight (a form of abuse that causes someone to doubt their perceptions or sanity); and, more generally, how they reflect and perpetuate patriarchal narratives such as stereotypical gender roles.

Also discussed as a major concern was the use of synthetic media to attack and undermine civil society and independent media. Where trust is lacking, nefarious actors or those in power have an opportunity to influence the public by dismissing truthful content as fake (the liar’s dividend) or by amplifying disinformation via information operations. Participants identified a need to create legal and technical infrastructure that can balance the scales in the face of economic and political power asymmetries.

Among other threats that stood out in the workshop were the environmental impact of AI, a reduction of diversity in online content, and the undermining of community practices of content creation and attribution—described as spaces of gathering (espacios de encuentro) that do not reflect or respond to commercial interests, and whose processes are slower and necessarily open and collaborative. 

Beyond specific threats, one of the general conclusions from this workshop, echoing our workshop in Kenya, is that threats from synthetic media will disproportionately impact those who are already at risk because of their gender, sexual orientation, profession, ethnicity or membership in a social group. In discussing threats, opportunities and responses, participants identified a need to focus attention on at-risk groups specifically rather than address concerns generally.

Synthetic media and generative AI offer opportunities, but they need to be developed with guardrails

Participants discussed various ways in which synthetic media and generative AI could help protect human rights and bolster their specific causes. As in other convenings, participants recognized that these technologies can facilitate and speed up the creation of hyper-realistic multimedia content.

WITNESS’s presentation on using generative AI for human rights advocacy, which details examples of how the technology can be used to protect the identity of at-risk individuals, to visualize testimonies, and for satire and artistic expression, also resonated with workshop participants.

In addition to helping create content, one participant emphasized the analytical potential of AI, which could support other stages of creating or using audiovisual content, such as research, or subsequent archival practices like categorization and interpretation.

Participants did clarify that none of these opportunities are inherently positive. The ease and speed of content creation, for example, can be useful in specific circumstances, like when time is of the essence, but in many other cases it can be counterproductive, as in community film-making which, as noted previously, is often purposefully slow. More notably, perhaps, the same AI tools used to analyze videos in defense of human rights can also be used for surveillance and control.

This double-edged nature of synthetic media means that any AI and synthetic media strategy needs to be intersectional; that is, it needs to recognize that these technologies are built upon existing structural injustices, such as colonialism and a significant digital divide, which may undermine well-intentioned efforts. For example, attempts to address biases in AI models are laudable, but if they are not led by the people and communities who are underrepresented, they could end up causing more harm.

Promoting inclusivity in tech development with regional networks

A recurring theme throughout the two days of the workshop was the need for Latin America, and especially marginalized communities within the region, to ‘appropriate’ AI technologies: to own them and to make them their own. During the event, participants framed this community-led generative AI and synthetic media both as an objective and as a process through which these technologies can live up to their potential while mitigating or averting most of the existing and potential harms mentioned above.

Although a specific definition of community-led generative AI and synthetic media was not discussed, workshop participants invoked it in reference to the ideal of local communities retaining, through democratic processes, critical influence over the technology: over how it is developed, regulated and used. Given current possibilities, this could mean, for example, that local communities work with foundation model developers to deploy fine-tuned AI under innovative systems of community-led governance.

Recognizing, however, that representation is also needed upstream, where the foundation models used around the world are being developed, participants highlighted the need for more voices from the global majority to have a seat at the table. This requires creating more spaces within standardization bodies, companies, international organizations and even certain legislative bodies of the Global North to hear and understand how these technologies affect people and communities globally.

For participants, this also meant that, within Latin America, there is a need for more structured coordination and cooperation, especially among civil society, not only to present a more compelling stance in external processes that affect the region, but also to offset the lack of dedicated resources, political commitment and institutional support.

Takeaways for WITNESS’s strategy to ‘Fortify the Truth’

For WITNESS, this reaffirms our efforts to continue engaging with communities that conduct crucial human rights work to bring their global needs and risks into the shaping of emerging technologies like generative AI and synthetic media.

The workshop in Bogotá also reaffirmed the importance of media literacy for the general public and news media as a first line of defense against harms from synthetic media. It underscored that comprehensive, long-term solutions should include transparency methods (such as watermarks and media provenance) that protect privacy and freedom of expression, as well as access to tools that can detect synthetic media. It also emphasized the need to assign clear responsibilities across the spectrum of stakeholders involved in the synthetic media ecosystem. Our guiding principles and recommendations on deepfakes, synthetic media and generative AI, and our broader areas of intervention to strengthen the information ecosystem, are in step with these conclusions.
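For technically minded readers, the sketch below illustrates the core idea behind cryptographic media provenance: a signer hashes a media file and signs the digest, so anyone holding the matching public key can later detect whether the file was altered. This is a deliberately simplified illustration under our own assumptions, not an implementation of C2PA or any specific provenance standard; it assumes the third-party Python cryptography library, and the function names and sample bytes are hypothetical.

```python
# Simplified sketch of cryptographic media provenance (not C2PA itself):
# sign a hash of the media at capture time, verify integrity later.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(media: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Hash the media file and sign the digest, creating a provenance record."""
    return private_key.sign(hashlib.sha256(media).digest())


def verify_media(media: bytes, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Recompute the digest and check it against the original signature."""
    try:
        public_key.verify(signature, hashlib.sha256(media).digest())
        return True   # unchanged since it was signed
    except InvalidSignature:
        return False  # altered, or signed with a different key


# Hypothetical usage: a newsroom signs footage when it is captured.
key = Ed25519PrivateKey.generate()
footage = b"...raw video bytes..."
record = sign_media(footage, key)
assert verify_media(footage, record, key.public_key())
assert not verify_media(footage + b"tampered", record, key.public_key())
```

Real provenance standards go further, binding signed metadata about who created the media, when, and with what edits into the file itself, but the tamper-evidence shown here is the foundation they build on.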

Become part of the response: a call for researchers, technology companies, legislators, policy makers, activists, journalists, creators and campaigners

WITNESS will continue to consult globally with journalists, fact-checkers, technologists, policy advocates, creators, human rights defenders and community activists about the threats and opportunities that generative AI and synthetic media bring to audiovisual witnessing, and the responses that these communities prioritize to build a stronger information landscape. 

You can also read the full report from the WITNESS workshop in Bogotá, August 2023 (available in Spanish). 

If you want to be part of the conversation, or have suggestions for collaborations, please get in touch with us by emailing Raquel Vazquez Llorente: raquel@witness.org
