In August 2024, WITNESS hosted over 40 journalists, fact-checkers, technologists, policy advocates, creators, human rights defenders and community activists from across the Asia-Pacific region in Bangkok, Thailand. Over the course of a two-day workshop, we discussed the threats and opportunities that generative AI and synthetic media bring to audiovisual witnessing, identifying and prioritizing collective responses that can have a positive impact on the information landscape.

This workshop is part of ongoing efforts at WITNESS to engage with communities that conduct crucial human rights, media and civil society work, to ensure that the development of emerging technologies reflects their global needs and risks. Our pioneering work convening these critical voices on matters related to deepfakes and synthetic media began in 2019 in South Africa and Brazil, and continued in 2020 in the United States and Southeast Asia. In addition to online consultations in 2021-22, we have more recently held similar workshops in São Paulo, Brazil; Nairobi, Kenya; and Bogotá, Colombia.

The full report of the workshop can be found here.


Context considerations for Asia Pacific

Previous consultations have underscored the need to understand the context within which synthetic media and generative AI are emerging and becoming more widely developed and used. 

In discussing what currently threatens freedom of expression, assembly, and association in the region, participants highlighted that intimidation and censorship, online harassment, and data security were of particular concern, and they pointed to mass arrests and enforced disappearances in Bangladesh; blasphemy laws and online harassment, particularly against women journalists, in Pakistan; and legal and extrajudicial suppression of dissent in Thailand, as examples (see the CIVICUS Monitor 2023 for more information on the threats against freedom of expression, assembly, and association in APAC).

Participants noted that rampant disinformation has had a significant impact on trust and truth in the information ecosystem, and they agreed that synthetic media and generative AI were already part of this expanding challenge. Other areas of concern included insufficient and uneven levels of digital literacy, challenges to independent media, and social media platforms that are inadequately regulated and have harmful content moderation policies, all of which add to an already burgeoning crisis of trust.

While discussing what stands in the way of equity and inclusion, participants described concerns around economic exclusion and the marginalization of religious and ethnic minorities, LGBTQI+ individuals, and women. As in previous consultations, participants argued that these people and communities, along with activists, civil society, and journalists, were the most at risk from the threats posed by synthetic media and generative AI.

Identifying and Prioritizing Threats

The context described above framed the discussions on identifying and prioritizing threats from synthetic media and generative AI.

One of the more striking conclusions from these discussions was the concern around the accessibility and ease of use of generative AI tools: not as a problem in and of itself, but as the basis for manipulation, deception and confusion in the information ecosystem. Participants noted that the ability to create sophisticated content at volume could be misused deliberately or, even without malicious intent, used in ways that ultimately undermine truth. This was well reflected in our ‘Spectrogram’ exercise, where participants identified ‘synthetic histories’ as the number one threat from synthetic media.

It is worth stressing, as participants did, that this has serious implications for fact-checkers, journalists, and anyone speaking truth to power. It creates an additional burden to develop mechanisms and strategies to prove the authenticity of their content to audiences, while also enabling those who seek to deceive or manipulate to plausibly cast doubt on the truth or present falsehoods as fact.

The weaponization of synthetic media and generative AI to attack and undermine activists, civil society, and journalists was, again, prioritized as a major threat. Within these groups, women were seen as particularly vulnerable to AI-facilitated sexual and gender-based violence. Throughout the event, participants brought up numerous examples that already point to a growing trend, especially in the context of elections, where synthetic media has been used to threaten, intimidate and undermine women candidates.

Interestingly, and in contrast with what seems to be top of mind among tech companies and technologists, especially in the United States, ‘Agentic AI’ (AI systems designed to autonomously make decisions, adapt to their environments, and pursue goals with minimal human intervention) was not a major concern among participants.

Taking action

When considering responses to the threats from synthetic media, participants emphasized the need to focus on building foundations that can support responses to an ever-evolving landscape. For example, establishing clear mechanisms for collaboration between governments and legislators, technology companies, journalists, and civil society organizations was reiterated on numerous occasions as the basis for effective solutions.

Multi-stakeholder collaboration could guide the creation of more locally relevant ethical and legal frameworks; it could facilitate strategic litigation aimed at ensuring accountability across the pipeline of actors in the synthetic media ecosystem; and it could help put in place time-sensitive and scalable solutions for fact-checking, including through more trustworthy and accessible detection tools.

As hinted above, another recurring theme during the workshop was the need to ‘move the conversation away from the west’. In coming up with responses to threats from synthetic media, it is necessary to bring different voices, especially those most vulnerable, to the table. Specific actions include creating new tools, or adapting existing ones, to meet the language needs of minorities, and involving community leaders in education and fact-checking processes as a way to promote trust.

Technical mechanisms for disclosure and transparency, such as provenance, watermarking and fingerprinting, were also discussed extensively, and there was general agreement that these will play a critical role in addressing the threats discussed previously. These, too, need to be built on the foundations discussed in this section; that is, they need to be designed and implemented with input and participation from different communities in the region, and they need to ‘fortify the truth’ while addressing privacy and accessibility concerns.
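To make the fingerprinting idea concrete, the sketch below is a simplified, illustrative example in Python (assuming the Pillow imaging library is installed; it does not represent any specific tool or standard discussed at the workshop). It computes a basic perceptual fingerprint: an image is reduced to a small grid of brightness bits, so resized or re-encoded copies of verified media produce near-identical signatures that can later be matched. The file names in the usage comments are hypothetical.

```python
# Illustrative sketch of perceptual media "fingerprinting" (average hash).
# Assumes Pillow is installed; this is a teaching example, not a production tool.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Compute a compact perceptual fingerprint of an image.

    The image is shrunk to size x size greyscale pixels; each pixel
    brighter than the mean contributes a 1 bit. Near-duplicates
    (resized or re-encoded copies) yield very similar bit patterns.
    """
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests the same media."""
    return bin(a ^ b).count("1")

# Hypothetical usage: flag a social media copy of a verified photo.
# original = average_hash("verified_photo.jpg")
# candidate = average_hash("social_media_copy.jpg")
# if hamming_distance(original, candidate) <= 5:
#     print("Likely a copy or minor variant of the verified photo")
```

Unlike a cryptographic hash, a perceptual fingerprint like this tolerates small changes, which is why it is useful for tracing copies of authentic footage; real deployments combine it with provenance metadata and watermarking rather than relying on any single signal.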

Become part of the response: a call for researchers, technology companies, legislators, policy makers, activists, journalists, creators and campaigners

WITNESS will continue to consult globally with journalists, fact-checkers, technologists, policy advocates, creators, human rights defenders and community activists about the threats and opportunities that generative AI and synthetic media bring to audiovisual witnessing, and the responses that these communities prioritize to build a stronger information landscape. 

If you want to be part of the conversation, or have suggestions for collaborations, please get in touch with us by emailing Jacobo Castellanos: jacobo@witness.org
