In May 2024, WITNESS convened over 25 journalists, fact-checkers, technologists, policy advocates, creators, human rights defenders and community activists from different parts of the country in São Paulo, Brazil. In our two-day workshop, we discussed the threats and opportunities that generative AI and synthetic media bring to audiovisual witnessing, identifying and prioritizing collective responses that can have a positive impact on the information landscape.

This workshop is part of ongoing efforts at WITNESS to engage with communities that conduct crucial human rights work, to ensure that the development of emerging technologies reflects their global needs and risks. Our pioneering work convening these critical voices on matters related to deepfakes and synthetic media started in 2019 in South Africa, Brazil, and Southeast Asia, and continued in 2020 in the United States. In addition to online consultations, we have more recently held similar workshops in Nairobi, Kenya, and Bogotá, Colombia.

Threats from Synthetic Media

An initial reflection from the first day of discussions was that the threats of synthetic media do not emerge in a void. As the technology is developed, and as it becomes more widely used and regulated, it can reflect, accentuate and alter existing dynamics of power and patterns of media manipulation.

Recognizing these historic patterns, participants argued that Black and Indigenous communities, women, and other minority groups, as well as activists and civil society, were likely to be most at risk from the misuse of synthetic media. Existing trends, globally and nationally, already point in this direction, including cases of generative AI being used to attack women, to deceive during election processes, and to undermine the voices that speak truth to power.

The observation that the threats of synthetic media fit within existing dynamics of power echoes perspectives from our previous consultations (see above). One takeaway is that, as generative AI and synthetic media technologies continue to evolve, permeate society and become subject to regulation, it is imperative that we center the experiences of those most at risk and learn from existing patterns of media manipulation. The latter point is particularly relevant since shallowfakes (lightly edited or mis-contextualized content) remain the most prevalent form of disinformation, in Brazil and globally, as highlighted by several participants.

Of the recurring themes that came up during the workshop, sexual and gender-based violence (SGBV) was the one that resonated the most, as it has in previous consultations. Participants mentioned the ongoing risk of media being used to silence and undermine women in public roles, such as political candidates or journalists, and they underscored the existing harm of synthetic media being used to target women who are private citizens and do not occupy public roles, including through the creation of non-consensual sexual imagery.

Another recurring theme voiced by participants is the risk that generative AI and synthetic media could aggravate and amplify the threat of disinformation, further eroding trust at a time when the voices of journalists and human rights defenders can already be easily dismissed. As a few participants noted, democracy in Brazil has already been shown to be under threat, and AI-bolstered manipulation can create new obstacles for the upcoming elections.

In terms of the capabilities of generative AI, participants were particularly concerned with its ability to create large volumes of diverse content, which can be exploited to fabricate false narratives or sow confusion about actual ones. However, one participant added a caveat: “In Brazil, everyone has the right to speak, but not everyone is heard,” highlighting that these capabilities may be most effectively leveraged by those with the resources and strategies to amplify their voices.

Another major risk factor identified by participants is how generative AI allows for the production of personalized content, which can be used to target vulnerable groups and civil society. This can include, as mentioned previously, the creation of disinformation or non-consensual, AI-generated sexual images, videos, or audio that mimic real people, including private individuals.


The Brazilian context

Although common trends have emerged from our consultations in different parts of the globe, participants in this workshop also identified a number of factors specific to Brazil that should inform how these technologies are deployed and regulated. Two of these stood out:

Regulation is moving fast

Brazil was one of the first countries to propose and discuss initiatives to regulate different forms of Artificial Intelligence, and it has continued to move at a rapid pace since then. Participants in this workshop listed and described a few initiatives that directly affect, though do not explicitly mention, synthetic media, such as the bills (projetos de lei) 21/2020 and 2338/2023 that, among other things, establish provisions for AI transparency, accountability, and user rights protection. Participants also listed legislation and other governmental initiatives that can indirectly affect synthetic media, such as the General Data Protection Law (LGPD) and the Senate’s Temporary Internal Commission on Artificial Intelligence. Finally, participants presented the Superior Electoral Court’s Resolution 23.610, which establishes guidelines for the use of Artificial Intelligence and synthetic media in electoral propaganda.

Several participants explained that Brazil’s rapid legislative and other regulatory advancements in this area could ultimately have negative effects. Given the nascent state of these technologies and their unprecedented rate of evolution, the legal requirements are too ‘abstract’, and there is little confidence that current stipulations will be effective or capable of mitigating some of their more pernicious consequences. What’s more, these actions could hinder or delay the development of other initiatives that may reflect a better understanding of these technologies and their associated risks and opportunities.

WhatsApp is a major channel of communication

Participants mentioned that WhatsApp is the most popular social media platform in Brazil. Bolstered by zero-rating strategies, it has become one of the main channels for the exchange of information, including false, misleading or manipulative content. At least in the short term, it will be necessary to consider how the technical responses to the risks and opportunities of synthetic media (discussed in the next section) can be applied to this platform in particular, and what legislation is required to ensure effective implementation and accountability.

Beyond the specific case of WhatsApp, participants also discussed the enormous influence wielded by social media platforms in determining the health of the country’s information landscape, and the seemingly inadequate counterbalance provided by institutions and legislation. As an example, they mentioned how the severe restriction of access to the API of X (then Twitter) in 2023 significantly reduced disinformation research in Brazil, demonstrating how an arbitrary decision by Big Tech can have serious consequences for the country, in this case possibly compromising the 2024 election process.


The future of synthetic media

In looking towards the future of synthetic media, the workshop focused on three current areas of exploration that can help ‘fortify the truth’ and create opportunities for independent journalism and the protection of human rights: 1. media literacy, 2. transparency and disclosure, and 3. detection (equity). The intersections between these three areas, and between them and tech policy and legislation, were also a key part of our conversations.

Media literacy

Although end-users should not be held responsible for identifying false, misleading or harmful media, much less so in a context of increasingly sophisticated AI-generated or manipulated content, media literacy can still play a significant role in averting threats and mitigating harm, including during this year’s upcoming elections.

Participants argued that media literacy programs should fit within legislative initiatives and structured programs that establish clear responsibilities for the different stakeholders involved, including news media, government, social media platforms and content creators. These should also account for social and geographic inequities, such as inadequate infrastructure and limited access to tools.

In designing media literacy programs, participants considered it necessary to address how these technologies can impact society, as a way to ensure that they do not become a source of oppression or social exclusion. They also observed the importance of localized and demographically targeted media literacy programs, stressing the need to develop initiatives specifically aimed at senior citizens, a demographic that is particularly vulnerable to manipulation and often overlooked in such programs.

Given young people’s exposure to emerging technologies, participants advocated for educational approaches that integrate cross-generational experiences. They also argued for transversal programs, as media literacy is a strategy and tool that should be incorporated across different aspects of life. Finally, they highlighted the value of programs aimed at trainers, especially where these intersect with community leaders, whose influence at the local level can help drive the success of the programs.

Transparency and disclosure

Transparency and disclosure mechanisms such as verifiable metadata or invisible watermarks can play a significant role in averting and mitigating many of the harms identified on the first day. This has been recognized in legislation as well, where various provisions establish the obligation to disclose when AI systems are being used for user-facing content and interactions. The Superior Electoral Court’s Resolution 23.610 also requires political parties to disclose the use of generative AI and synthetic media in their electoral propaganda; Article 9b requires that this be communicated “[…] in an explicit, prominent and accessible way”.

Although participants seemed to agree on the value of transparency and disclosure, there was no clear consensus on what the specific technologies should be, how they should work, or what the requirements should be across the information pipeline. However, participants did identify a number of steps that can already help to avert and mitigate harm, perhaps most importantly the need to create effective mechanisms of collaboration between the different stakeholders involved, including generative AI and social media platforms, news media, government and civil society. Such collaboration has already proven useful in the context of election disinformation, and where these mechanisms are in place it becomes easier to design preemptive solutions and to tackle emerging threats as they are identified.

Media literacy was also discussed in the context of transparency and disclosure. In addition to understanding what generative AI and synthetic media are and how they could impact society, participants observed that the general public needs media literacy programs that explain how transparency and disclosure are implemented technically: what to look out for, how to interpret it, and how to navigate trust in an uncertain environment.

Detection

Detection technologies need to be made available to the people who need them most, including local journalists, Indigenous communities and other marginalized groups. They should also fit into existing workflows so that using them does not become an unsustainable burden; this means that they could, for example, be embedded into social media platforms such as WhatsApp. At WITNESS, we have referred to this as detection equity.

The WITNESS team and the technical experts invited to the workshop recognized that detection technologies, especially those that are publicly available, are not reliable; that is, they produce incorrect results in a significant number of cases. This does not mean that detection technologies should not be used, but that they should be one of the tools used to help analyze content, not the only one.
