February 19, 2020
On 26 November 2019, WITNESS hosted a workshop at the University of Pretoria, South Africa, on deepfakes and synthetic media.
Today we are releasing a full-length report on the proceedings of the workshop, compiled by Ade Johnson of the university’s Centre for Human Rights, detailing the presentation sessions, discussion points, and the range of solution areas identified:
An abridged version of the report’s executive summary is published below. WITNESS would like to thank all who attended the workshop for their participation and valuable contributions to the discussion around deepfakes and synthetic media.
On Tuesday, 26 November 2019, WITNESS in collaboration with the Centre for Human Rights, University of Pretoria hosted a one-day expert workshop focused on increasing understanding of the problem of deepfakes as well as prioritizing threats and solutions.
To the best of our knowledge, this was the first intensive workshop held in Sub-Saharan Africa to kickstart discussions on deepfakes and explore possible responses in the African context, based on African experiences of existing problems. Over the course of the day, participants first built a common understanding of the threats presented by deepfakes and other forms of synthetic media, and then prioritized possible interventions from a Sub-Saharan African perspective. This convening followed an earlier meeting in Brazil.
Contrasts with the US
Besides contributing to greater understanding, one of the key outcomes of the workshop was a mapping of the areas where participants’ concerns and recommendations differed from those commonly expressed in the US context.
One of the most notable differences was the level of threat assigned to internal versus external actors. In the US, threat perception around deepfakes tends to imagine high-level political interference from foreign actors, e.g. a video attributing false statements to a senior government official. Representatives of grassroots groups, however, saw a more pressing threat coming from agents of their own state, such as videos manipulated to fabricate justification for police activity, to discredit prominent movement leaders, or to scapegoat activists for actions they had not committed in order to provoke a violent response from opposition groups.
In another contrast to the US discussion, there was real concern about the potential of deepfakes to incite violence rather than merely spread misinformation. These concerns focused both on the potential for rumours to spark mob violence in areas with political, ethnic, or communal tensions, and on deepfakes being used as cover for state violence in either a police or military context.
Many participants identified low levels of media literacy as an obstacle to combating deepfakes and misinformation more generally. This fed a concern about the use of fake audio and video in ongoing health misinformation campaigns, such as anti-vaccination movements. As in other countries, media organizations were concerned about the challenge of ‘doing more with less’ in their journalistic work, given low staff numbers and falling revenues.
In identifying possible solutions and mitigations against the emerging threat, participants emphasized a range of possible interventions, from the technical to the educational to the policy-based. On the technical side, there were requests for better documentation outlining the range of available algorithmic detection techniques along with their uses and limitations, and for more closely integrating existing detection solutions into social platforms. (For example, while visual media spread via private WhatsApp channels cannot be easily searched and debunked by fact-checkers, built-in tools could enable reverse image search functionality directly from the app.)
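To make the reverse-image-search idea concrete: one common technique behind such tools is perceptual hashing, where near-identical copies of an image produce near-identical fingerprints. The sketch below is a minimal, illustrative "average hash" in plain Python (the function names and toy pixel data are ours, not from any particular tool; production systems such as pHash use more robust frequency-domain methods):

```python
# Minimal sketch of "average hashing", one perceptual-hashing technique
# behind reverse image search: represent a tiny grayscale thumbnail as a
# bit string (1 = pixel brighter than the mean), then compare images by
# the Hamming distance between their hashes. Small distances suggest the
# same underlying image, even after re-encoding or minor edits.

def average_hash(pixels):
    """pixels: flattened grayscale thumbnail values (0-255)."""
    mean = sum(pixels) / len(pixels)
    # Each bit records whether that pixel is brighter than the mean.
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# Stand-ins for 8x8 thumbnails: a re-encoded copy is uniformly
# brightened, while an unrelated image has a different pattern entirely.
original = [(i * 37) % 256 for i in range(64)]
reencoded = [p + 4 for p in original]  # slight brightness shift
unrelated = [(i * 11 + 128) % 256 for i in range(64)]

d_same = hamming_distance(average_hash(original), average_hash(reencoded))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_same, d_diff)  # the copy stays close; the unrelated image does not
```

Because the brightness shift moves every pixel and the mean together, the copy’s hash barely changes. A messaging app could compute such a fingerprint on-device and check it against an index of previously debunked media, without the user leaving the app.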
Having seen the range of advanced detection techniques available to computer science researchers, participants were concerned that it would be a long time before such techniques were made available to grassroots or indigenous groups, or even to media outlets, and that efforts were not being made to bridge the gap in technical sophistication needed to implement and interpret them.
Media professionals stressed the need for more collaboration and resource sharing in order to respond to the threat effectively and make efficient use of limited funds. Journalists and fact-checkers identified highly technical fields such as media forensics as an area where resources could be shared between teams and organizations. Building clear channels of communication between actors ahead of time was also highlighted as an area needing improvement, one that would lead to a more effective response.
Stakeholder groups broadly agreed on the need for improved public media literacy as a precursor to understanding sophisticated media manipulation such as deepfakes. A call for the translation of training materials into local languages emerged as a key demand, along with a recognition of the importance of working with trusted figures (from community leaders to social media influencers) to promote a critical approach to online news.
There was also agreement that social media platforms could play a more active role in promoting media literacy through videos, games and news articles. Finally, participants suggested that children should be taught critical media consumption habits from a young age, with material on disinformation and media manipulation incorporated into school curricula.
The workshop ended with a call to develop (in the words of one participant) a “careful awareness” of the problem, in which a sober appraisal of the threat drives well-considered responses rather than knee-jerk reactions that could have unintended consequences further down the line.
Feedback on specific steps to take in moving forward included:
- Strengthen communication channels among participating journalists, academics, civil society groups and grassroots organizers ahead of time in order to debunk deceptive videos more effectively when they arise.
- Update journalism training curriculum to include more information on deepfakes and other AI-driven manipulation.
- Look for funding bodies that could cover translation costs for material concerning digital disinformation.
- Initiate further surveys into media forensics capability in journalistic organizations, and begin to develop a plan for creating specialist facilities that could be shared across media outlets.
- Lobby politicians to raise awareness of disinformation as a social problem to be tackled, and to which resources must be allocated.
- Continue to address existing problems with ‘shallowfakes’ – i.e. mis-contextualized videos and lightly edited content.
For further reading based on the proceedings of the workshop, see WITNESS blog posts here: