August 2020

Technology touches all of our lives, so it touches all of the work we do at WITNESS. But some areas of technology, more than others, are focal points for our Emerging Threats and Opportunities and Tech + Advocacy programs.

We’re in the vanguard of conversations on artificial intelligence and synthetic media (deepfakes), content moderation, and mis- and disinformation, and we wanted to create a channel for discussing these issues in more depth. That’s why we’re launching { Latent Space } – a newsletter that will bring together different strands of our technology research in a more transparent, conversational way.

Sign up for { Latent Space } here:

One recurring topic will be content moderation, a key focus for WITNESS’s tech advocacy program manager Dia Kayyali, and an area where we’ll share cutting-edge insight. WITNESS’s work here is driven by the impact of content moderation both on human rights documentation and on the safety and security of human rights defenders themselves.

Efforts to “eradicate terrorist and violent extremist content” online continue to expand, threatening vast bodies of human rights content. At the same time, violent content directly linked to offline violence remains online. When companies do remove content linked to offline violence, they may fail to preserve potential evidence or have no established mechanism to share it with key parties, as just happened with the UN’s Independent Investigative Mechanism on Myanmar. That’s why WITNESS is involved in key multi-stakeholder conversations about these issues in places like the Global Internet Forum to Counter Terrorism.

We’ll also talk about developments in artificial intelligence, especially when these developments have implications for the global information space. For years now, WITNESS program director Sam Gregory, and more recently program coordinator for emerging threats Corin Faife, have been at the forefront of mapping the human rights implications of deepfakes: videos that use AI to simulate realistic-looking scenes that never took place. Our work here has clarified not only the threats posed by deepfakes, but also the dangers of overly broad legislation meant to prevent them, which could stifle freedom of expression and be used to silence legitimate speech.

Lastly, we’ll feature key news from other sources and links to relevant events, because there’s so much great work being done by other organizations in this space. WITNESS’s work draws not only on the expertise of our team members, but on our connection to global networks of technologists, journalists, activists, and other human rights defenders.

Our newsletter isn’t just for policy wonks and AI engineers (though we do hope they’re reading too). It’s aimed at anyone who wants to learn more about tech policy and engineering issues from a perspective grounded in lived experience and human rights. While we have homed in on deepfakes and terrorist and violent extremist content, we regularly comment on broader issues, and we’ll do our best to keep you up to date on news from us and from our myriad friends and allies in the field.

In the field of machine learning, a latent space is a way of encoding data in a compressed form. A neural network is trained to find commonalities in a given kind of input data – a map layout, a page of text, an image of a face – and reduce it to a representation that captures the key features of the data but not the full detail. From this latent space the model can reconstruct the original form, filling in the blanks to paint a full picture again, or even imagine new possibilities.
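For readers who’d like to see that idea concretely, here is a minimal illustrative sketch (our own toy example, not code from any particular system; PyTorch, the layer sizes, and the 16-dimensional latent vector are all assumptions chosen for clarity):

```python
# A toy autoencoder: compress an input into a small "latent space" vector,
# then reconstruct the original from it. Assumes PyTorch is installed.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        # Encoder: reduce the input to a compact latent representation
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: rebuild the input from the latent vector,
        # "filling in the blanks" lost during compression
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)           # a point in latent space
        return self.decoder(z), z     # reconstruction plus latent code

model = Autoencoder()
x = torch.rand(1, 784)                # e.g. a flattened 28x28 image
reconstruction, z = model(x)
print(z.shape)                        # torch.Size([1, 16]) -- the compressed form
```

The sixteen numbers in `z` are a point in the latent space: a compressed description of the input, from which the decoder rebuilds the full picture.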

We hope this newsletter will reveal similar underlying features of the fields we work in, and from this structure, build a picture of what else could be possible.
