As my colleagues Sameer Padania, Priscila Néri and Chris Michael, who worked on The Hub, can attest, curating online video is difficult, to say the least. While weighing questions of ethics, revictimization, consent, dignity, and security, the Hub staff at WITNESS aimed to highlight relevant human rights-related video that, at times, contained disturbing or very graphic imagery (see the example of the Neda video from Iran: ‘A Woman Dies on Camera – To Post or Not to Post?’).

The question of when to share or publish such imagery has faced editors and image-makers since the dawn of the mass-distributed image (the Crimean War of 1853-56 was the first conflict to be widely photographed), and it is perhaps more widely debated now than ever. Voices in the conversation around visual media include cultural critics such as the late Susan Sontag in Regarding the Pain of Others, governments, and, more recently, bloggers, citizen journalists and activists online (see this honest discussion by Ruthie Ackerman, who runs the website Ceasefire Liberia – thanks to Priscila for the heads up on this).

And what do you do if you work for YouTube, where 24 hours of video are uploaded every minute? How does anyone manage that amount of content?

YouTube was at the center of a controversy back in November 2007, when a video uploaded by Egyptian blogger Wael Abbas was flagged as inappropriate by a community member. (I am linking to the available police brutality videos on his channel; please note that the content may be disturbing and that you need to be 18 years or older to view it.) YouTube then suspended Abbas’ channel of police torture videos (Sameer and others reported on this as it was unfolding), virtually silencing one of the only channels through which this information was being circulated from inside Egypt. Eventually, after public outcry and once YouTube staff could review the videos’ context, Abbas’ channel and videos were reinstated.

Two and a half years later, YouTube sent Victoria Grand, its policy chief, to the Global Voices Citizen Media Summit (May 6-7, 2010 in Santiago, Chile) to discuss the site’s video review and take-down policies. We weren’t able to attend the Summit this year, but hat tip to GV Managing Editor Solana Larsen for letting me know that this important panel was videotaped and available for anyone to see. (As described on the GV Summit site, the discussion also includes “a critical look at Facebook by Jillian York; and Hong Kong content removal and deactivation across a number of platforms with Oiwan Lam”.)

YouTube’s Take-Down Policy Shared and Explained

In her discussion, Ms. Grand used some of Abbas’ videos from his YouTube channel as an example of content that, if flagged, would actually be allowed to stand under YouTube’s “EDSA policy” (the acronym stands for educational, documentary, scientific or artistic). Here is a brief overview of how videos come to be reviewed under the EDSA policy. No video is pre-screened before it goes live on YouTube; a video is added to the YouTube staff’s review queue only when it is flagged by a community member. The first review is done by an algorithm that searches the video content for flesh tones (i.e. it’s looking for porn, which is technically a no-no on the site) and prioritizes the video for human review. The algorithm also weighs the flagging party’s reputation, whether the video has already been reviewed by a human and allowed to stay published, and a flag-to-view ratio – is this video being flagged by many individual viewers right now? If so, it goes higher in the review queue. However, no video is removed from the site without being seen by a human.
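To make that queue logic concrete, here is a minimal sketch of how such a flag-prioritization heuristic might work. To be clear, this is my own illustration: the field names, weights, and scoring formula are hypothetical assumptions, not YouTube’s actual implementation – Ms. Grand only described the signals involved (flesh-tone detection, flagger reputation, prior human review, flag-to-view ratio).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Flag:
    # Hypothetical flagger reputation: 0.0 (untrusted) to 1.0 (trusted)
    reputation: float

@dataclass
class Video:
    video_id: str
    views: int
    flags: List[Flag] = field(default_factory=list)
    previously_cleared: bool = False  # already human-reviewed and allowed to stand
    flesh_tone_score: float = 0.0     # 0.0-1.0 output of an automated classifier

def review_priority(video: Video) -> float:
    """Rank a flagged video for human review (weights are invented for illustration)."""
    if not video.flags:
        return 0.0  # unflagged videos never enter the review queue
    # Many flags relative to views suggests a surge of current concern.
    flag_to_view = len(video.flags) / max(video.views, 1)
    best_reputation = max(f.reputation for f in video.flags)
    score = (0.5 * video.flesh_tone_score
             + 0.3 * min(flag_to_view * 1000, 1.0)
             + 0.2 * best_reputation)
    if video.previously_cleared:
        score *= 0.25  # already allowed to stay published: deprioritize re-review
    return score

# Example: a surging, never-reviewed video outranks one a human already cleared.
flagged = [
    Video("abc", views=500, flags=[Flag(0.9), Flag(0.4)], flesh_tone_score=0.7),
    Video("xyz", views=90000, flags=[Flag(0.8)], previously_cleared=True),
]
queue = sorted(flagged, key=review_priority, reverse=True)
for v in queue:
    print(v.video_id, round(review_priority(v), 3))
```

The one property this sketch deliberately preserves from Grand’s description is that the algorithm only orders the queue; actual removal always requires a human reviewer.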

I encourage anyone with an interest in online content curation to watch the entire panel discussion here (about 40 minutes total), but I’m embedding the first part of the discussion below, in which Grand explains YouTube’s take-down policy in more detail than I’ve shared above. This appears to be the first time we’re hearing this amount of detail from YouTube about its take-down policy, and that transparency is important because this has been quite a contentious topic in activist circles, where some have had videos removed in the past, often without much explanation.

One takeaway from Ms. Grand’s presentation (and I’m paraphrasing her here) is that no algorithm will ever be able to figure out the context of a video’s content; it requires a human mind to look at it and ‘make sense’ of it. And even with someone viewing a video, context can still be difficult to discern, as questions about language, local context, and consent are very hard to answer without personal contact or a relationship with the uploader/content creator.

Curating Human Rights Content Online

Context is critically important with respect to human rights video. Consider, for a moment, the safety and security of those who appear in video uploaded to the Internet. Videos filmed by activists and bystanders of mass protests in Burma during September 2007 provided a window into a country whose military dictatorship does its utmost to keep the population disconnected from the outside world. It was a dangerous endeavor just to get the video out of the country once the junta blocked Internet connectivity. The videos captivated the world as they showed thousands of brave citizens joining Buddhist monks in protest in what is known as the Saffron Revolution. However, because the videos were available on international news sites, Facebook and YouTube, the Burmese regime, which made sure its own connection to the Internet kept flowing, was able to review the videos and use them to identify protesters, thousands of whom were arrested.

My colleague Sam Gregory discusses this example in more detail in this blog post from 2009. Those who shot the original video certainly did not intend for it to be used by the Burmese government to hunt down protesters. In the case of the Egyptian blogger Wael Abbas, the video he shared on his YouTube channel was actually shot by the perpetrators – the police committing acts of violence and torture. Abbas was gathering examples of this type of video as he built his case, via his blog and through international media, about the impunity with which Egyptian police are able to operate.

In summary, a video’s context cannot be guaranteed to be understood simply by ensuring that a human review system is in place. In the post I’ve linked to above, Sam quotes another blogger who highlights Tom Glaisyer’s suggestion that we need to propagate “an online culture pervaded by a sense of fairness & justice” and apply “the Universal Declaration of Human Rights in to all web 2.0 Terms of Service.” I wonder what others in the human rights landscape think about the opportunities and challenges of posting content online, video or other…
