Today YouTube announced a new tool within its upload editor that lets people blur the faces in a video and then publish a version with blurred faces. WITNESS has advocated for years, in blogs, public presentations, reports and private advocacy, for YouTube and other platforms to take this step, and we applaud YouTube for leading the way in offering this functionality.

Screen shot of YouTube’s new face blurring tool

We’ll be reviewing the tool in the coming days and explaining how to use it well to protect vulnerable people in your videos. In this post I’ll discuss the human rights perspective on why tools like this are important for commercial platforms to adopt.

So why does the option to pixelate faces matter?

There are two levels to this. The first is the direct experience of people on the ground in dangerous situations who are trying to share testimony of gross human rights violations while protecting people at the same time. The second is the need for visual anonymity options in a broader range of settings, as visual media becomes a dominant form of communication and access to facial recognition grows.

The Human Rights Use Scenario – Syria, Zimbabwe, United States

In places like Syria, activists on the ground note how often people are identified and tracked down because of the video that they share online. Here’s Rafeeq from Homs, whose story was shared recently via Al-Jazeera. I’ll quote him at length:

But for activists, the camera is a double-edged sword.

And here is how.

Many of my friends were arrested for protesting. However they weren’t arrested from the protest sites, but rather from the checkpoints spread across the city.

But how did Assad forces know they protested?

Government forces have special teams dedicated to monitoring protests that we film and upload to the internet.

One of my friends who was detained for a short period told me that, as he was undergoing torture in detention, he was asked by the investigator if he ever participated in rallies against the regime. When my friend denied protesting, the investigator showed him footage where his face clearly appeared in a protest.

This is when we started learning how to film rallies from angles that would clearly show the crackdown by Assad forces on protests but not the faces of those protesting.

A lot of Homs residents have become scared of the camera. This is not because there is any kind of animosity between the activists and the residents. But because of the fear the regime planted in their heart.

They know that a photo of them on the internet could result in several months of imprisonment and torture. This fear has grown as the number of arrests rose.

Residents living in opposition-controlled neighbourhoods are especially under threat of being arrested for appearing on “inciting TV networks”.

One resident who crossed into a government-controlled neighbourhood was arrested at a checkpoint for appearing in footage filmed by an activist. In the footage, the man was simply removing rubble from the street.

As a novice journalist, I have started to become very cautious about having the faces of people appear in my footage. I do not want anyone to be hurt because of me.

A survivor of rape in Zimbabwe who shared her story publicly, but wanted to protect her identity for fear of reprisal attacks.

Similar experiences have been seen in Egypt, Iran and before that in Burma. We also see a consistent need from less documented, less public struggles to protect individuals who speak out.

In fact, you may be at even greater risk if you’re the lone person in your community speaking out: for example, as a survivor of gender-based violence in Zimbabwe, as a member of an unpopular minority, like a sex worker speaking out against police violence in Macedonia, or as a survivor talking about elder abuse in Pittsburgh or San Francisco.

As more and more people use video to speak out because it’s the medium of our age, the need to give people tools to obscure identity becomes even more acute. As video becomes the de facto mode of communication in a mobile-camera-enabled world, how do we make sure that options for anonymity are not left behind?

The Human Rights at Stake: Why Options for Anonymity Matter for Freedom of Expression and Privacy

Anonymous and pseudonymous speech has a long history as a way to avoid personalization of a controversial debate, express unpopular viewpoints and reveal hidden truths or cover-ups. Think of the authors of the Federalist Papers, who wrote under the Publius pseudonym, or the leak of the ‘Collateral Murder’ video to Wikileaks to reveal a cover-up of the deaths of journalists in Iraq.

In an earlier post on this blog we talked about the human rights at stake when we discuss anonymity.

Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. – Universal Declaration of Human Rights (UDHR), Article 19

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence. – UDHR, Article 12

Anonymity is very much a part of the right to free speech. International human rights law addresses the right to free expression and exchange of information, as well as freedom of association in Article 19 of the UDHR (see above), and also in Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which adds that restrictions on this right “shall only be such as provided by law and are necessary: (a) For respect of the rights or reputations of others; (b) For the protection of national security or of public order (ordre public), or public health and morals.” Further international declarations on the rights of human rights defenders also emphasize the capacity to disseminate and receive information on human rights topics.

Complementary to rights of freedom of expression is the right to freedom from arbitrary and unlawful interference with one’s privacy and correspondence, recognized both in Article 12 of the UDHR and in Article 17 of the ICCPR. The right to privacy is usually understood to include both the individual’s right to a zone of autonomy within a “private sphere” such as the home, and in respect to personal choices within the public sphere. Much discussion of online privacy focuses on the security of personal data and personal identity.

Critical to an active right to both free expression and privacy is the right to communicate anonymously. Of course, this is not an absolute right – after all, anonymity can also be used, for example, to cover criminal activity. However, the active presence of options for anonymity, and no a priori restrictions on anonymity, enables freedom of expression and supports the right to privacy.

People have (in the context of real-name identity debates on social networking sites) highlighted the range of people who might choose anonymity for various reasons – from the most obvious, such as victims of domestic violence, to members of marginalized groups like LGBT people or disabled people. “Geek Feminism” maintains a detailed and instructive wiki on the range of people affected by an insistence on real names and identity over anonymity or pseudonymity.

This recognition of the value of anonymity is also reflected in Supreme Court rulings in the US, for example, in a 1995 Supreme Court ruling in McIntyre v. Ohio Elections Commission cited by the Electronic Frontier Foundation:

Protections for anonymous speech are vital to democratic discourse. Allowing dissenters to shield their identities frees them to express critical minority views . . . Anonymity is a shield from the tyranny of the majority. . . . It thus exemplifies the purpose behind the Bill of Rights and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society.

Now, following the launch of our ‘Cameras Everywhere’ report, which features extensive discussion of the privacy and safety implications of ubiquitous video, we are focusing on how to carry the right to anonymous communication into the video era. What does it mean to exercise ‘visual anonymity’, and what can activists, technology companies, policy-makers and technology developers do to enhance it?

We can enhance anonymity by thinking about both what is in front of the lens (for example, how do we hide someone’s face or distort someone’s voice in easy and accessible ways?) and what sits embedded in the image (for example, metadata about a filmer’s camera, phone or geolocation that could be used to identify them). Our ObscuraCam project with the Guardian Project models an approach to this at the point of creation, just as the new YouTube tool addresses it at the point of upload and distribution.
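To make concrete why point-of-creation redaction can be trustworthy, here is a minimal, hypothetical sketch (not the actual ObscuraCam code) of pixelating a face region by block-averaging. Because many different source images map to the same averaged output, the operation is lossy: the original face cannot be recovered from the published pixels. The function and the 16×16 demo image are illustrative assumptions.

```python
# Hypothetical sketch of lossy, point-of-creation redaction.
# Block-averaging discards information: the original region
# cannot be reconstructed from the pixelated result.

def pixelate_region(pixels, x0, y0, x1, y1, block=8):
    """Replace the rectangle (x0, y0)-(x1, y1) of a 2D grayscale
    image (a list of lists of ints) with block-averaged values,
    modifying the image in place."""
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            ys = range(by, min(by + block, y1))
            xs = range(bx, min(bx + block, x1))
            avg = sum(pixels[y][x] for y in ys for x in xs) // (len(ys) * len(xs))
            for y in ys:
                for x in xs:
                    pixels[y][x] = avg

# Tiny demo: a 16x16 grayscale image with a distinctive central region.
img = [[(x * 7 + y * 13) % 256 for x in range(16)] for y in range(16)]
pixelate_region(img, 4, 4, 12, 12, block=4)
```

This is the key difference from a reversible transform: a real tool would pair a detector (to find faces) with a redaction step like this, and then discard the unredacted original, so that neither the platform nor an attacker holds recoverable face data.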

The Consequences of Facial Recognition: Binding Identities Together

Such a focus on visual anonymity is bound in with other privacy debates that are relevant both to human rights activists and to a broader public. One particular debate is around the growing role of facial recognition, including within public social media networks. My former colleague Sameer Padania talks about the ethics and practicalities of this in earlier posts here.  Such concerns have been given even more impetus by recent experimentation with facial recognition using social network-based photos.

A team at Carnegie Mellon University succeeded in identifying people from their Match.com profiles by applying consumer facial recognition software to publicly available photos on Facebook, and in a subsequent experiment identified a significant portion of people (a third) in a public setting by comparing their images to publicly available images on Facebook. The implication is that in an era of increasing facial recognition, the one time you choose to say something politically unpopular, whistle-blow or otherwise speak out can be correlated with the other 99% of your online identity, even if you try to do it outside the bounds of identity tracking.

Linking to an Understanding of Informed Consent

We also believe that an emphasis on tool options for visual anonymity must go hand-in-hand with a focus on user education to establish an understanding of informed consent. Often in our own work we find that courageous people choose to speak out in public (with their faces and information fully visible) despite the risks (see Erika Smith’s powerful reflection on this in the context of activists speaking out about women’s rights).

But the only way we can ensure this is by broadening the base of public understanding about what informed consent means. One way WITNESS is approaching this is by building a touch-interface for indicating and tracking consent into the InformaCam tool within our SecureSmartCam suite of apps, and by supporting a wide range of training approaches and media that speak to and help people understand informed consent alongside the tools.

Will Other Video and Photo-Sharing Platforms and Social Networks Follow Suit?

We applaud YouTube for thinking about how to integrate visual anonymity into their platform, and ask other video and photo-sharing platforms and social networks to follow their lead. Activists and ordinary citizens need easily controlled options for this at every stage of creating and sharing visual media.

Join the conversation: Add your comments below or tweet them to us using #witnesslive. We’ll be hosting a Google+ Hangout on Monday, July 23, 2012 at 2:30pm ET to discuss this tool and its human rights applications. More details here soon. You can also tweet me directly @SamGregory.

6 thoughts on “Visual Anonymity and YouTube’s New Blurring Tool”

  2. Thanks for sharing this, I had no idea that we can do that using Youtube, I think it’s a good idea to have this blurring tool so we can keep the privacy on certain people or even information.

  3. Even if the blurring was done offline, what prevents YouTube from unblurring the image? Is the blurring done with an irreversible algorithm? Is it done using modern encryption technology? If so, who has the key?

    The only way to make this ‘safe’ is for the user to use open source software that includes modern encryption technology in the blurring algorithm.

  4. but the important question here is: does youtube keep the unblurred original? if so, you’re just getting a false sense of security.

    Imagine the day they’re hacked and all those blurred videos are leaked (or of course the gov’t could demand a look at the originals).

    Much better would be to blur offline, then delete and upload.

    1. Hi but- in the YouTube tool, once you have reviewed the video with the blurring (to see if it’s as good as you need), you have the choice to delete the original. The default option is in fact to delete the unblurred original.

      Agree that it’s important (and on balance the best choice, if you have the right option) that there be tools like this that people can use as close to the point of creation as possible, on their own devices rather than on an online platform. We’ve been working with the Guardian Project on a tool called ObscuraCam that does selective blurring in photos and videos and also strips out metadata. Alongside autonomous tools like ObscuraCam built in the activist community, we also think mobile manufacturers and app designers should be building this type of functionality in.
