By Rose Hackman
Where is my generation of tech-savvy whizz-kids when it comes to the debate on human rights, visual privacy and video advocacy? Blasé, bewildered, or at the cutting edge?
Like many of my fellow tuned-in generation of twenty-somethings, who post updates online faster than we pour ourselves a cup of coffee, I tend to embrace emerging technology and new online social tools and carry them forward (or disregard them, depending on the gratification level), rather than pause and pointedly question them.
On July 18, YouTube launched a new tool that would enable users to blur the faces in the videos they uploaded, thereby protecting the identities of people featured in them. The platform explicitly identified the human rights threat as a primary motivator for this online technological development.
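To make the idea of face blurring concrete, here is a minimal sketch in Python (not YouTube's actual algorithm, which is not public): it pixelates a given face region of a grayscale image, represented as a 2D list of pixel values, so the face becomes unrecognizable while the rest of the frame is untouched. The function name and parameters are illustrative assumptions.

```python
def blur_region(image, x, y, w, h, block=2):
    """Pixelate the rectangle (x, y, w, h) of a grayscale image in place.

    Each block-by-block square inside the rectangle is replaced by the
    average of its pixels, destroying fine facial detail.
    """
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            # Collect the pixels in this block (clipped to the rectangle).
            pixels = [image[j][i]
                      for j in range(by, min(by + block, y + h))
                      for i in range(bx, min(bx + block, x + w))]
            avg = sum(pixels) // len(pixels)
            # Overwrite the whole block with its average value.
            for j in range(by, min(by + block, y + h)):
                for i in range(bx, min(bx + block, x + w)):
                    image[j][i] = avg
    return image
```

A real video tool would also need a face detector to supply the bounding box on every frame, and would track each face as it moves; this sketch only shows the obscuring step itself.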
The availability of this kind of feature comes at a crucial time for tackling new threats posed by the widening reach of social media, which is often portrayed as an unambiguously democratizing force for good. Indeed, the same video that shows footage of political activists demonstrating in Syria may be used by government forces to identify people for arrest.
WITNESS specifically advocated for these types of changes in a report last year. YouTube is one of the first platforms to respond.
Facial Recognition and Its Relation to Privacy
On that same day, across the country from YouTube’s California headquarters, Senator Al Franken (D-MN) held a hearing in Washington, DC, entitled “What Facial Recognition Technology Means for Privacy and Civil Liberties.”
A simple picture of you is now enough to connect you to your identity, criminal history, friends or online social profiles. In his opening statement, Franken described Facial Recognition Technology as a powerful form of biometric technology, which “may […] be abused in ways that could threaten basic aspects of our privacy and civil liberties.”
Faceprints, an evocative term used to describe acquiring an individual’s facial features through FRT, Franken argued, could easily be gathered by a grocery store’s hidden camera and sold on to third parties, completely unbeknownst to the person whose faceprint was being taken in the first place. As it stands, no legal structure is in place to stop this.
Why is there no such structure to protect individual rights? As testimonies flowed in from representatives of a number of organizations including the FBI and Facebook, both of which already use FRT, I realized that somehow this drastic development in technology was flying under the radar, hardly scrutinized.
Are We Opting In to Limit Our Privacy?
In my generation’s collective defense, I would argue that there is a peripheral understanding that there is an ongoing trade-off, or tension, between this new form of exercising our right to freedom of expression/speech and our right to privacy.
This passive understanding is most visible when Facebook changes its privacy settings and a mass scramble ensues to opt out of the latest feature that would make public certain private aspects of our online profiles.
Among my contemporaries, Facebook is probably the place that receives the most contemplation when it comes to privacy issues, mainly because we are morbidly aware of how much information about ourselves we have seamlessly parted with (including that identity-theft-friendly date of birth).
Yet Facebook’s ongoing facial recognition capabilities have still tended to go relatively unnoticed, as did its public acquisition this past June of Face.com, the leading Israeli facial recognition company, which developed the FRT program Facebook has used since mid-2011.
Facebook’s use of FRT centers on users’ uploaded photos: its software can automatically recognize people in those photos and “help you tag your friends,” as Facebook puts it, pointedly avoiding the words ‘facial’ and ‘recognition’ wherever it can.
Furthermore, unlike its rival Google+, Facebook has made tagging your friends (a feature currently undergoing maintenance) the default, rather than an option.
Facebook Manager of Privacy and Public Policy Rob Sherman justified this at the July 18 Senate hearing by calling Facebook an “opt in experience,” presumably implying that the fact that no one forced you to join the social network is enough to justify using technology you are unaware of without your explicit consent.
Is our silence down to ignorance then? Or rather are we a frightened generation that understands that there are consequences, two sides to the coin, but perhaps finds it easier to be collective ostriches, with our heads in the sand? Would proactively and constantly seeking out potential darker sides just brand us conspiracy theorists, or is there a way to sensibly tackle these issues?
What Photo Album is Your Face Being Saved To?
I will not deny I have thought about the darker side. Last fall, a friend of mine, a fellow graduate student organizer at Columbia University, recounted to me his experience of November 15, the day Zuccotti Park was evacuated, marking the end of the official Wall Street occupation.
One haunting detail remains with me. Joe described to me how, hours after the eviction, the police officers in charge of letting people back into the park to collect belongings made protesters line up and, one by one, took video footage and photos of them.
What would become of those photos? Would they be put into some kind of blacklist anarchist database, making former occupiers recognizable wherever they turned up next? Regardless, this type of surveillance is illegal, as ACLU’s Naomi Gillens recently wrote in a blog post on the subject.
My worry was backed up during the July 18 hearings when Jerome Pender, Deputy Assistant Director of the FBI’s Information Services Branch in their Criminal Justice Information Services Division, was questioned on his agency’s use of FRT.
While Pender assured his audience that the FBI’s ambitions were focused solely on criminal activity and on facilitating cooperation between law enforcement agencies across the country, Senator Franken pointed out that, by the FBI’s own admission, FRT had previously been used at political rallies and demonstrations.
I then found out that the National Institute of Justice has been spending millions on developing facial recognition binoculars. Even if used in a pure law enforcement context, can you imagine what being given the criminal history of anyone you look at through a pair of lenses would do to profiling?
Heads Out of the Sand
With a better mainstream understanding of the consequences of online visual exposure and of evolving facial recognition technologies, Generation Y could come up with scenarios that help, rather than threaten, citizens around the globe.
If YouTube adapted its services to include facial blurring to, among other aims, protect human rights activists, why not imagine similar developments elsewhere? Enough outside understanding and pressure could push Facebook to make progress on consent issues, and even to go further. Under stringent regulations, for instance, and if trust levels were sufficient, individuals who might suffer or be at risk if they appeared in photos uploaded to Facebook by third parties could choose to provide a faceprint to Facebook, so that whenever their face appeared in a photo it would be automatically and immediately blurred. This is just one idea; there are many others waiting to be developed.
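The opt-in protection imagined here can be sketched in a few lines of code. This is purely illustrative: it assumes faces can be reduced to numeric feature vectors ("faceprints") and that matching is a simple distance comparison, where real systems use learned embeddings and far more careful matching. All names and the threshold are hypothetical.

```python
import math

def euclidean(a, b):
    """Distance between two faceprint feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def faces_to_blur(detected, registered, threshold=0.5):
    """Return the indices of detected faces that match any faceprint
    registered by an at-risk individual, and so should be blurred
    before the photo is published."""
    return [i for i, face in enumerate(detected)
            if any(euclidean(face, reg) < threshold for reg in registered)]
```

A platform would run something like this on every upload: detect faces, compute their faceprints, and blur any that match the opt-in registry before the photo goes public.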
So heads out of the sand, my ostriches: global justice needs our techie instincts in the fight to defend human rights – the right to speak out when we want and to seek out truths, without third parties presenting well-staged but unsanctioned, arbitrary threats to our privacy, our freedom and even, in the most extreme of cases, our most basic right to live.
Rose is a Human Rights graduate student at Columbia University with a background in journalism. She is currently an intern at WITNESS working on the Cameras Everywhere initiative.