In my last post I looked at how facial recognition technology (FRT) works, how it’s now in our phones, social networks and media management, and how legislators and regulators are reacting to this. But it’s also increasingly used by law enforcement and for surveillance of “public” spaces, as Evgeny Morozov notes in his London Review of Books review of Kelly Gates’ excellent book, Our Biometric Future.
But many of the practical, labour-saving applications of FRT could equally be put to repressive or invasive purposes, especially as FRT becomes more powerful and ubiquitous. Recently, Hitachi Kokusai Electric unveiled a CCTV system that it claims can match a face against a database of 36 million faces in under a second. And FRT can just as easily be applied to two-dimensional images of people on the web, including those we post to social media and other sites.
Do we simply have to accept this as inevitable, or are there things we can do to protect ourselves, and others, against improper or repressive use of FRT?
In this post I will look at a few suggested tactical and technological defences against FRT, in two situations: 1) when we are being watched, for example at protests or in public space, and 2) when we ourselves are taking and sharing images of others, especially online.
Tactical Defences in Public
There’s an increasing amount of discussion and experimentation on how to fool and spoof automatic visual recognition systems in public. One of my favourites is this, which plays with number-plate recognition. But is there anything we can do to confuse facial recognition systems?
The simplest, most widespread defence against face recognition in public space, in the media or at demonstrations is to wear a mask, hoodie, bandana or similar face covering. Protestors and rebel fighters across the Arab Spring used bandanas to mask their identity, like generations of activists before them. A newer development is the adoption of the Guy Fawkes mask by protestors the world over, notably those involved in #occupy and in Anonymous’ online attacks; the mask both protects the wearer’s identity and signals participation in a shared cause.
Mask-wearing is illegal in some jurisdictions (for police too), and can lead to targeting by law enforcement. It also has its activist detractors, but this post makes a robust defence of its role in “maintaining personal privacy and security [and] an important exercise of some fundamental liberties, including freedom of expression and freedom of association […] a crucial element of a robust social fabric.”
Beyond Masks: Other Options for “Fooling” FRT
But what if it’s impractical or illegal for you to mask your face? It’s important to distinguish between defeating face detection, which means stopping a system from finding the patterns that make up a face in an image at all, and defeating face recognition, which means stopping a system from matching your face against a database, typically by altering your apparent features and the distances between them. This peculiar 2002 document suggests that wearing fake Dracula teeth, chewing tobacco or inserting nose plugs can defeat face recognition. These countermeasures may, however, attract other kinds of unwanted attention, particularly if deployed all at once. Smiling also makes facial recognition more difficult, hence the “no smile” policy for passport photos.
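To make the detection side of that distinction concrete, here is a minimal sketch of what a typical face detector does, using Python and OpenCV’s stock Haar-cascade classifier. The file names are placeholders, and this illustrates the general technique rather than any particular surveillance system:

```python
import cv2

# OpenCV ships a pre-trained Haar cascade for frontal faces
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("crowd.jpg")  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# The detector scans for coarse light/dark contrast patterns
# (eye sockets darker than cheeks, a lighter nose bridge, etc.) --
# the kind of patterns that analogue countermeasures try to disrupt.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)
```

Note that this step only locates faces; recognition is a separate matching stage run on each detected region, which is why the two can be defeated independently.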
Technologist Alex Kilpatrick told Forbes in 2010: “You have to break from the human perception of the face. […] We key on big features like hairstyle and beard, but software works on very different principles.” Kilpatrick suggests investing in a pair of large sunglasses, but Josh Marpet, in this audio from The Next Hope hacker conference and in this talk at Defcon 18, says that face detection and recognition systems can be trained to accommodate such measures. Marpet also suggests that facial recognition is only about 60% effective at the moment, and that fooling face detection altogether is a more effective countermeasure.
Adam Harvey has focused on using make-up to fool the pattern-recognition element of face detection systems. His CV Dazzle project continues to be the most widely-referenced face detection countermeasure. It looks like this:
Here are videos of it in action, and audio of his talk, also from The Next Hope. Here’s how it fares against Facebook’s face detection:
It’s early days for these analogue countermeasures, and they raise as many questions as they answer. We’ll keep tracking them here.
Technological Solutions, On-Screen
Video-makers and journalists have long used measures such as pixelization and censor bars to protect the identities of the people they film, and of those caught in the background. In some cases, failing to protect someone’s identity can be ruinous; click the image to see a story I wrote six years ago.
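Pixelization itself is straightforward to reproduce: shrink the face region down to a handful of pixels, then scale it back up without smoothing. A minimal sketch in Python with OpenCV, where the file name and face coordinates are placeholders:

```python
import cv2

img = cv2.imread("interview_still.jpg")  # placeholder input image
x, y, w, h = 120, 80, 100, 100           # placeholder face region

# Downscale the region to 8x8, then upscale with nearest-neighbour
# interpolation: this produces the familiar blocky mosaic and
# destroys most of the facial detail in the process.
face = img[y:y + h, x:x + w]
small = cv2.resize(face, (8, 8))
img[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                   interpolation=cv2.INTER_NEAREST)

cv2.imwrite("redacted.jpg", img)
```

The key design point is that the downscaling throws information away, unlike a light blur, which can sometimes be partially reversed.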
Now, with ever-easier (or “frictionless”) online image-sharing, online facial recognition and widespread image-harvesting, it becomes even more important to protect those in the frame at the point of uploading an image. Because this functionality is not built into camera-phones, protective image editing of this kind has been fairly laborious. In fast-moving situations, citizens post footage and images to social networks and media platforms and worry about the consequences later, and as we’ve noted on several occasions, governments as diverse as Iran, Burma and the UK have used these kinds of unprotected social media images to track down miscreants.
WITNESS is a partner in ObscuraCam, an Android app that provides a simple way to protect the visual privacy and anonymity of those you photograph. It’s a direct counterpoint to mobile facial recognition apps like face.com’s Klik – and perhaps it too might make an appearance in a future Batman, as in the image below.
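Conceptually, an ObscuraCam-style tool chains the two sketches above: find each face, then destroy the detail inside it before the image leaves your device. Here is a rough illustration of that pipeline in Python with OpenCV; this is a sketch of the general approach, not ObscuraCam’s actual code:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("protest_photo.jpg")  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect every face, then mosaic each region before sharing
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    face = img[y:y + h, x:x + w]
    small = cv2.resize(face, (8, 8))
    img[y:y + h, x:x + w] = cv2.resize(
        small, (w, h), interpolation=cv2.INTER_NEAREST)

cv2.imwrite("safe_to_share.jpg", img)
```

A detector will inevitably miss some faces, for example profiles or partially covered faces, so automated redaction is best treated as a safety net rather than a guarantee.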
There’s a growing list of social networks that use or permit facial recognition on their images, but there are protective measures you can take. You can opt out personally, though remember that face recognition is only one part of overall information security. If you’re friends with, or working with, people in sensitive situations, try as much as possible to follow the broader advice in this guide on protecting your identity and security online and on mobile.
Facebook lets you opt out of their facial recognition system, and delete any face data they might have for you. Google+ asks you to opt in to facial recognition, and it can be turned on and off here. You can turn it off in Google’s Picasa (it’s on by default), but not in Apple’s iPhoto.
If you’ve got other methods of protecting yourself on social networks and media-sharing sites, let us know in the comments, and we’ll update the post.
With the scope of facial recognition widening, protecting ourselves and others needs to be a more mindful, conscious act, both in public spaces and online. And as the social networks continue to make sharing and connecting ever more frictionless, we’ll all need to learn when and how to put the brakes on.
Social influence service Klout posts a network-by-network list of how to control your privacy settings on social media, including Twitter, Facebook, Google+, LinkedIn, Foursquare, YouTube, Instagram, Tumblr, Blogger, WordPress and Flickr:
http://klout.com/understand/privacy