The Ethics of Face Recognition Technology
Posted on March 7, 2012 by Sameer Padania
At SXSW next week, WITNESS is running a workshop on the ethics of facial recognition. It’s an issue we’ve talked about before – most recently in the Cameras Everywhere report, and with the ObscuraCam Android app.
In this post we want to give you a better sense of what facial recognition technology (FRT) is, where it’s being used, and why people – including Google’s Eric Schmidt and Senator John D. Rockefeller – are worried about it.
The Technology Behind Face Recognition
What is face recognition – and how is it different from face detection?
Face detection technology detects the size and location of faces in an image. It helps power autofocus in digital cameras – you’ve probably seen this when a square appears around the face of the person you’re photographing.
For all you Jennifer Aniston fans, here’s the science bit. Basically, all this technology says is “this is a face” – it doesn’t capture, store or recognise the faces it detects. Face detection can also be used to protect privacy – it’s how ObscuraCam and Google Street View know to blur the faces of anyone caught on camera.
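As a rough illustration of that privacy-protecting step – this is a minimal sketch, not ObscuraCam’s or Street View’s actual code – here is how, once a detector has returned a face’s bounding box, the region can be pixelated so the facial detail can’t be recovered. The image here is simplified to a grayscale grid, and the bounding box is a made-up example:

```python
def pixelate_region(image, box, block=4):
    """Pixelate a rectangular region of a grayscale image.

    image: 2D list of pixel intensities (rows of ints, 0-255)
    box:   (top, left, height, width) of the detected face
    block: side length of the pixelation blocks
    """
    top, left, h, w = box
    for by in range(top, top + h, block):
        for bx in range(left, left + w, block):
            # Average the pixels in this block...
            ys = range(by, min(by + block, top + h))
            xs = range(bx, min(bx + block, left + w))
            avg = sum(image[y][x] for y in ys for x in xs) // (len(ys) * len(xs))
            # ...then overwrite every pixel in the block with that average,
            # irreversibly discarding fine detail such as facial features.
            for y in ys:
                for x in xs:
                    image[y][x] = avg
    return image
```

Real tools work on full-colour video frames and use trained detectors to find the box in the first place, but the principle is the same: locate the face region, then destroy its detail.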
Face recognition technology analyses an image of your face for a series of measurements – like the distance between your eyes – to create what’s called a “faceprint”. It then compares this against a database of stored faceprints until it finds a match, either to verify the person’s identity (“X is who he says he is”) or to identify them (“who is X?”). This is what is happening when, say, iPhoto asks you:
[flickr id="6815872426" thumbnail="small" overlay="true" size="medium" group="" align="center"]
OK, so it’s not foolproof. But facial recognition technology is improving all the time, and it’s being incorporated into a growing list of consumer products (and even street advertisements) used by millions. It makes it easier to tag photos of individuals in Apple’s iPhoto and Google’s Picasa, on social networks like Facebook and Google+, and in videos with iMovie. It’s also built into the iPhone 4S and the latest version of Android, and it makes mobile apps like Viewdle, Recognizr, and SceneTap possible. A recent ad for Lenovo laptops suggests face recognition can even save your marriage…:
Not New, But Accelerating
These technologies aren’t new. In fact, face detection, facial recognition and related biometric technologies have been used over the last decade in various settings – in Las Vegas casinos to spot card counters, at the Super Bowl, and at U.S. border controls. Law enforcement and security services particularly like FRT because, unlike fingerprinting, iris-scanning or similar biometric technologies, it can be used at a distance, without the consent or even the knowledge of the person being processed.
What’s new is this: a technology that used to be accessible only to a few agencies is now being used – voluntarily, and often unwittingly – by millions of us through social media. Our willingness to tag people in photos, combined with rapid advances in computer vision and object recognition, has accelerated the use of FRT. We share so many images now that Facebook has, as this chart shows, the largest photo collection in history.
Reactions From the Public, Tech Companies and Regulatory Agencies
Unease over facial recognition has been growing over the past couple of years, and it finally hit the mainstream last summer, when Facebook turned on facial recognition – to widespread outrage – for all of its hundreds of millions of users. Even Eric Schmidt, Chairman of Google, said, in response to my question at a 2011 Google event, that facial recognition is the technology he’s most worried about, as it is “scarily accurate.”
So what can be done about FRT when it is being built into consumer products and apps at such a dramatic pace? Potential misuses of FRT are being exposed so regularly that regulators and legislators are taking notice. Late last year, the U.S. consumer protection and competition regulator, the Federal Trade Commission, held a workshop on facial recognition technology (details here) – and in January the leaders of the Congressional Bi-Partisan Privacy Caucus sent the FTC a letter (here’s ours) calling for clear policies governing companies’ use of facial recognition. In December, the International Biometric Industry Association was concerned enough to release a discussion paper on face recognition asking “Is It Time To Hit The Panic Button?”, along with privacy principles for companies using face recognition to follow.
But broader data protection initiatives aren’t yet picking up the baton. The White House’s recent Consumer Privacy Bill of Rights – aimed at steering the major Internet companies towards better privacy practices – only mentions photos and videos once (p.18), perhaps because they’re still not recognised in the U.S. as part of protected data (as p.3 of this document suggests).
These protections need to be implemented for data held by governments too. The Information and Privacy Commissioner in Canada recently ruled, for example, that law enforcement would need a warrant to use another government department’s facial recognition database to identify people involved in last June’s riots in Vancouver.
Improving Technology for All Users
We’re not reflexively opposed to face recognition technology – it’s here to stay, and we need to explore how it can work for human rights. But we are very concerned that legislators and regulators have been so slow on the uptake – and that technology companies have been reluctant to take part in genuine public debate about a technology from which there really is no turning back. We can – and must – all do better to make the best of this powerful new technology – and to protect us all from the worst.
In the next post in this series, we’ll look specifically at the implications of FRT for human rights activists.