Would today’s George Holliday, the documenter of the beating of Rodney King, have a human rights app?
Metadata for good?
This year at RightsCon, WITNESS and the Guardian Project are leading a roundtable called “Metadata for Good?” at which we’ll talk about the power and value of metadata to individuals and to collective projects of social good for human rights. Among the ideas we’ll raise is our push for opt-in ‘eyewitness’ and ‘proof’ functionalities in mainstream tools and platforms. These are functionalities that let someone choose to add metadata to their images and video. That metadata can enable a greater degree of trust and verification, and make footage more useful in data-mapping.
Making sense of all the data and finding the needle in the haystack
There are more citizen witnesses than ever. They are in conflict zones, but they are also documenting insidious human rights violations in our own societies. They are first on the scene, or in the middle of it, and they are the ones filming. And their numbers are growing with the global spread of mobile devices and mobile Internet service.
But the videos these citizens film often face challenges: in being discovered, and in being trusted and contextualized. More than 100 hours of video are uploaded to YouTube every minute, and more than 350 million photos are uploaded to Facebook daily. Most human rights videos are lost in that vast volume of little-seen uploads. From Syria alone, more than half a million videos to date could be documentation of human rights violations. They are not like “Kony 2012,” the viral video from 2012 about Joseph Kony, the warlord in Central Africa. That is to say, they are not polished, sourced to an identifiable creator, and generating massive audiences.
Separating fact from fiction
One of the biggest challenges with human rights video (and any video of news events) is separating the false from the true, and confirming the truthful more rapidly. As my colleague Madeleine Bair illustrated in a recent blog on video from Venezuela (or was it Colombia? or Mexico?) (and as I have seen frequently in video from Burma), in the real-time news economy of social media, false images circulate fast and frequently. But alongside the patently false, there is also the question of how to more easily enable genuine, but metadata- and context-free, images to pass the first test of credibility. Images need to be credible, whether they are rapid-turnaround news material or human rights documentation for longer-term justice and accountability.
Tools and training for better trust and authentication
WITNESS has been supporting innovation in technology that enables better documentation at the moment of filming, and that allows this material to be better trusted at a later stage. One way we are responding is to build in, at the point of creation, data that meets the initial expectations of authentication: confirmation of a reliable technology, confirmation of source and location, and a clear chain of custody. In this light we have been developing the InformaCam tool with the Guardian Project. Additionally, we’ve been ramping up our training on manual modes of creating video that is more likely to be authenticable, blogging on key techniques (for example, this series), and contributing to guidance on open-source verification such as the recent Verification Handbook. Next year we will release a guide to enhancing the potential evidentiary value of video.
A big-picture approach
Alongside the specific InformaCam tool and its white-label version, we are also engaging with mobile manufacturers, online service providers and app developers about using the related J3M evidentiary metadata standard or our InformaCam libraries, or even building their own ‘eyewitness’ or ‘news mode’ functionalities into their devices, for anyone who wants to give their footage a leg up in the verification process. We’re doing that because we know that today’s version of George Holliday, the man who filmed the Rodney King incident, won’t have a dedicated human rights app. And even if he did, in a high-risk situation such an app could help single him out as an activist. The capacity to choose to document with a higher degree of potential trust needs to be mainstreamed: on Android, in native camera apps, and in key photo and video platforms like Google+ and YouTube and social networks like Facebook.
Why eyewitness modes matter beyond human rights: the ‘proof’ mode concept
If this were the human rights use case alone, we know (really, we know) this would be a long shot. But we believe there’s an opportunity here beyond the human rights use cases. As we worked on the project, we realized that a new approach to mobile media metadata is critically important well beyond our original scenarios of proving a war crime in Syria or supporting the monitoring of multiple viewpoints on a peaceful protest.
Eyewitness or ‘proof’ modes also matter for social good and journalistic purposes, like tracking abuses of hours for low-wage farm workers in Georgia (as the Guardian Project is doing in one of its test projects). They matter for reducing the time it takes to verify citizen-submitted news footage of a violent event or that too-good-to-be-true twister (from the fifteen or so steps outlined in the Verification Handbook to fewer, even as the volume of media grows). And commercially they have applications in consumer-oriented visualizations of user-generated content, and even in trust-based marketplaces such as the peer-to-peer economy and insurance, where mobile media increasingly plays a role in verification and confirmation.
What might an eyewitness or ‘proof’ mode look like?
This function – whether in a camera app, a sharing platform or a social network – would need to be an opt-in mode that users select before creating or sharing their media files. On capture, this mode could incorporate and preserve rich metadata (for example, using the J3M standard) and provide ways to check file integrity. Or it could be a platform-based way of confirming that the data in a video matches the associated hash or was captured with an app that meets verification standards, or of indicating that additional metadata is available from the creator or from a third-party site.
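As a rough sketch of how such an opt-in capture step might work under the hood (the function and field names here are invented for illustration, not the actual J3M or InformaCam API), the mode could hash the media bytes and record capture context in a sidecar that a platform can later check against the file:

```python
import hashlib
import time

def capture_with_proof(media_bytes: bytes, location: str, device_id: str) -> dict:
    """Build a verification sidecar for a newly captured media file.

    The media itself is untouched; the sidecar records a SHA-256
    digest plus capture context so a platform can later confirm
    the file has not been altered since capture.
    """
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": time.time(),   # device clock at the moment of capture
        "location": location,         # e.g. a GPS or cell-tower fix
        "device_id": device_id,       # ties the file to a known source device
    }

def matches_sidecar(media_bytes: bytes, sidecar: dict) -> bool:
    """Platform-side check: does the file still match its recorded hash?"""
    return hashlib.sha256(media_bytes).hexdigest() == sidecar["sha256"]

frame = b"\xff\xd8...jpeg bytes..."
sidecar = capture_with_proof(frame, location="40.7,-74.0", device_id="cam-01")
assert matches_sidecar(frame, sidecar)            # untouched file verifies
assert not matches_sidecar(frame + b"x", sidecar) # any edit breaks the match
```

Everything here happens in the background once the user opts in; the only user-visible change is the choice to turn the mode on.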
Verification metadata could also be both embedded inside the headers of media files published to social media sites or private data repositories, and simultaneously published to an independent notary and escrow service. The notary servers would publicly validate that a certain set of pixels and sensor data did exist at a certain time, and maintain the continuity and security of that verification for as long as the media exists. With an escrow service, data can also be safeguarded through encryption and anonymity systems, ensuring that only the organizations or individuals meant to have access to the data can access it. While this all sounds complicated, in fact the entire complexity of the cryptographic signing and metadata publishing can be automated and hidden from the user, much as buying a coffee at Starbucks with your smartphone has been whittled down to a few taps.
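The core of the notary idea can be sketched in a few lines. This toy in-memory ledger (a stand-in for illustration only, not any real notary service) records the earliest time a file’s digest was seen, which is the substance of the “these pixels existed at this time” claim:

```python
import hashlib
import time
from typing import Optional

class NotaryLedger:
    """Toy stand-in for an independent notary service.

    It stores (digest, timestamp) pairs so anyone can later confirm
    that an exact set of bytes existed by a certain time, without the
    notary ever holding the media itself.
    """

    def __init__(self) -> None:
        self._records: dict[str, float] = {}

    def notarize(self, media_bytes: bytes) -> str:
        digest = hashlib.sha256(media_bytes).hexdigest()
        # First sighting wins: later submissions cannot back-date a file.
        self._records.setdefault(digest, time.time())
        return digest

    def existed_by(self, media_bytes: bytes) -> Optional[float]:
        """Return the earliest notarized time for this exact file, if any."""
        return self._records.get(hashlib.sha256(media_bytes).hexdigest())

ledger = NotaryLedger()
video = b"...raw video bytes..."
ledger.notarize(video)
assert ledger.existed_by(video) is not None       # this file was notarized
assert ledger.existed_by(b"other bytes") is None  # unknown files have no record
```

A production notary would add cryptographic signatures and a tamper-evident log; the point of the sketch is only that the service needs the hash, never the footage.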
If J3M, or a similar approach, were incorporated into the devices we use in our daily lives and the platforms we use to communicate, it would be far easier for anyone who happens to become a first-hand witness to document in a way that is more likely to be trusted (whether they are documenting war crimes, a news event, or an insurance claim). It would greatly expand the number of people whose visual media could serve as verified information and proof for news, legal and other information-led processes.
We believe there is an opportunity, on the one hand, to support individuals in sharing information, being trusted, and choosing whom they share it with, with more flexibility. On the other hand, there is an opportunity to support the people trying to make sense of that data and separate truth from falsehood in crisis maps, news events, fender-benders and human rights violations. We know this will require carefully balancing privacy and control concerns against the positive potential.
Will tech providers step up to the ‘proof’ challenge?
4 thoughts on “How An Eyewitness Mode Helps Activists (and Others) Be Trusted”
One interesting idea from the ‘Metadata for Good?’ roundtable at RightsCon: WITNESS should push for combined strip-out/maintain-rich-data functionalities in platforms. This would follow the model we and the Guardian Project have presented with ObscuraCam and InformaCam. ObscuraCam lets you strip out metadata and blur faces; InformaCam lets you embed rich metadata. The key is that both are focused on the choice point to do one or the other.
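That choice point can be made concrete with a minimal sketch (names invented here; ObscuraCam and InformaCam are real apps, but this is not their code): one function, two explicit modes, so the user decides whether metadata is shed or kept:

```python
def at_choice_point(metadata: dict, mode: str) -> dict:
    """Apply the user's explicit choice to a captured file's metadata.

    'strip' mirrors the ObscuraCam idea: shed identifying data.
    'embed' mirrors the InformaCam idea: keep rich verification data.
    """
    if mode == "strip":
        return {}                  # nothing identifying leaves the device
    if mode == "embed":
        return dict(metadata)      # preserve the full record for verification
    raise ValueError(f"unknown mode: {mode!r}")

captured = {"gps": "40.7,-74.0", "device": "cam-01", "time": 1700000000}
assert at_choice_point(captured, "strip") == {}
assert at_choice_point(captured, "embed") == captured
```

The design point is that neither behavior is a silent default: the same interface surfaces both options, and the user chooses.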
Here’s an article from PCWorld about the roundtable we held on this topic at the Rights Con conference: “Tracking with metadata: It’s not all bad” http://www.pcworld.com/article/2105220/tracking-with-metadata-its-not-all-bad.html
Another use case beyond the human rights realm — but relevant to journalists and the news industry — is to use mobile video metadata to protect the copyrights of content creators.
Writing metadata about who created a video into the file at the time of creation can help communicate and assert videographers’ rights over their raw content. Because the data is machine-readable, platforms can more easily restrict or license content for re-use. This could be very useful in the news industry.
Of course metadata can be lost from video files, but as Sam mentioned, it is possible to build registries where copyright owners can submit their work (analogous to the US Copyright Office, I guess).
The still photography world relies on embedded metadata a lot for rights management. It may be useful to look at the approaches of organizations like IPTC (www.iptc.org).
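To make the copyright use case concrete, machine-readable rights metadata could be as simple as a small JSON block written alongside (or into) the file at creation. This sketch invents its own field names for illustration; real workflows would use an established standard such as IPTC or XMP fields:

```python
import json
import datetime

def rights_block(creator: str, license_terms: str) -> str:
    """Serialize creator and licensing info in a machine-readable form
    that a platform could parse to restrict or license re-use."""
    block = {
        "creator": creator,
        "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "license": license_terms,  # e.g. "all-rights-reserved" or a CC code
    }
    return json.dumps(block)

raw = rights_block("Jane Doe", "CC-BY-4.0")
parsed = json.loads(raw)
assert parsed["creator"] == "Jane Doe"
assert parsed["license"] == "CC-BY-4.0"
```

Because the block is plain structured data, a registry like the one described above could index it alongside the file’s hash.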
Thanks Yvonne, definitely. And in the human rights context we’ve looked at how, with tools like InformaCam, you can note aspects like consent or the creator’s intended usage, so that these are retained in the file.