By Nathan Freitas and Bryan Nunez

Unfolding Approaches for Mobile Protest Coverage

Activists all over the world have turned to mobile phones to organize, coordinate and document their struggles. Images and videos shot on mobile phones have become the standard for what revolution looks like in the public imagination. We have seen iconic moments, captured in low resolution on mobile phones, captivate global audiences. We have moved from a handful of grainy clips uploaded hours or days after events unfold, to multiple livestreams showing different angles on something happening right now. The Arab Spring, the #Occupy Movement, as well as less politicized events like the London and Vancouver riots, have shown us that the mobile phone is the recording device used to document the next breaking news story, especially if that story involves any sort of protest or activism.

For those wishing to visually record events on the front lines, there are generally two options: capture and share media in real or near real time, by streaming it or rapidly posting it to social media sites like Facebook and YouTube directly from a mobile phone; or record higher-quality footage offline using DV cameras and smartphones, and then edit, process and upload that footage at some point after the event.

When it comes to considerations such as quality, safety, effectiveness and impact, each option has its advantages and disadvantages. Ultimately, we must evolve our behavior beyond just “Point, Capture and Post” into a more strategic approach, if we don’t want to see this important tool for human rights turned against visual media practitioners and those they document.

Mobile Activist Media Workflow Spectrum

We believe the trade-offs do not have to be so binary, however. Through both a more thought-out, team-based workflow and new tools for on-device editing and processing, media creators and journalists embedded within protests and other crisis events have more options than ever. Be it mainstream apps such as iMovie for iPhone and iPad, or the ongoing research in real-time visual privacy filters that WITNESS and the Guardian Project are developing, it is time for frontline media creators to upgrade their toolkit, tactics and techniques.

Livestreaming: Promises and Pitfalls

With a mobile livestream, the location of the person broadcasting is shared with the world in real time, as is the location of anyone else who happens to be in the shot. There is no consideration of the consent of the subjects in the video, and conversations of the movement or of people at risk might be accidentally captured and broadcast to the world. Even if you, as the recorder, are neutral towards the cause of the subject you are documenting, it is unethical to put someone at risk, without their permission, simply to get the story or clip that will make your own work better known. You might be able to boast that you had “10,000 viewers of your livestream last night,” but have you considered that simply by observing and broadcasting the events, you may have changed their outcome?

The audience of a livestream can range from supporters, to members of an opposition group, to law enforcement, who are more than happy to have an embedded, roving surveillance camera to use for their own ends. Through crowd-sourced efforts like posting faces on a website, or more sophisticated facial recognition software, any face that appears in a livestream has the potential of being matched against a public profile on a social network or a government database of photo IDs.

Both of these tactics were used to crack down on protesters during Iran’s Green Revolution, and more recently by law enforcement in the aftermath of the Vancouver hockey riots, publicly linking, without due process, people’s faces to criminal activity they may or may not have been a part of.

It is useful to consider whether “live” really needs to be live. While the goals of authenticity and transparency should not be sacrificed, there are real benefits to adding a curation or filtering step into the broadcast of any event covered using livestream technology.

Getting Coverage on Mount Everest

In 2007, a group of supporters of the Tibetan independence movement secretly traveled to the Mount Everest (Qomolongma in Tibetan) basecamp, located inside the Chinese-occupied areas of Tibet. Their plan was to stand in front of the mountain and the Chinese military camp there, unfurl a banner, make a few statements, and sing Tibetan independence songs and the national anthem.

The key problem was that the basecamp is days away from any major city and had no Internet coverage. If a standard DV camera were used to film the protest, someone would have to take those tapes, run down the mountain, and somehow evade the authorities for days until they could reach the Nepal border. In short, it was basically impossible. The solution was to film the protest using a camera hooked up to a computer, which was then connected to a satellite modem capable of transmitting data at speeds fast enough for a “live video” stream. This would allow the footage to reach safety in near real time, without requiring anyone to run away from the protest site.

At the time, there were no free or affordable options for the type of mass-market live streaming services we see now. Our solution was to utilize a private point-to-point stream, using the free QuickTime Broadcaster and QuickTime Streaming Server software, or alternatively a private Skype video call, between the protest media team at Mount Everest and a Receiving Coordinator based in the United States, where the satellite data returned to the terrestrial earth.

The broadcast team was divided into two roles. First was the Shooter, who actually filmed the event with an HDV camera. The Primary Speaker of the protest wore a wireless Bluetooth microphone connected to the camera. The output of the camera was connected to a short-range, high-quality wireless video broadcast system, sending footage about 20 meters away to the Broadcaster, who sat in a tent with the laptop computer and satellite modem.

While the camera itself was recording footage, the Broadcaster was also capturing to the local hard drive while broadcasting live to the Receiving Coordinator over the QuickTime stream. This ensured a local digital copy in case the stream failed or the HDV camera was confiscated. The Broadcaster and Receiving Coordinator were in constant communication over a Skype chat to monitor the quality of the satellite modem connection.
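The core of this redundancy scheme is simple to express in code. The sketch below is an illustration, not the actual 2007 setup: a hypothetical `tee_stream` helper copies each chunk of an incoming byte stream to several sinks at once, so a local backup exists even if the outbound link drops or the camera is confiscated.

```python
import io

def tee_stream(source, sinks, chunk_size=64 * 1024):
    """Copy a live byte stream to several sinks at once, so that a
    local backup survives even if the outbound broadcast link fails."""
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        for sink in sinks:
            sink.write(chunk)  # e.g. local file, outbound stream socket

# Simulated incoming footage, duplicated to a local copy and an
# in-memory buffer standing in for the live stream.
incoming = io.BytesIO(b"raw video frames ...")
local_copy, live_stream = io.BytesIO(), io.BytesIO()
tee_stream(incoming, [local_copy, live_stream])
```

In a real deployment the sinks would be a file handle and a network socket rather than in-memory buffers, but the principle is the same: no single point of failure holds the only copy of the footage.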


Mt. Everest Protest Workflow
diagram showing flow from subject to remote stream, with multiple backups along the way

At this point, the Receiving Coordinator, safely based in the U.S., was recording the stream to the local hard drive and also to a recordable DVD, in case of a computer crash. The goal was to record the stream, edit it down to the specific, focused protest clip, and then post that as a raw, broadcast-ready MPEG-4 file. In addition, DVDs would be burned so the footage could be handed off to the Associated Press and others interested in running the clip. Some news organizations also offer a web- or FTP-based upload option for public footage, and the content would be pushed there to help assure it made it onto broadcast television. Finally, the clip would be uploaded to YouTube, where in early 2007, getting 100,000 views could still take your clip to the homepage.

The stream lasted for thirty minutes, until the protesters were stopped and detained. The determined Broadcaster did manage to make it down the mountain with the physical HDV tapes, but was detained a short while later. Most importantly, the satellite stream worked nearly perfectly, and within a few hours, footage of this protest thousands of miles away was being shown on news broadcasts around the world, posted to blogs and watched on YouTube. The immediate, effective impact of the nearly live, “this just happened” clip helped gather the support needed to assure the protesters were released a few days later.

In summary, by using a private, point-to-point live stream, the footage was safeguarded in the face of inevitable detention, while still allowing for a more curated approach, in which the best, highest-impact video images reached the right outlets in a very timely manner.

If only HDV cameras had been used in an offline, physically transported mode, there would likely have been no image of the protest at all, and the world would not have been moved to intervene on the protesters’ behalf. If only mobile livestreaming had been used, the critical message and moments would have been missed, and the resulting quality of the content would likely not have been high enough for broadcast television.

Mobile Video “Drop Off” Service

Just over one year later, a team of activists and citizen journalists traveled to Beijing during the 2008 Summer Olympic Games in order to further highlight the Tibetan cause, and the wider issues of human rights and free speech in China. We knew that the intense security of the Olympics environment, coupled with the sophisticated Chinese surveillance infrastructure, meant that the window of time for documenting a protest and having the opportunity to safely transmit that footage to the “free” world was small. In addition, the teams documenting the protests were not credentialed journalists; had they been, they would have been assigned official minders, which would have compromised the protest teams’ ability to operate.

What we realized was needed was a way to separate the capture of the video images from their processing and upload, as the person recording a protest had little time to worry about compression and uploading. In addition, mobile bandwidth was not reliable, so we needed to find locations with fast wifi access points. Fortunately, Starbucks is very popular in Beijing, with many locations (see the map below) that all provided fast, robust wifi. When a protest was to occur at a specific location, a nearby Starbucks was scouted, and a coordinator was stationed there with a tablet computer or small laptop, loaded with tools ready to curate, filter, compress and upload any media that was dropped off.

Starbucks in Beijing
Map of Starbucks Locations in Beijing

The workflow in this case was: the protesters held their protest, and the camera operator recorded and photographed them to SD card flash memory. These memory cards were handed off to a runner, who dropped them at the designated Starbucks with the coordinator. This person then curated the footage, found the best shots and clips, quickly compressed the video down from the original HD format, and uploaded it through a proxy server using a secure FTP connection. The media was received by a remote coordinator, who then redistributed it to the social web and mainstream media in a manner similar to the Mount Everest protest.
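As a rough modern sketch of the compress-and-upload step (the exact tools used in 2008 are not documented here, and the filenames and settings below are purely illustrative), the coordinator's compression pass might shell out to ffmpeg to shrink HD footage into an upload-friendly file:

```python
import subprocess

def compression_command(src, dst, height=360, crf=28):
    """Build an ffmpeg command line that downscales HD footage to a
    small H.264 MP4; a higher CRF value trades quality for a smaller,
    faster upload."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",   # downscale, preserve aspect ratio
        "-c:v", "libx264", "-crf", str(crf),
        "-c:a", "aac", "-b:a", "96k",
        dst,
    ]

def compress(src, dst):
    # Requires ffmpeg on the PATH; raises if the conversion fails.
    subprocess.run(compression_command(src, dst), check=True)

cmd = compression_command("protest_raw.m2t", "protest_web.mp4")
```

The upload itself could then go over SFTP routed through a proxy, so the transfer is both encrypted and harder to trace back to the cafe, matching the secure-FTP-via-proxy approach described above.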

Beijing Olympics Protest Media Workflow
Beijing Olympics Protest Media Workflow

This system allowed the camera operators to focus on their specific goal – capture the action, stay low-profile and keep on the move – while providing the support network they needed to get the results of their work out to the wider world. In addition, the local coordinator was empowered to make curatorial choices about which media was best suited for the coverage, and perhaps to protect the identities of any of the support network, as well as of local citizens of Beijing caught in the frame, who might face persecution, or at least questioning, if their faces showed up in a YouTube protest video or on the front page of The New York Times.

Coordinating Coverage Between Multiple Cameras

Not all events take place in such remote locations or require coordinating secret meetings in the night. In fact, for protests or events held in urban settings, wireless coverage for mobile phones is cheap and ubiquitous. At many recent Occupy gatherings in the United States, it was not uncommon to see more than one person livestreaming from their mobile phone, never mind the many others capturing footage with a mind towards uploading it to their Facebook page or Twitter stream later. The problem then becomes not about promoting a single stream or clip, but about how to aggregate, manage and curate all of the sources of video into a cohesive whole.

In 2008, during a citywide day-of-action protest in San Francisco, a site I coordinated utilized a single web page with a two-by-two grid of embedded live streams, showing four different roving cameras throughout the city. As the relationship with each mobile video broadcaster was a bit ad hoc, we decided to let them use whatever service they felt comfortable with: Ustream, Justin.tv, Qik or Livestream. A viewer of the page, however, could easily watch all four streams at once, choosing which to stop, start or view fullscreen.
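A page like that takes very little code to build. The sketch below (with placeholder URLs; each real service supplies its own embed snippet) generates a two-by-two layout using CSS grid, a modern stand-in for whatever markup the 2008 page actually used:

```python
def grid_page(embeds):
    """Render a simple two-by-two grid of livestream embed snippets."""
    cells = "\n".join(f'<div class="cell">{e}</div>' for e in embeds)
    return (
        "<!doctype html>\n"
        "<style>.grid { display: grid; grid-template-columns: 1fr 1fr; }</style>\n"
        f'<div class="grid">\n{cells}\n</div>\n'
    )

# Placeholder iframes standing in for the four services' real embed code.
streams = [
    f'<iframe src="https://streams.example.com/cam{i}"></iframe>'
    for i in range(1, 5)
]
html = grid_page(streams)
```

Because each embed is self-contained, the page works even when the four broadcasters are on four different services, which was exactly the constraint in the ad hoc setup described above.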

San Francisco Day of Action Livestream Aggregation Page
San Francisco Day of Action Livestream Aggregation Page

In addition to the videos, an embedded view of a Twitter hashtag stream was shown on the side of the page, and below it a Google Map documenting up-to-date locations of protest actions, police activity and more. Using this map data, which came from various sources in the street, we were able to coordinate with our four camera operators through text messages, guiding them towards or away from the action as appropriate.

San Francisco Protest Map 2008
San Francisco Protest Map 2008

One of the operators was riding a bicycle with a helmet camera, which allowed him to move rapidly between locations. Another was using a smaller mobile phone, which allowed them to film without being noticed as quickly, but resulted in lower-quality footage. A third was actually a team of two people, one operating a full HD camera and the other a tablet that handled the streaming. The fourth was a stationary camera used for “man on the street” interviews in a main gathering area. Together, these views provided unique angles on the action, and the ability to coordinate them made the overall coverage even stronger.

The best moment of the entire day was when Wolf Blitzer’s producer from the “Situation Room” on CNN called us up, because they were amazed at how good our coverage was, and wanted to know our secret.

Virtual Studios of the Future – NOW!

The bespoke method of aggregating multiple streams and data sources was quite a complex thing to pull off in 2008. The good news is that three years later, streaming services offer a powerful virtual multi-camera online studio feature as part of their standard packages. We have already seen sites utilize this capability to patch different mobile streams live into their ongoing streaming shows. Through a single web console, you can mix together multiple streams, text, clips from YouTube and more. A single console operator can switch between multiple remote mobile streaming broadcasters, providing the best view of an event, and perhaps even considering the privacy and safety aspects of a specific view. This coordinator is safely removed from the street action, and can direct cameras as well, as described in the previous scenario.

courtesy of I.E 3.0 blog

CollaboraCam takes this same studio capability into a mobile environment, providing an iPad mixing console that can take footage from up to four iPhone cameras running on the same local wifi LAN or mesh network.


courtesy of Fast Co Design blog

It also includes the ability for the director to communicate with the camera operators using the microphones and headphones of the mobile devices. Currently, the mixed footage is saved locally on the iPad, but adding the option to push the mix to a livestream can’t be too far down the road. In addition, filters to crop, blur, pixelate or otherwise obscure visual identity in real time would be an ideal innovation for use of this product in a human rights or activist context.
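To make the pixelation idea concrete, here is a minimal sketch operating on a 2-D list of greyscale values rather than real video frames; a production filter would run per-frame on the device's GPU, but the core operation is just replacing each tile of the sensitive region with its average, destroying the detail that identification depends on.

```python
def pixelate(pixels, x0, y0, x1, y1, block=8):
    """Obscure the rectangle (x0, y0)-(x1, y1) of a greyscale image
    (a list of rows) by replacing each block-by-block tile with its
    average value."""
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            tile = [
                (x, y)
                for y in range(by, min(by + block, y1))
                for x in range(bx, min(bx + block, x1))
            ]
            avg = sum(pixels[y][x] for x, y in tile) // len(tile)
            for x, y in tile:
                pixels[y][x] = avg
    return pixels

# Pixelating this whole 4x4 frame with a 4-pixel block flattens it to
# its average grey value (127), wiping out the pattern inside.
img = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [255, 255, 0, 0],
    [255, 255, 0, 0],
]
pixelate(img, 0, 0, 4, 4, block=4)
```

In practice the rectangle would come from a face detector, and the filter would need to be applied before the frame ever leaves the device, so the unobscured faces are never broadcast or stored.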


The increasing number of ways in which we can create video content “live” and share it in (close to) real time sharpens the need for us to be aware of the impact it may have on people intentionally or unintentionally included in our footage. In the past, managing that risk meant a significant delay between the moment the footage was captured and the moment it could be distributed. Hopefully the examples we’ve shared above show that, with some advance planning and a team-based workflow, a balance between ethical respect for the “subjects” in our video footage and timely distribution can be struck.

Here again are the main points we hope you take away:

  1. Mobile video deployments should involve more than just one person, preferably including someone away from the action who can help guide and coordinate
  2. Your workflow or plan should include how to get mobile-format video content out to your desired distribution channels and advocacy forums, in the best quality format, in the shortest amount of time, and with some level of redundancy in case one channel is compromised
  3. Having a 3G or 4G network is not always necessary – sneakernets and Starbucks can work just fine
  4. Curation is important and can be done in a way that does not violate transparency or authenticity, and also enhances safety and options for visual privacy
  5. Whether live or “almost” live, it is important to consider the impact on unintentional subjects caught in frame


Nathan Freitas worked as the technology coordinator for Students for a Free Tibet from 2006 to 2008, specifically focused on online and mobile media strategy for protest coverage. He now leads the Guardian Project, an open-source mobile security project partnered with WITNESS to develop new camera software capable of improving security and visual privacy capabilities for human rights video contexts.

Bryan Nunez is Technology Manager at WITNESS.

This post is part of our Human Rights Day Series: Resources for #Video4Change Activists. You can access other posts in the series here.

