The above image is a composite of stills from the case study video (URL below), “stitched” together.
Editor’s note: From finding online video to fact-checking its claims, Amnesty International’s Crisis Response and Prevention Team provides a detailed look at the process of citizen video verification in this three-part blog series. Part 1 discussed how citizen video is sourced. Part 3 walks through the use of tools like Google Maps to corroborate the video’s location.
The video used throughout this case study was uploaded by YouTube user Masaken Hanano. Initial clues in the video point toward its credibility as authentic evidence of a building’s destruction in Aleppo, Syria.
In Part 2 below, we cover how to create a panoramic image from video stills to reveal signifiers unique to the site and event depicted in the video.
Stitching a panoramic photo from the video
Making a panoramic photo from video takes two steps: extracting frames from the video and then stitching them together into a larger photo.
To extract the frames, we’ll use the VLC video player (free, open-source, and multi-platform). After downloading the video (using, for example, Keepvid.com or the integrated download function in the Torch browser) and opening it in VLC, we play it to find the segment containing the camera pan that we want to turn into a panoramic photo. An ideal set of images has at least half a frame-width of overlap between each adjacent image. In other words, if the camera is panning from left to right, the item in the center of the frame in image #1 should be visible on the left third of the frame of image #2.
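The overlap rule above can be turned into simple arithmetic: if the camera pans at a roughly constant angular speed and each frame covers a given horizontal field of view, the longest we can wait between snapshots follows directly. A small sketch (the function name and the numbers are illustrative, not taken from the case-study video):

```python
def max_snapshot_interval(hfov_deg: float, pan_speed_deg_per_s: float,
                          overlap: float = 0.5) -> float:
    """Longest time between snapshots that still leaves the requested
    fractional overlap between adjacent frames of a panning shot."""
    if pan_speed_deg_per_s <= 0:
        raise ValueError("pan speed must be positive")
    # Between snapshots the camera may advance at most (1 - overlap)
    # of one frame's horizontal field of view.
    return (1.0 - overlap) * hfov_deg / pan_speed_deg_per_s

# Example: a 40-degree field of view panning at 10 degrees per second
# allows up to 2 seconds between snapshots for half-frame overlap.
print(max_snapshot_interval(40, 10))  # 2.0
```

In practice the pan in citizen video is rarely steady, so this is only a starting estimate; the frame-by-frame review described below is what actually guarantees enough overlap.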
This is a bit tricky here since the camera motion is not a steady, linear pan, but most parts of the video are still usable. In addition to frames that include the building itself, we want to capture frames around the edges, since they contain much of the contextual information that will be helpful. Using VLC’s slow-motion and frame-by-frame playback features, we can find the important frames to extract.
When we have the video paused on a frame we want, we use the “take snapshot” command from the Video menu to save an image of that frame with a timestamp. After repeating that for all of the frames we want, we find the folder where the images were saved — this is set in VLC’s preferences, usually a Pictures folder — and copy them to a memorable place.
(On a Windows machine we can also use Kinovea, a free, open-source video analysis tool, for this step. Although the program is intended for analyzing sports performance video, it serves our purposes well and has a number of other annotation features that may prove useful for documentation or human rights work.)
Hugin, a free, open-source, multi-platform image stitcher, allows us to combine the frames. (While the program can be intimidating in its complexity, there are plenty of tutorials available, though most were not necessary for this project.) The “Assistant” tab in Hugin leads us through the necessary steps. (The only challenge I ran into was providing the “Horizontal Field of View” for the images, though with some experimentation I found that values between 30 and 50 degrees seem to work well for images extracted from these citizen-type videos.) After checking the control points (the places where the images match each other) for accuracy, and adjusting the crop settings in the “Fast Panorama Preview,” the “Stitcher” tab lets us put all of the images together into one:
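Hugin also ships command-line versions of these same steps, which can be scripted once the workflow is settled. A hedged sketch of that pipeline follows; the tool names come from Hugin’s CLI suite, but exact flags vary by version, the helper functions here are hypothetical, and the 40-degree field of view is just a starting value within the range mentioned above:

```python
import subprocess

def hugin_commands(frames, fov_deg=40, project="pano.pto", prefix="panorama"):
    """Build Hugin's CLI pipeline as a list of commands: create a project,
    find control points, optimise, auto-crop, and stitch."""
    return [
        ["pto_gen", f"--fov={fov_deg}", "-o", project] + list(frames),
        ["cpfind", "--multirow", "-o", project, project],
        ["autooptimiser", "-a", "-m", "-l", "-s", "-o", project, project],
        ["pano_modify", "--canvas=AUTO", "--crop=AUTO", "-o", project, project],
        ["hugin_executor", "--stitching", f"--prefix={prefix}", project],
    ]

def stitch(frames):
    """Run the pipeline; requires Hugin's tools on the PATH."""
    for cmd in hugin_commands(frames):
        subprocess.run(cmd, check=True)

# Preview the commands without running them:
for cmd in hugin_commands(["frame01.png", "frame02.png"]):
    print(" ".join(cmd))
```

This is not a replacement for checking control points by eye; on difficult footage like this, the graphical workflow described above remains the safer route.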
(For a quick-and-easy version of this image-stitching step, Windows users can try the free Microsoft Image Composite Editor, which has far fewer settings but seems to work well.)
Analyzing the panorama
Now that we have a good image reference for the video, we want to look for key features (such as landmarks, parks, etc.) that will help us confirm its location, especially those features that will be visible from satellite imagery.
1. The destroyed building is directly adjacent to another building — they might even appear as a single building in an earlier satellite image — and is also next to a long, straight street.
2. The videographer is filming from about the same height as the still-standing building, suggesting that the video was taken from the roof or top floor of a similar building adjacent to the destroyed one, though separated by an open space of perhaps a building’s width.
3. Directly across the street from the building is an open area with a few scattered trees.
4. At the edge of that open area is a low shed or building, followed towards the horizon by more high-rise buildings.
5. Just past the building on the near side of the street, from the videographer’s point of view, is a walled open area with a large tree close to the destroyed building.
6. Past that is a collection of low buildings surrounded by a number of trees.
7. Beyond the low buildings is a large mosque featuring a large dome and a minaret to the right, with two smaller domes below. Judging by the shadows on the walls, the mosque appears to sit at a diagonal to the street grid.
Following the lead of the Brown Moses Blog, let’s make a rough sketch of the key information from our composite image:
In this rendition, rectangles represent buildings, circles are domes, pentagons are trees, and the star is the camera location.
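A sketch like this can also be drawn programmatically, which makes it easy to revise as more features are identified. A minimal matplotlib sketch using the same legend; every coordinate below is an invented placeholder for illustration, not a measurement from the video:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, Circle, RegularPolygon

fig, ax = plt.subplots(figsize=(5, 5))

# Rectangles: buildings (placeholder positions).
ax.add_patch(Rectangle((1, 4), 2, 1, fill=False))      # destroyed building
ax.add_patch(Rectangle((3.2, 4), 1.5, 1, fill=False))  # adjacent building

# Circle: the mosque's large dome.
ax.add_patch(Circle((7, 7), 0.4, fill=False))

# Pentagon: a tree.
ax.add_patch(RegularPolygon((2, 6.5), numVertices=5, radius=0.2))

# Star: camera position.
ax.plot(1.5, 1.5, marker="*", markersize=15)

ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_aspect("equal")
fig.savefig("site_sketch.png")
```

A hand sketch is just as effective for verification; the advantage of a script is only that the diagram stays editable as the analysis evolves.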
Now that we have a sketch of the site, including key markers that will help us place it on the map, we can take the next step and geolocate the site in the video using Google Earth. In Part 3, which we’ll publish tomorrow, we will take you through this process.