While generating the videograms of Bergensbanen, I discovered that Max/Jitter cannot export images from matrices that are larger than 32767 pixels wide/tall. This is still fairly large, but if I were to generate a videogram with one pixel stripe per frame in the video, I would need to create an image file that is 1 302 668 pixels wide.
This made me curious about what kinds of size limitations exist for images. A quick check of some common applications gave me these maximum dimensions:
GraphicConverter: 32 000 pixels
Photoshop: 30 000 pixels
OSX Preview: 30 000 pixels
So it seems that approx. 30 000 pixels wide/tall is some kind of limit to how large digital pictures can be. I guess there is a memory/storage issue involved, e.g. file sizes not exceeding 2GB. For now I have therefore decided to generate videograms that are at most 32767 pixels wide, but I may split some recordings into several separate videograms instead.
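For what it is worth, 32767 is exactly the largest value a signed 16-bit integer can hold, which may be why several programs land on nearly the same ceiling. A quick back-of-the-envelope check (my own numbers, not from the original workflow) also shows how many maximum-width chunks a full one-stripe-per-frame videogram would need:

```python
import math

# 32767 is the maximum value of a signed 16-bit integer,
# a plausible reason for the near-identical limits above.
print(2**15 - 1)  # 32767

# Splitting a full-resolution videogram (one pixel stripe per frame)
# into images at that maximum width would require this many files:
print(math.ceil(1_302_668 / 32_767))  # 40
```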
While on paternity leave, I (finally) have time to do small projects that require little brain activity and lots of computation time. One of the things I have wanted to do for a long time is to create a videogram of Bergensbanen (which I briefly mentioned last year). This was a project undertaken by the Norwegian broadcasting company (NRK), where they filmed (and broadcast live) the entire train trip from Bergen to Oslo. The best thing is that the entire 7.5 hour video file is available under a CC license, which opens up many creative applications.
First I wanted to create a videogram by reducing every single frame in the video file to a pixel stripe. However, this is not possible to do in one operation in Max/Jitter, since the video file contains 1 302 668 frames. Jitter cannot export images larger than 32767 pixels wide, and even though I could have set it up to export images of subchunks of the original video, I am not sure any programs would support reading image files that are more than a million pixels wide.
So I have created a videogram by sampling every 50th frame of the video file, using a revised version of my VideoAnalysis program. The full videogram (26 056 x 720 pixels) can be found here or on Flickr.
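The sampling-and-reduction step can be sketched in a few lines of numpy. This is a minimal stand-in for the actual Max/Jitter patch, with small synthetic frames instead of the real 1 302 668-frame video; function and variable names are my own:

```python
import numpy as np

def sampled_videogram(frames, step=50):
    """Average every `step`-th frame across the image width into a
    one-pixel-wide colour stripe, and stack the stripes side by side."""
    stripes = [f.mean(axis=1) for f in frames[::step]]  # each (height, 3)
    return np.stack(stripes, axis=1).astype(np.uint8)   # (height, n, 3)

# Synthetic stand-in: 500 small frames instead of the real video.
frames = [np.full((72, 128, 3), i % 256, dtype=np.uint8) for i in range(500)]
vg = sampled_videogram(frames, step=50)
print(vg.shape)  # (72, 10, 3): ten stripes, one per 50 frames
```

With the real recording, the same reduction yields roughly 1 302 668 / 50 ≈ 26 000 stripes, matching the width of the exported videogram.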
I have also made a more browser friendly version (4096×720 pixels):
Besides just looking at the videogram, which I find quite fascinating in itself, such a display can also reveal various things that happened over time in the recording, e.g.:
the NRK logo is rendered as a white line throughout the entire videogram
tunnels are dark/black
stops at stations can be seen when there are long non-moving parts in the image
These are summarized in this little image excerpt:
Now, this videogram was just a test case; I am currently working on creating videograms of the 5-day recording of Hurtigruten…
For some upcoming blog posts on videograms, I will start by explaining the difference between a motiongram and a videogram. Both are temporal (image) representations of video content (as explained here), and are produced in almost the same way. The difference is that videograms start from the regular video image, while motiongrams start from a motion image.
So for a video of my hand like this:
we will get this horizontal videogram:
and this horizontal motiongram:
As you see, they both reflect the video content. The main difference is that the videogram preserves the original background colours, while the motiongram only reflects what changes between the frames (i.e. the motion).
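The contrast between the two can be sketched in numpy. This is a simplified re-implementation of the idea (not the actual VideoAnalysis code): both reductions average each frame across the image width into a one-pixel-wide column, but the motiongram first replaces each frame with the absolute difference from the previous one, so anything static disappears:

```python
import numpy as np

def videogram(frames):
    """Reduce each frame to a one-pixel-wide column by averaging
    across the image width, keeping the original colours."""
    return np.stack([f.mean(axis=1) for f in frames], axis=1).astype(np.uint8)

def motiongram(frames):
    """Same reduction, but applied to the motion image: the absolute
    difference between consecutive frames, so static content vanishes."""
    motion = [np.abs(b.astype(np.int16) - a.astype(np.int16)).astype(np.uint8)
              for a, b in zip(frames, frames[1:])]
    return np.stack([m.mean(axis=1) for m in motion], axis=1).astype(np.uint8)

# Five identical frames: the videogram keeps the colour,
# while the motiongram is completely black (no motion).
frames = [np.full((8, 8, 3), 100, dtype=np.uint8)] * 5
print(videogram(frames).shape)   # (8, 5, 3)
print(motiongram(frames).max())  # 0
```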
I just heard a talk called “Real-Time Synaesthetic Sonification of Traveling Landscapes” (PDF) by Tim Pohle and Peter Knees from the Department of Computational Perception (great name!) in Linz. They have made an application that creates music from a moving video camera. The implementation is based on grabbing a one-pixel-wide column from the video, plotting these columns and sonifying the resulting image. Interestingly enough, the images they get out of this (see below) are very close to the motiongrams and videograms I have been working on.