New department video

As I have mentioned previously, life has been quite hectic over the last year, as I became Head of Department at the same time as my second daughter was born. My research activities have therefore slowed down considerably, and so has the activity on this blog.

When it comes to blogging, I have focused on building up my Head of Department blog (in Norwegian), which I use to comment on things happening in the Department as well as relevant university policy issues. My long-term plan, though, is also to write some posts about being Head of Department on this English-language blog.

Today I would like to point to our new department video, targeted at recruiting new students:

The video is made by video journalist Camilla Smaadal, who is also responsible for a set of video presentations of our faculty. Most of these are in Norwegian, but we are planning to add English subtitles through YouTube.

The new video aims to give students an impression of all the cool things happening in our Department. With new music education programs popping up everywhere these days, we realise that we need to be more active in promoting the qualities of our university education. This video is one small step towards that goal.

New publication: Non-Realtime Sonification of Motiongrams

Today I will present the paper Non-Realtime Sonification of Motiongrams at the Sound and Music Computing Conference (SMC) in Stockholm. The paper is based on a new implementation of my sonomotiongram technique, optimised for non-realtime use. I presented a realtime version of the sonomotiongram technique at ACHI 2012 and a Kinect version, the Kinectofon, at NIME earlier this year. The new paper presents the ImageSonifyer application and a collection of videos showing how it works.

The paper presents a non-realtime implementation of the sonomotiongram method, a method for the sonification of motiongrams. Motiongrams are spatiotemporal displays of motion from video recordings, based on frame-differencing and reduction of the original video recording. The sonomotiongram implementation presented in this paper is based on turning these visual displays of motion into sound using FFT filtering of noise sources. The paper presents the application ImageSonifyer, accompanied by video examples showing the possibilities of the sonomotiongram method for both analytic and creative applications.

Jensenius, A. R. (2013). Non-realtime sonification of motiongrams. In Proceedings of Sound and Music Computing, pages 500–505, Stockholm.


@inproceedings{jensenius2013nonrealtime,
    Address = {Stockholm},
    Author = {Jensenius, Alexander Refsum},
    Booktitle = {Proceedings of Sound and Music Computing},
    Pages = {500--505},
    Title = {Non-Realtime Sonification of Motiongrams},
    Year = {2013}}
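
For those curious about what the sonification itself boils down to, here is a small Matlab sketch of the general idea: each column of the motiongram is used as a spectral envelope for FFT filtering of a noise burst. This is only a toy illustration of mine, with made-up values and no windowing or overlap, and not the ImageSonifyer code.

M = rand(64, 200);              % stand-in for a motiongram (rows = image axis, columns = time)
fs = 44100;                     % sample rate
hop = 1024;                     % audio samples generated per motiongram column
y = zeros(hop*size(M,2), 1);
for k = 1:size(M,2)
    % stretch the column to half the FFT size and use it as a magnitude envelope
    env = interp1(linspace(0,1,size(M,1)), M(:,k), linspace(0,1,hop/2)');
    X = fft(randn(hop,1));                    % spectrum of a noise burst
    X(1:hop/2) = X(1:hop/2) .* env;           % shape the positive frequencies
    X(end:-1:hop/2+2) = conj(X(2:hop/2));     % mirror so the output stays (almost) real
    y((k-1)*hop+1:k*hop) = real(ifft(X));
end
y = y / max(abs(y));            % normalise; listen with soundsc(y, fs)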

Calculating duration of QuickTime movie files

I have been doing video analysis of QuickTime movie files for several years, but have never really needed to use the time information in the files. For a recent project, however, I needed to get the timecode in seconds out of the files, and this turned out to be a little trickier than expected. Hence this little summary for other people who may be in the same situation.

It turns out that QuickTime uses something called time units for the internal representation of time, and this is also what is output in Jitter when I run my analyses on the files. The time unit is not very meaningful for humans, as it is a combination of frames and a timescale that defines the actual length of each time unit. Apple has posted some more technical information about Timecode Media Handler Functions, but it didn't really help me solve the problem easily.

Fortunately, there are a few threads on this topic on the Cycling '74 forum, including one on relating sfplay time to QuickTime time, and another on converting QuickTime units into time code. These threads helped me realise that calculating the duration of a movie file in seconds is as easy as dividing the duration in time units by the timescale. And, knowing the total number of frames, it is then also possible to calculate the frames per second of the file. Now that I know this, it is obvious, but I post the patch here in case there are others looking for this information.
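
In addition to the Max patch, the arithmetic can be sketched in a few lines of Matlab (the numbers below are just made-up example values):

durationUnits = 600600;                          % duration of the file in QuickTime time units
timescale     = 30000;                           % time units per second for this file
totalFrames   = 600;                             % total number of frames in the file

durationSeconds = durationUnits / timescale;     % 600600/30000 = 20.02 seconds
fps = totalFrames / durationSeconds;             % 600/20.02 = 29.97 frames per second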


My main problem, though, was that I already had a lot of analysis files (hundreds) with only the QuickTime time unit as the time reference. It was not an option to rerun these analyses (which had taken weeks to finish), so I had to figure out a way of retroactively calculating a more meaningful timecode (in seconds).

After fiddling around with the time series data for a little while, I realised that it is possible to use the difference between two time samples and, knowing the original fps of the movies (checked with QuickTime Player), calculate the correct duration in seconds. For my dataset there turned out to be only five different time unit durations, so it was fairly easy to write a small Matlab script for calculating the durations. This is the part of the script that handles the time calculation, in which a is a Matlab structure with my data, hence a.time(1) is the time code of the first sample in the dataset:

% Calculate time in seconds from the QuickTime time units, based on the
% observed difference between two consecutive time samples (time_diff)
switch time_diff
    case 1001
        t = (a.time-a.time(1))/(29.97*time_diff); % 29.97 fps
    case 2000
        t = (a.time-a.time(1))/(29.97*time_diff); % 59.94 fps
    case 3731
        t = (a.time-a.time(1))/(24*time_diff);    % 24 fps
    case 3733
        t = (a.time-a.time(1))/(24*time_diff);    % 24 fps
    case 3750
        t = (a.time-a.time(1))/(24*time_diff);    % 24 fps
    otherwise
        t = (a.time-a.time(1))/(29.97*time_diff); % assume 29.97 fps
        disp('!!! Unknown timecode.')
end

For new analyses, I will calculate the correct duration in seconds right away, but this hack has at least helped me solve the problem for my current data set.

Documentation of the NIME project at Norwegian Academy of Music

From 2007 to 2011 I had a part-time research position at the Norwegian Academy of Music in a project called New Instruments for Musical Exploration, with the acronym NIME. This project was also the reason why I ended up organising the NIME conference in Oslo in 2011.

The NIME project focused on creating an environment for musical innovation at the Norwegian Academy of Music, through exploring the design of new physical and electronic instruments. We were three people involved in the project: percussionist/electro-improviser Kjell Tore Innervik, composer Ivar Frounberg, and myself, and we had a great time creating and performing with a number of different new instruments.

A slogan for the project was to create instruments "for the many and for the few". We approached the "for the many" part through the creation of the Oslo Laptop Orchestra and the Oslo Mobile Orchestra, and through a series of music balls. The "for the few" part was targeted at creating specific instruments for professional musicians. Some of these were glass instruments, and here we also did some historical and analytical studies that were presented at NIME 2010.

As an artistic research project, we were careful to document all the processes we were involved in, and we ended up creating a final series of video documentaries reflecting on the process and the artistic outcomes. Kjell Tore has written more about all of this on his own web page. Here I would like to mention three short documentaries we created, reflecting on the roles of technologist, performer, and composer in the project. Creating these documentaries was in itself an interesting exercise. As an academic researcher, I am used to writing formal research papers about my findings. However, as artistic researchers in the NIME project, we all felt that a more discussion-based reflection was more suitable. The documentaries are, unfortunately, only in Norwegian, but we hope to be able to add English subtitles at some point.

Visualisations of a timelapse video

Yesterday I posted a blog entry about my TimeLapser application and how it was used to document the making of the sculpture Hommage til kaffeselskapene by my mother. The final timelapse video looks like this:

Now I have run this timelapse video through my VideoAnalysis application, to see what types of analysis material can come out of such a video.

The average image displays a "summary" of the entire video recording, somewhat similar to using an "open shutter" in traditional photography. This image makes it possible to see what has been moving, and what has remained still, throughout the entire sequence.

Average image
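
A rough Matlab sketch of the idea could look like this (only an illustration with a made-up file name, not the actual VideoAnalysis code):

v = VideoReader('timelapse.mov');            % hypothetical file name
nFrames = v.NumberOfFrames;
acc = zeros(v.Height, v.Width, 3);
for k = 1:nFrames
    acc = acc + double(read(v, k))/255;      % accumulate normalised frames
end
averageImage = acc / nFrames;                % the "open shutter" summary
imwrite(averageImage, 'average_image.png');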

The motion average image is somewhat similar to the average image, but it summarises the motion images throughout the sequence, that is, only the parts of the image that changed from frame to frame.

Motion average image
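
Again only as an illustration of the idea, a motion average image can be sketched by averaging the absolute frame differences instead of the frames themselves:

v = VideoReader('timelapse.mov');            % hypothetical file name
nFrames = v.NumberOfFrames;
prev = double(read(v, 1))/255;
acc = zeros(size(prev));
for k = 2:nFrames
    curr = double(read(v, k))/255;
    acc = acc + abs(curr - prev);            % motion image: what changed between frames
    prev = curr;
end
motionAverage = acc / (nFrames-1);
imwrite(motionAverage / max(motionAverage(:)), 'motion_average_image.png');  % scaled for visibility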

What I call a motion history image is the motion average image overlaid on a single frame from the original video. I typically create such motion history images using both the first and the last frame of the video, as can be seen below.

Motion history image, based on first video frame
Motion history image, based on last video frame
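
Continuing from the motion average sketch above, and still only as an illustration, such motion history images can be put together by blending the motion average image with a single frame (the equal weighting is an arbitrary choice of mine):

firstFrame = double(read(v, 1))/255;
lastFrame  = double(read(v, nFrames))/255;
imwrite(0.5*firstFrame + 0.5*motionAverage, 'motion_history_first.png');
imwrite(0.5*lastFrame  + 0.5*motionAverage, 'motion_history_last.png');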

Finally, I have also created both horizontal and vertical motiongrams of the timelapse video. The horizontal motiongram displays the vertical motion, which in this case is how the sculptor moved back and forth while sitting at the table. The edge of the table can be seen as the "stripe" running throughout the image.

Horizontal motiongram, displaying vertical motion

The vertical motiongram, on the other hand, displays horizontal motion, that is, how the artist moved sideways throughout the process. Here it is very interesting to note the rhythmic swaying pattern, as the sculptor moved back and forth in what seems to be a periodic manner.

Vertical motiongram, displaying horizontal motion
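
For completeness, here is also a simplified Matlab sketch of how such motiongrams can be calculated (again not the actual VideoAnalysis code): each motion image is reduced to a single column or row by averaging, and these strips are stacked over time.

v = VideoReader('timelapse.mov');                  % hypothetical file name
nFrames = v.NumberOfFrames;
prev = mean(double(read(v, 1))/255, 3);            % greyscale version of the first frame
mgHorizontal = zeros(v.Height, nFrames-1);         % time along x, keeps the vertical image axis
mgVertical   = zeros(nFrames-1, v.Width);          % time along y, keeps the horizontal image axis
for k = 2:nFrames
    curr = mean(double(read(v, k))/255, 3);
    motion = abs(curr - prev);                     % motion image
    mgHorizontal(:, k-1) = mean(motion, 2);        % collapse horizontally -> shows vertical motion
    mgVertical(k-1, :)   = mean(motion, 1);        % collapse vertically -> shows horizontal motion
    prev = curr;
end
imagesc(mgHorizontal), colormap(gray), axis off    % display the horizontal motiongram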

I also have some more motion data, which it will be interesting to study in more detail in Matlab.