Simple video editing in Ubuntu

I have been using Ubuntu as my main OS for the past year, but have often relied on my old MacBook for various things that I haven’t easily figured out how to do in Linux. One of those things is trimming video files non-destructively. This is quite simple to do in QuickTime, although Apple now forces you to save the file in a QuickTime container (.mov) even though the file still only contains MPEG-4 compression (H.264).

There are numerous Linux video editors available, but most of them offer far too many features and hence force you to re-compress the files. I have, however, found two solutions that work well.

The first one, ffmpeg, should be obvious, although I hadn’t realised that it could also do trimming. However, I often prefer GUI software, and I have found that Avidemux can do what I need very easily. Just open a file, add start and stop markers for the section to be trimmed, and click save. Unlike QuickTime, it also allows saving directly to MPEG-4 files (.mp4) without re-encoding the file.

There was only one thing I had to look up: the trim section needs to start on a keyframe in the video. This is quite obvious when you want to avoid re-encoding the file, but unfortunately Avidemux doesn’t explain this; it only gives an error message. The trick was to use the >> arrows to jump to the next keyframe, after which the file saved nicely.
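For the record, ffmpeg can do the same stream-copy trim from the command line. Here is a minimal sketch (the file names and timestamps are made up for illustration) that just builds the argument list; with -c copy, ffmpeg also snaps the cut to a keyframe rather than re-encoding:

```python
def trim_command(infile, outfile, start, end):
    """Return an ffmpeg argument list that trims without re-encoding."""
    return ["ffmpeg",
            "-ss", start,    # seek to the trim start
            "-to", end,      # stop at the trim end
            "-i", infile,
            "-c", "copy",    # copy the streams as-is, no re-compression
            outfile]

cmd = trim_command("input.mp4", "trimmed.mp4", "00:01:30", "00:02:45")
print(" ".join(cmd))
# → ffmpeg -ss 00:01:30 -to 00:02:45 -i input.mp4 -c copy trimmed.mp4
```

Running the printed command in a terminal should give the same keyframe-aligned result as the Avidemux workflow above.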

New department video

As I have mentioned previously, life has been quite hectic over the last year, as I became Head of Department at the same time as my second daughter arrived. So my research activities have slowed down considerably, and so has the activity on this blog.

When it comes to blogging, I have focused on building up my Head of Department blog (in Norwegian), which I use to comment on things happening in the Department as well as relevant (university) political issues. My long-term plan, though, is to also write some posts about being Head of Department on this English-language blog.

Today I would like to point to our new department video, targeted at recruiting new students:

The video is made by video journalist Camilla Smaadal, who is also responsible for a set of video presentations of our faculty. Most of these are in Norwegian, but we are planning to add English subtitles through YouTube.

The new video aims to give students an impression of all the cool things happening in our Department. A lot of new music education programs are popping up everywhere these days, so we realise that we need to be more active in promoting the qualities of our university education. This video is one little step towards that goal.

New publication: Non-Realtime Sonification of Motiongrams

Today I will present the paper Non-Realtime Sonification of Motiongrams at the Sound and Music Computing Conference (SMC) in Stockholm. The paper is based on a new implementation of my sonomotiongram technique, optimised for non-realtime use. I presented a realtime version of the sonomotiongram technique at ACHI 2012 and a Kinect version, the Kinectofon, at NIME earlier this year. The new paper presents the ImageSonifyer application and a collection of videos showing how it works.

Non-Realtime Sonification of Motiongrams


The paper presents a non-realtime implementation of the sonomotiongram method for the sonification of motiongrams. Motiongrams are spatiotemporal displays of motion from video recordings, based on frame-differencing and reduction of the original video recording. The sonomotiongram implementation presented in this paper is based on turning these visual displays of motion into sound using FFT filtering of noise sources. The paper presents the application ImageSonifyer, accompanied by video examples showing the possibilities of the sonomotiongram method for both analytic and creative applications.

Jensenius, A. R. (2013). Non-realtime sonification of motiongrams. In Proceedings of Sound and Music Computing, pages 500–505, Stockholm.


@inproceedings{jensenius2013nonrealtime,
    Address = {Stockholm},
    Author = {Jensenius, Alexander Refsum},
    Booktitle = {Proceedings of Sound and Music Computing},
    Pages = {500--505},
    Title = {Non-Realtime Sonification of Motiongrams},
    Year = {2013}}
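To give an idea of the FFT-filtering approach described in the abstract, here is a toy sketch of my own (not the ImageSonifyer code): each column of a motiongram is treated as a filter shape that is imposed on a frame of white noise by multiplying its spectrum bin by bin. The pure-Python DFT and the 16-bin column are only for illustration.

```python
import cmath
import random

def dft(x):
    """Naive discrete Fourier transform (fine for a tiny illustration)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def sonify_column(column, noise):
    """Filter one noise frame with one motiongram column.

    Each pixel row of the column acts as a gain for one frequency bin,
    so bright rows (lots of motion) let noise through at that frequency.
    """
    spectrum = dft(noise)
    shaped = [s * gain for s, gain in zip(spectrum, column)]
    return idft(shaped)

random.seed(0)
N = 16
noise = [random.uniform(-1, 1) for _ in range(N)]
column = [1.0 if 3 <= k <= 5 else 0.0 for k in range(N)]  # motion in a narrow band
frame = sonify_column(column, noise)  # one short burst of band-limited noise
```

A column that is all zeros (no motion) produces silence, while a column with several bright regions produces noise in several bands at once, which is what makes the method useful for both analysis and sound design.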

Calculating duration of QuickTime movie files

I have been doing video analysis on QuickTime movie files for several years, but have never really needed to use the time information in the files. For a project, I now needed to get the timecode in seconds out of the files, and this turned out to be a little trickier than first expected. Hence this little summary for other people who may be in the same situation.

It turns out that QuickTime uses something called time units for the internal representation of time, and this is also what Jitter outputs when I run my analyses on the files. The time unit is not very meaningful for humans, as it is a combination of frames and a timescale that defines the actual length of each time unit. Apple has posted some more technical information about Timecode Media Handler Functions, but it didn’t really help me solve the problem easily.

Fortunately, there are a few threads on this topic on the Cycling ’74 forum, including one on relating sfplay time to quicktime time, and another on converting quicktime Units into time code. These threads helped me realise that calculating the duration of a movie file in seconds is as easy as dividing the duration in time units by the timescale. And, knowing the total number of frames, it is then also possible to calculate the frames per second of the file. Now that I know this, it is obvious, but I post the patch here in case there are others looking for this information.
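The arithmetic can be summed up in two one-liners. The numbers below are made-up examples: a timescale of 30000 is typical for 29.97 fps NTSC material, and I picked a duration of 300300 time units so the figures come out evenly.

```python
def movie_seconds(duration_units, timescale):
    """Duration in seconds = duration in time units / timescale."""
    return duration_units / timescale

def movie_fps(total_frames, duration_units, timescale):
    """Frames per second = total frames / duration in seconds."""
    return total_frames / movie_seconds(duration_units, timescale)

seconds = movie_seconds(300300, 30000)  # → 10.01 seconds
fps = movie_fps(300, 300300, 30000)     # → ≈ 29.97 fps
```

The same two divisions are what the Max patch does with the values reported by Jitter.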


My main problem, though, was that I already had a lot of analysis files (hundreds) with only the QuickTime time unit as the time reference. It was not an option to rerun these analyses (which had taken weeks to finish), so I had to figure out a way of retroactively calculating a more meaningful timecode (in seconds).

After fiddling around with the time series data for a little while, I realised that it is possible to use the difference between two time samples and, knowing the original fps of the movies (using QuickTime Player to check the fps), calculate the correct duration in seconds. For my dataset there turned out to be only five different time unit durations, so it was fairly easy to write a small Matlab script for calculating the durations. This is the part of the script that handles the time calculation, in which a is a Matlab structure with my data; hence a.time(1) is the timecode of the first sample in the dataset.

    % time_diff is the spacing between two time samples, in QuickTime time units
    time_diff = a.time(2) - a.time(1);
    switch time_diff
        case 1001
            t = (a.time-a.time(1))/(29.97*time_diff); % 29.97 fps
        case 2000
            t = (a.time-a.time(1))/(29.97*time_diff); % 59.94 fps
        case 3731
            t = (a.time-a.time(1))/(24*time_diff);    % 24 fps
        case 3733
            t = (a.time-a.time(1))/(24*time_diff);    % 24 fps
        case 3750
            t = (a.time-a.time(1))/(24*time_diff);    % 24 fps
        otherwise
            t = (a.time-a.time(1))/(29.97*time_diff); % fall back to 29.97 fps
            disp('!!! Unknown timecode.')
    end

For new analyses, I will calculate the correct duration in seconds right away, but this hack has at least helped solve the problem for my current dataset.