Method chapter freely available

I am a big supporter of Open Access publishing, but for various reasons some of my publications are not openly available by default. This is the case for the chapter Methods for Studying Music-Related Body Motion that I have contributed to the Springer Handbook of Systematic Musicology.

I am very happy to announce that the embargo on the book ran out today, which means that a pre-print version of my chapter is finally freely available in UiO’s digital repository. The chapter summarizes my experiences with music-related motion analysis, and I often recommend it to students, so it is great that it can now be downloaded from anywhere.

Abstract

This chapter presents an overview of some methodological approaches and technologies that can be used in the study of music-related body motion. The aim is not to cover all possible approaches, but rather to highlight some of the ones that are more relevant from a musicological point of view. This includes methods for video-based and sensor-based motion analyses, both qualitative and quantitative. It also includes discussions of the strengths and weaknesses of the different methods, and reflections on how the methods can be used in connection with other types of data, such as physiological or neurological data, symbolic notation, sound recordings and contextual data.

Pixel array images of long videos in FFmpeg

Continuing my explorations of FFmpeg for video visualization, today I came across this very nice blog post on creating “pixel array” images of videos. The idea is to reduce every single frame to just one pixel, and to plot these pixels next to each other on a line. Of course, I wanted to try this out myself.
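One way to do this with FFmpeg (a minimal sketch of my own, not necessarily how the post above does it, and assuming a file called dance.mp4) is to sample frames at a regular interval, scale each one down to a single pixel, and tile the pixels in a single row:

ffmpeg -i dance.mp4 -frames 1 -vf "select=not(mod(n\,25)),scale=1:1,tile=640x1,scale=640:60:flags=neighbor" pixelarray.png

Here every 25th frame is reduced to one pixel, 640 of these are tiled on a line, and the last scale step stretches the one-pixel-high strip to 60 pixels so it is easier to see. The sampling interval should be matched to the length of the file, so that the 640 samples span the whole video.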

I find that creating motiongrams or videograms is a good way to visualize the content of videos. They are abstract representations, but still reveal some of what is going on. However, for longer videos, motiongrams may be a bit tricky to look at, and they also take a lot of time to generate (hours, or even days). For that reason I was excited to see how pixel array images would work on some of my material.

First I tried it on my “standard” dance video:

which gives this pixel array image:

Pixel array image (640 pixels wide) of the dance video above.

Yes, that is mainly a blue line, since the average colour of each frame is blue throughout the video.

Then I tried with one of the videos from the AIST Dance Video Database:

which results in this pixel array image:

Pixel array image (640 pixels wide) of the dance video above.

And, yes, that is mainly a gray line… I realize that this method does not work very well with single-shot videos.

To try something very different, I also decided to make a pixel array image of Bergensbanen, a 7-hour TV production of the train between Oslo and Bergen. I made videograms of this recording some years ago, which turned out to be quite nice. So I was excited to see how a pixel array image would work. The end result looks like this (1920 pixels wide):

Pixel array image (1920 pixels wide) of the 7-hour TV production Bergensbanen

As you can see, not much is changing, but that also reflects the slowness of the train ride. While I originally thought this would be a smart representation, I still think that my videograms were more informative, such as this one:

Videogram of Bergensbanen

The big difference between the two visualizations is that in the videogram each frame is reduced to a one-pixel-wide column, preserving the vertical information in the image. The pixel array image, on the other hand, only displays a single pixel per frame. That said, it took only some minutes to generate the pixel array image, while I recall spending several days on generating the videogram.

To sum up, I think that pixel array images are probably most useful for movies and other video material with lots of changes throughout, which suit such a heavy reduction technique better. For my videos, which are always recorded with single-shot, stationary cameras, motiongrams and videograms may still be the preferred solution.

VideoAnalysis v2.0

I am happy to announce a new version of VideoAnalysis, a standalone application for OSX and Windows for creating visualizations and extracting motion features from video files.

The GUI of VideoAnalysis v2.0

VideoAnalysis was developed as a standalone version of the Musical Gestures Toolbox. I began working on the toolbox back in 2004, as a collection of modules for Max/MSP/Jitter. Later, some people asked me to make a standalone application with some of the core functionality. It was primarily developed for music researchers, but is also used in sports, dance, healthcare, architecture, and interaction design.

I have less time for development myself these days, so most of the work on the new release has been done by Bálint Laczkó and Aleksander Tidemann. Thanks!

So for anyone reading this: please try out the new version. And if you have problems and/or find any bugs, please report them in the tracker.

Creating different types of keyframe displays with FFmpeg

In some recent posts I have explored the creation of motiongrams and average images, multi-exposure displays, and image masks. In this blog post I will explore different ways of generating keyframe displays using the very handy command line tool FFmpeg.

As in the previous posts, I will use a contemporary dance video from the AIST Dance Video Database as an example:

The first attempt is to create a 3×3 grid image by just sampling frames from the original video. I spent some time exploring different ways of doing this. It is possible to do it with a one-liner:

ffmpeg -ss 00:00:05 -i dance.mp4 -frames 1 -vf "select=not(mod(n\,200)),scale=495:256,tile=3x3" tile.jpg

The problem with this approach, and many similar ones that I found by googling around, is that it samples frames at a fixed interval. In the above code it picks every 200th frame, which gives this image:

The problem is that the image only contains information about the first 1600 frames, or more specifically frames 0, 200, 400, 600, 800, 1000, 1200, 1400 and 1600. I want to include frames that represent the whole video.

I see that many people create such displays by sampling based on scene changes in the video. There are two problems with this. First, it requires that there are scene changes in the video. That is usually not the case in the videos I study, which are primarily recorded with a stationary camera in which only the “foreground” changes. The second problem with sampling only “salient” frames is that we lose information about the temporal unfolding of the video file. From an analysis point of view, it is actually quite useful to know more or less when things happened in the video. That is not so easy if the sampling is uneven.

I was therefore happy to find a nice script made by Martin Sikora, which is based on looking up the duration of the file and using this to calculate which frames to export. Running this script on the original video gives this image:

The 9 frames in the display above reveal that there is little dance in the first third of the video file (you can see the arm of the dancer entering in the third image). The display also shows how the dancer moved around in the space. It is possible to get some idea of her spatial distribution, but there is little information about her actual motion throughout the sequence. I was therefore curious to try making such a grid-based display from a history video, which shows more of the actual motion.
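For anyone who wants to reproduce this without the original script, here is a minimal sketch of the same duration-based idea (my own reconstruction, not Sikora’s script; it assumes bash, ffprobe, and an input file called dance.mp4):

# Look up the duration (in seconds) of the video
DUR=$(ffprobe -v error -show_entries format=duration -of csv=p=0 dance.mp4)

# Export one frame from the midpoint of each of 9 equal segments
for i in $(seq 0 8); do
  TS=$(echo "$DUR * ($i + 0.5) / 9" | bc -l)
  ffmpeg -y -v error -ss "$TS" -i dance.mp4 -frames 1 -vf scale=495:256 frame_$i.png
done

# Tile the 9 exported frames into a 3x3 grid
ffmpeg -y -v error -i frame_%d.png -vf tile=3x3 grid.jpg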

It is possible to make (motion) history videos in both the Matlab and Python versions of the Musical Gestures Toolbox, but today I was curious as to whether it could be done simply with FFmpeg. And it turns out to be quite simple using a filter called tmix:

ffmpeg -i dance.mp4 -filter:v tmix=frames=30:weights="10 1 1" dance_tmix30.mp4

I played around for a while with the settings before ending up with these. Here I average over 30 frames (which is half a second for this 60fps video). I also use the weights option to give preference to the current frame. This makes it easier to follow the dancer, as the trajectories of past motion become more blurred.
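For comparison, leaving out the weights makes tmix compute a plain average of the 30 frames, which blurs the dancer’s current position into the trail (the output filename is my own):

ffmpeg -i dance.mp4 -filter:v tmix=frames=30 dance_tmix30_avg.mp4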

Running the above grid-script on this history video results in a keyframe display that shows more of the motion happening in the frames in question. This makes it possible to see, for example, when she moved more than in other parts of the video.

I am quite happy with the result above, but generating it is not particularly fast. Creating the history video is time-consuming, since it has to process all the frames of the entire video. I therefore tested speeding up the video 8 times, using this command (the -an flag removes the audio):

ffmpeg -i dance.mp4 -filter:v "setpts=0.125*PTS" -an output8x.mp4

Creating the history video from this shorter file then goes quite a bit faster, and results in this high-speed history video:
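Concretely, that means running the same tmix filter on the sped-up file (a sketch; the output filename is my own):

ffmpeg -i output8x.mp4 -filter:v tmix=frames=30:weights="10 1 1" -an output8x_history.mp4

Note that averaging over 30 frames of the 8x video corresponds to a four-second window in the original recording, which is part of why the trails look different from the first history video.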

Running this through the grid-script gives a keyframe display that is both similar to and different from the one above:

It is quite a lot quicker to generate, and also gives more information about the motion sequence.

The conclusion is that it is, indeed, possible to make a lot of interesting video visualizations using “only” FFmpeg. Several of these scripts are also much faster than the scripts I have previously used in Matlab and Python. So I will definitely continue to explore FFmpeg, and look at how it can be integrated with the other toolboxes.

New paper: Test–retest reliability of computer-based video analysis of general movements in healthy term-born infants

I have for several years been collaborating with researchers at NTNU in Trondheim on developing video analysis tools for studying the movement patterns of infants. This has resulted in several papers, international testing (and a TV documentary). Now there is a new paper out, reporting high test–retest reliability of the computer-based video analysis method:

Reference:

Valle, Susanne Collier, Ragnhild Støen, Rannei Sæther, Alexander Refsum Jensenius, and Lars Adde. “Test–retest Reliability of Computer-Based Video Analysis of General Movements in Healthy Term-Born Infants.” Early Human Development 91, no. 10 (October 2015): 555–58. doi:10.1016/j.earlhumdev.2015.07.001.

Highlights:

  • Test–retest reliability of computer-based video analysis of general movements.
  • Results showed high reliability in healthy term-born infants.
  • There was a significant association between computer-based video analysis and the temporal organization of fidgety movements.