Sonification of motiongrams

I have made a new Jamoma module for sonification of motiongrams called jmod.sonifyer~. From a live video input, the module generates a motion image, which is then turned into a motiongram. The motiongram is "read" as if it were a spectrogram and used as the source of the sound synthesis. The result is a sonification of the original motion, together with its visualisation in the motiongram.
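The module itself is built in Max/Jitter, but the core idea of reading an image as a spectrogram can be sketched in a few lines of numpy. This is not the jmod.sonifyer~ implementation, just a minimal illustration (function name and parameters are my own): each column of the motiongram is treated as a magnitude spectrum and resynthesised by inverse FFT with random phase and overlap-add.

```python
import numpy as np

def sonify_motiongram(motiongram, hop=512):
    """Treat each column of a (freq_bins x time) motiongram as a
    magnitude spectrum and resynthesise audio by inverse FFT.
    The phase is unknown, so random phase is used; frames are
    windowed and overlap-added. Returns a normalised mono signal."""
    n_bins, n_frames = motiongram.shape
    n_fft = 2 * (n_bins - 1)                  # rfft bin count -> FFT size
    out = np.zeros(hop * (n_frames - 1) + n_fft)
    window = np.hanning(n_fft)
    rng = np.random.default_rng(0)
    for t in range(n_frames):
        mag = motiongram[:, t]
        phase = np.exp(1j * rng.uniform(0, 2 * np.pi, n_bins))
        frame = np.fft.irfft(mag * phase, n=n_fft)
        out[t * hop : t * hop + n_fft] += frame * window
    return out / (np.abs(out).max() + 1e-12)  # normalise to [-1, 1]
```

Bright regions in the motiongram (i.e. motion) end up as energy at the corresponding "frequencies", which is what makes the result sound like the motion looks.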

See the demonstration video below:

The module is available from the Jamoma source repository, and will probably make it into an official release at some point.

New motiongram features

Inspired by the work Static no. 12 by Daniel Crooks, which I saw at the Sydney Biennale a couple of weeks ago, I have added the option of scanning a single column in the jmod.motiongram% module in Jamoma. Here is a video that shows how this works in practice:

About motiongrams

A motiongram is a way of displaying motion (e.g. human motion) in the time domain, somewhat similar to how we are used to working with time representations of audio (e.g. waveform displays and sono/spectrograms). The method is based on creating a motion image, performing a matrix reduction on it, and plotting the resultant 1×n or n×1 matrices over time, either horizontally or vertically.
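The reduction described above can be sketched as follows. This is not the Jamoma/Jitter implementation, just a numpy illustration under my own naming, assuming greyscale frames in a (time, height, width) array with values in [0, 1]:

```python
import numpy as np

def motiongram(frames, direction="vertical", threshold=0.05):
    """frames: (time, height, width) greyscale array in [0, 1].
    The motion image is the absolute frame difference, with a simple
    noise threshold. Each motion image is then reduced to a single
    column (averaging over width) or row (averaging over height),
    and the reduced vectors are stacked over time."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    diffs[diffs < threshold] = 0.0          # crude noise reduction
    if direction == "vertical":             # collapse width -> (time, height)
        return diffs.mean(axis=2)
    return diffs.mean(axis=1)               # collapse height -> (time, width)
```

A vertical motiongram preserves the vertical position of motion in the frame; a horizontal one preserves the horizontal position.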

Comparison of motiongrams

Below is an image showing three different types of motiongrams:

  1. Single line scan based on regular image
  2. Average scan based on regular image
  3. Average scan based on motion image

All of them are interesting in their own way, so the choice between them will have to be made according to the type of material you are working with.
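For comparison, the three display types can be sketched side by side in numpy (again my own illustrative code, not the module itself; the default column choice is an assumption):

```python
import numpy as np

def scan_variants(frames, column=None):
    """frames: (time, height, width) greyscale array.
    Returns the three display types:
    1) single-column scan of the regular image,
    2) per-frame column average of the regular image (videogram),
    3) per-frame column average of the motion image (motiongram)."""
    T, H, W = frames.shape
    col = W // 2 if column is None else column        # centre column by default
    single = frames[:, :, col]                        # (time, height)
    videogram = frames.mean(axis=2)                   # (time, height)
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=2)  # (time-1, height)
    return single, videogram, motion
```

The single-column scan is what the Crooks-inspired option does: it keeps full image detail along one line, whereas the averaged variants summarise the whole frame.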


Presenting mocapgrams

Earlier today I held the presentation “Reduced Displays of Multidimensional Motion Capture Data Sets of Musical Performance” at the ESCOM conference in Jyväskylä, Finland. The presentation included an overview of different approaches to visualization of music-related movement, and also our most recent method: mocapgrams.

While motiongrams are reduced displays created from video files, mocapgrams are intended to work in a similar way, but created from motion capture data. They are conceptually similar, but otherwise quite different in the way they are generated. In mocapgrams we map XYZ coordinates of motion capture markers into RGB colours. Thus the end result gives an impression of how the markers moved in 3D-space over time, as seen below:
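The XYZ-to-RGB mapping can be sketched as follows. This is only an illustrative numpy version under my own naming, not our actual plotting code; the global min/max normalisation is an assumption:

```python
import numpy as np

def mocapgram(positions):
    """positions: (time, markers, 3) array of XYZ marker coordinates.
    Normalise each axis to [0, 1] and use it directly as an RGB
    channel, giving an image of shape (markers, time, 3): one
    coloured row per marker, with colour encoding 3D position."""
    p = positions.astype(float)
    pmin = p.min(axis=(0, 1), keepdims=True)
    pmax = p.max(axis=(0, 1), keepdims=True)
    rgb = (p - pmin) / (pmax - pmin + 1e-12)
    return rgb.transpose(1, 0, 2)            # (markers, time, rgb)

def diff_mocapgram(mg):
    """Frame-difference along time, showing how the mocapgram changes."""
    return np.abs(np.diff(mg, axis=1))
```

Static markers appear as rows of constant colour, while movement shows up as colour changes along the row (and as bright regions in the frame-differenced version).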

Example of a mocapgram generated from a 3D accelerometer recording. The XYZ values are mapped into an RGB colour space. The bottom image is generated by frame differencing the top one, and therefore shows how the regular mocapgram changes over time.
Still from the video recorded from the piano study.

Below is an example of two different types of mocapgrams (as well as a motiongram and spectrogram) generated from a motion capture recording of myself playing the piano (recorded at the IDMIL, McGill University).

Different plots from a short piano recording. Mocapgram2 is a frame-difference mocapgram, while mocapgram1 is a regular mocapgram. The motiongram is generated from the video recording, and the spectrogram from the sound.

There is no paper published based on the presentation, but the PDF of the presentation summarizes the main idea.

Citation: Jensenius, Alexander Refsum, Ståle Skogstad, Kristian Nymoen, Jim Torresen, and Mats Erling Høvin. “Reduced Displays of Multidimensional Motion Capture Data Sets of Musical Performance.” In Proceedings of the Conference of the European Society for the Cognitive Sciences of Music. Jyväskylä, Finland, 2009.


To allow everyone to watch their own synchronised spectrograms and motiongrams, I have made a small application called AudioVideoAnalysis.

It currently has the following features:

  • Draws a spectrogram from any connected microphone
  • Draws a motiongram/videogram from any connected camera
  • Press the escape key to toggle fullscreen mode
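The application itself is a Max/MSP patch, but the spectrogram it draws corresponds to a standard short-time Fourier magnitude display. A minimal numpy sketch of that computation (my own illustration, with assumed parameter values, not the patch's code):

```python
import numpy as np

def spectrogram_frames(signal, n_fft=1024, hop=512):
    """Magnitude spectrogram of a mono signal, returned as a
    (frames, bins) array: each row is the windowed FFT magnitude
    of one hop-spaced slice of the input."""
    window = np.hanning(n_fft)
    n = 1 + (len(signal) - n_fft) // hop
    frames = np.stack(
        [signal[i * hop : i * hop + n_fft] * window for i in range(n)]
    )
    return np.abs(np.fft.rfft(frames, axis=1))
```

Scrolling such rows across the screen as they are computed gives the running display the application shows next to the motiongram.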

Built with Max/MSP by Cycling ’74 on OS X 10.5. I will probably make a Windows version at some point, but haven’t gotten that far yet.

A snapshot of the main interface:

The main window of the AudioVideoAnalysis application

Fullscreen mode can be toggled with the escape key:

Fullscreen mode in the AudioVideoAnalysis application

There are, obviously, lots of things that can and will be improved in future versions. Please let me know of any problems you experience with the application, and whether there is anything in particular you think should be included.

Sonification of Traveling Landscapes

I just heard a talk called “Real-Time Synaesthetic Sonification of Traveling Landscapes” (PDF) by Tim Pohle and Peter Knees from the Department of Computational Perception (great name!) in Linz. They have made an application that creates music from a moving video camera. The implementation is based on grabbing a one-pixel-wide column from the video, plotting these columns over time and sonifying the resulting image. Interestingly enough, the images they get out of this (see below) are very close to the motiongrams and videograms I have been working on.
