Sonification of motiongrams

A couple of days ago I presented the paper “Motion-sound Interaction Using Sonification based on Motiongrams” at the ACHI 2012 conference in Valencia, Spain. The paper is based on a Jamoma module that I developed more than a year ago, but due to other activities it took a while before I managed to write it up.

See below for the full paper and video examples.

The Paper

Abstract: The paper presents a method for sonification of human body motion based on motiongrams. Motiongrams show the spatiotemporal development of body motion by plotting average matrices of motion images over time. The resultant visual representation resembles spectrograms, and is treated as such by the new sonifyer module for Jamoma for Max, which turns motiongrams into sound by reading a part of the matrix and passing it on to an oscillator bank. The method is surprisingly simple, and has proven to be useful for analytical applications and in interactive music systems.

Full reference: A. R. Jensenius. Motion-sound interaction using sonification based on motiongrams. In ACHI 2012: The Fifth International Conference on Advances in Computer-Human Interactions, pages 170–175. IARIA, 2012.

@inproceedings{Jensenius:2012d,
    Author = {Jensenius, Alexander Refsum},
    Booktitle = {ACHI 2012: The Fifth International Conference on Advances in Computer-Human Interactions},
    Pages = {170--175},
    Publisher = {IARIA},
    Title = {Motion-sound Interaction Using Sonification based on Motiongrams},
    Year = {2012}}
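
To give a feel for the method, here is a rough Python sketch of the core idea. The actual sonifyer is implemented as a Jamoma patch in Max, so the function name, the linear frequency mapping and the normalisation below are all my own illustrative assumptions. Each column of the motiongram is treated as one spectral frame: the row position of a pixel decides the frequency of an oscillator, and its intensity the amplitude.

    import numpy as np

    def sonify_motiongram(motiongram, sr=44100, duration=5.0,
                          fmin=100.0, fmax=8000.0):
        """Read a motiongram (rows x time) as if it were a spectrogram
        and resynthesise it with a bank of sine oscillators."""
        n_rows, n_cols = motiongram.shape
        freqs = np.linspace(fmax, fmin, n_rows)   # top row -> highest pitch
        amps = motiongram / max(motiongram.max(), 1e-9)
        n = int(sr * duration / n_cols)           # samples per column
        t = np.arange(n) / sr
        phase = np.zeros(n_rows)                  # keep oscillators phase-continuous
        out = []
        for col in range(n_cols):
            frame = (amps[:, col, None] *
                     np.sin(phase[:, None] + 2 * np.pi * freqs[:, None] * t)).sum(axis=0)
            out.append(frame)
            phase = (phase + 2 * np.pi * freqs * n / sr) % (2 * np.pi)
        return np.concatenate(out)

A quick sanity check is to run it on a synthetic motiongram such as np.eye(200), which should produce a single tone sweeping from high to low.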

Video examples

Video 1: A screencast demonstrating the jmod.sonifyer~ module.

Video 2: Examples of sonification of some basic movement patterns: up-down, sideways, diagonal and circular.

Video 3: An attempt at sonifying the two axes at the same time. Here both horizontal and vertical motiongrams are created from the same video recording, and the sonifications of the two motiongrams are mapped to the left and right audio channels, respectively.

Video 4: Examples of how filtering and thresholding of the motion image influence the final sonic result (see the sketch after this video list). The recordings were done at high speed (200 fps) and played back at 25 fps.

Video 5: Sonification of a short violin improvisation (courtesy of Victoria Johnson).

Video 6: Sonification of a piece by a French-Canadian fiddler (courtesy of Erwin Schoonderwaldt).

Video 7: Sonification of free dance to music.

Video 8: Soniperforma: Performing with the sonifyer at Biermannsgården in Oslo on 18 December 2010. The performance was improvised, using only video effects to change the sonic quality.
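
As Video 4 suggests, this preprocessing matters a great deal: noise left in the motion image turns directly into broadband noise in the sonification. Here is a minimal sketch of the kind of thresholding and filtering involved (my own simplified take in Python, not the actual Jitter processing chain):

    import numpy as np
    from scipy.ndimage import median_filter

    def motion_image(frame, prev, threshold=0.1):
        """Absolute frame difference with simple noise removal: pixels
        below the threshold are zeroed, and a median filter removes the
        remaining speckle before the image reaches the sonification."""
        diff = np.abs(frame.astype(float) - prev.astype(float)) / 255.0
        diff[diff < threshold] = 0.0        # thresholding
        return median_filter(diff, size=3)  # filtering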

Concert: Victoria Johnson

Together with Victoria Johnson I have developed Transformation, a piece in which we use video analysis to control sound selection and spatialisation. We have been developing the setup and the piece over the last couple of years, and have performed variations of it at MIC, the Opera house and at the music academy last year.

The piece will be performed again today, Monday 28 March 2011, at 19:00 at the Norwegian Academy of Music. The concert is the final performance of Victoria’s three-year research fellowship in the arts (the equivalent of a PhD).

Sonification of motiongrams

I have made a new Jamoma module for sonification of motiongrams, called jmod.sonifyer~. From a live video input, the module generates a motion image, which is in turn transformed into a motiongram. The motiongram is then used as the source for the sound synthesis, “read” as if it were a spectrogram. The result is a sonification of the original motion, alongside the visualisation in the motiongram.
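
Where the sketch earlier on this page worked on a finished motiongram, a live version has to process the video frame by frame. Here is a schematic of the data flow, again just an illustration in Python (the real module runs as a Max/Jitter patch):

    import numpy as np

    def sonifyer(frames, sr=44100, slice_dur=0.05, fmin=100.0, fmax=8000.0):
        """Schematic data flow: video frame -> motion image -> motiongram
        column -> one slice of sound from an oscillator bank."""
        t = np.arange(int(sr * slice_dur)) / sr
        prev = None
        for frame in frames:  # grayscale frames, shape (H, W)
            if prev is not None:
                motion = np.abs(frame.astype(float) - prev.astype(float))
                col = motion.mean(axis=1)          # one motiongram column
                freqs = np.linspace(fmax, fmin, col.size)
                amps = col / max(col.max(), 1e-9)
                yield (amps[:, None] *
                       np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
            prev = frame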

See the demonstration video below:

The module is available from the Jamoma source repository, and will probably make it into an official release at some point.

GDIF recording and playback

Kristian Nymoen has updated the Jamoma modules for recording and playing back GDIF data in Max 5. The modules are based on the FTM library (beta 12; betas 13–15 do not work) and can be downloaded here.

We have also made three use cases available in the (soon to be expanded) fourMs database: a simple mouse recording, a sound saber example and a short piano example. See the video below for a quick demonstration of how it works:

New motiongram features

Inspired by the work Static no. 12 by Daniel Crooks, which I saw at the Sydney Biennale a couple of weeks ago, I have added the option of scanning a single column in the jmod.motiongram% module in Jamoma. Here is a video that shows how this works in practice:

About motiongrams

A motiongram is a way of displaying motion (e.g. human motion) in the time domain, somewhat similar to how we are used to working with time representations of audio (e.g. waveform displays, sonograms and spectrograms). The method is based on creating a motion image, performing a matrix reduction on it, and plotting the resultant 1xn or nx1 matrices over time, either horizontally or vertically.
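
In code, the reduction step could look roughly like the sketch below. This is a simplified illustration using plain averaging, with my own function names; the actual module is a Max/Jitter patch and offers more options.

    import numpy as np

    def motiongram(motion_images, direction="horizontal"):
        """Reduce each motion image to a single column (or row) and
        stack the reductions over time."""
        if direction == "horizontal":
            # each H x W frame -> an H x 1 column; time runs left to right
            return np.stack([m.mean(axis=1) for m in motion_images], axis=1)
        # each H x W frame -> a 1 x W row; time runs top to bottom
        return np.stack([m.mean(axis=0) for m in motion_images], axis=0)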

Comparison of motiongrams

Below is an image showing three different types of motiongrams:

  1. Single line scan based on regular image
  2. Average scan based on regular image
  3. Average scan based on motion image

I think all of them are interesting, so which one to use will depend on the type of material you are working with.
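
In terms of the sketches above, the three variants differ only in how each frame is reduced to a single column. Here is a rough approximation (the mode numbers match the list above; everything else is my own simplification):

    import numpy as np

    def reduce_frame(frame, prev=None, mode=3, scan_column=None):
        """Three ways of reducing one grayscale frame to a motiongram
        column. Mode 3 needs the previous frame for the motion image."""
        if mode == 1:  # single line scan of the regular image
            x = frame.shape[1] // 2 if scan_column is None else scan_column
            return frame[:, x]
        if mode == 2:  # average scan of the regular image
            return frame.mean(axis=1)
        # average scan of the motion image (frame difference)
        return np.abs(frame.astype(float) - prev.astype(float)).mean(axis=1)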