Record videos of sonification

I got a question the other day about how to record a sonified video file based on my sonification module for Jamoma for Max. I wrote about my first experiments with the sonifyer module here, and also published a paper at this year’s ACHI conference about the technique.

It is quite straightforward to record a video file with the original video + audio using the jit.vcr object in Max. Below is a screenshot from a patch (sonifyer-recorder.maxpat) doing this:

The most important part here is to remember to input a 4-plane matrix to jit.vcr, otherwise it will complain. For this I use the little jcom.luma2rgb% component, which will automagically convert the video stream from 1 to 4 planes, if needed. Here I have also combined the original video, the motion image and the motiongram into one image that I record, alongside the sonification of the motion. The output from this patch looks something like this:
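For readers who want the gist of the plane conversion without opening Max, here is a minimal NumPy sketch of what jcom.luma2rgb% does, assuming frames arrive as 8-bit greyscale arrays. This is an illustration of the idea, not the Jamoma implementation:

import numpy as np

def luma_to_argb(luma):
    # Expand a 1-plane greyscale frame (H, W, uint8) to the 4-plane
    # ARGB layout that jit.vcr expects: alpha first, then the luma
    # value copied into R, G and B.
    argb = np.empty(luma.shape + (4,), dtype=np.uint8)
    argb[..., 0] = 255               # alpha: fully opaque
    argb[..., 1:] = luma[..., None]  # R, G, B all carry the luma value
    return argb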

Sonification of motiongrams

A couple of days ago I presented the paper “Motion-sound Interaction Using Sonification based on Motiongrams” at the ACHI 2012 conference in Valencia, Spain. The paper is actually based on a Jamoma module that I developed more than a year ago, but due to other activities it took a while before I managed to write it up as a paper.

See below for the full paper and video examples.

The Paper

Abstract: The paper presents a method for sonification of human body motion based on motiongrams. Motiongrams show the spatiotemporal development of body motion by plotting average matrices of motion images over time. The resultant visual representation resembles spectrograms, and is treated as such by the new sonifyer module for Jamoma for Max, which turns motiongrams into sound by reading a part of the matrix and passing it on to an oscillator bank. The method is surprisingly simple, and has proven to be useful for analytical applications and in interactive music systems.

Full reference: A. R. Jensenius. Motion-sound interaction using sonification based on motiongrams. In ACHI 2012: The Fifth International Conference on Advances in Computer-Human Interactions, pages 170–175. IARIA, 2012.

@inproceedings{Jensenius:2012d,
    Author = {Jensenius, Alexander Refsum},
    Booktitle = {ACHI 2012: The Fifth International Conference on Advances in Computer-Human Interactions},
    Pages = {170--175},
    Publisher = {IARIA},
    Title = {Motion-sound Interaction Using Sonification based on Motiongrams},
    Year = {2012}}

Video examples

Video 1: A screencast demonstrating the jmod.sonifyer~ module.

Video 2: Examples of sonification of some basic movement patterns: up-down, sideways, diagonal and circular.

Video 3: One attempt at sonifying the two axes at the same time. Here both horizontal and vertical motiongrams are created from the same video recording, and the sonifications of the two motiongrams have been mapped to the left and right audio channels, respectively.

Video 4: Examples of how filtering and thresholding of the motion image shape the final sounding result (see the sketch after this list of videos). The recordings were made at high speed (200 fps) and played back at 25 fps.

Video 5: Sonification of a short violin improvisation (courtesy of Victoria Johnson).

Video 6: Sonification of a piece by a French-Canadian fiddler (courtesy of Erwin Schoonderwaldt).

Video 7: Sonification of free dance to music.

Video 8: Soniperforma: Performing with the sonifyer at Biermannsgården in Oslo on 18 December 2010. The performance was improvised, and the sonic quality was changed only by applying video effects.
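Since the filtering and thresholding shown in Video 4 is where much of the sonic character comes from, here is a rough sketch of that preprocessing step in NumPy/SciPy. The function name and the threshold and filter-size values are illustrative choices of mine, not the settings used in the module:

import numpy as np
from scipy.ndimage import median_filter

def motion_image(prev, curr, threshold=0.1, filter_size=3):
    # Absolute frame difference, normalised to [0, 1].
    diff = np.abs(curr.astype(float) - prev.astype(float)) / 255.0
    # Thresholding suppresses sensor noise; values below it become silence.
    diff[diff < threshold] = 0.0
    # Median filtering removes isolated noisy pixels that would otherwise
    # sound like clicks in the sonification.
    return median_filter(diff, size=filter_size)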

Difference between videogram and motiongram

For some upcoming blog posts on videograms, I will start by explaining the difference between a motiongram and a videogram. Both are temporal (image) representations of video content (as explained here), and are produced in almost the same way. The difference is that videograms start from the regular video image, while motiongrams start from a motion image.

So for a video of my hand like this:

we will get this horizontal videogram:

Videogram of high-speed hand motion

and this horizontal motiongram:

Motiongram of high-speed hand motion

As you can see, both reflect the video content. The main difference is that the videogram preserves the original background colours, while the motiongram only shows what changes between frames (i.e. the motion).
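To make the distinction concrete, here is a small NumPy sketch of both reductions, assuming the video is available as a list of greyscale frames. It mirrors the idea rather than the actual Jamoma patches:

import numpy as np

def horizontal_videogram(frames):
    # Reduce each frame to one column by averaging across its width,
    # then stack the columns over time.
    return np.stack([f.mean(axis=1) for f in frames], axis=1)

def horizontal_motiongram(frames):
    # Same reduction, but applied to the motion image: the absolute
    # difference between consecutive frames.
    motion = [np.abs(b.astype(float) - a.astype(float))
              for a, b in zip(frames, frames[1:])]
    return np.stack([m.mean(axis=1) for m in motion], axis=1)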

Sonification of motiongrams

I have made a new Jamoma module for sonification of motiongrams, called jmod.sonifyer~. From a live video input, the module generates a motion image, which in turn is transformed into a motiongram. This motiongram is used as the source for the sound synthesis, and is “read” as a spectrogram. The result is a sonification of the original motion, alongside the visualisation in the motiongram.
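As a rough illustration of what “reading a motiongram as a spectrogram” means, here is a NumPy sketch of the underlying idea: each column becomes one spectral frame, and each row drives one oscillator in an oscillator bank. The frequency range, frame duration and linear row-to-frequency mapping are assumptions for the example, not the module’s actual settings:

import numpy as np

def sonify_column(column, sr=44100, dur=0.05, fmin=50.0, fmax=5000.0):
    # One motiongram column is one spectral frame: each row drives an
    # oscillator whose amplitude is that pixel's intensity (assumed
    # normalised to 0..1).
    t = np.arange(int(sr * dur)) / sr
    freqs = np.linspace(fmin, fmax, len(column))
    frame = sum(amp * np.sin(2 * np.pi * f * t)
                for amp, f in zip(column, freqs))
    peak = np.max(np.abs(frame))
    return frame / peak if peak > 0 else frame

def sonify(motiongram, **kw):
    # Sweep through the motiongram left to right, column by column.
    return np.concatenate([sonify_column(motiongram[:, i], **kw)
                           for i in range(motiongram.shape[1])])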

See the demonstration video below:

The module is available from the Jamoma source repository, and will probably make it into an official release at some point.

New motiongram features

Inspired by the work Static no. 12 by Daniel Crooks, which I saw at the Sydney Biennale a couple of weeks ago, I have added the option of scanning a single column in the jmod.motiongram% module in Jamoma. Here is a video that shows how this works in practice:

About motiongrams

A motiongram is a way of displaying motion (e.g. human motion) in the time domain, somewhat similar to how we are used to working with time representations of audio (e.g. waveform displays, sonograms and spectrograms). The method is based on creating a motion image, doing a matrix reduction on it, and plotting the resultant 1xn or nx1 matrices over time, either horizontally or vertically.

Comparison of motiongrams

Below is an image showing three different types of motiongrams:

  1. Single line scan based on regular image
  2. Average scan based on regular image
  3. Average scan based on motion image

I find all of them interesting; which one to use will depend on the type of material you are working with.
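To clarify how the three variants differ, here is a speculative NumPy sketch combining the single-line scan with the average scans on the regular and motion images. The function and parameter names are mine, not the module’s:

import numpy as np

def reduce_frame(frame, mode="average", column=None):
    # frame: (H, W) greyscale array, reduced to one (H,) column.
    if mode == "line":
        # Single-line scan: copy one pixel column (centre by default).
        return frame[:, column if column is not None else frame.shape[1] // 2]
    # Average scan: mean across the width.
    return frame.mean(axis=1)

def scan(frames, mode="average", on_motion=False, column=None):
    # With on_motion=True the reduction runs on frame differences
    # (the motion image); otherwise on the regular image.
    if on_motion:
        frames = [np.abs(b.astype(float) - a.astype(float))
                  for a, b in zip(frames, frames[1:])]
    return np.stack([reduce_frame(f, mode, column) for f in frames], axis=1)

# 1. single line scan, regular image:  scan(frames, mode="line")
# 2. average scan, regular image:      scan(frames, mode="average")
# 3. average scan, motion image:       scan(frames, mode="average", on_motion=True)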