Tag Archives: jitter

New publication: Non-Realtime Sonification of Motiongrams

Today I will present the paper Non-Realtime Sonification of Motiongrams at the Sound and Music Computing Conference (SMC) in Stockholm. The paper is based on a new implementation of my sonomotiongram technique, optimised for non-realtime use. I presented a realtime version of the sonomotiongram technique at ACHI 2012 and a Kinect version, the Kinectofon, at NIME earlier this year. The new paper presents the ImageSonifyer application and a collection of videos showing how it works.

Title
Non-Realtime Sonification of Motiongrams

Abstract
The paper presents a non-realtime implementation of the sonomotiongram method, a method for the sonification of motiongrams. Motiongrams are spatiotemporal displays of motion from video recordings, based on frame-differencing and reduction of the original video recording. The sonomotiongram implementation presented in this paper is based on turning these visual displays of motion into sound using FFT filtering of noise sources. The paper presents the application ImageSonifyer, accompanied by video examples showing the possibilities of the sonomotiongram method for both analytic and creative applications.
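
To give an impression of how such a pipeline can work, here is a minimal Python sketch of the basic idea: the motiongram image is read as a magnitude spectrogram, and each image column shapes the spectrum of a windowed noise frame before overlap-add resynthesis. The file names, FFT size and row-to-frequency mapping are illustrative assumptions, not taken from the ImageSonifyer code.

# Minimal sketch of sonifying a motiongram by FFT filtering of noise.
# Assumptions (not from ImageSonifyer): file names, FFT size, and a
# linear mapping of image rows to FFT bins.
import numpy as np
from PIL import Image
from scipy.io import wavfile

SR = 44100        # sample rate
N = 2048          # FFT size
HOP = N // 2      # 50% overlap

# Load the motiongram as grayscale; flip so row 0 = lowest frequency.
img = np.asarray(Image.open("motiongram.png").convert("L"), dtype=float)
img = np.flipud(img) / 255.0

window = np.hanning(N)
frames = []
for col in img.T:                            # one image column per frame
    # Interpolate the column onto the N//2+1 rFFT bins.
    mags = np.interp(np.linspace(0, 1, N // 2 + 1),
                     np.linspace(0, 1, len(col)), col)
    spec = np.fft.rfft(np.random.uniform(-1, 1, N) * window)
    frames.append(np.fft.irfft(spec * mags) * window)

# Overlap-add the filtered noise frames into one signal.
out = np.zeros(HOP * len(frames) + N)
for i, frame in enumerate(frames):
    out[i * HOP:i * HOP + N] += frame
out /= np.max(np.abs(out)) + 1e-9            # normalise
wavfile.write("sonification.wav", SR, (out * 32767).astype(np.int16))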

Reference
Jensenius, A. R. (2013). Non-realtime sonification of motiongrams. In Proceedings of Sound and Music Computing, pages 500–505, Stockholm.

BibTeX

@inproceedings{Jensenius:2013f,
    Address = {Stockholm},
    Author = {Jensenius, Alexander Refsum},
    Booktitle = {Proceedings of Sound and Music Computing},
    Pages = {500--505},
    Title = {Non-Realtime Sonification of Motiongrams},
    Year = {2013}}

Timelapser

I have recently started moving my development efforts over to GitHub, to keep everything in one place. Now I have also uploaded a small application I developed for a project by my mother, Norwegian sculptor Grete Refsum. She wanted to create a timelapse video of her making a new sculpture, “Hommage til kaffeselskapene”, for her installation piece Tante Vivi, fange nr. 24 127 Ravensbrück.

There is plenty of timelapse software available, but none of it fitted my needs, so I developed a small Max patch called TimeLapser. TimeLapser takes an image from a webcam at a regular interval (1 minute). Each image is saved with the time code as the file name, making it easy to use the images for documentation purposes or to assemble them into timelapse videos. The application was originally developed for an art project, but can probably be useful for other timelapse applications as well.

The application only stores separate image files, which can easily be assembled into timelapse movies using, for example, QuickTime.
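
For illustration, the core capture loop could look like this in Python with OpenCV; the camera index, interval and file naming are illustrative assumptions, not a port of the Max patch.

# Sketch of a TimeLapser-style capture loop. Camera index, interval
# and file naming are illustrative assumptions, not the Max patch.
import datetime
import time
import cv2

INTERVAL = 60                    # seconds between captures
cam = cv2.VideoCapture(0)        # default webcam

try:
    while True:
        ok, frame = cam.read()
        if ok:
            # Time-coded file name, e.g. 2013-08-01_14-30-00.jpg
            stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
            cv2.imwrite(stamp + ".jpg", frame)
        time.sleep(INTERVAL)
finally:
    cam.release()

Since such file names sort chronologically, the resulting image sequence can be dropped straight into QuickTime, or assembled with a command-line tool such as ffmpeg.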

Below is a video showing the final timelapse of my mother’s sculpture:

Sonification of motiongrams

A couple of days ago I presented the paper “Motion-sound Interaction Using Sonification based on Motiongrams” at the ACHI 2012 conference in Valencia, Spain. The paper is actually based on a Jamoma module that I developed more than a year ago, but due to other activities it took a while before I managed to write it up as a paper.

See below for the full paper and video examples.

The Paper

Abstract: The paper presents a method for sonification of human body motion based on motiongrams. Motiongrams show the spatiotemporal development of body motion by plotting average matrices of motion images over time. The resultant visual representation resembles spectrograms, and is treated as such by the new sonifyer module for Jamoma for Max, which turns motiongrams into sound by reading a part of the matrix and passing it on to an oscillator bank. The method is surprisingly simple, and has proven to be useful for analytical applications and in interactive music systems.
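
As an illustration of the oscillator-bank idea, here is a minimal Python sketch in which each row value of one motiongram column sets the amplitude of one sine oscillator; the bank size, frequency range and logarithmic spacing are my own assumptions, not necessarily those of the Jamoma module.

# Sketch of reading one motiongram column with an oscillator bank.
# Frequency range and spacing are assumptions, not the Jamoma module's.
import numpy as np

SR = 44100

def sonify_column(col, duration=0.05, fmin=100.0, fmax=8000.0):
    """Render one motiongram column (values 0..1) with a bank of sines."""
    t = np.arange(int(SR * duration)) / SR
    freqs = np.geomspace(fmin, fmax, len(col))   # one oscillator per row
    out = np.zeros_like(t)
    for amp, freq in zip(col, freqs):
        out += amp * np.sin(2 * np.pi * freq * t)
    return out / max(len(col), 1)

# Example: motion energy concentrated in the upper rows should come
# out as high-frequency content.
audio = sonify_column(np.linspace(0, 1, 64))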

Full reference: Jensenius, A. R. (2012). Motion-sound interaction using sonification based on motiongrams. In ACHI 2012: The Fifth International Conference on Advances in Computer-Human Interactions, pages 170–175. IARIA.

@inproceedings{Jensenius:2012d,
    Author = {Jensenius, Alexander Refsum},
    Booktitle = {ACHI 2012: The Fifth International Conference on Advances in Computer-Human Interactions},
    Pages = {170--175},
    Publisher = {IARIA},
    Title = {Motion-sound Interaction Using Sonification based on Motiongrams},
    Year = {2012}}

Video examples

Video 1: A screencast demonstrating the jmod.sonifyer~ module.

Video 2: Examples of sonification of some basic movement patterns: up-down, sideways, diagonal and circular.

Video 3: One attempt at sonifying the two axes at the same time. Here both horizontal and vertical motiongrams are created from the same video recording, and the sonifications of the two motiongrams are mapped to the left and right audio channels, respectively.

Video 4: Examples showing the importance of filtering and thresholding the motion image for the final sonic result. The recordings were made at high speed (200 fps) and played back at 25 fps.

Video 5: Sonification of a short violin improvisation (courtesy of Victoria Johnson).

Video 6: Sonification of a piece by a French-Canadian fiddler (courtesy of Erwin Schoonderwaldt).

Video 7: Sonification of free dance to music.

Video 8: Soniperforma: Performing with the sonifyer at Biermannsgården in Oslo on 18 December 2010. The performance was improvised, and the sonic quality was shaped by applying video effects only.

Sonification of motiongrams

I have made a new Jamoma module for the sonification of motiongrams, called jmod.sonifyer~. From a live video input, the module generates a motion image, which is in turn transformed into a motiongram. The motiongram is then used as the source for the sound synthesis, being “read” as a spectrogram. The result is a sonification of the original motion, together with the visualisation in the motiongram.

See the demonstration video below:

The module is available from the Jamoma source repository, and will probably make it into an official release at some point.

New motiongram features

Inspired by the work Static no. 12 by Daniel Crooks, which I saw at the Sydney Biennale a couple of weeks ago, I have added the option of scanning a single column in the jmod.motiongram% module in Jamoma. Here is a video that shows how this works in practice:

About motiongrams

A motiongram is a way of displaying motion (e.g. human motion) in the time domain, somewhat similar to how we are used to working with time representations of audio (e.g. waveform displays, sonograms and spectrograms). The method is based on creating a motion image, performing a matrix reduction on it, and plotting the resultant 1xn or nx1 matrices over time, either horizontally or vertically.
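
As a rough Python sketch of that pipeline (the actual module is built in Max/Jitter, so the video decoding and the noise threshold below are my own assumptions):

# Sketch of computing a motiongram: frame-difference the video, then
# reduce each motion image to one row or column and stack over time.
# The threshold value and OpenCV decoding are illustrative assumptions.
import numpy as np
import cv2

def motiongram(path, axis=1, threshold=10):
    cap = cv2.VideoCapture(path)
    prev, reduced = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.int16)
        if prev is not None:
            motion = np.abs(gray - prev)       # motion image
            motion[motion < threshold] = 0     # simple noise filtering
            # Matrix reduction: axis=1 averages over image columns,
            # giving an nx1 column for a horizontal motiongram;
            # axis=0 gives a 1xn row for a vertical one.
            reduced.append(motion.mean(axis=axis))
        prev = gray
    cap.release()
    # Plot the reduced frames over time: time runs horizontally for
    # axis=1 and vertically for axis=0.
    return np.stack(reduced, axis=1 if axis == 1 else 0)

mg = motiongram("dance.mp4")   # rows = vertical position, columns = time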

Comparison of motiongrams

Below is an image showing three different types of motiongrams:

  1. Single line scan based on regular image
  2. Average scan based on regular image
  3. Average scan based on motion image

I think all of them are interesting, so which one to use will depend on the type of material you are working with. The small sketch below summarises the three reduction strategies.
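
For reference, the three reductions could be sketched like this for a single grayscale frame and its corresponding motion image; the choice of scan column is an arbitrary assumption.

# Sketch of the three reduction strategies listed above, for one
# grayscale frame and its motion image (both numpy arrays). The
# default scan column (the centre) is an arbitrary choice.
import numpy as np

def reductions(frame, motion, scan_col=None):
    if scan_col is None:
        scan_col = frame.shape[1] // 2
    return {
        "line_scan_regular": frame[:, scan_col],  # 1. single column, raw image
        "average_regular": frame.mean(axis=1),    # 2. row means, raw image
        "average_motion": motion.mean(axis=1),    # 3. row means, motion image
    }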