ICMC 2006 proceedings details

A colleague of mine recently asked if I could help her find the bibliographic details of the ICMC 2006 proceedings. Apparently this information is not easily available online, and she had spent a great deal of research time trying to track it down.

I was lucky enough to participate in this wonderful event at Tulane University, and still have the paper version of the proceedings in my office. So here is the relevant information, in case anyone else also wonders about these details:

  • Editors (Paper chairs): Georg Essl and Ichiro Fujinaga
  • Dates: 6-11 November 2006
  • Publisher: International Computer Music Association, San Francisco, CA & The Music Department, Tulane University, New Orleans, LA
  • ISBN: 0-9713192-4-3


Papers at ICMC 2008

Last week I was in Belfast for the International Computer Music Conference (ICMC 2008). The conference was hosted by SARC, and it was great to finally be able to see (and hear!) the Sonic Lab they have installed in their new building.

I was involved in two papers, the first one being a Jamoma-related paper called “Flexible Control of Composite Parameters in Max/MSP” (PDF), written by Tim Place, Trond Lossius, Nils Peters and myself. Below is a picture of Trond giving the presentation. The main point of the paper is the suggestion that parameters should have properties and methods. This is both a general proposal and a specific one that we have started implementing in Jamoma using OSC.
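
As a rough illustration of the idea, here is a sketch of a parameter that carries its own properties and methods. The class, the property names and the address syntax are all hypothetical; this is not Jamoma's actual implementation or OSC namespace:

```python
# Hypothetical sketch only: not Jamoma's actual implementation or OSC namespace.

class Parameter:
    """A parameter that owns its properties (value, range, unit) and methods."""

    def __init__(self, address, value=0.0, minimum=0.0, maximum=1.0, unit=None):
        self.address = address  # OSC-style address, e.g. "/filter/frequency"
        self.properties = {"value": value, "range": (minimum, maximum), "unit": unit}

    def set(self, value):
        """Method: clip an incoming value to the parameter's own range."""
        lo, hi = self.properties["range"]
        self.properties["value"] = max(lo, min(hi, value))

    def get(self, prop="value"):
        """Method: query any property by name."""
        return self.properties[prop]


freq = Parameter("/filter/frequency", value=440.0, minimum=20.0, maximum=20000.0, unit="Hz")
freq.set(99999.0)                     # out of range: clipped by the parameter itself
print(freq.get(), freq.get("range"))  # -> 20000.0 (20.0, 20000.0)
```

The point is that behaviour such as range checking lives with the parameter itself rather than being scattered around the patch.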

The second paper was called “A Multilayered GDIF-Based Setup for Studying Coarticulation in the Movements of Musicians” (PDF) and was written by Kristian Nymoen, Rolf Inge Godøy and myself. This was a presentation of how we are currently using the Sound Description Interchange Format (SDIF) to store GDIF data. This helps solve a number of the challenges we have previously experienced in synchronising data, audio and video with different (and varying) sampling rates and resolutions.
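
The feature that makes this work is that SDIF stores data as time-tagged frames, so streams recorded at different rates can live in the same file and still be aligned. A schematic sketch of that principle (heavily simplified; real SDIF files use typed binary frames and matrices, and the rates below are made up):

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """Simplified stand-in for an SDIF frame: a time tag plus some payload."""
    time: float       # seconds; every frame carries its own timestamp
    stream_id: int    # which stream the frame belongs to
    data: list

# Streams with different (and even varying) sampling rates can coexist,
# because alignment is done on time tags rather than sample indices.
frames = [Frame(t / 100.0, stream_id=1, data=[0.0]) for t in range(500)]  # ~100 Hz motion data
frames += [Frame(t / 25.0, stream_id=2, data=[0.0]) for t in range(125)]  # ~25 Hz video features

frames.sort(key=lambda f: f.time)

def at_time(frames, stream_id, t):
    """Fetch the most recent frame of a given stream at time t."""
    candidates = [f for f in frames if f.stream_id == stream_id and f.time <= t]
    return candidates[-1] if candidates else None

print(at_time(frames, 2, 1.0).time)  # -> 1.0
```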

There are lots more pictures from the conference on Flickr.

Motiongrams

Challenge

Traditional keyframe displays of videos are not particularly useful when studying single-shot studio recordings of music-related movements, since they mainly show static postural information and no motion.

Motion images of various kinds help visualize what is actually moving in the frame. Shown below (from left): the plain motion image, with noise reduction, with edge detection, with “trails”, and added to the original image.
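
For the curious, the processing chain behind these variants can be sketched in a few lines of OpenCV. The file name and threshold values are placeholders, and this is an approximation of the variants described, not necessarily the exact processing used for the figures:

```python
# Approximate reconstruction of the motion-image variants described above.
import cv2
import numpy as np

cap = cv2.VideoCapture("dance.mov")  # placeholder file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
trails = np.zeros_like(prev_gray)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    motion = cv2.absdiff(gray, prev_gray)                            # plain motion image
    _, denoised = cv2.threshold(motion, 20, 255, cv2.THRESH_TOZERO)  # noise reduction
    edges = cv2.Canny(denoised, 50, 150)                             # edge detection
    trails = cv2.addWeighted(trails, 0.9, denoised, 0.5, 0)          # decaying "trails"
    overlay = cv2.addWeighted(gray, 0.7, denoised, 0.3, 0)           # added to the original

    prev_gray = gray
```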


Making Motiongrams

We are used to visualizing audio with spectrograms, and have been exploring different techniques for visualizing music-related movements in a similar manner. Motiongrams are made by calculating the means of the rows and columns of the motion image (difference between consecutive frames) and plotting them over time.

No motion tracking or other computer vision techniques are applied. A motiongram is simply a reduction of the video stream and is thus a good starting point for further quantitative and qualitative analysis.
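
In code, the core of the method fits in a few lines: collapse each motion image to its row means and column means, and stack these one-pixel slices frame by frame. A minimal NumPy/OpenCV sketch (the file name and noise threshold are placeholders):

```python
# Minimal motiongram sketch: no tracking, just row/column means of the
# frame difference, stacked over time.
import cv2
import numpy as np

cap = cv2.VideoCapture("dance.mov")  # placeholder file name
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

row_means = []  # one column per frame: motion along the image height
col_means = []  # one row per frame: motion along the image width

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev)  # motion image
    motion[motion < 20] = 0           # simple noise reduction
    row_means.append(motion.mean(axis=1))
    col_means.append(motion.mean(axis=0))
    prev = gray

# Stack the slices: time runs left to right (or top to bottom).
cv2.imwrite("motiongram_h.png", np.array(row_means).T.astype(np.uint8))
cv2.imwrite("motiongram_v.png", np.array(col_means).astype(np.uint8))
```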


Using Motiongrams

Motiongrams allow for quick navigation in video material and for comparative analysis of motion qualities. Although motiongrams are quite rough, it is easy to see differences in the quantity of motion and similarities in upward/downward patterns between motion sequences.

Below is a motiongram of a five-minute video of free dance movements to music. The dancer moved to five different musical excerpts (marked a-e), and each excerpt was repeated three times (marked 1-3).

We use motiongrams in comparative studies. Below are motiongrams of three dancers moving freely to the same musical excerpts.

If we zoom in on the image and look at the first 40 seconds of the sequence displayed above, it is possible to follow the trajectories of the hands (because of the yellow and red gloves) and head (pink due to saturation), as well as the body (which appears blue due to the background).


Future Work

A number of issues will have to be addressed in future research:

  • 3D motiongrams showing both horizontal and vertical motion.
  • Combined displays with audio, video and sensor information.
  • Improved efficiency.

Acknowledgments

Thanks to Rolf Inge Godøy and Marcelo M. Wanderley for valuable feedback and support. This research is funded by the Norwegian Research Council.


Source

This post was first presented as a web page of the Musical Gestures group at the University of Oslo, in connection with the following publication:

  • Jensenius, A. R. (2006). Using motiongrams in the study of musical gestures. In Proceedings of the 2006 International Computer Music Conference, 6-11 November, New Orleans. [PDF] [Poster]

and has retroactively been moved to this blog so that the content won’t be lost.