Tag Archives: GDIF

GDIF recording and playback

Kristian Nymoen has updated the Jamoma modules for recording and playing back GDIF data in Max 5. The modules are based on the FTM library (beta 12; betas 13-15 do not work) and can be downloaded here.

We have also made available three use cases in the (soon to be expanded) fourMs database: simple mouse recording, sound saber and a short piano example. See the video below for a quick demonstration of how it works.

Papers at ICMC 2008

Last week I was in Belfast for the International Computer Music Conference (ICMC 2008). The conference was hosted by SARC, and it was great to finally be able to see (and hear!) the sonic lab which they have installed in their new building.

I was involved in two papers, the first one being a Jamoma-related paper called “Flexible Control of Composite Parameters in Max/MSP” (PDF), written by Tim Place, Trond Lossius, Nils Peters and myself. Below is a picture of Trond giving the presentation. The main point of the paper is the suggestion that parameters should have properties and methods. This is both a general suggestion and a specific one that we have started implementing in Jamoma using OSC.
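
As a rough sketch of the idea (the module name, addresses and property names below are invented for illustration, not the actual Jamoma namespace), a parameter's value, its properties and its methods could all be addressed as OSC messages on the same node, here sent with the python-osc library:

# Hypothetical sketch: a parameter's value, properties and methods
# addressed over OSC. The addresses are invented for illustration and do
# not reproduce the actual Jamoma namespace.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9001)  # assumed host and port

# Set the value of the parameter itself.
client.send_message("/mymodule/gain", 0.5)

# Set properties of the same parameter (hypothetical property names).
client.send_message("/mymodule/gain/ramp/time", 500)     # ramp time in ms
client.send_message("/mymodule/gain/range", [0.0, 1.0])  # clipping range

# Call a method on the parameter (hypothetical method name).
client.send_message("/mymodule/gain/reset", [])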

The second paper was called “A Multilayered GDIF-Based Setup for Studying Coarticulation in the Movements of Musicians” (PDF) and was written by Kristian Nymoen, Rolf Inge Godøy and myself. This was a presentation of how we are currently using the Sound Description Interchange Format (SDIF) for storing GDIF data. This helps solve a number of the challenges we have previously experienced with synchronising data, audio and video with different (and varying) sampling rates and resolutions.

There are lots more pictures from the conference on Flickr.

Janer’s dissertation

I had a quick read of Jordi Janer’s dissertation today: Singing-Driven Interfaces for Sound Synthesizers. The dissertation presents a good overview of voice analysis techniques and suggests various ways of using the voice as a controller for synthesis. I am particularly interested in his suggestion of a GDIF namespace for structuring parameters for voice control:

/gdif/instrumental/excitation/loudness x
/gdif/instrumental/modulation/pitch x
/gdif/instrumental/modulation/formants x1 x2
/gdif/instrumental/modulation/breathiness x
/gdif/instrumental/selection/phoneticclass x

Here he uses Cadoz’s division of instrumental “gestures” into excitation, modulation and selection, a division that would also make sense for describing other types of instrumental actions.
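
As a quick sketch of how this namespace could be used in practice, hypothetical voice-analysis results could be sent to these addresses over OSC (using the python-osc library; the values below are placeholders, not output from an actual analysis):

# Minimal sketch: sending made-up voice-analysis values to Janer's
# proposed GDIF namespace over OSC.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9002)  # assumed synthesizer host and port

client.send_message("/gdif/instrumental/excitation/loudness", 0.8)
client.send_message("/gdif/instrumental/modulation/pitch", 220.0)
client.send_message("/gdif/instrumental/modulation/formants", [700.0, 1220.0])
client.send_message("/gdif/instrumental/modulation/breathiness", 0.3)
client.send_message("/gdif/instrumental/selection/phoneticclass", "a")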

I am looking forward to getting back to working on GDIF again soon; I just need to finish this semester’s teaching + administrative work + moving into our new lab first…

Some thoughts on GDIF

We had a meeting about GDIF at McGill yesterday, and I realised that people had very different thoughts about what it is and what it can be used for.

While GDIF is certainly intended for formalising the way we code movement and gesture information for realtime usage in NIME using OSC, it is also supposed to be used for offline analysis. I think the best way of doing this is to have a three-level approach, as sketched here:

[Figure: gdif-storage.png]

The realtime communication is done with OSC, usually over UDP/IP, while we could use the SDIF tools available in FTM for storing the streams (though it might be better to just use some binary format for storing the OSC streams). Then, after discussing with Esteban and Jordi from Pompeu Fabra, I have been convinced that it is probably a good idea to use XML for creating structured files for offline analysis.
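
As a simple stand-in for the storage part of this (not the FTM/SDIF route, just a sketch of the idea), incoming OSC messages could be timestamped as they arrive and then written out as a structured XML file for offline analysis:

# Sketch: receive OSC messages, timestamp them, and export the recording
# as a structured XML file for offline analysis. A simple stand-in for
# the FTM/SDIF-based storage, using the python-osc library.
import time
import xml.etree.ElementTree as ET
from pythonosc.dispatcher import Dispatcher
from pythonosc import osc_server

recording = []  # list of (timestamp, address, arguments)

def record(address, *args):
    recording.append((time.time(), address, args))

dispatcher = Dispatcher()
dispatcher.set_default_handler(record)  # log every incoming address

def export_xml(path):
    root = ET.Element("gdif-recording")
    for t, address, args in recording:
        entry = ET.SubElement(root, "message", time=f"{t:.6f}", address=address)
        entry.text = " ".join(str(a) for a in args)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 9001), dispatcher)
    try:
        server.serve_forever()  # record until interrupted
    except KeyboardInterrupt:
        export_xml("recording.xml")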

When it comes to what to store, I think it is important to separate the data into different layers to avoid confusion:

[Figure: gdif-namespace.png]

Not all the streams will have to be communicated all the time (doing so would obviously create quite a lot of overhead), but they could be. The raw data level would typically not be useful for most realtime applications, but for analytical purposes it is crucial to be able to get back to the original data.
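
As a purely hypothetical illustration of such layering (the layer and device names are invented for this sketch, not a fixed GDIF namespace), raw device data and derived descriptors could live under separate address branches, so a realtime application can subscribe only to what it needs while the raw layer is still recorded:

# Hypothetical sketch of a layered namespace: raw device data and derived
# descriptors under separate branches. Layer and device names are invented
# for illustration.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9001)  # assumed host and port

# Raw layer: everything the device delivers; mainly recorded for offline analysis.
client.send_message("/gdif/raw/accelerometer/1", [0.02, -0.98, 0.11])

# Derived layer: quantities computed from the raw data; typically what a
# realtime application would actually use.
client.send_message("/gdif/derived/accelerometer/1/magnitude", 0.987)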

ICMC papers

My paper entitled “Using motiongrams in the study of musical gestures” was accepted to ICMC 06 in New Orleans. The abstract is:

Navigating through hours of video material is often time-consuming, and it is similarly difficult to create good visualizations of musical gestures in such material. Traditional displays of time-sampled video frames are not particularly useful when studying single-shot studio recordings, since they present a series of still images and very little movement-related information. We have experimented with different types of motion displays, and present how we use motiongrams in our study of musical gestures.
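
As a minimal sketch of the underlying idea (not the exact method in the paper), a motiongram can be built by frame-differencing the video and collapsing each motion image to a single column, so that movement over time can be read as one image:

# Minimal motiongram sketch: frame-difference the video, collapse each
# motion image to one column, and stack the columns over time. The
# published method involves more preprocessing (filtering, thresholding,
# etc.); this only shows the core reduction.
import cv2
import numpy as np

cap = cv2.VideoCapture("recording.avi")  # assumed input file
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

columns = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev)     # motion image
    columns.append(motion.mean(axis=1))  # collapse each row to one value
    prev = gray
cap.release()

# Height = video height, width = number of frames: time runs left to right.
motiongram = np.stack(columns, axis=1)
motiongram = cv2.normalize(motiongram, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("motiongram.png", motiongram.astype(np.uint8))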

A paper entitled “Using a Polhemus Liberty electromagnetic tracker for gesture control of spatialization”, which I co-wrote with Mark Marshall, Nils Peters, Julien Boissinot and Marcelo Wanderley at McGill, was also accepted. The abstract of that one is:

This paper presents our current approach to using a Polhemus Liberty electromagnetic tracker for controlling spatialization in a performance setup for a small ensemble. We are developing a Gesture Description Interchange Format (GDIF) to standardize the way gesture-related information is stored and shared in a networked computer setup. Examples are given of our current GDIF namespace, the gesture tracking subsystem developed to use this namespace, and patches written to control spatialization and mapping using gesture data.
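
As a hypothetical illustration of the mapping part (the addresses, port and mapping below are invented, not the setup described in the paper), a tracked position could be converted to an azimuth angle and sent on to a spatialization engine over OSC:

# Hypothetical sketch of mapping tracker position data to a spatialization
# parameter. Addresses, port and mapping are invented for illustration.
import math
from pythonosc.udp_client import SimpleUDPClient

panner = SimpleUDPClient("127.0.0.1", 9003)  # assumed spatialization engine

def position_to_azimuth(x, y):
    """Map a 2D position (metres) to an azimuth angle in degrees."""
    return math.degrees(math.atan2(y, x))

# One incoming tracker sample (placeholder values).
x, y, z = 0.4, 0.7, 1.1
panner.send_message("/spatialization/source/1/azimuth", position_to_azimuth(x, y))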

Motiongrams