Earlier today I gave the presentation “Reduced Displays of Multidimensional Motion Capture Data Sets of Musical Performance” at the ESCOM conference in Jyväskylä, Finland. The presentation included an overview of different approaches to visualizing music-related movement, as well as our most recent method: mocapgrams.

While motiongrams are reduced displays created from video files, mocapgrams are intended to work in a similar way, but are created from motion capture data. The two are conceptually similar but generated quite differently: in a mocapgram, the XYZ coordinates of the motion capture markers are mapped to RGB colours. The end result gives an impression of how the markers moved in 3D space over time, as seen below:

Example of a mocapgram generated from a 3D accelerometer recording. The XYZ values are mapped to an RGB colour space. The bottom image is generated by frame differencing the top one, and therefore shows how the regular mocapgram changes over time.

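To make the mapping concrete, here is a minimal sketch of the XYZ-to-RGB idea. It assumes the mocap data is available as a NumPy array of shape (frames, markers, 3); the function name, variable names, and per-axis normalisation are my own illustrative choices, not taken from the presentation.

```python
# Sketch of an XYZ-to-RGB mocapgram, assuming positions has shape (frames, markers, 3).
import numpy as np
import matplotlib.pyplot as plt

def mocapgram(positions):
    """Map XYZ marker positions to RGB colours, one image column per frame."""
    # Normalise each axis to 0..1 so X, Y and Z become R, G and B values.
    mn = positions.min(axis=(0, 1), keepdims=True)
    mx = positions.max(axis=(0, 1), keepdims=True)
    rgb = (positions - mn) / (mx - mn + 1e-12)
    # Transpose to (markers, frames, 3): markers along the vertical axis,
    # time along the horizontal axis.
    return rgb.transpose(1, 0, 2)

# Example with random data standing in for a real recording.
data = np.random.rand(500, 20, 3)  # 500 frames, 20 markers
plt.imshow(mocapgram(data), aspect="auto", interpolation="nearest")
plt.xlabel("time (frames)")
plt.ylabel("marker")
plt.show()
```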

Below is an example of two different types of mocapgrams (as well as a motiongram and a spectrogram) generated from a motion capture recording of myself playing the piano (recorded at the IDMIL, McGill University).

Different plots from a short piano recording. Mocapgram 1 is a regular mocapgram, while mocapgram 2 is a frame-differenced mocapgram. The motiongram is generated from the video recording, and the spectrogram from the sound.
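The frame-differenced variant can be sketched in the same spirit: taking absolute differences between consecutive columns of the regular mocapgram highlights where and when the markers move. Again, this is my own illustrative code under the assumptions above, not the implementation used for the figures.

```python
# Sketch of a frame-differenced mocapgram, given an image of shape (markers, frames, 3).
import numpy as np

def diff_mocapgram(mocapgram_image):
    """Absolute frame-to-frame difference of a mocapgram image."""
    d = np.abs(np.diff(mocapgram_image, axis=1))  # difference along the time axis
    # Rescale so the largest change maps to full intensity.
    return d / (d.max() + 1e-12)
```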

No paper has been published based on the presentation, but the PDF of the presentation slides summarizes the main idea.

Citation: Jensenius, Alexander Refsum, Ståle Skogstad, Kristian Nymoen, Jim Torresen, and Mats Erling Høvin. “Reduced Displays of Multidimensional Motion Capture Data Sets of Musical Performance.” In Proceedings of the Conference of the European Society for the Cognitive Sciences of Music. Jyväskylä, Finland, 2009.

Download: PDF of the presentation
