NIME publication and performance: Vrengt

My PhD student Cagri Erdem developed a performance together with dancer Katja Henriksen Schia. The piece was first performed, with Qichao Lan and me, during the RITMO opening and again during MusicLab vol. 3. See here for a teaser of the performance:

This week Cagri, Katja and I performed a version of the piece Vrengt at NIME in Porto Alegre.

We also presented a paper describing the development of the instrument/piece:

Erdem, Cagri, Katja Henriksen Schia, and Alexander Refsum Jensenius. “Vrengt: A Shared Body-Machine Instrument for Music-Dance Performance.” In Proceedings of the International Conference on New Interfaces for Musical Expression. Porto Alegre, 2019.

Abstract:

This paper describes the process of developing a shared instrument for music–dance performance, with a particular focus on exploring the boundaries between standstill vs motion, and silence vs sound. The piece Vrengt grew from the idea of enabling a true partnership between a musician and a dancer, developing an instrument that would allow for active co-performance. Using a participatory design approach, we worked with sonification as a tool for systematically exploring the dancer’s bodily expressions. The exploration used a “spatiotemporal matrix,” with a particular focus on sonic microinteraction. In the final performance, two Myo armbands were used for capturing muscle activity of the arm and leg of the dancer, together with a wireless headset microphone capturing the sound of breathing. In the paper we reflect on multi-user instrument paradigms, discuss our approach to creating a shared instrument using sonification as a tool for the sound design, and reflect on the performers’ subjective evaluation of the instrument.
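For readers curious about what such a mapping can look like in practice, here is a minimal sketch (Python/NumPy, and explicitly not the code used in Vrengt) of the general idea of turning muscle activity into a control signal: rectify a multi-channel EMG frame, smooth the overall level over time, and map the resulting envelope to a synthesis parameter. The frame size, smoothing factor and gain mapping below are illustrative assumptions only.

# Illustrative sketch only, not the Vrengt implementation.
# A simulated 8-channel EMG frame stands in for the Myo data.
import numpy as np

def emg_envelope(frame, prev_env, smoothing=0.9):
    """Rectify an EMG frame and smooth the overall activation over time."""
    level = np.mean(np.abs(frame))
    return smoothing * prev_env + (1 - smoothing) * level

env = 0.0
for _ in range(200):                              # stand-in for a real sensor loop
    frame = np.random.uniform(-1, 1, size=8)      # simulated 8-channel EMG frame
    env = emg_envelope(frame, env)
    gain = env ** 2                               # map activation to, e.g., a gain value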

Kinectofon: Performing with shapes in planes

Yesterday, Ståle presented a paper on mocap filtering at the NIME conference in Daejeon. Today I presented a demo on using Kinect images as input to my sonomotiongram technique.

Title
Kinectofon: Performing with shapes in planes

Abstract
The paper presents the Kinectofon, an instrument for creating sounds through free-hand interaction in a 3D space. The instrument is based on the RGB and depth image streams retrieved from a Microsoft Kinect sensor device. These two image streams are used to create different types of motiongrams, which, again, are used as the source material for a sonification process based on inverse FFT. The instrument is intuitive to play, allowing the performer to create sound by “touching” a virtual sound wall.
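The motiongram part of that pipeline can be summarised quite compactly: compute a motion image as the absolute difference between consecutive frames, collapse each motion image to a single column, and stack the columns over time. The sketch below is my own NumPy illustration of that general idea for one grayscale stream (for example the depth image), not the actual Kinectofon patch.

# General motiongram sketch (not the Kinectofon patch itself).
# frames: iterable of grayscale frames as 2-D NumPy arrays of equal size.
import numpy as np

def motiongram(frames):
    """Stack per-frame motion columns: one column per frame, rows = image rows."""
    columns = []
    prev = None
    for frame in frames:
        frame = frame.astype(float)
        if prev is not None:
            motion = np.abs(frame - prev)        # motion image (frame difference)
            columns.append(motion.mean(axis=1))  # collapse each row to one value
        prev = frame
    return np.array(columns).T                   # shape: (image rows, time)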

Reference
Jensenius, A. R. (2013). Kinectofon: Performing with shapes in planes. In Proceedings of the International Conference on New Interfaces For Musical Expression, pages 196–197, Daejeon, Korea.

BibTeX

@inproceedings{Jensenius:2013e,
   Address = {Daejeon, Korea},
   Author = {Jensenius, Alexander Refsum},
   Booktitle = {Proceedings of the International Conference on New Interfaces For Musical Expression},
   Pages = {196--197},
   Title = {Kinectofon: Performing with Shapes in Planes},
   Year = {2013}
}

ImageSonifyer

Earlier this year, before I started as head of department, I was working on a non-real-time implementation of my sonomotiongram technique (a sonomotiongram is a sonic display of motion from a video recording, created by sonifying a motiongram). Now I have finally found some time to wrap it up and make it available as an OS X application called ImageSonifyer. The Max patch is also available for those who want to look at what is going on.

I am working on a paper that will describe everything in more detail, but the main point can hopefully be understood by looking at some of the videos I have posted in the sonomotiongram playlist on YouTube. In its most basic form, ImageSonifyer works more or less like MetaSynth, sonifying an image. Here is a basic example showing how an image is sonified by being “played” from left to right.
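For those who would rather read code than watch the videos, here is a rough NumPy sketch of the same left-to-right principle: each image column is read as a magnitude spectrum and turned into a short audio frame with an inverse FFT, which is then overlap-added into the output sound. This is a simplified illustration, not the ImageSonifyer implementation, and the sample rate, frame length and hop size are arbitrary choices.

# Simplified illustration of inverse-FFT image sonification (not the ImageSonifyer code).
# image: 2-D grayscale NumPy array; rows are read as frequency (top = high), columns as time.
import numpy as np

def sonify(image, sr=44100, frame_len=2048, hop=512):
    n_bins = frame_len // 2 + 1
    n_cols = image.shape[1]
    window = np.hanning(frame_len)
    out = np.zeros(n_cols * hop + frame_len)
    for i in range(n_cols):
        column = image[::-1, i].astype(float)            # flip so the top row ends up as the highest frequency
        mags = np.interp(np.linspace(0, 1, n_bins),
                         np.linspace(0, 1, len(column)), column)
        phases = np.random.uniform(0, 2 * np.pi, n_bins) # an image carries no phase information
        frame = np.fft.irfft(mags * np.exp(1j * phases), n=frame_len) * window
        out[i * hop:i * hop + frame_len] += frame        # overlap-add into the output buffer
    return out / (np.abs(out).max() + 1e-9)              # normalised audio at sample rate sr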

But my main idea is to use motiongrams as the source material for the sonification. Here is a sonification of the high-speed guitar recordings I have written about earlier, first played back over 10 seconds:

and then played back over 1 second, which is close to the original recording speed.
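In terms of the sketch above, the playback rate is simply a matter of how far the output advances per motiongram column: stretching the same motiongram over 10 seconds or 1 second amounts to choosing a different hop size. The column count below is only an example value.

# Example only: choose the hop size so that n_columns columns span a target duration.
def hop_for_duration(duration_s, n_columns, sr=44100):
    return max(1, int(duration_s * sr / n_columns))

hop_slow = hop_for_duration(10.0, n_columns=500)   # slowed-down version
hop_fast = hop_for_duration(1.0, n_columns=500)    # close to the original speed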

Record videos of sonification

I got a question the other day about how it is possible to record a sonified video file based on my sonification module for Jamoma for Max. I wrote about my first experiments with the sonifyer module here, and I also published a paper at this year’s ACHI conference about the technique.

It is quite straightforward to record a video file with the original video + audio using the jit.vcr object in Max. Below is a screenshot from a patch (sonifyer-recorder.maxpat) doing this:

The most important thing to remember here is to send a 4-plane matrix to jit.vcr, otherwise it will complain. For this I use the little jcom.luma2rgb% component, which will automagically convert the video stream from 1 to 4 planes if needed. Here I have also combined the original video, the motion image and the motiongram into one image that I record, alongside the sonification of the motion. The output from this patch looks something like this:
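For those who want to do the equivalent plane conversion outside of Max, what jcom.luma2rgb% does is conceptually just copying the single luma plane into the colour planes of a 4-plane (ARGB) matrix. A NumPy analogy (my own sketch, not the Jamoma code) could look like this:

# NumPy analogy of going from a 1-plane (luma) frame to a 4-plane (ARGB) frame.
import numpy as np

def luma_to_argb(luma):
    """Replicate a 2-D luma array into red, green and blue planes, with an alpha plane."""
    luma = luma.astype(np.uint8)
    alpha = np.full_like(luma, 255)                       # alpha set to fully opaque (an assumption)
    return np.stack([alpha, luma, luma, luma], axis=-1)   # shape: (height, width, 4)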