Analyzing correspondence between sound objects and body motion

New publication:

Title 
Analyzing correspondence between sound objects and body motion

Authors
Kristian Nymoen, Rolf Inge Godøy, Alexander Refsum Jensenius, and Jim Tørresen. The article has now been published in ACM Transactions on Applied Perception.

Abstract
Links between music and body motion can be studied through experiments called sound-tracing. One of the main challenges in such research is to develop robust analysis techniques that are able to deal with the multidimensional data that musical sound and body motion present. The article evaluates four different analysis methods applied to an experiment in which participants moved their hands following perceptual features of short sound objects. Motion capture data has been analyzed and correlated with a set of quantitative sound features using four different methods: (a) a pattern recognition classifier, (b) t-tests, (c) Spearman’s ρ correlation, and (d) canonical correlation. This article shows how the analysis methods complement each other, and that applying several analysis techniques to the same data set can broaden the knowledge gained from the experiment.
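To give a flavour of method (c), here is a minimal sketch of how a motion feature could be correlated with a sound feature using Spearman’s ρ. The feature names and data are invented for illustration; the article’s actual features and data are not reproduced here.

```python
# Hypothetical sketch: correlate a sound feature (loudness) with a
# motion feature (vertical hand position) using Spearman's rho.
# All data below is synthetic, for illustration only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Invented per-frame features: a rising loudness curve, and a hand
# height that roughly follows it (plus measurement noise).
loudness = np.linspace(0.0, 1.0, 100) + rng.normal(0, 0.05, 100)
hand_height = 0.8 * loudness + rng.normal(0, 0.1, 100)

rho, p_value = spearmanr(loudness, hand_height)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3g})")
```

Because Spearman’s ρ is computed on ranks, it captures any monotonic relationship between the two features, not only a linear one, which is one reason rank correlation is a reasonable choice for such data.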

Reference
Nymoen, K., Godøy, R. I., Jensenius, A. R., and Torresen, J. (2013). Analyzing correspondence between sound objects and body motion. ACM Transactions on Applied Perception, 10(2).

BibTeX

@article{Nymoen:2013,
 Author = {Nymoen, Kristian and God{\o}y, Rolf Inge and Jensenius, Alexander Refsum and Torresen, Jim},
 Journal = {ACM Transactions on Applied Perception},
 Number = {2},
 Title = {Analyzing correspondence between sound objects and body motion},
 Volume = {10},
 Year = {2013}}

ImageSonifyer

Earlier this year, before I started as head of department, I was working on a non-realtime implementation of my sonomotiongram technique (a sonomotiongram is a sonic display of motion from a video recording, created by sonifying a motiongram). I have finally found time to wrap it up and make it available as an OS X application called ImageSonifyer. The Max patch is also available for those who want to look at what is going on.

I am working on a paper that will describe everything in more detail, but the main point can hopefully be understood from some of the videos I have posted in the sonomotiongram playlist on YouTube. In its most basic form, ImageSonifyer works more or less like Metasynth, sonifying an image. Here is a basic example showing how an image is sonified by being “played” from left to right.
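The basic idea of “playing” an image from left to right can be sketched as follows: each image row drives one sine oscillator, pixel brightness controls that oscillator’s amplitude, and the columns are stepped through in time. This is only a minimal illustration of the general principle; all parameter values are invented, and ImageSonifyer’s actual mapping may differ.

```python
# Minimal sketch of image sonification, Metasynth-style: rows map to
# frequencies (top = high, as in a spectrogram), brightness maps to
# amplitude, and columns are played left to right.
# Frequency range, duration, and sample rate are invented values.
import numpy as np

SR = 44100        # sample rate (Hz)
DURATION = 2.0    # time to "play" the whole image (s)

def sonify(image, f_min=100.0, f_max=4000.0, sr=SR, duration=DURATION):
    """image: 2D array (rows x cols) of brightness values in [0, 1]."""
    rows, cols = image.shape
    samples_per_col = int(sr * duration / cols)
    freqs = np.geomspace(f_max, f_min, rows)   # one frequency per row
    t = np.arange(samples_per_col) / sr
    phase = np.zeros(rows)                     # keep phase continuous
    out = []
    for c in range(cols):
        col = image[:, c]                      # brightness = amplitude
        frame = (col[:, None] *
                 np.sin(phase[:, None] + 2 * np.pi * freqs[:, None] * t)).sum(0)
        phase += 2 * np.pi * freqs * samples_per_col / sr
        out.append(frame)
    audio = np.concatenate(out)
    return audio / max(1e-9, np.abs(audio).max())  # normalize to [-1, 1]

# Invented test image: a bright diagonal, which sounds like a pitch sweep.
img = np.eye(16)
audio = sonify(img)
```

Keeping a running phase per oscillator avoids clicks at column boundaries, which is one of the practical details any implementation of this idea has to deal with.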

But my main idea is to use motiongrams as the source material for the sonification. Here is a sonification of the high-speed guitar recordings I have written about earlier, first played at a rate of 10 seconds:

and then played at a rate of 1 second, which is about the original recording speed.

Musikkteknologidagene 2012

Keynote
Alexander holding a keynote lecture at Musikkteknologidagene 2012 (Photo: Nathan Wolek).

Last week I held a keynote lecture at the Norwegian music technology conference Musikkteknologidagene, by (and at) the Norwegian Academy of Music and NOTAM. The talk was titled: “Embodying the human body in music technology”, and was an attempt at explaining why I believe we need to put more emphasis on human-friendly technologies, and why the field of music cognition is very much connected to that of music technology. I got a comment that it would have been better to exchange “embodying” with “embedding” in my title, and I totally agree. So now I already have a title for my next talk!

Sverm demo
One of the “pieces” we did for the Sverm demo at Musikkteknologidagene 2012: three performers standing still and controlling a sine tone each based on their micromovements.

Besides my talk, we also gave a small performance of parts of the Sverm project, which I am working on together with an interdisciplinary group of sound, movement and light artists. We showed three parts: (1) very slow movement with changing lights, (2) sonification of the micromovements of people standing still, and (3) micromovement interaction with granular synthesis. The showcase was based on the work we have done since the last performance and seminar.
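A hypothetical sketch of the kind of mapping used in part (2): the quantity of motion of a performer standing still drives the amplitude of a sine tone. The frame rate, gain, and smoothing values are invented for illustration; the actual Sverm mapping is not described here.

```python
# Invented sketch: map micromovement (quantity of motion from marker
# positions) to the amplitude of a sine tone. Gain and smoothing are
# made-up values, not the actual Sverm parameters.
import numpy as np

SR = 1000  # invented motion-capture frame rate (Hz)

def quantity_of_motion(positions):
    """positions: (frames, 3) marker coordinates in mm.
    Returns per-frame displacement magnitude, a simple QoM estimate."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1)

def micromovement_to_amplitude(qom, gain=50.0, smooth=0.99):
    """Scale QoM to [0, 1] and smooth it with a one-pole filter,
    so tiny standstill movements produce a slowly evolving amplitude."""
    amp = np.empty_like(qom)
    state = 0.0
    for i, q in enumerate(qom):
        state = smooth * state + (1 - smooth) * min(1.0, gain * q)
        amp[i] = state
    return amp

# Invented data: near-stillness with sub-millimetre jitter.
rng = np.random.default_rng(1)
pos = rng.normal(0, 0.05, size=(2000, 3))  # ~0.05 mm jitter per frame
amp = micromovement_to_amplitude(quantity_of_motion(pos))
```

The point of the heavy smoothing is that standstill data is dominated by tiny, jittery displacements, so some low-pass filtering is needed before the signal is musically usable as a control value.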

Besides the things I was involved in myself, I was very happy to be “back” at the conference after a couple of years of “absence” (I had my hands full organising NIME last year). It is great to see that the conference is still alive and manages to gather people doing interesting things in and with music technology in Norway.

Sverm talking
Alexander talking about the Sverm project and fourMs motion capture lab at Musikkteknologidagene 2012 (Photo: Nathan Wolek).

When I started the conference series back in 2005, the idea was to create a meeting place for music technology people in Norway. Fortunately, NOTAM has taken on the responsibility of finding and supporting local organisers to host it every year. So far it has bounced back and forth between Oslo, Trondheim and Bergen, and I think it is now time for it to move on to Kristiansand, Tromsø and Stavanger. All these cities now have small but active music technology communities, and some very interesting festivals (Punkt, Insomnia, Numusic) that it could be connected to.

As expected, the number of people attending the conference has gone up and down over the years. In general I find that it is always difficult to get people from Oslo to attend, something I find slightly embarrassing, but which can probably be explained by the overwhelming amount of interesting things happening in this comparatively small capital at any point in time.

Snow
We had the first snow this year during Musikkteknologidagene, a good time to stay indoors at NOTAM listening to presentations.

The first years of Musikkteknologidagene were mainly spent informing each other about what we were all doing, really just getting to know each other. Over the years the focus has shifted towards “real” presentations, and all the presentations I heard this year were very interesting and inspiring. This is a good sign that the field of music technology in Norway has matured. Several institutions have been able to start research and educational programmes in fields related to music technology, and I think we are about to reach a critical mass of groups working in the field, rather than just a collection of individual researchers and artists trying to survive. This year we agreed to make a communal effort to build a database of all institutions and individuals involved in the field, and to develop a roadmap along the lines of what was made in the S2S2 project.

All in all, this year’s Musikkteknologidagene was a fun experience, and I am already looking forward to next year’s edition.

fourMs videos

Over the years we have uploaded various videos of our fourMs lab activities to YouTube. Some of these videos were uploaded from a shared YouTube account, others by myself or by colleagues. I just realised that a good solution for gathering all the different videos is to create a playlist and add all relevant videos to it. It should then also be possible to embed this playlist in web pages, like below: