MusicTestLab as a Testbed of Open Research

Many people talk about “opening” the research process these days. Thanks to initiatives like Plan S, much has happened when it comes to Open Access to research publications, and things are also moving when it comes to sharing data openly (or at least in a FAIR manner). Unfortunately, there is currently more talk about Open Research than actual doing. At RITMO, we are actively exploring different strategies for opening our research. The most extreme case is MusicLab. In this blog post, I will reflect on yesterday’s MusicTestLab – Slow TV.

About MusicLab

MusicLab is an innovation project by RITMO and the University Library. The aim is to explore new methods for research, research communication, and education. The project is organized around events: each event is centred on a concert in a public venue, which is also the object of study. The events also contain an edutainment element, with panel discussions featuring world-leading researchers and artists, as well as “data jockeying” in the form of live analysis of the recorded data.

We have carried out five full MusicLab events so far, plus a couple of smaller in-between cases. Now we are preparing for a big event in Copenhagen with the Danish String Quartet. The concert has already been postponed once due to corona, but we hope to make it happen in May next year.

The wildest data collection ever

As part of the preparation for MusicLab Copenhagen, we decided to run a MusicTestLab to see if it is at all possible to carry out the type of data collection that we would like to do. Usually, we work in the fourMs Lab, a custom-built facility with state-of-the-art equipment. This is great for many things, but the goal of MusicLab is to do data collection in the “wild”, which would typically mean a concert venue.

For MusicTestLab, we decided to run the event on the stage in the foyer of the Science Library at UiO, which is a real-world venue that gives us plenty of challenges to work with. We decided to bring a full “package” of equipment, including:

  • infrared motion capture (Optitrack)
  • eye trackers (Pupil Labs)
  • physiological sensors (EMG from Delsys)
  • audio (binaural and ambisonics)
  • video (180° GoPros and 360° Garmin)

We are used to working with all of these systems separately in the lab, but combining them in an out-of-lab setting is more challenging, particularly with the pressure of having to set everything up in a fairly short amount of time.

Musicians on stage with many different types of sensors on, with RITMO researchers running the data collection and a team from LINK filming.

Streaming live – Slow TV

In addition to doing the data collection in a public venue, where people passing by could see what was going on, we also decided to stream the entire setup online. This may seem strange, but we have found that many people are genuinely interested in what we are doing. Many also ask about how we do things, and this was a good opportunity to show the behind-the-scenes of a very complex data collection process. The recording of the stream is available online:

To make it a little more viewer-friendly, the stream features live commentary by myself and Solveig Sørbø from the library. We talk about what is going on and interview the researchers and musicians. As can be seen from the stream, it was quite a hectic event, further complicated by corona restrictions. We were about an hour late for the first performance, but we managed to complete the whole recording session within the allocated time frame.

The performances

The point of the MusicLab events is to study live music, and this was also the focal point of the MusicTestLab, which featured the excellent young, student-led Borealis String Quartet. They performed two movements of Haydn’s Op. 76, No. 4 «Sunrise» quartet. The first performance can be seen here (with a close-up of the motion capture markers):

The first performance of Haydn’s string quartet Op. 76, no. 4 (movements I and II) by the Borealis String Quartet.

After the first performance, the musicians took off the sensors and glasses, had a short break, and then put everything back on again. The point was for the researchers to get more experience with mounting everything properly. From a data collection point of view, it is also interesting to see how consistent the data are between recordings. The second performance can be seen here, now with a projection of the gaze from the violist’s eye-tracking glasses:

The second performance of Haydn’s string quartet Op. 76, no. 4 (movements I and II) by the Borealis String Quartet.

A successful learning experience

The most important conclusion of the day was that it is, indeed, possible to carry out such a large and complex data collection in an out-of-lab setting. It took an hour longer than expected to set everything up, but also an hour less to take everything down, which is valuable information for later. We also learned a lot about what types of clamps, brackets, cables, and so on are needed for such events, and about calibrating all the equipment in a new and uncontrolled environment. All in all, the experience will help us carry out better data collections in the future.

Sharing with the world

Why is it interesting to share all of this with the world? RITMO is a Norwegian Centre of Excellence, which means that we receive a substantial amount of funding for doing cutting-edge research. We are also in the unique position of having a highly interdisciplinary team of researchers with broad methodological expertise. Given the trust we have received from UiO and our many funding agencies, we therefore feel an obligation to share as much as possible of our knowledge and expertise with the world. Of course, we present our findings at the major conferences and publish our final results in leading journals. But we also believe that sharing the way we work can help others.

Sharing our internal research process with the world is also a way of improving our own way of working. Having to explain what you do to others helps to sharpen your own thinking, and I believe that this, in turn, leads to better research. We cannot run MusicTestLabs every day. Today the researchers will copy the files we recorded yesterday and start the laborious post-processing of the material. Then we can begin the analysis, which may eventually lead to a publication in a year (or two or three) from now. If we do end up with a publication (or more) based on this material, everyone will be able to see how the data were collected and follow the processing through the entire chain. That is our approach to doing research that is verifiable by our peers. And if it turns out that we messed something up and the data cannot be used for anything, we have still learned a lot through the process. In fact, we even have a recording of the whole data collection process, so we can go back and see what happened.

Other researchers will need to come up with their own approaches to opening their research. MusicLab is our testbed. As can be seen from the video, it is hectic. Most importantly, though, it is fun!

RITMO researchers transporting equipment to MusicTestLab in the beautiful October weather.

Analyzing correspondence between sound objects and body motion

New publication:

Title 
Analyzing correspondence between sound objects and body motion

Authors
Kristian Nymoen, Rolf Inge Godøy, Alexander Refsum Jensenius, and Jim Tørresen. The article has now been published in ACM Transactions on Applied Perception.

Abstract
Links between music and body motion can be studied through experiments called sound-tracing. One of the main challenges in such research is to develop robust analysis techniques that are able to deal with the multidimensional data that musical sound and body motion present. The article evaluates four different analysis methods applied to an experiment in which participants moved their hands following perceptual features of short sound objects. Motion capture data has been analyzed and correlated with a set of quantitative sound features using four different methods: (a) a pattern recognition classifier, (b) t-tests, (c) Spearman’s ρ correlation, and (d) canonical correlation. This article shows how the analysis methods complement each other, and that applying several analysis techniques to the same data set can broaden the knowledge gained from the experiment.

Reference
Nymoen, K., Godøy, R. I., Jensenius, A. R., and Torresen, J. (2013). Analyzing correspondence between sound objects and body motion. ACM Transactions on Applied Perception, 10(2).

BibTeX

@article{Nymoen:2013,
 Author = {Nymoen, Kristian and God{\o}y, Rolf Inge and Jensenius, Alexander Refsum and Torresen, Jim},
 Journal = {ACM Transactions on Applied Perception},
 Number = {2},
 Title = {Analyzing correspondence between sound objects and body motion},
 Volume = {10},
 Year = {2013}}
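For readers curious about how such analyses can be set up in practice, here is a minimal sketch (in Python, and not the code used in the paper) of two of the four methods named in the abstract: Spearman correlation between individual motion and sound features, and canonical correlation between the two feature sets. The feature names, array shapes, and random placeholder data are purely illustrative.

import numpy as np
from scipy.stats import spearmanr
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Hypothetical data: one row per sound-tracing trial.
# Motion features could be, e.g., mean vertical position, quantity of motion, jerk.
motion = rng.normal(size=(60, 3))
# Sound features could be, e.g., pitch, loudness, spectral centroid.
sound = rng.normal(size=(60, 3))

# (c) Spearman's rho between each motion/sound feature pair.
for i in range(motion.shape[1]):
    for j in range(sound.shape[1]):
        rho, p = spearmanr(motion[:, i], sound[:, j])
        print(f"motion feature {i} vs sound feature {j}: rho={rho:.2f}, p={p:.3f}")

# (d) Canonical correlation: find linear combinations of the motion features
# and the sound features that are maximally correlated across trials.
cca = CCA(n_components=2)
motion_c, sound_c = cca.fit_transform(motion, sound)
for k in range(2):
    r = np.corrcoef(motion_c[:, k], sound_c[:, k])[0, 1]
    print(f"canonical pair {k}: r={r:.2f}")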

ImageSonifyer


Earlier this year, before I started as head of department, I was working on a non-realtime implementation of my sonomotiongram technique (a sonomotiongram is a sonic display of motion from a video recording, created by sonifying a motiongram). Now I have finally found some time to wrap it up and make it available as an OS X application called ImageSonifyer. The Max patch is also available for those who want to look at what is going on.

I am working on a paper that will describe everything in more detail, but the main point can hopefully be understood by looking at some of the videos I have posted in the sonomotiongram playlist on YouTube. In its most basic form, ImageSonifyer works more or less like MetaSynth, sonifying an image. Here is a basic example showing how an image is sonified by being “played” from left to right.
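To complement the video, here is a minimal sketch of this basic left-to-right idea, written in Python rather than Max and not taken from the ImageSonifyer source: the image is treated as a spectrogram, where each row drives the amplitude of a sine oscillator while the columns are scanned over time. The file names, frequency range, and 10-second duration are assumptions for illustration.

import numpy as np
from PIL import Image
import soundfile as sf

sr = 44100
duration = 10.0                      # total playback time in seconds
img = np.asarray(Image.open("image.png").convert("L"), dtype=float) / 255.0
img = np.flipud(img)                 # row 0 becomes the lowest frequency

n_rows, n_cols = img.shape
freqs = np.linspace(100, 8000, n_rows)          # one oscillator per image row
t = np.arange(int(sr * duration)) / sr
col_idx = np.minimum((t / duration * n_cols).astype(int), n_cols - 1)

out = np.zeros_like(t)
for row in range(n_rows):
    amp = img[row, col_idx]          # brightness of this row as it is scanned
    out += amp * np.sin(2 * np.pi * freqs[row] * t)

out /= np.max(np.abs(out)) + 1e-9    # normalise to avoid clipping
sf.write("sonification.wav", out, sr)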

But my main idea is to use motiongrams as the source material for the sonification. Here is a sonification of the high-speed guitar recordings I have written about earlier, first played back over 10 seconds:

and then played back over 1 second, which is close to the original recording speed.
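For those wondering what a motiongram is in practice, here is a rough sketch of how one can be computed from a video file before being sonified with the approach above. Again, this is my own reading of the technique in plain Python/OpenCV, not the ImageSonifyer implementation, and the input file name is hypothetical.

import cv2
import numpy as np

cap = cv2.VideoCapture("guitar_highspeed.mp4")   # hypothetical input file
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

columns = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)        # motion image: what changed between frames
    columns.append(diff.mean(axis=1))     # collapse each row to a single value
    prev = gray
cap.release()

# One column per video frame, with time running left to right.
motiongram = np.stack(columns, axis=1)

# Feeding this array into the image-sonification sketch above with
# duration=10.0 or duration=1.0 gives the slowed-down and the
# (roughly) original-speed versions, respectively.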

Musikkteknologidagene 2012

Keynote
Alexander giving a keynote lecture at Musikkteknologidagene 2012 (Photo: Nathan Wolek).

Last week I gave a keynote lecture at the Norwegian music technology conference Musikkteknologidagene, organized by (and held at) the Norwegian Academy of Music and NOTAM. The talk was titled “Embodying the human body in music technology” and was an attempt at explaining why I believe we need to put more emphasis on human-friendly technologies, and why the field of music cognition is closely connected to that of music technology. I got a comment that it would have been better to replace “embodying” with “embedding” in the title, and I totally agree. So now I already have a title for my next talk!

Sverm demo
One of the “pieces” we did for the Sverm demo at Musikkteknologidagene 2012: three performers standing still and controlling a sine tone each based on their micromovements.

Besides my talk, we also did a small performance of parts of the Sverm project, which I am working on together with an interdisciplinary group of sound, movement and light artists. We showed three parts: (1) very slow movement with changing lights, (2) sonification of the micromovements of people standing still, and (3) micromovement interaction with granular synthesis. This showcase was based on the work we have done since the last performance and seminar.
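To give an idea of what the sonification of micromovements in part (2) can look like technically, here is a minimal sketch of one possible mapping: the frame-to-frame displacement of a single motion-capture marker controls the level of a sine tone. It uses random placeholder data instead of a real marker stream, and the mapping and parameter values are illustrative, not the actual Sverm setup.

import numpy as np
import soundfile as sf

mocap_rate = 100                      # marker frames per second
sr = 44100

# Placeholder for a 30-second stream of (x, y, z) positions of one marker.
rng = np.random.default_rng(1)
pos = np.cumsum(rng.normal(scale=0.0002, size=(30 * mocap_rate, 3)), axis=0)

# Quantity of motion: per-frame displacement of the marker.
qom = np.linalg.norm(np.diff(pos, axis=0), axis=1)
qom = qom / (qom.max() + 1e-9)        # normalise to the 0..1 range

# Upsample the control signal to audio rate and use it as the amplitude
# envelope of a 440 Hz sine tone: more movement gives a louder tone.
t = np.arange(int(sr * len(qom) / mocap_rate)) / sr
amp = np.interp(t, np.arange(len(qom)) / mocap_rate, qom)
tone = amp * np.sin(2 * np.pi * 440.0 * t)
sf.write("micromovement_tone.wav", tone, sr)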

Besides the things I was involved in myself, I was very happy to be “back” at the conference after a couple of years of “absence” (I had my hands full organising NIME last year). It is great to find that the conference is still alive and manages to gather people doing interesting things in and with music technology in Norway.

Sverm talking
Alexander talking about the Sverm project and the fourMs motion capture lab at Musikkteknologidagene 2012 (Photo: Nathan Wolek).

When I started the conference series back in 2005, the idea was to create a meeting place for music technology people in Norway. Fortunately, NOTAM has taken on the responsibility of finding and supporting local organisers who can host it every year. So far it has been bouncing back and forth between Oslo, Trondheim and Bergen, and I think it is time for it to move on to Kristiansand, Tromsø and Stavanger. All of these cities now have small but active music technology communities, and some very interesting festivals (Punkt, Insomnia, Numusic) that it could be connected to.

As expected, the number of people attending the conference has gone up and down over the years. In general, I find that it is always difficult to get people from Oslo to attend, something I find slightly embarrassing, but which can probably be explained by the overwhelming number of interesting things happening in this comparatively small capital at any point in time.

Snow
The first snow of the year fell during Musikkteknologidagene, a good time to stay indoors at NOTAM listening to presentations.

The first years of Musikkteknologidagene were mainly spent informing each other about what we were all doing, really just getting to know each other. Over the years, the focus has shifted more towards “real” presentations, and all the presentations I heard this year were very interesting and inspiring. This is a good sign that the field of music technology has matured in Norway. Several institutions have been able to start research and educational programmes in fields related to music technology, and I think we are about to reach a critical mass of groups involved in the field, not just a bunch of individual researchers and artists trying to survive. This year we agreed to make a communal effort to build up a database of all institutions and individuals involved in the field, and to develop a roadmap along the lines of the one made in the S2S2 project.

All in all, this year’s Musikkteknologidagene was a fun experience, and I am already looking forward to next year’s edition.