MusicTestLab as a Testbed of Open Research

Many people talk about “opening” the research process these days. Thanks to initiatives like Plan S, much has happened with Open Access to research publications, and things are also happening when it comes to sharing data openly (or at least making it FAIR). Unfortunately, there is still more talking about Open Research than doing. At RITMO, we are actively exploring different strategies for opening our research. The most extreme case is that of MusicLab. In this blog post, I will reflect on yesterday’s MusicTestLab – Slow TV.

About MusicLab

MusicLab is an innovation project run by RITMO and the University Library. The aim is to explore new methods for conducting research, research communication, and education. The project is organized around events: concerts in public venues that are also the objects of study. The events also contain an edutainment element, with panel discussions featuring world-leading researchers and artists, as well as “data jockeying” in the form of live analysis of the recorded data.

We have carried out five full MusicLab events so far, plus a couple of smaller in-between cases. Now we are preparing for a huge event in Copenhagen with the Danish String Quartet. The concert has already been postponed once due to corona, but we hope to make it happen in May next year.

The wildest data collection ever

As part of the preparation for MusicLab Copenhagen, we decided to run a MusicTestLab to see if it is at all possible to carry out the type of data collection that we would like to do. Usually, we work in the fourMs Lab, a custom-built facility with state-of-the-art equipment. This is great for many things, but the goal of MusicLab is to do data collection in the “wild”, which would typically mean a concert venue.

For MusicTestLab, we decided to run the event on the stage in the foyer of the Science Library at UiO, which is a real-world venue that gives us plenty of challenges to work with. We decided to bring a full “package” of equipment, including:

  • infrared motion capture (Optitrack)
  • eye trackers (Pupil Labs)
  • physiological sensors (EMG from Delsys)
  • audio (binaural and ambisonics)
  • video (180° GoPros and 360° Garmin)

We are used to working with all of these systems separately in the lab, but it is more challenging to combine them in an out-of-lab setting, with the added pressure of getting everything set up in a fairly short amount of time.

Musicians on stage with many different types of sensors on, with RITMO researchers running the data collection and a team from LINK filming.

Streaming live – Slow TV

In addition to doing the data collection in a public venue, where people passing by can see what is going on, we decided to also stream the entire setup online. This may seem strange, but we have found that many people are genuinely interested in what we are doing. Many also ask about how we do things, and this was a good opportunity to show the behind-the-scenes of a very complex data collection process. The recording of the stream is available online:

To make it a little more viewer-friendly, the stream features live commentary by myself and Solveig Sørbø from the library. We talk about what is going on and interview the researchers and musicians. As can be seen from the stream, it was quite a hectic event, which was further complicated by corona restrictions. We started the first performance about an hour late, but we managed to complete the whole recording session within the allocated time frame.

The performances

The point of the MusicLab events is to study live music, and this was also the focal point of the MusicTestLab, featuring the very nice, young, student-led Borealis String Quartet. They performed two movements of Haydn’s Op. 76, No. 4 (“Sunrise”) quartet. The first performance can be seen here (with a close-up of the motion capture markers):

The first performance of Haydn’s string quartet Op. 76, no. 4 (movements I and II) by the Borealis String Quartet.

After the first performance, the musicians took off the sensors and glasses, had a short break, and then put everything back on again. The point of this was for the researchers to get more experience with fitting everything properly. From a data collection point of view, it is also interesting to see how consistent the data are between recordings. The second performance can be seen here, now with a projection of the gaze from the violist’s eye-tracking glasses:

The second performance of Haydn’s string quartet Op. 76, no. 4 (movements I and II) by the Borealis String Quartet.

A successful learning experience

The most important conclusion of the day was that it is, indeed, possible to carry out such a large and complex data collection in an out-of-lab setting. It took an hour longer than expected to set everything up, but an hour less to take everything down. This is valuable information for later. We also learned a lot about which clamps, brackets, cables, and so on are needed for such events. Also useful was the experience of calibrating all the equipment in a new and uncontrolled environment. All in all, the experience will help us carry out better data collections in the future.

Sharing with the world

Why is it interesting to share all of this with the world? RITMO is a Norwegian Centre of Excellence, which means that we receive a substantial amount of funding for doing cutting-edge research. We are also in the unique position of having a highly interdisciplinary team of researchers with broad methodological expertise. With the trust we have received from UiO and our many funding agencies, we therefore feel an obligation to share as much as possible of our knowledge and expertise with the world. Of course, we present our findings at the major conferences and publish our final results in leading journals. But we also believe that sharing the way we work can help others.

Sharing our internal research process with the world is also a way of improving our own way of working. Having to explain what you do to others helps to sharpen your own thinking, and I believe this in turn leads to better research. We cannot run MusicTestLabs every day. Today, all the researchers will copy the files we recorded yesterday and start on the laborious post-processing of all the material. Then we can start on the analysis, which may eventually lead to a publication in a year (or two, or three) from now. If we do end up with a publication (or more) based on this material, everyone will be able to see how it was collected and follow the data processing through all its stages. That is our approach to doing research that is verifiable by our peers. And if it turns out that we messed something up and the data cannot be used for anything, we have still learned a lot through the process. In fact, we even have a recording of the whole data collection process, so we can go back and see what happened.

Other researchers need to come up with their own approaches to opening their research. MusicLab is our testbed. As can be seen from the video, it is hectic. Most importantly, though, it is fun!

RITMO researchers transporting equipment to MusicTestLab in the beautiful October weather.

New publication: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music

After several years of hard work, we are very happy to announce a new publication coming out of the MICRO project that I am leading: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music (Frontiers in Psychology).

From the setup of the experiment in which we tested the effects of listening to headphones and speakers.

This is the first journal article by my PhD student Agata Zelechowska, and it reports on a standstill study conducted a couple of years ago. It is slightly different from the paradigm we have used for the Championships of Standstill. While the latter is based on single markers on the heads of multiple people, Agata’s experiment was conducted with full-body motion capture of individuals.

The most exciting thing about this new study is that we have investigated whether there are any differences in people’s micromotion when they listen through either headphones or speakers. Is there a difference? Yes, there is! People move (a little) more when listening through headphones.

Want to know more? The article is Open Access, so you can read the whole thing here. The short summary is here:

Previous studies have shown that music may lead to spontaneous body movement, even when people try to stand still. But are spontaneous movement responses to music similar if the stimuli are presented using headphones or speakers? This article presents results from an exploratory study in which 35 participants listened to rhythmic stimuli while standing in a neutral position. The six different stimuli were 45 s each and ranged from a simple pulse to excerpts from electronic dance music (EDM). Each participant listened to all the stimuli using both headphones and speakers. An optical motion capture system was used to calculate their quantity of motion, and a set of questionnaires collected data about music preferences, listening habits, and the experimental sessions. The results show that the participants on average moved more when listening through headphones. The headphones condition was also reported as being more tiresome by the participants. Correlations between participants’ demographics, listening habits, and self-reported body motion were observed in both listening conditions. We conclude that the playback method impacts the level of body motion observed when people are listening to music. This should be taken into account when designing embodied music cognition studies.
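The article describes exactly how the quantity of motion was computed from the motion capture data. Purely as an illustration of the general idea, and not the authors’ actual pipeline, here is a minimal Python sketch that sums frame-to-frame marker displacement, using made-up data:

```python
import numpy as np

def quantity_of_motion(positions, fps):
    """Illustrative quantity-of-motion estimate: summed frame-to-frame
    displacement of all markers, normalised per second.

    positions: array of shape (frames, markers, 3), x/y/z in millimetres.
    fps: frame rate of the motion capture recording.
    """
    # Euclidean distance travelled by each marker between consecutive frames
    step = np.linalg.norm(np.diff(positions, axis=0), axis=2)
    # Total displacement divided by the duration of the recording (seconds)
    return step.sum() / (positions.shape[0] / fps)

# Made-up example: 45 s of full-body capture at 240 Hz with 21 markers
rng = np.random.default_rng(1)
demo = np.cumsum(rng.normal(0.0, 0.05, size=(45 * 240, 21, 3)), axis=0)
print(f"QoM: {quantity_of_motion(demo, fps=240):.1f} mm/s")
```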

Method chapter freely available

I am a big supporter of Open Access publishing, but for various reasons some of my publications are not openly available by default. This is the case for the chapter Methods for Studying Music-Related Body Motion that I contributed to the Springer Handbook of Systematic Musicology.

I am very happy to announce that the embargo on the book expired today, which means that a pre-print version of my chapter is finally freely available in UiO’s digital repository. The chapter is a summary of my experiences with music-related motion analysis, and I often recommend it to students. It is therefore great that it can finally be downloaded from anywhere.

Abstract

This chapter presents an overview of some methodological approaches and technologies that can be used in the study of music-related body motion. The aim is not to cover all possible approaches, but rather to highlight some of the ones that are more relevant from a musicological point of view. This includes methods for video-based and sensor-based motion analyses, both qualitative and quantitative. It also includes discussions of the strengths and weaknesses of the different methods, and reflections on how the methods can be used in connection with other types of data, such as physiological or neurological data, symbolic notation, sound recordings, and contextual data.

Testing reveal.js for teaching

I was at NTNU in Trondheim today, teaching a workshop on motion capture methodologies for the students in the Choreomundus master’s programme. This is an Erasmus Mundus Joint Master Degree (EMJMD) investigating dance and other movement systems (ritual practices, martial arts, games, and physical theatre) as intangible cultural heritage. I am really impressed by this programme! It was a very nice and friendly group of students from all over the world, and they are getting a truly unique education run by the four partner universities. This is an even more complex organisational structure than the MCT programme that I am involved in myself.

In addition to running a workshop with the Qualisys motion capture system that they have (similar to the one in our fourMs Lab at RITMO), I was also asked to present a general introduction to motion capture and some video-based methods. I have previously made the more technically oriented tutorial Quantitative Video analysis for Qualitative Research, which describes how to use the Musical Gestures Toolbox for Matlab. Since Matlab was outside the scope of this session, I decided to create a non-technical presentation focusing more on the concepts.

Most of my recent presentations have been made in Google Presentation, a tool that really shows the potential of web-based applications (yes, I think it has matured to a point where we can actually talk about an application in the browser). The big benefit of using a web-based presentation solution is that I can share links to the presentation both before and after the talk, and I avoid all the hassle of moving large video files around.

Even though Google Presentation has been working fine, I would prefer to move to an open-source solution. I have also wanted to try out markdown-based presentation solutions for a long time, since I use markdown for most of my other writing. I have tried a few different solutions, but haven’t really found anything that works smoothly enough. Many of them add too much complexity to the way you need to write your markdown, which removes some of the lightness of the approach. The easiest and best-looking solution so far seems to be reveal.js, but I haven’t really found a way to integrate it into my workflow.

In parallel with my presentation experiments, I have also been exploring Jupyter Notebook for analysis. The nice thing about this approach is that you can write cells of code that are evaluated on the fly and displayed seamlessly in the browser. This is great for developing, sharing, and teaching code, and also for moving towards more Open Research practices.

One cool thing I discovered is that Jupyter Notebook has built-in support for reveal.js! This means that you can export a complete notebook as a nice presentation. This is definitely something I am going to explore more for my coding tutorials, but for today’s workshop I ended up using it with only markdown cells.
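For anyone wanting to try this, the export can be done from the notebook interface, from the command line with `jupyter nbconvert --to slides`, or scripted in Python. Here is a minimal sketch using nbconvert’s Python API (the notebook filename is just a placeholder):

```python
# Minimal sketch: exporting a notebook as a reveal.js slideshow with nbconvert.
# Equivalent to running: jupyter nbconvert --to slides mocap_intro.ipynb
import nbformat
from nbconvert import SlidesExporter

nb = nbformat.read("mocap_intro.ipynb", as_version=4)   # placeholder filename
body, resources = SlidesExporter().from_notebook_node(nb)

with open("mocap_intro.slides.html", "w", encoding="utf-8") as f:
    f.write(body)   # open this HTML file in a browser to get the reveal.js presentation
```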

I created three notebooks, one for each topic I was talking about, and exported them as presentations:

A really cool feature of reveal.js is the ability to move in two dimensions: you can keep the main sections of the presentation on the horizontal axis while filling in more content vertically. Hitting the escape key, it is possible to “zoom” out and look at the entire presentation, as shown below:

The overview mode in reveal.js presentations.

The tricky part of using Jupyter Notebook for plain markdown presentations is that you need to make an individual cell for each part of the presentation. This works, but it would make even more sense if I had some Python code in between. That is for next time, though.
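The slide structure itself comes from per-cell metadata, which can be set through the Slideshow cell toolbar in the notebook interface. As a small sketch of the same idea done programmatically, with placeholder cell contents, the metadata can also be written with nbformat:

```python
# Sketch: building a minimal slideshow notebook and tagging slide types with nbformat.
import nbformat

nb = nbformat.v4.new_notebook()

title = nbformat.v4.new_markdown_cell("# Motion capture basics")  # placeholder content
title.metadata["slideshow"] = {"slide_type": "slide"}              # new horizontal slide

detail = nbformat.v4.new_markdown_cell("More details go here.")    # placeholder content
detail.metadata["slideshow"] = {"slide_type": "subslide"}           # stacked vertically below

nb.cells = [title, detail]
nbformat.write(nb, "slides_demo.ipynb")  # then: jupyter nbconvert --to slides slides_demo.ipynb
```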

Musical Gestures Toolbox for Matlab

Yesterday I presented the Musical Gestures Toolbox for Matlab in the late-breaking demo session at the ISMIR conference in Paris.

The Musical Gestures Toolbox for Matlab (MGT) aims at assisting music researchers with importing, preprocessing, analyzing, and visualizing video, audio, and motion capture data in a coherent manner within Matlab.

Most of the concepts in the toolbox are based on the Musical Gestures Toolbox that I first developed for Max more than a decade ago. A lot of the Matlab coding for the new version was done by Bo Zhou as part of a master’s thesis.

The new MGT is available on GitHub, and there is a more or less complete introduction to the main features in the software carpentry workshop Quantitative Video analysis for Qualitative Research.