New publication: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music

After several years of hard work, we are very happy to announce a new publication coming out of the MICRO project that I am leading: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music (Frontiers in Psychology).

The setup of the experiment in which we tested the effects of listening through headphones and speakers.

This is the first journal article of my PhD student Agata Zelechowska, and it reports on a standstill study conducted a couple of years ago. It is slightly different from the paradigm we have used for the Championships of Standstill. While the latter is based on single markers on the heads of multiple people, Agata’s experiment was conducted with full-body motion capture of individuals.

The most exciting thing about this new study is that we have investigated whether there are any differences in people’s micromotion when they listen through either headphones or speakers. Is there a difference? Yes, there is! People move (a little) more when listening through headphones.

Want to know more? The article is Open Access, so you can read the whole thing here. The short summary is here:

Previous studies have shown that music may lead to spontaneous body movement, even when people try to stand still. But are spontaneous movement responses to music similar if the stimuli are presented using headphones or speakers? This article presents results from an exploratory study in which 35 participants listened to rhythmic stimuli while standing in a neutral position. The six different stimuli were 45 s each and ranged from a simple pulse to excerpts from electronic dance music (EDM). Each participant listened to all the stimuli using both headphones and speakers. An optical motion capture system was used to calculate their quantity of motion, and a set of questionnaires collected data about music preferences, listening habits, and the experimental sessions. The results show that the participants on average moved more when listening through headphones. The headphones condition was also reported as being more tiresome by the participants. Correlations between participants’ demographics, listening habits, and self-reported body motion were observed in both listening conditions. We conclude that the playback method impacts the level of body motion observed when people are listening to music. This should be taken into account when designing embodied music cognition studies.
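For the technically curious: a quantity-of-motion measure of this kind can be computed from a marker’s position track as the distance travelled per unit time. Here is a rough Python sketch of that idea; it is not our actual analysis pipeline, and the marker data below is synthetic:

```python
import numpy as np

# Rough sketch of a quantity-of-motion measure, not the paper's code.
def quantity_of_motion(positions, fps):
    """Average distance travelled per second.

    positions: (n_frames, 3) array of x, y, z marker coordinates.
    fps: sampling rate of the motion capture system in Hz.
    """
    # Magnitudes of the frame-to-frame displacement vectors.
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    # Total distance divided by the duration of the recording.
    return steps.sum() / (len(positions) / fps)

# Synthetic example: 45 s of slightly drifting marker data at 240 Hz.
rng = np.random.default_rng(1)
positions = np.cumsum(rng.normal(0.0, 1e-4, size=(45 * 240, 3)), axis=0)
print(f"QoM: {quantity_of_motion(positions, fps=240):.5f} m/s")
```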

Method chapter freely available

I am a big supporter of Open Access publishing, but for various reasons some of my publications are not openly available by default. This is the case for the chapter Methods for Studying Music-Related Body Motion that I contributed to the Springer Handbook of Systematic Musicology.

I am very happy to announce that the embargo on the book ran out today, which means that a pre-print version of my chapter is finally freely available in UiO’s digital repository. This chapter is a summary of my experiences with music-related motion analysis, and I often recommend it to students, so it is great that it is finally available for download everywhere.

Abstract

This chapter presents an overview of some methodological approaches and technologies that can be used in the study of music-related body motion. The aim is not to cover all possible approaches, but rather to highlight some of the ones that are most relevant from a musicological point of view. This includes methods for video-based and sensor-based motion analyses, both qualitative and quantitative. It also includes discussions of the strengths and weaknesses of the different methods, and reflections on how they can be used in connection with other types of data, such as physiological or neurological data, symbolic notation, sound recordings, and contextual data.

Testing reveal.js for teaching

I was at NTNU in Trondheim today, teaching a workshop on motion capture methodologies for the students in the Choreomundus master’s programme. This is an Erasmus Mundus Joint Master Degree (EMJMD) investigating dance and other movement systems (ritual practices, martial arts, games, and physical theatre) as intangible cultural heritage. I am really impressed by this programme! It was a very nice and friendly group of students from all over the world, and they are experiencing a truly unique education run by the four partner universities. This is an even more complex organisational structure than the MCT programme that I am involved in myself.

In addition to running a workshop with the Qualisys motion capture system that they have (similar to the one in our fourMs Lab at RITMO), I was asked to present a general introduction to motion capture, as well as some video-based methods. I have made the more technically oriented tutorial Quantitative Video analysis for Qualitative Research, which describes how to use the Musical Gestures Toolbox for Matlab. Since Matlab was outside the scope of this session, I decided to create a non-technical presentation focusing more on the concepts.

Most of my recent presentations have been made in Google Presentation, a tool that really shows the potential of web-based applications (yes, I think it has matured to a point where we can actually talk about an application in the browser). The big benefit of a web-based presentation solution is that I can share links to the presentation both before and after a talk, and I avoid the hassle of moving large video files around.

Even though Google Presentation has been working fine, I would prefer to move to an open-source solution. I have also long wanted to try out markdown-based presentation tools, since I use markdown for most of my other writing. I have tried a few different solutions, but haven’t found anything that works smoothly enough. Many of them add too much complexity to the way you need to write your markdown, which removes some of the weightlessness of the approach. The easiest and best-looking solution so far seems to be reveal.js, but I haven’t found a way to integrate it into my workflow.

Parallel to my presentation experimentation, I have also been exploring Jupyter Notebook for analysis. The nice thing about this approach is that you can write cells of code that are evaluated on the fly and displayed seamlessly in the browser. This is great for developing, sharing, and teaching code, and also for moving towards more Open Research practices.

One cool thing I discovered is that Jupyter Notebook has built-in support for reveal.js! This means that you can export your complete notebook as a nice presentation. This is definitely something I am going to explore more with my coding tutorials, but for today’s workshop I ended up using it with markdown cells only.
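The export can be done from the command line with jupyter nbconvert --to slides, or from Python using nbconvert directly. Here is a minimal sketch of the latter (the notebook filename is just a placeholder):

```python
import nbformat
from nbconvert import SlidesExporter

# Read a notebook and render it as a reveal.js presentation, equivalent
# to running: jupyter nbconvert --to slides workshop.ipynb
# ("workshop.ipynb" is a placeholder filename.)
notebook = nbformat.read("workshop.ipynb", as_version=4)
body, resources = SlidesExporter().from_notebook_node(notebook)

with open("workshop.slides.html", "w") as f:
    f.write(body)
```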

I created three notebooks, one for each topic I was talking about, and exported them as presentations.

A really cool feature of reveal.js is the ability to move in two dimensions: you can keep track of the main sections of the presentation horizontally, while filling in more content vertically. Hitting the escape key zooms out to an overview of the entire presentation, as shown below:

The overview mode in reveal.js presentations.

The tricky part of using Jupyter Notebook for plain markdown presentations is that you need to make an individual cell for each part of the presentation, and set each cell’s slide type in its metadata (see the sketch below). This works, but it would make even more sense if I had some Python code in between. That is for next time, though.
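In the notebook interface, the slide types are set under View → Cell Toolbar → Slideshow. Here is a small sketch of how the same metadata can be written programmatically with nbformat (the cell contents and filename are placeholders):

```python
import nbformat

nb = nbformat.v4.new_notebook()

# "slide" starts a new horizontal section in reveal.js...
section = nbformat.v4.new_markdown_cell("# Motion capture")
section.metadata["slideshow"] = {"slide_type": "slide"}

# ...while "subslide" stacks content vertically beneath it.
detail = nbformat.v4.new_markdown_cell("More detail on a sub-slide.")
detail.metadata["slideshow"] = {"slide_type": "subslide"}

nb.cells = [section, detail]
nbformat.write(nb, "workshop.ipynb")  # placeholder filename
```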

Musical Gestures Toolbox for Matlab

Yesterday I presented the Musical Gestures Toolbox for Matlab in the late-breaking demo session at the ISMIR conference in Paris.

The Musical Gestures Toolbox for Matlab (MGT) aims to assist music researchers with importing, preprocessing, analyzing, and visualizing video, audio, and motion capture data in a coherent manner within Matlab.

Most of the concepts in the toolbox are based on the Musical Gestures Toolbox that I first developed for Max more than a decade ago. A lot of the Matlab coding for the new version was done by Bo Zhou as part of a master’s thesis.
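To give a flavour of the kind of analysis the toolbox supports, here is a rough Python/OpenCV sketch of the idea behind one of its visualizations, the motiongram. This is just the underlying principle, not the toolbox’s Matlab code, and the video filename is a placeholder:

```python
import cv2
import numpy as np

# Sketch of a motiongram, not the MGT Matlab implementation.
# ("dance.mp4" is a placeholder filename.)
cap = cv2.VideoCapture("dance.mp4")
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

columns = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The "motion image": pixels that changed between consecutive frames.
    diff = cv2.absdiff(gray, prev)
    # Collapse the motion image to one column (one mean value per row),
    # so that motion over time reads left to right in the final image.
    columns.append(diff.mean(axis=1))
    prev = gray
cap.release()

motiongram = np.stack(columns, axis=1)  # height x time
cv2.imwrite("motiongram.png", motiongram.astype(np.uint8))
```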

The new MGT is available on GitHub, and there is a more or less complete introduction to the main features in the software carpentry workshop Quantitative Video analysis for Qualitative Research.

Nordic Sound and Music Computing Network up and running

I am super excited about our new Nordic Sound and Music Computing Network, which has just started up with funding from the Nordic Research Council.

This network brings together a group of internationally leading sound and music computing researchers from institutions in five Nordic countries: Aalborg University, Aalto University, KTH Royal Institute of Technology, University of Iceland, and University of Oslo. The network covers the field of sound and music from the “soft” to the “hard,” including the arts and humanities, and the social and natural sciences, as well as engineering, and involves a high level of technological competency.

At the University of Oslo we have one open PhD fellowship connected to the network, with an application deadline of 4 April 2018. We invite PhD proposals that focus on sound/music interaction with periodic/rhythmic human body motion (walking, running, training, etc.). The appointed candidate is expected to carry out observation studies of human body motion in real-life settings, using different types of mobile motion capture systems (full-body suit and individual trackers). Results from the analysis of these observation studies should form the basis for the development of prototype systems for using such periodic/rhythmic motion in musical interaction.

The appointed candidate will benefit from the combined expertise within the NordicSMC network, and is expected to carry out one or more short-term scientific missions to the other partners. At UiO, the candidate will be affiliated with RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This interdisciplinary centre focuses on rhythm as a structuring mechanism for the temporal dimensions of human life. RITMO researchers span the fields of musicology, psychology and informatics, and have access to state-of-the-art facilities in sound/video recording, motion capture, eye tracking, physiological measurements, various types of brain imaging (EEG, fMRI), and rapid prototyping and robotics laboratories.