New online course: Motion Capture

After two years in the making, I am happy to finally introduce our new online course: Motion Capture: The art of studying human activity.

The course will run on the FutureLearn platform and is for everyone interested in the art of studying human movement. It has been developed by a team of RITMO researchers in close collaboration with the pedagogical team and production staff at LINK – Centre for Learning, Innovation & Academic Development.

Motivation

In the past, we had so few users in the fourMs lab that they could be trained individually. With all the exciting new projects at RITMO and an increasing number of external users, we realized that we needed a more structured approach to teaching motion capture to new users.

The idea was to develop an online course that would teach incoming RITMO students, staff, and guests about motion capture basics. After completing the online course, they would move on to hands-on training in the lab. However, once the team started sketching the content of the course, it quickly grew in scope. The result is a six-week online course, a so-called massive open online course (MOOC) that will run on the FutureLearn platform.

People talking in lab
From one of the early workshops with LINK, in which I explain the basics of a motion capture system (Photo: Nina Krogh).

MOOC experience

Developing a MOOC is a major undertaking, but we learned a lot when we developed Music Moves back in 2015-2016. Thousands of people have been introduced to embodied music cognition through that course. In fact, we will run it for the seventh time on 24 January 2022.

Motion capture is only mentioned in passing in Music Moves. Many learners ask for more. Looking around, we haven’t really found any general courses on motion capture. There are many system-specific tutorials and courses, but not any that introduce the basics of motion capture more broadly. As I have written about in the Springer Handbook of Systematic Musicology (open access version), there are many types of motion capture systems. Most people think about the ones where users wear a suit with reflective markers, but this is only one type of motion capture.

From biomechanics to data management

In the new Motion Capture course, we start by teaching the basics of human anatomy and biomechanics. I started using motion capture without that knowledge myself and have since realized that it is better to understand a bit about how the body moves before playing with the technology.

People talking in front of a whiteboard
RITMO lab engineer Kayla Burnim discusses the course structure with Audun Bjerknes and Mirjana Coh from LINK (Photo: Nina Krogh).

The following weeks in the course contain all the information necessary to conduct a motion capture experiment: setting up cameras, calibrating the system, post-processing, and analysis. The focus is on infrared motion capture, but some other sensing technologies are also presented, including accelerometers, muscle sensors, and video analysis. The idea is not to show everything but to give people a good foundation when walking into a motion capture lab.
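
To give a flavour of what the post-processing step involves, here is a minimal Python sketch of one typical task: filling short gaps in a marker trajectory where the cameras briefly lost sight of a reflective marker. This is not code from the course; the function name, gap threshold, and fake data are my own assumptions for illustration.

import numpy as np

def fill_gaps(marker_coordinate, max_gap=10):
    """Linearly interpolate short NaN gaps in one coordinate of one marker.

    marker_coordinate: 1-D float array (e.g. the x-values of a single marker),
                       with NaN wherever the marker was occluded.
    max_gap: the longest run of missing frames we are willing to fill;
             longer occlusions are left as NaN for manual inspection.
    """
    filled = marker_coordinate.astype(float).copy()
    missing = np.isnan(filled)
    frames = np.arange(len(filled))

    # Interpolate across all missing frames first...
    filled[missing] = np.interp(frames[missing], frames[~missing], filled[~missing])

    # ...then restore NaN for gaps that are longer than max_gap.
    gap_start = None
    for i in range(len(missing) + 1):
        if i < len(missing) and missing[i]:
            if gap_start is None:
                gap_start = i
        elif gap_start is not None:
            if i - gap_start > max_gap:
                filled[gap_start:i] = np.nan
            gap_start = None
    return filled

# Example with fake data: a three-frame dropout in a ten-frame recording.
x = np.array([0.0, 1.0, 2.0, np.nan, np.nan, np.nan, 6.0, 7.0, 8.0, 9.0])
print(fill_gaps(x))  # the gap is filled with 3.0, 4.0, 5.0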

The last week is dedicated to data management, including documentation, privacy, and legal issues. These are not the most exciting topics if you just want to get started with motion capture, but they are necessary if you are going to do research according to today's regulations.

From idea to course

Making a complete online course is a major undertaking. Having done it twice, I would compare it to writing a textbook. Prior experience and a good team help, but it is still a significant team effort.

We worked with UiO’s Centre for Learning, Innovation and Academic Development, LINK, when developing Music Moves, and I also wanted to get them on board for this new project. They helped structure the development into different stages: ideation, development of learning outcomes, production planning, and production. It is tempting to start filming right away, but the result is much better if you plan properly. Last time, we made the quizzes and tests at the very end; this time, I pushed to make them first so that we knew which direction we were heading in.

People talking in front of a table
Mikkel Kornberg Skjeflo from LINK explains how the learning experience becomes more engaging by using different learning activities in the course (Photo: Nina Krogh).

Video production

In Music Moves, we did a lot of “talking head” studio recordings, like this one:

It works for conveying the content, but I look uncomfortable and don’t get the material across very well. I find the “dialogue videos” much more engaging:

Looking at the feedback from learners (we have had around 10 000 people in Music Moves over the years!), they also seem to engage more with less polished video material. So for Motion Capture, we decided to avoid “lecture videos”. Instead, we created situations where pairs would talk about a particular topic. We wrote scripts first, but the recordings were spontaneous, making for a much more lively interaction.

The course production coincided with MusicTestLab, an event for testing motion capture in a real-world venue. The team agreed to use this event as a backdrop for the whole course, making for a quite chaotic recording session. Filming an online course in parallel with running an actual experiment that was also streamed live was challenging, but it also gives the learners an authentic look into how we work.

Musicians on stage with motion capture equipment.
Audun Bjerknes and Thea Dahlborg filming a motion capture experiment in the foyer of the Science Library.

Ready for Kick-off

The course will run on FutureLearn from 24 January 2022. Over the last few months, we have done the final tweaking of the content. Much effort has also been put into ensuring accessibility. All videos have been captioned, images have been labelled, and copyrights have been checked. That is why I compare it to writing a textbook. Writing the content is only part of the process. Similarly, developing a MOOC is not only about writing texts and recording videos. The whole package needs to be in place.

Music Moves has been running since 2016 and is still going strong. I am excited to see how Motion Capture will be received!

MusicLab Copenhagen

After nearly three years of planning, we can finally welcome people to MusicLab Copenhagen. This is a unique “science concert” involving the Danish String Quartet, one of the world’s leading classical ensembles. Tonight, they will perform pieces by Bach, Beethoven, and Schnittke, as well as folk music, in a normal concert setting at Musikhuset in Copenhagen. However, the concert is anything but normal.

Live music research

During the concert, about twenty researchers from RITMO and partner institutions will conduct investigations and experiments informed by phenomenology, music psychology, complex systems analysis, and music technology. The aim is to answer some big research questions, like:

  • What is musical complexity?
  • What is the relation between musical absorption and empathy?
  • Is there such a thing as a shared zone of absorption, and is it measurable?
  • How can musical texture be rendered visually?

The concert will be live-streamed (on YouTube and Facebook) and aired on Danish radio. There will also be a short film documenting the whole process.

Researchers and staff from RITMO (and friends) in front of the concert venue.

Real-world Open Research

This concert will be the biggest and most complex MusicLab event to date. Still, all the normal “ingredients” of a MusicLab will be in place. The core is a spectacular performance. We will capture a lot of data using state-of-the-art technologies, but in a way that is as unobtrusive as possible for the performers and the audience. After the concert, both performers and researchers will talk about the experience.

Of course, being a flagship Open Research project, all the collected data will be shared openly. The researchers will show glimpses of the data processing procedures as part of the “data jockeying” at the end of the event. However, the real work can only begin once all the data have been properly uploaded and pre-processed. All the involved researchers will then dig into their respective data. But since everything is openly available, anyone can work on the data as they wish.

Proper preparation

Due to the corona situation, the event has been postponed several times. That has been unfortunate and stressful for everyone involved. On the positive side, it has meant that we have been able to rehearse and prepare very well. As early as a year ago, we ran a full rehearsal of the technical setup for the concert. We even live-streamed the whole preparation event, in the spirit of “slow TV”:

I am quite confident that things will run smoothly during the concert. Of course, there are always obstacles. For example, one of our eye-trackers broke in one of the last tests. And it is always exciting to wait for Apple and Google to approve updates of our MusicLab app in their respective app stores.

Want to see how it went? Have a look here.

Sound and Music Computing at the University of Oslo

This year’s Sound and Music Computing (SMC) Conference has opened for virtual lab tours. When we cannot travel to visit each other, this is a great way to showcase how things look and what we are working on.

Stefano Fasciani and I teamed up a couple of weeks ago to walk around some of the labs and studios at the Department of Musicology and RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion. We started in the Portal used for the Music, Communication & Technology master’s programme and ended up in the fourMs Lab.

Needless to say, we only scratched the surface of everything going on in the field of sound and music computing at the University of Oslo in this video. The video focused primarily on our infrastructures. We have several ongoing projects that use these studios and labs, as well as some non-lab-based projects. These include:

And I should not forget to mention our exciting collaboration with partners in Copenhagen, Stockholm, Helsinki, and Reykjavik in the Nordic Sound and Music Computing network.

And, as we say at the end of the video, please don’t hesitate to get in touch if you want to visit us or collaborate on projects.



MusicTestLab as a Testbed of Open Research

Many people talk about “opening” the research process these days. Due to initiatives like Plan S, much has happened when it comes to Open Access to research publications. Things are also happening when it comes to sharing data openly (or at least FAIR). Unfortunately, there is currently more talking about Open Research than actually doing it. At RITMO, we are actively exploring different strategies for opening our research. The most extreme case is that of MusicLab. In this blog post, I will reflect on yesterday’s MusicTestLab – Slow TV.

About MusicLab

MusicLab is an innovation project by RITMO and the University Library. The aim is to explore new methods for conducting research, research communication, and education. The project is organized around events: concerts in public venues that are also the objects of study. The events also contain an edutainment element through panel discussions with world-leading researchers and artists, as well as “data jockeying” in the form of live analysis of the recorded data.

We have carried out five full MusicLab events so far, plus a couple of smaller in-between events. Now we are preparing for a huge event in Copenhagen with the Danish String Quartet. The concert has already been postponed once due to corona, but we hope to make it happen in May next year.

The wildest data collection ever

As part of the preparation for MusicLab Copenhagen, we decided to run a MusicTestLab to see if it is at all possible to carry out the type of data collection that we would like to do. Usually, we work in the fourMs Lab, a custom-built facility with state-of-the-art equipment. This is great for many things, but the goal of MusicLab is to do data collection in the “wild”, which would typically mean a concert venue.

For MusicTestLab, we decided to run the event on the stage in the foyer of the Science Library at UiO, which is a real-world venue that gives us plenty of challenges to work with. We decided to bring a full “package” of equipment, including:

  • infrared motion capture (Optitrack)
  • eye trackers (Pupil Labs)
  • physiological sensors (EMG from Delsys)
  • audio (binaural and ambisonics)
  • video (180° GoPros and 360° Garmin)

We are used to working with all of these systems separately in the lab, but it is more challenging to combine them in an out-of-lab setting, with the added pressure of getting everything set up in a fairly short amount of time.
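
One general strategy for keeping such heterogeneous streams aligned is to timestamp everything against a shared clock. As an illustration only, and not necessarily how we wired things up for MusicTestLab, here is a minimal Python sketch using Lab Streaming Layer (pylsl); the stream name, sampling rate, and fake samples are assumptions.

from pylsl import StreamInfo, StreamOutlet, local_clock
import random
import time

# Describe a made-up three-channel accelerometer stream at 100 Hz.
info = StreamInfo(name='demo_accelerometer', type='Accelerometer',
                  channel_count=3, nominal_srate=100,
                  channel_format='float32', source_id='demo_acc_001')
outlet = StreamOutlet(info)

# Push 1000 fake samples, each stamped with the shared LSL clock so that
# a recorder (e.g. LabRecorder) can align this stream with the others.
for _ in range(1000):
    sample = [random.random() for _ in range(3)]  # stand-in for real sensor data
    outlet.push_sample(sample, local_clock())
    time.sleep(0.01)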

Musicians on stage with many different types of sensors on, with RITMO researchers running the data collection and a team from LINK filming.

Streaming live – Slow TV

In addition to actually doing the data collection in a public venue, where people passing by can see what is going on, we decided to also stream the entire setup online. This may seem strange, but we have found that many people are genuinely interested in what we are doing. Many also ask how we do things, and this was a good opportunity to show the behind-the-scenes of a very complex data collection process. The recording of the stream is available online:

To make it a little more viewer-friendly, the stream features live commentary by myself and Solveig Sørbø from the library. We talk about what is going on and interview the researchers and musicians. As can be seen from the stream, it was quite a hectic event, further complicated by corona restrictions. We were about an hour late for the first performance, but we managed to complete the whole recording session within the allocated time frame.

The performances

The point of the MusicLab events is to study live music, and this was also the focal point of the MusicTestLab, featuring the excellent young, student-led Borealis String Quartet. They performed two movements of Haydn’s Op. 76, no. 4 «Sunrise» quartet. The first performance can be seen here (with a close-up of the motion capture markers):

The first performance of Haydn’s string quartet Op. 76, no. 4 (movements I and II) by the Borealis String Quartet.

After the first performance, the musicians took off the sensors and glasses, had a short break, and then put everything back on again. The point of this was for the researchers to get more experience with putting everything on properly. From a data collection point of view, it is also interesting to see how reliable the data are between different recordings. The second performance can be seen here, now with a projection of the gaze from the violist’s eye-tracking glasses:

The second performance of Haydn’s string quartet Op. 76, no. 4 (movements I and II) by the Borealis String Quartet.

A successful learning experience

The most important conclusion of the day was that it is, indeed, possible to carry out such a large and complex data collection in an out-of-lab setting. It took an hour longer than expected to set everything up, but it also took an hour less to take everything down. This is valuable information for later. We also learned a lot about what types of clamps, brackets, cables, and so on are needed for such events. Also useful was the experience of calibrating all the equipment in a new and uncontrolled environment. All in all, the experience will help us make better data collections in the future.

Sharing with the world

Why is it interesting to share all of this with the world? RITMO is a Norwegian Centre of Excellence, which means that we get a substantial amount of funding for doing cutting-edge research. We are also in the unique position of having a very interdisciplinary team of researchers with broad methodological expertise. With the trust we have received from UiO and our many funding agencies, we therefore feel an obligation to share as much as possible of our knowledge and expertise with the world. Of course, we present our findings at the major conferences and publish our final results in leading journals. But we also believe that sharing the way we work can help others.

Sharing our internal research process with the world is also a way of improving our own way of working. Having to explain what you do to others helps sharpen your own thinking. I believe that this, in turn, leads to better research. We cannot run MusicTestLabs every day. Today, all the researchers will copy the files that we recorded yesterday and start on the laborious post-processing of all the material. Then we can start on the analysis, which may eventually lead to a publication in a year (or two or three) from now. If we do end up with a publication (or more) based on this material, everyone will be able to see how the data were collected and follow the data processing through all its stages. That is our approach to doing research that is verifiable by our peers. And, if it turns out that we messed something up and the data cannot be used for anything, we have still learned a lot through the process. In fact, we even have a recording of the whole data collection process so that we can go back and see what happened.

Other researchers will need to come up with their own approaches to opening their research; MusicLab is our testbed. As can be seen from the video, it is hectic. Most importantly, though, it is fun!

RITMO researchers transporting equipment to MusicTestLab in the beautiful October weather.

New publication: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music

After several years of hard work, we are very happy to announce a new publication coming out of the MICRO project that I am leading: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music (Frontiers in Psychology).

From the setup of the experiment in which we tested the effects of listening through headphones and speakers.

This is the first journal article of my PhD student Agata Zelechowska, and it reports on a standstill study conducted a couple of years ago. It is slightly different from the paradigm we have used for the Championships of Standstill. While the latter is based on single markers on the heads of multiple people, Agata’s experiment was conducted with full-body motion capture of individuals.

The most exciting thing about this new study is that we have investigated whether there are any differences in people’s micromotion when they listen through headphones or speakers. Is there a difference? Yes, there is! People move (a little) more when listening through headphones.

Want to know more? The article is Open Access, so you can read the whole thing here. The short summary is here:

Previous studies have shown that music may lead to spontaneous body movement, even when people try to stand still. But are spontaneous movement responses to music similar if the stimuli are presented using headphones or speakers? This article presents results from an exploratory study in which 35 participants listened to rhythmic stimuli while standing in a neutral position. The six different stimuli were 45 s each and ranged from a simple pulse to excerpts from electronic dance music (EDM). Each participant listened to all the stimuli using both headphones and speakers. An optical motion capture system was used to calculate their quantity of motion, and a set of questionnaires collected data about music preferences, listening habits, and the experimental sessions. The results show that the participants on average moved more when listening through headphones. The headphones condition was also reported as being more tiresome by the participants. Correlations between participants’ demographics, listening habits, and self-reported body motion were observed in both listening conditions. We conclude that the playback method impacts the level of body motion observed when people are listening to music. This should be taken into account when designing embodied music cognition studies.
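
For readers curious about what a quantity-of-motion measure can look like in practice, here is a minimal Python sketch based on the common definition of QoM as distance travelled per unit of time. The exact implementation in the article may differ, and the array shapes, sampling rate, and fake data below are assumptions for illustration.

import numpy as np

def quantity_of_motion(positions, sample_rate):
    """Distance travelled per second by one marker.

    positions: (n_frames, 3) array of x, y, z coordinates (e.g. in mm).
    sample_rate: frames per second of the motion capture recording.
    """
    frame_displacements = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    duration = (len(positions) - 1) / sample_rate
    return frame_displacements.sum() / duration

# Example with fake data: 45 seconds of a slowly drifting marker at 200 Hz.
rng = np.random.default_rng(0)
fake_positions = np.cumsum(rng.normal(scale=0.05, size=(45 * 200, 3)), axis=0)
print(quantity_of_motion(fake_positions, sample_rate=200))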