MusicLab receives Danish P2 Prisen

Yesterday, I was in Copenhagen to receive the Danish Broadcasting Corporation’s P2 Prisen for “event of the year”. The prize was awarded to MusicLab Copenhagen, a unique “research concert” held last October after two years of planning.

The main person behind MusicLab Copenhagen is Simon Høffding, a former postdoc at RITMO and now an associate professor at the University of Southern Denmark. He has collaborated with the world-leading Danish String Quartet for a decade, focusing on musical absorption.

Simon and I met up to have a quick discussion about the prize before the ceremony.

The organizers asked if we could do some live data capturing during the prize ceremony. However, we could not repeat what we did during MusicLab Copenhagen, where a team of 20 researchers spent a day setting up before the concert. Instead, I created some real-time video analysis of the Danish Baroque orchestra using MGT for Max. That at least gave some idea of what it is possible to extract from a video recording.
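At its core, such analysis is based on differencing consecutive video frames. Below is a minimal Python/OpenCV sketch of the idea, producing a motion image and a rough “quantity of motion” measure. This is my own illustration of the principle, not the actual Max patch used at the ceremony:

```python
# Minimal sketch: real-time frame differencing of a video stream.
# The motion image shows what changed between frames, and the mean
# pixel change gives a rough "quantity of motion" (QoM) estimate.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # 0 = default camera; pass a filename for a file
prev = None

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is not None:
        motion = cv2.absdiff(gray, prev)      # pixel-wise frame difference
        qom = float(np.mean(motion))          # rough activity measure
        cv2.putText(motion, f"QoM: {qom:.1f}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, 255, 2)
        cv2.imshow("motion", motion)
    prev = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```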

Testing the video visualization during the dress rehearsal.

The prize is a fantastic recognition of a unique event. MusicLab is an innovation project between RITMO and the University Library in Oslo. The aim is to explore how Open Research can be carried out in real-world settings. MusicLab Copenhagen is the largest and most complex MusicLab we have organized to date. In fact, we did one complete concert and one full test run of the setup to be sure that everything would work well.

While Simon, Fredrik (from the DSQ), and I were on stage to receive the prize, it should be said that we received it on behalf of many others. Around 20 people from RITMO and many others contributed to the event. Thanks to everyone for making MusicLab Copenhagen a reality!

MusicLab Copenhagen was a huge team effort. Here, many of us gathered in front of Musikhuset in Copenhagen before setting up equipment for the concert in October 2021.

The status of FAIR in higher education

I participated in the closing event of the FAIRsFAIR project last week. For that, I was asked to share thoughts on the status of FAIR in higher education. This is a summary of the notes that I wrote for the event.

What is FAIR?

First of all, The FAIR principles state that data should be:

  • Findable: The first step in (re)using data is to find them. Metadata and data should be easy to find for both humans and computers. Machine-readable metadata are essential for the automatic discovery of datasets and services, making them a key component of the FAIRification process.
  • Accessible: Once users find the required data, they need to know how the data can be accessed, possibly including authentication and authorisation.
  • Interoperable: The data usually need to be integrated with other data. In addition, the data need to interoperate with applications or workflows for analysis, storage, and processing.
  • Reusable: The ultimate goal of FAIR is to optimise the reuse of data. To achieve this, metadata and data should be well-described so that they can be replicated and/or combined in different settings.

This all sounds good, but the reality is that FAIRifying data is a non-trivial task.

FAIR is not (necessarily) Open

Readers of this blog know that I am an open research advocate, which also means that I embrace the FAIR principles. It is impossible to make data genuinely open if they are not FAIR. However, it is essential to note that FAIR data is not the same as Open Data. It is perfectly possible to “FAIRify” closed data: that would entail making metadata about the data openly available according to the FAIR principles while keeping the actual data closed.

There are many cases where it is impossible to make data openly available. In the fourMs Lab, we often have to keep the data closed. There are usually two reasons for this:

  • Privacy: we perform research on and with humans and, therefore, almost always record audio and video files from which it is possible to identify the people involved. Some participants in our studies consent to us sharing their data, but others do not. So we try to anonymize our material, such as by blurring faces, creating motion videos, or keeping only motion capture data (see the sketch after this list), but sometimes that is not possible. Anonymization makes sense in some cases, such as when we capture hundreds of people for the Championships of Standstill. But expert musicians are not so easy to anonymize: the sound of an expert Hardanger fiddler is enough for anyone in the community to recognize the person in question, and their facial expressions may be an important part of understanding the effects of a performance. So if we cannot share their face and their sound, the data is not useful.
  • Copyright: even more challenging than privacy are questions related to copyright. When working with real music (as opposed to the “synthetic” music used in many music psychology studies), we need to consider the rights of composers, musicians, producers, and so on. This is a tricky matter. On the one hand, we would like to work with new music by living musicians. On the other hand, the legal territory is challenging, with many actors, national legislations, copyright unions, and so on. Unfortunately, we do not have the legal competency and administrative capacity to tackle all the practicalities of ensuring that we are allowed to openly share much of the music we use.
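To give a concrete idea of what the anonymization mentioned above can involve, here is a minimal Python/OpenCV sketch of face blurring. It only illustrates the principle; our actual pipelines are more involved, and automatic detection alone is never enough to guarantee anonymity:

```python
# Minimal sketch: blur detected faces in a video frame.
# Uses OpenCV's bundled Haar cascade face detector; a production
# pipeline would need a more robust detector plus manual checking.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```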

These challenges make sharing all our data openly difficult, but we can still make them FAIR.

How far have we come?

Three things need to be in place to FAIRify data properly:

  1. Good and structured data. This is the main responsibility of the researcher. However, it is much easier said than done. Take the example of MusicLab Copenhagen, an event we ran in October. It was a huge undertaking, with lots of data and media collected and recorded by around 20 people. We are still working on organizing the data in meaningful ways. The plan is to release as much as possible as fast as possible, but it takes an astonishing amount of time to pre-process the data and structure it meaningfully. After all, if the data does not make sense to the person who collected it, nobody else will be able to use it either.
  2. A data repository to store the data. Once the data is structured, it needs to be stored somewhere that provides the necessary tools and, in particular, persistent identifiers (such as DOIs). We don’t have our own data repository at UiO, so here we need to rely on other solutions. There are two main types of repositories: (a) “bucket-based” repositories that researchers can use themselves, and (b) data archives run by institutions with data curators. That brings me to the third point:
  3. Data wranglers and curators. With some training and the right infrastructure, researchers may take on this role themselves. Tools such as Zenodo, Figshare, and OSF allow researchers to drop in their files and get DOIs, version control, and timestamping (see the sketch after this list). However, in my experience, even when these technical preservation parts are in place, the data may not be (a) good and structured enough in itself, and/or (b) accompanied by sufficient metadata and “padding” to be meaningful to others. That is why institutional archives employ professional data wranglers and curators to help with precisely these parts.
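As an example of the “bucket-based” workflow, here is a minimal Python sketch of depositing a file with metadata on Zenodo through its REST API. The token, filename, and metadata are placeholders, and the endpoints may evolve, so check Zenodo’s current API documentation before relying on this:

```python
# Minimal sketch: deposit a data file with metadata on Zenodo and
# mint a DOI. TOKEN and "dataset.zip" are placeholders.
import requests

TOKEN = "YOUR-ZENODO-ACCESS-TOKEN"
params = {"access_token": TOKEN}

# 1. Create an empty deposition
r = requests.post("https://zenodo.org/api/deposit/depositions",
                  params=params, json={})
deposition = r.json()
bucket_url = deposition["links"]["bucket"]

# 2. Upload the data file to the deposition's file bucket
with open("dataset.zip", "rb") as fp:
    requests.put(f"{bucket_url}/dataset.zip", data=fp, params=params)

# 3. Attach metadata -- this is what makes the deposit Findable
metadata = {"metadata": {
    "title": "Example motion capture dataset",
    "upload_type": "dataset",
    "description": "Markers, audio, and video from a lab session.",
    "creators": [{"name": "Doe, Jane", "affiliation": "RITMO"}],
}}
requests.put(f"https://zenodo.org/api/deposit/depositions/{deposition['id']}",
             params=params, json=metadata)

# 4. Publishing mints the DOI (irreversible, so do this last)
requests.post(f"https://zenodo.org/api/deposit/depositions/"
              f"{deposition['id']}/actions/publish", params=params)
```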

More people and institutions have started to realize that data handling is a skill and a profession of its own. Many universities have begun to employ data curators in their libraries to help with the FAIRification process. However, the challenge is that they are too few and too far removed from where the research is happening. In my experience, much trouble at the later stages of the data archiving “ladder” can be avoided if the data is handled better from the start.

At RITMO, we have two lab engineers who double as data managers. When we hired them back in 2018, they were among the first local data managers at UiO, and that has proven essential for moving things forward. As lab engineers, they are involved in data collection from the start and can therefore help with data curation long before we get to the archival stage. They also help train new researchers in data management thinking and follow the data from beginning to end.

Incentives and rewards

There are many challenges to solve before we have universal FAIRification in place. Fortunately, many things are moving in the right direction: policies and recommendations are being made at international, national, and institutional levels, infrastructures are being established, and personnel are being trained.

The biggest challenge now, I think, is to get incentives and rewards in place. Sharing data openly, or at least making the data FAIR, is still seen as costly, cumbersome, and/or unnecessary. That is because of the lack of incentives and rewards for doing so. We are still at a stage where publications “count” the most in the system: committees primarily look at publication lists and h-indices when making decisions about hiring, promotion, or funding allocations.

Fortunately, there is a lot of focus on research assessment these days. I have been involved in developing the Norwegian Career Assessment Matrix (NOR-CAM) as a model for broadening the focus. I am also pleased to see that research evaluation and assessment are at the forefront of the Paris Open Science European Conference (OSEC) starting today. When researchers get proper recognition for FAIRifying their data, we will see a radical change.

One month of sound actions

One month has passed of both the year and my sound action project. I didn’t know how the project would develop when I started and have found it both challenging and inspiring. It has also engaged people around me more than I had expected.

Each day I upload one new video recording to YouTube and post a link on Twitter. If you want to look at the whole collection, it is probably better to check out this playlist:

The beauty of everyday sounds

One interesting result from the project so far is that many people have told me that they have started to reflect on sounds in their environment. That was also one of my motivations for the project. We produce sounds, willingly and unwillingly, all the time. Yet, we rarely think about how these sounds actually sound. By isolating and presenting everyday sounds, I help to “frame” them and make people reflect on their sonic qualities.

The project is not only about sound. Equally important are the actions that produce the sounds, what I call sound-producing actions. I aim to show that both the visual and sonic parts of a sound action are important. Watching the action gives a sense of the sound to come, and listening to the sound can tell us about the actions and objects involved.

Recording gear

I didn’t have a thorough plan for recording the sound actions. However, it was clear from the start that I would not aim for studio-quality recordings. The most important thing has been to keep the recordings as simple as possible. That said, I don’t want too much auditory or visual noise in the recordings either, so I try to find quiet locations and frame only the action.

It is said that the best camera is the one you have at hand. In my case, that is my mobile phone (a Samsung S21 Ultra 5G), which sports quite a good camera. It can record in 4K, which may be a bit of an overkill for this project. But, hey, why not… I don’t know exactly how I will use the material later on, but having more pixels to work with will probably come in handy at some point.

The built-in microphones on the phone are not bad but not particularly good either. The phone can record in stereo, and it is possible to switch between the “front” and “back” microphones. Still, both of these settings capture a lot of ambient sounds. That is not ideal for this project, in which I am more interested in directional sound. So I mainly use a VideoMic Me-C for the recordings. The microphone sounds a bit “sharp,” which I guess is because it is targeted at speech recording. Nevertheless, it is small and therefore easy to carry around. So I will probably continue to use it for a while.

Different sound types

I haven’t been very systematic in capturing sound actions so far. Looking at the first 31 recordings shows a nice mix captured at home, in my office, and outside. Many of the sound actions have been proposed by my family, and they have also helped with the recording. Some colleagues have also come up with ideas for new sound actions, so I am sure that I will have enough inspiration to capture 365 recordings by the end of the year.

One challenge has been to isolate single sound actions. But what is actually one sound action? For example, consider today’s recording:

I would argue that this is one action: I move the switch from left to right in one rotating motion. However, due to the steps in the switch, the resultant sound has three amplitude spikes. So it can be seen as a sustained action type leading to a series of impulsive sounds. Taken together, this could probably be considered an iterative sound action type if we look at the three main types from Schaeffer’s taxonomy:

An illustration of the sound and action energy profiles of the three main sound types proposed by Pierre Schaeffer (impulsive, sustained, and iterative).
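This kind of categorization can be approximated from a recording’s amplitude envelope. Here is a rough Python sketch that counts energy bursts in the envelope; the filename and thresholds are placeholders, and a serious analysis would obviously need more than peak counting:

```python
# Rough sketch: classify a sound action as impulsive, iterative, or
# sustained by counting bursts in its amplitude envelope.
import librosa
import numpy as np
from scipy.signal import find_peaks

y, sr = librosa.load("sound_action.wav", sr=None)  # placeholder filename
env = librosa.feature.rms(y=y, frame_length=2048, hop_length=512)[0]

# Peaks at least ~100 ms apart and above 30% of the maximum level
min_dist = max(1, int(0.1 * sr / 512))
peaks, _ = find_peaks(env, height=0.3 * env.max(), distance=min_dist)

if len(peaks) == 1:
    print("impulsive (single energy burst)")
elif len(peaks) > 1:
    print(f"iterative ({len(peaks)} bursts, like the three-step switch)")
else:
    print("sustained (no clear bursts above the threshold)")
```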

My new book discusses sound-producing actions from a theoretical perspective. The nice thing about the current project is that I get to test the theories in practice. That is easier said than done. It is generally easy to record impulsive sound actions, but the sustained and iterative ones are more challenging. For example, consider the coffee grinding:

The challenge was that I didn’t know how long the recording should be. Including only one turn would not give a sense of the quality of the action or the sound, so I decided to make it a little longer. It is too early to start analyzing the recordings, but I think more patterns will emerge as I keep going.

Well, one month has passed, 11 more to come. I am looking forward to continuing my exploration into sound actions!

New online course: Motion Capture

After two years in the making, I am happy to finally introduce our new online course: Motion Capture: The art of studying human activity.

The course will run on the FutureLearn platform and is for everyone interested in the art of studying human movement. It has been developed by a team of RITMO researchers in close collaboration with the pedagogical team and production staff at LINK – Centre for Learning, Innovation & Academic Development.

Motivation

In the past, we had so few users in the fourMs lab that they could be trained individually. With all the exciting new projects at RITMO and an increasing number of external users, we realized that we needed a more structured approach to teaching motion capture to new users.

The idea was to develop an online course that would teach incoming RITMO students, staff, and guests about motion capture basics. After completing the online course, they would move on to hands-on training in the lab. However, once the team started sketching the content of the course, it quickly grew in scope. The result is a six-week online course, a so-called massive open online course (MOOC) that will run on the FutureLearn platform.

From one of the early workshops with LINK, in which I explain the basics of a motion capture system (Photo: Nina Krogh).

MOOC experience

Developing a MOOC is a major undertaking, but we learned a lot when we developed Music Moves back in 2015-2016. Thousands of people have been introduced to embodied music cognition through that course. In fact, we will run it for the seventh time on 24 January 2022.

Motion capture is only mentioned in passing in Music Moves, and many learners ask for more. Looking around, we have not really found any general courses on motion capture. There are many system-specific tutorials and courses, but none that introduce the basics of motion capture more broadly. As I have written about in the Springer Handbook of Systematic Musicology (open access version), there are many types of motion capture systems. Most people think of the ones where users wear a suit with reflective markers, but that is only one type of motion capture.

From biomechanics to data management

In the new Motion Capture course, we start by teaching the basics of human anatomy and biomechanics. I started using motion capture without that knowledge myself and have since realized that it is better to understand a bit about how the body moves before playing with the technology.

RITMO lab engineer Kayla Burnim discusses the course structure with Audun Bjerknes and Mirjana Coh from LINK (Photo: Nina Krogh).

The following weeks in the course contain all the information necessary to conduct a motion capture experiment: setting up cameras, calibrating the system, post-processing, and analysis. The focus is on infrared motion capture, but some other sensing technologies are also presented, including accelerometers, muscle sensors, and video analysis. The idea is not to show everything but to give people a good foundation when walking into a motion capture lab.
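To give a flavour of what the post-processing and analysis weeks cover, here is a minimal Python sketch of two routine steps: gap-filling a marker trajectory (markers drop out when they are occluded) and smoothing before computing speed. The data is synthetic and the parameters are my own illustration, not code from the course:

```python
# Minimal sketch: gap-fill and smooth a marker trajectory, then
# compute its speed. The trajectory here is synthetic.
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

fps = 240  # a common frame rate for infrared motion capture
x = pd.Series(np.sin(np.linspace(0, 2 * np.pi, fps)))  # fake 1 s trajectory (m)
x[100:110] = np.nan                                    # simulated marker dropout

x_filled = x.interpolate(method="linear")          # fill short gaps
x_smooth = savgol_filter(x_filled, window_length=21, polyorder=3)

speed = np.abs(np.gradient(x_smooth) * fps)        # m/s, first derivative
print(f"peak speed: {speed.max():.2f} m/s")
```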

The last week is dedicated to data management, including documentation, privacy, and legal issues. These are not the most exciting topics if you just want to get going with motion capture, but they are necessary if you are going to do research according to today’s regulations.

From idea to course

Making a complete online course is a major undertaking. Having done it twice, I would compare it to writing a textbook. Prior experience and a good team help, but it is still a significant team effort.

We worked with UiO’s Centre for Learning, Innovation and Academic Development (LINK) when developing Music Moves, and I wanted to get them on board for this new project too. They helped structure the development into stages: ideation, development of learning outcomes, production planning, and production. It is tempting to start filming right away, but the result is much better if you plan properly. Last time, we made the quizzes and tests last; this time, I pushed to make them first so that we knew the direction we were heading.

Mikkel Kornberg Skjeflo from LINK explains how the learning experience becomes more engaging by using different learning activities in the course (Photo: Nina Krogh).

Video production

In Music Moves, we did a lot of “talking head” studio recordings, like this one:

It works for conveying content, but I look uncomfortable and don’t get the content across very well. I find the “dialogue videos” much more engaging:

Looking at the feedback from learners (we have had around 10 000 people in Music Moves over the years!), they also seem to engage more with less polished video material. So for Motion Capture, we decided to avoid “lecture videos”. Instead, we created situations where pairs would talk about a particular topic. We wrote scripts first, but the recordings were spontaneous, making for a much more lively interaction.

The course production coincided with MusicTestLab, an event for testing motion capture in a real-world venue. The team agreed to use this event as a backdrop for the whole course, making for a quite chaotic recording session. Filming an online course in parallel with running an actual experiment that was also streamed live was challenging, but it also gives learners an authentic look into how we work.

Audun Bjerknes and Thea Dahlborg filming a motion capture experiment in the foyer of the Science Library.

Ready for Kick-off

The course will run on FutureLearn from 24 January 2022. Over the last months, we have done the final tweaking of the content. Much effort has also been put into ensuring accessibility: all videos have been captioned, images have been labelled, and copyrights have been checked. That is why I compare it to writing a textbook: writing the content is only part of the process. Similarly, developing a MOOC is not only about writing texts and recording videos; the whole package needs to be in place.

Music Moves has been running since 2016 and is still going strong. I am excited to see how Motion Capture will be received!

Try not to headbang challenge

I recently came across a video of the so-called Try not to headbang challenge, where the idea is to, well, not headbang while listening to music. This immediately caught my attention. After all, I have been researching music-related micromotion for several years and have run the Norwegian Championship of Standstill since 2012.

Here is an example of Nath & Johnny trying the challenge:

As seen in the video, they are doing OK, although they are far from sitting still. Running the video through the Musical Gestures Toolbox for Python makes it possible to see clearly when and how much they moved.

Below is a quick visualization of the 11-minute-long sequence. The videogram (similar to a motiongram, but of the original video) shows quite a lot of motion throughout. There is no headbanging, but they do not sit still.

A videogram of the complete video recording (top) with a waveform of the audio track. Two selected frames from the sequence and “zoomed-in” videograms show the motion of specific passages.
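For those wondering how a videogram is made: each frame of the video is collapsed into a single pixel column, and the columns are stacked left to right over time. Here is a minimal from-scratch Python sketch of the idea (the filename is a placeholder, and this is an illustration of the principle rather than the toolbox’s actual code):

```python
# Minimal sketch: compute a videogram by collapsing each frame to a
# single column (averaging across the horizontal axis) and stacking
# the columns over time.
import cv2
import numpy as np

cap = cv2.VideoCapture("headbang_challenge.mp4")  # placeholder filename
columns = []
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    columns.append(gray.mean(axis=1))  # average each row -> one column
cap.release()

videogram = np.stack(columns, axis=1)  # x-axis: time, y-axis: vertical position
cv2.imwrite("videogram.png", videogram.astype(np.uint8))
```

A motiongram is computed the same way, but from the frame-differenced motion video instead of the original frames.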

There are many good musical examples listed here. We should consider some of them for our next standstill championship. If corona allows, we plan to run a European Championship of Standstill in May 2022. More information soon!