MusicLab receives Danish P2 Prisen

Yesterday, I was in Copenhagen to receive the Danish Broadcasting Corporation’s P2 Prisen for “event of the year”. The prize was awarded to MusicLab Copenhagen, a unique “research concert” held last October after two years of planning.

The main person behind MusicLab Copenhagen is Simon Høffding, a former postdoc at RITMO and now an associate professor at the University of Southern Denmark. He has collaborated with the world-leading Danish String Quartet for a decade, focusing on understanding musical absorption.

Simon and I met up to have a quick discussion about the prize before the ceremony.

The organizers asked if we could do some live data capturing during the prize award ceremony. However, we could not repeat what we did during MusicLab Copenhagen, where a team of 20 researchers spent a day setting up before the concert. Instead, I created some real-time video analysis of the Danish Baroque orchestra using MGT for Max. That at least gave some idea of what it is possible to extract from a video recording.

Testing the video visualization during the dress rehearsal.

The prize is a fantastic recognition of a unique event. MusicLab is an innovation project between RITMO and the University Library in Oslo. The aim is to explore how it is possible to carry out Open Research in real-world settings. MusicLab Copenhagen is the largest and most complex MusicLab we have organized to date. In fact, we did one complete concert and one test run of the setup to be sure that everything would work well.

While Simon, Fredrik (from the DSQ), and I were on stage to receive the prize, it should be said that we received it on behalf of many others. Around 20 people from RITMO and many others contributed to the event. Thanks to everyone for making MusicLab Copenhagen a reality!

MusicLab Copenhagen was a huge team effort. Here, many of us gathered in front of Musikhuset in Copenhagen before setting up equipment for the concert in October 2021.

The status of FAIR in higher education

I participated in the closing event of the FAIRsFAIR project last week. For that, I was asked to share thoughts on the status of FAIR in higher education. This is a summary of the notes that I wrote for the event.

What is FAIR?

First of all, the FAIR principles state that data should be:

  • Findable: The first step in (re)using data is to find them. Metadata and data should be easy to find for both humans and computers. Machine-readable metadata are essential for the automatic discovery of datasets and services, making them a key component of the FAIRification process (see the example after this list).
  • Accessible: Once users find the required data, they need to know how the data can be accessed, possibly including authentication and authorisation.
  • Interoperable: The data usually need to be integrated with other data. In addition, the data need to interoperate with applications or workflows for analysis, storage, and processing.
  • Reusable: The ultimate goal of FAIR is to optimise the reuse of data. To achieve this, metadata and data should be well-described so that they can be replicated and/or combined in different settings.
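
To make the “machine-readable” part concrete: a DOI can be resolved to structured metadata, rather than a human-oriented landing page, through content negotiation. A minimal sketch, assuming a DataCite-registered DOI (the DOI itself is a placeholder):

# Ask the DOI resolver for machine-readable metadata instead of the HTML landing page
curl -L -H "Accept: application/vnd.datacite.datacite+json" https://doi.org/<your-doi>

This is the kind of automatic discovery the principles aim for: a script can harvest and index such records without a human in the loop.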

This all sounds good, but the reality is that FAIRifying data is a non-trivial task.

FAIR is not (necessarily) Open

Readers of this blog know that I am an open research advocate, and that also means that I embrace the FAIR principles. It is impossible to make data genuinely open if they are not FAIR. However, it is important to stress that FAIR data are not the same as Open Data. It is perfectly possible to “FAIRify” closed data. That would entail making metadata about the data openly available according to the FAIR principles while keeping the actual data closed.

There are many cases where it is impossible to make data openly available. In the fourMs Lab, we often have to keep the data closed. There are usually two reasons for this:

  • Privacy: we perform research on and with humans and, therefore, almost always record audio and video files from which it is possible to identify the people involved. Some participants in our studies consent to us sharing their data, but others do not. So we try to anonymize our material, such as by blurring faces, creating motion videos, or keeping only motion capture data, but sometimes that is not possible. Anonymized data make sense in some cases, such as when we capture hundreds of people for the Championships of Standstill. But if we study expert musicians, they are not so easy to anonymize. The sound of an expert Hardanger fiddler is enough for anyone in the community to recognize the person in question. And their facial expressions may be an important part of understanding the effects of a performance. So if we cannot share their faces and their sound, the data are of limited use.
  • Copyright: even more challenging than the privacy matters are the questions related to copyright. When working with real music (as opposed to the “synthetic” music used in many music psychology studies), we need to consider the rights of composers, musicians, producers, and so on. This is a tricky matter. On the one hand, we would like to work with new music by living musicians. On the other hand, the legal territory is challenging, with many actors, national legislations, copyright unions, and so on. Unfortunately, we do not have the legal competency and administrative capacity to tackle all the practicalities of ensuring that we are allowed to share much of the music we use openly.

These challenges make sharing all our data openly difficult, but we can still make them FAIR.

How far have we come?

Three things need to be in place to FAIRify data properly:

  1. Good and structured data. This is the main responsibility of the researcher. However, it is much easier said than done. Take the example of MusicLab Copenhagen, an event we ran in October. It was a huge undertaking, with lots of data and media collected and recorded by around 20 people. We are still working on organizing the data in meaningful ways. The plan is to release as much as possible as fast as possible, but it takes an astonishing amount of time to pre-process and structure the data in a meaningful way. After all, if the data do not make sense to the person who collected them, nobody else will be able to use them either.
  2. A data repository to store the data. Once the data is structured, it needs to be stored somewhere that provides the necessary tools and, in particular, persistent identifiers (such as DOIs). We don’t have our own data repository at UiO, so here we need to rely on other solutions. There are two main types of repositories: (a) “bucket-based” repositories that researchers can use themselves, and (b) data archives run by institutions with data curators. That brings me to the third point:
  3. Data wranglers and curators. With some training and the right infrastructures, researchers may take on this role themselves. Tools such as Zenodo, Figshare, and OSF allow researchers to drop in their files and get DOIs, version control, and timestamping. However, in my experience, even when these technical preservation parts are in place, the data may not be (a) good and structured enough in themselves, and/or (b) accompanied by sufficient metadata and “padding” to be meaningful to others. That is why institutional archives employ professional data wranglers and curators to help with these tasks.

More people and institutions have started to realize that data handling is a skill and a profession of its own. Many universities have begun to employ data curators in their libraries to help with the FAIRification process. However, the challenge is that they are too few and too far removed from where the research is happening. In my experience, much trouble at the later stages of the data archiving “ladder” can be avoided if the data is handled better from the start.

At RITMO, we have two lab engineers who double as data managers. When we hired them back in 2018, they were among the first local data managers at UiO, and that has proven essential for moving things forward. As lab engineers, they are involved in data collection from the start, so they can help with data curation long before we get to the archival stage. They also help train new researchers in thinking about data management and follow the data from beginning to end.

Incentives and rewards

There are many challenges to solve before we have universal FAIRification in place. Fortunately, many things are moving in the right direction: policies and recommendations are being made at international, national, and institutional levels, infrastructures are being established, and personnel are being trained.

The biggest challenge now, I think, is to get incentives and rewards in place. Sharing data openly, or at least making the data FAIR, is still seen as costly, cumbersome, and/or unnecessary. That is because of the lack of incentives and rewards for doing so. We are still at a stage where publications “count” the most in the system. Committees primarily look at publication lists and h-indexes when making decisions about hiring, promotion, or funding allocations.

Fortunately, there is a lot of focus on research assessment these days. I have been involved in developing the Norwegian Career Assessment Matrix (NOR-CAM) as a model for broadening the focus. I am also pleased to see that research evaluation and assessment are at the forefront of the Paris Open Science European Conference (OSEC) starting today. When researchers get proper recognition for FAIRifying their data, we will see a radical change.

Different 16:9 format resolutions

I often have to convert between different resolutions of videos and images and always forget the pixel dimensions that correspond to a 16:9 format. So here is a cheat-sheet:

  • 2160p: 3840×2160
  • 1440p: 2560×1440
  • 1080p: 1920×1080
  • 720p: 1280×720
  • 540p: 960×540
  • 480p: 854×480
  • 360p: 640×360
  • 240p: 426×240
  • 120p: 213×120
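
For sizes not on the list, the width is simply the height multiplied by 16/9. A minimal shell sketch (note that heights not divisible by 9, such as 480, 240, and 120, give fractional widths; the conventional values round these to a nearby integer, which is why 480p is listed as 854×480 rather than the exact 853.33×480):

for h in 2160 1440 1080 720 540 480 360 240 120; do
  echo "${h}p: $(( h * 16 / 9 ))x${h}"   # integer division truncates fractional widths
done

And for the conversion itself, FFmpeg does the job. A one-liner like the following (with 720p as an example target and hypothetical file names) rescales a video to any of the resolutions above:

ffmpeg -i input.mp4 -vf "scale=1280:720" output.mp4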

I also came across this complete list of true 16:9 resolution combinations, but the ones above suffice for my usage. Happy converting!

One month of sound actions

One month of the year, and of my sound action project, has passed. I didn’t know how it would develop when I started and have found it both challenging and inspiring. It has also engaged people around me more than I had expected.

Each day I upload one new video recording to YouTube and post a link on Twitter. If you want to look at the whole collection, it is probably better to check out this playlist:

The beauty of everyday sounds

One interesting result from the project so far is that many people have told me that they have started to reflect on sounds in their environment. That was also one of my motivations for the project. We produce sounds, willingly and unwillingly, all the time. Yet, we rarely think about how these sounds actually sound. By isolating and presenting everyday sounds, I help to “frame” them and make people reflect on their sonic qualities.

The project is not only about sound. Equally important are the actions that produce the sounds, what I call sound-producing actions. I aim to show that both the visual and sonic parts of a sound action are important. Watching the action gives a sense of the sound to come, and listening to the sound can tell us something about the actions and objects involved.

Recording gear

I didn’t have a thorough plan for recording the sound actions. However, it was clear from the start that I would not aim for studio-quality recordings. The most important thing has been to make the recordings as simple as possible. That said, I don’t want too much auditory or visual noise in the recordings either. So I try to find quiet locations and frame only the action.

It is said that the best camera is the one you have at hand. In my case, that is my mobile phone (a Samsung S21 Ultra 5G), which sports quite a good camera. It can record in 4K, which may be a bit overkill for this project. But, hey, why not… I don’t know exactly how I will use all the material later on, but having more pixels to work with will probably come in handy at some point.

The built-in microphones on the phone are not bad but not particularly good either. The phone can record in stereo, and it is possible to switch between the “front” and “back” microphones. Still, both of these settings capture a lot of ambient sounds. That is not ideal for this project, in which I am more interested in directional sound. So I mainly use a VideoMic Me-C for the recordings. The microphone sounds a bit “sharp,” which I guess is because it is targeted at speech recording. Nevertheless, it is small and therefore easy to carry around. So I will probably continue to use it for a while.

Different sound types

I haven’t been very systematic in capturing sound actions so far. Looking at the first 31 recordings shows a nice mix captured at home, in my office, and outside. Many of the sound actions have been proposed by my family, and they have also helped with the recording. Some colleagues have also come up with ideas for new sound actions, so I am sure that I will have enough inspiration to capture 365 recordings by the end of the year.

One challenge has been to isolate single sound actions. But what is actually one sound action? For example, consider today’s recording:

I would argue that this is one action; I move the switch from left to right with one rotating motion. However, due to the steps in the switch, the resultant sound has three amplitude spikes. So it can be seen as a sustained action type leading to a series of impulsive sounds. Taken together, this could probably be considered an iterative sound action if we look at the three main types from Schaeffer’s taxonomy:

An illustration of the sound and action energy profiles of the three main sound types proposed by Pierre Schaeffer (impulsive, sustained, and iterative).

My new book discusses sound-producing actions from a theoretical perspective. The nice thing about my current project is that I test the theories in practice. That is easier said than done. It is generally easy to record impulsive sound actions, but the sustained and iterative ones are more challenging. For example, consider the coffee grinding:

The challenge was that I didn’t know how long the recording should be. Including only one turn wouldn’t give a sense of the quality of the action or the sound, so I decided to make it a little longer. It is too early to start analyzing the recordings, but I think more patterns will emerge as I keep going.

Well, one month has passed, 11 more to come. I am looking forward to continuing my exploration into sound actions!

Preparing videos for FutureLearn courses

This week we started up our new online course, Motion Capture: The Art of Studying Human Activity, and we are also rerunning Music Moves: Why Does Music Make You Move? for the seventh time. Most of the material for these courses is premade, but we record a new wrap-up video at the end of each week. This makes it possible to answer questions that have been posed during the week and add some new and relevant material.

To simplify making these wrap-up videos, I am, this time around, recording them with my Samsung Galaxy S21 Ultra and a set of Røde Wireless GO II microphones. Time is limited when making these videos, so I have decided to quickly trim the files with FFmpeg instead of spending time in video editing software.
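
For reference, the trim is a one-liner along these lines (the timestamps and file names are placeholders; -c copy avoids re-encoding, at the cost of cutting at the nearest keyframe):

ffmpeg -i recording.mp4 -ss 00:00:03 -to 00:05:30 -c copy trimmed.mp4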

I have started shooting videos in 4K, not necessarily because I need it right now, but all my equipment supports 4K these days, and it feels more future-proof. However, FutureLearn does not like 4K and is rather picky about the files to be uploaded:

  • File format: .mp4 / .mov / .m4v
  • File size: up to 5GB
  • Codec: H264
  • Frame rate: 25 fps
  • Bit rate: min 2 Mbps constant bit rate
  • Sound: AAC 44.1 kHz stereo

So how do you go about creating such files? Well, FFmpeg comes to the rescue again:

ffmpeg -i input.mp4 -vf "scale=1920:1080,fps=25" -ar 44100 -b:v 8M -minrate 2M output.mp4

The one-liner is relatively self-explanatory. First, I apply a video filter that scales the video down to 1080p and reduces the framerate to 25 fps. Then I specify that the audio should be resampled to 44100 Hz. FutureLearn requires a minimum bitrate of 2 Mbps but does not specify a preferred target, so I went for 8 Mbps, the suggested bitrate for 1080p uploads to YouTube. I also added the 2 Mbps minimum at the end, although, as far as I understand, -minrate only takes effect in combination with -maxrate and a buffer size, and with an 8 Mbps average the stream should stay above FutureLearn’s floor anyway.
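
If FutureLearn should ever insist on a strictly constant bitrate, my understanding is that the closest approximation with x264 is to pin the minimum and maximum rates to the average and signal CBR explicitly. A hedged sketch (same placeholder file names as above, untested against FutureLearn’s checks):

ffmpeg -i input.mp4 -vf "scale=1920:1080,fps=25" -ar 44100 -b:v 8M -minrate 8M -maxrate 8M -bufsize 16M -x264-params nal-hrd=cbr output.mp4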

All in all, this means that I can do the complete video editing with two simple one-liners: one for trimming the file and one for converting to the correct format. That way, I should manage to create two such wrap-up videos each week for the coming weeks.