Completing the MICRO project

I wrote up the final report on the project MICRO – Human Bodily Micromotion in Music Perception and Interaction before Christmas. Now I have finally gotten around to wrapping up the project pages. With the touch of a button, the project’s web page now says “completed”. But even though the project is formally over, its results will live on.

Aims and objectives

The MICRO project sought to investigate the close relationships between musical sound and human bodily micromotion. Micromotion is here used to describe the smallest motion that we can produce and experience, typically at a rate lower than 10 mm/s.
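
To make the threshold concrete, here is a minimal sketch in Python of how the average speed of a motion capture marker could be estimated; the marker data, sampling rate, and numbers are hypothetical, not from the project’s datasets:

import numpy as np

def mean_speed(positions, fs=100):
    """Estimate mean speed (mm/s) from 3D marker positions in mm."""
    # Distance travelled between consecutive samples (mm per sample)
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    # Multiply by the sampling rate to get mm/s
    return steps.mean() * fs

# Hypothetical data: ten minutes of "standstill" at 100 Hz,
# simulated as a slow random walk around a fixed posture
positions = np.cumsum(np.random.normal(0, 0.01, (60000, 3)), axis=0)
print(f"Mean speed: {mean_speed(positions):.2f} mm/s")  # well below 10 mm/s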

Example plots of the micromotion observed in the motion capture data of a person standing still for 10 minutes.

The last decades have seen an increased focus on the role of the human body in both the performance and the perception of music. Up to now, however, the micro-level of these experiences has received little attention.

The main objective of MICRO was broken down into three secondary objectives:

  1. Define a set of sub-categories of music-related micromotion.
  2. Understand more about how musical sound influences the micromotion of perceivers and which musical features (such as melody, harmony, rhythm, timbre, loudness, spatialization) come into play.
  3. Develop conceptual models for controlling sound through micromotion, and develop prototypes of interactive music systems based on these models.

Results

The project completed most of its planned activities and several more:

  1. The scientific results include many insights about human music-related micromotion. Results have been presented in one doctoral dissertation, two master’s theses, several journal papers, and at numerous conferences. As hypothesized, music influences human micromotion. This has been verified with different types of music in all the collected datasets. We have also found that music with a regular and strong beat, particularly electronic dance music, leads to more motion. Our data also support the idea that music with a pulse of around 120 beats per minute is more motion-inducing than music with slower or faster tempi. In addition, we found that people generally moved more when listening with headphones. Towards the end of the project, we began studying whether there are individual differences. One study found that people who score high on empathic concern move more to music than others. This aligns with findings from recent studies of larger-scale music-related body motion.
  2. Data collected in the project has been released openly in the Oslo Standstill Database. The database contains data from all Championships of Standstill, the Headphones-Speakers study, and from the Sverm project that preceded MICRO.
  3. Software developed during the project has been made openly available. This includes various analysis scripts implemented in Jupyter Notebooks. Several of the developed software modules have been wrapped up in the Musical Gestures Toolbox for Python (see the sketch after this list).
  4. The scientific results have inspired a series of artistic explorations, including several installations and performances with the Self-playing Guitars, Oslo Muscle Band, and the Micromotion Apps.
  5. The project and its results have been featured in many media appearances, including a number of newspaper stories and several times on national TV and radio.
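
I cannot reproduce the full toolbox here, but to give a taste of what such video analysis involves, here is a minimal frame-differencing sketch in Python using OpenCV (the filename is hypothetical); the Musical Gestures Toolbox wraps this kind of processing in a much more convenient interface:

import cv2

cap = cv2.VideoCapture("dance.mp4")  # hypothetical input video
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # "Motion image": pixels that changed between consecutive frames
    motion = cv2.absdiff(gray, prev)
    # Quantity of motion: summed pixel change, a simple activity measure
    print(motion.sum())
    prev = gray

cap.release()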

Open Research

MICRO has been an Open Research flagship project. This includes making the entire project “as open as possible, as closed as necessary”. The project openly shares publications, data, source code, applications, and other parts of the research process.

Summing up

I am very happy about the outcomes of the MICRO project. This is largely thanks to the fantastic team, particularly postdoctoral fellow Victor Gonzalez Sanchez and doctoral fellow Agata Zelechowska.

Results from the Sverm project inspired the MICRO project, and many lines of thought will continue in my new AMBIENT project. I am looking forward to researching unconscious and involuntary micromotion in the years to come.

Edit video rotation metadata in FFmpeg

I am recording a lot of short videos these days for my sound actions project. Sometimes the recordings end up rotated, based on the orientation sensor (probably the gyroscope) of my mobile phone. This rotation is not part of the recorded video data; it is just information written into the header of the MPEG file. That also means that it is possible to change the rotation without re-encoding the file. You can see the rotation by looking at the metadata of a file:

ffmpeg -i filename.mp4

Then you will see a lot of information about the file. A bit down in the list is information about the rotation:

Side data:
displaymatrix: rotation of -90.00 degrees

Fixing it is as simple as running this command on the file:

ffmpeg -i filename.mp4 -c copy -metadata:s:v:0 rotate=0 output.mp4

This quickly copies the audio and video streams over to a new file without re-encoding and rewrites the rotation metadata.
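
If many files need the same fix, the command is easy to script. Here is a minimal Python sketch running the same ffmpeg command as above on a whole folder (the folder name is hypothetical):

import subprocess
from pathlib import Path

# Fix the rotation metadata of every MP4 in a (hypothetical) folder
for infile in Path("recordings").glob("*.mp4"):
    outfile = infile.with_name(infile.stem + "_fixed.mp4")
    subprocess.run(
        ["ffmpeg", "-i", str(infile),
         "-c", "copy", "-metadata:s:v:0", "rotate=0", str(outfile)],
        check=True,
    )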

Recruiting for the AMBIENT project

I am happy to announce that I am recruiting for my new research project AMBIENT: Bodily Entrainment to Audiovisual Rhythms. The project will continue my line of research into the effects of sound and visuals on our bodies and minds and the creative use of such effects. Here is a short video in which I explain the motivation for the project:

Now hiring

The idea is to put together a multidisciplinary team of three early career researchers experienced with one or more of the following methods: sound analysis, video analysis, interviews, questionnaires, motion capture, physiological sensing, statistics, signal processing, machine learning, interactive (sound/music) systems. The announcement texts are available here:

Application deadline: 15 March 2022. Do not hesitate to get in touch if you have any questions about the positions.

About the project

Much focus has been devoted to understanding the “foreground” of human activities: things we say, actions we do, sounds we hear. AMBIENT will study the sonic and visual “background” of indoor environments: the sound of a ventilation system in an office, the footsteps of people in a corridor, or people’s fidgeting in a classroom.

Examples of periodic auditory and visual stimuli in the environments to be studied in AMBIENT: individuals in offices (WP2), physical-virtual coworking (WP3), telematic classroom (WP4).

The project aims to study how such elements influence people’s bodily behaviour and how people feel about the rhythms in an environment. This will be done by studying how different auditory and visual stimuli combine to create rhythms in various settings.

Rhythms can be constructed from different elements: (a) visual, (b) auditory, (c) audiovisual, (d) spatiotemporal, or (e) a combination of audiovisual and spatiotemporal. The numbers in (e) indicate the cyclic, temporal order of the events.

The hypothesis is that various types of rhythms influence people’s bodily behaviour through principles of entrainment, that is, the process by which independent rhythmical systems interact with each other.
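
As a toy illustration of the principle (not a model from the project), here is a minimal Python simulation of two coupled oscillators with different natural frequencies that nevertheless phase-lock:

import numpy as np

dt = 0.01                      # time step (s)
freqs = np.array([1.0, 1.3])   # natural frequencies (Hz)
phases = np.array([0.0, 2.0])  # initial phases (rad)
coupling = 2.0                 # coupling strength

for _ in range(2000):          # simulate 20 seconds
    # Each oscillator is pulled towards the phase of the other
    pull = np.sin(phases[::-1] - phases)
    phases = phases + dt * (2 * np.pi * freqs + coupling * pull)

# With strong enough coupling, the phase difference stops drifting
# and settles to a constant: the two rhythms have entrained.
print(f"sin(phase difference) = {np.sin(phases[1] - phases[0]):.2f}")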

Objectives

The primary objective of AMBIENT is to understand more about bodily entrainment to audiovisual rhythms in both local and telematic environments. This will be studied within everyday workspaces like offices and classrooms.

The primary objective can be broken down into three secondary objectives:

  1. Understand more about the rhythms of indoor environments, and make a theoretical model of such rhythms that can be implemented in software.
  2. Understand more about how people interact with the rhythms of indoor environments, both when working alone and when working together.
  3. Explore how such rhythms can be captured and (re)created in a different environment using state-of-the-art audiovisual technologies.

Work packages

The work in AMBIENT is divided into five work packages:

  • WP1: Theoretical Development
  • WP2: Observation study of individuals in their offices
  • WP3: Observation study of physical-virtual workspaces
  • WP4: Exploration of (re)creation of ambience in a telematic classroom
  • WP5: Software development

The work packages overlap and feed into each other in various ways.

The relationships between work packages. The small boxes within WP2–4 indicate the different studies (a/b/c) and their phases (1/2/3). See WP sections for explanations.

Open Research

The AMBIENT project is an open research lighthouse project. The aim is to keep the entire research as open as possible, including sharing methods, data, publications, etc.

Funding

The Research Council of Norway, project number 324003, 2021-2025

MusicLab receives Danish P2 Prisen

Yesterday, I was in Copenhagen to receive the Danish Broadcasting Company’s P2 Prisen for “event of the year”. The prize was awarded to MusicLab Copenhagen, a unique “research concert” held last October after two years of planning.

The main person behind MusicLab Copenhagen is Simon Høffding, a former postdoc at RITMO, now an associate professor at the University of Southern Denmark. He has collaborated with the world-leading Danish String Quartet for a decade, focusing on understanding more about musical absorption.

Simon and I met up to have a quick discussion about the prize before the ceremony.

The organizers asked if we could do some live data capturing during the prize ceremony. However, we could not repeat what we did during MusicLab Copenhagen, where a team of 20 researchers spent a day setting up before the concert. Instead, I created some real-time video analysis of the Danish Baroque orchestra using MGT for Max. That at least gave some idea of what it is possible to extract from a video recording.

Testing the video visualization during the dress rehearsal.

The prize is a fantastic recognition of a unique event. MusicLab is an innovation project between RITMO and the University Library in Oslo. The aim is to explore how it is possible to carry out Open Research in real-world settings. MusicLab Copenhagen is the largest and most complex MusicLab we have organized to date. In fact, we did one complete concert and one test run of the setup to be sure that everything would work well.

While Simon, Fredrik (from the DSQ), and I were on stage to receive the prize, it should be said that we received it on behalf of many others. Around 20 people from RITMO and many others contributed to the event. Thanks to everyone for making MusicLab Copenhagen a reality!

MusicLab Copenhagen was a huge team effort. Here, many of us gathered in front of Musikhuset in Copenhagen before setting up equipment for the concert in October 2021.

The status of FAIR in higher education

I participated in the closing event of the FAIRsFAIR project last week. For that, I was asked to share thoughts on the status of FAIR in higher education. This is a summary of the notes that I wrote for the event.

What is FAIR?

First of all, the FAIR principles state that data should be:

  • Findable: The first step in (re)using data is to find them. Metadata and data should be easy to find for both humans and computers. Machine-readable metadata are essential for the automatic discovery of datasets and services, and are therefore a key component of the FAIRification process (see the sketch after this list).
  • Accessible: Once users find the required data, they need to know how the data can be accessed, possibly including authentication and authorisation.
  • Interoperable: The data usually need to be integrated with other data. In addition, the data need to interoperate with applications or workflows for analysis, storage, and processing.
  • Reusable: The ultimate goal of FAIR is to optimise the reuse of data. To achieve this, metadata and data should be well-described so that they can be replicated and/or combined in different settings.
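
As a concrete (and purely illustrative) example of machine-readable metadata, here is a minimal record in Python, loosely modelled on what general-purpose repositories ask for; the field names and values are hypothetical, not a formal schema:

import json

# Hypothetical metadata record for a dataset; fields are illustrative
record = {
    "title": "Standstill motion capture recordings",
    "creators": [{"name": "Doe, Jane",
                  "affiliation": "University of Oslo"}],
    "description": "Motion capture data of participants standing still.",
    "keywords": ["motion capture", "music", "micromotion"],
    "license": "CC-BY-4.0",
    "publication_date": "2021-10-01",
}
print(json.dumps(record, indent=2))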

This all sounds good, but the reality is that FAIRifying data is a non-trivial task.

FAIR is not (necessarily) Open

Readers of this blog know that I am an open research advocate, and that also means that I embrace the FAIR principles. It is impossible to make data genuinely open if they are not FAIR. However, it is important to stress that FAIR data is not the same as Open Data. It is perfectly possible to “FAIRify” closed data: that would entail making metadata about the data openly available according to the FAIR principles while keeping the actual data closed.

There are many cases where it is impossible to make data openly available. In the fourMs Lab, we often have to keep the data closed. There are usually two reasons for this:

  • Privacy: we perform research on and with humans and, therefore, almost always record audio and video files from which it is possible to identify the people involved. Some participants in our studies consent to us sharing their data, but others do not. So we try to anonymize our material, for example by blurring faces (a minimal example follows this list), creating motion videos, or keeping only motion capture data, but sometimes that is not possible. Anonymized data make sense in some cases, such as when we capture hundreds of people for the Championships of Standstill. But if we study expert musicians, they are not so easy to anonymize. The sound of an expert Hardanger fiddler is enough for anyone in the community to recognize the person in question. And their facial expressions may be an important part of understanding the effects of a performance. So if we cannot share their face and their sound, the data is not useful.
  • Copyright: even more challenging than the privacy matters are the questions related to copyright. When working with real music (as opposed to the “synthetic” music used in many music psychology studies), we need to consider the rights of composers, musicians, producers, and so on. This is a tricky matter. On the one hand, we would like to work with new music by living musicians. On the other hand, the legal territory is challenging: there are many actors, national legislations, copyright unions, and so on. Unfortunately, we do not have the legal competency and administrative capacity to tackle all the practicalities of ensuring that we are allowed to share much of the music we use openly.
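
As an illustration of the anonymization mentioned above, here is a minimal face-blurring sketch in Python using OpenCV’s bundled Haar cascade (the filenames are hypothetical):

import cv2

# OpenCV ships with a pre-trained frontal-face detector
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("participant.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Blur every detected face region heavily
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    face = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)

cv2.imwrite("participant_blurred.jpg", image)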

These challenges make sharing all our data openly difficult, but we can still make them FAIR.

How far have we come?

Three things need to be in place to FAIRify data properly:

  1. Good and structured data. This is the main responsibility of the researcher. However, it is much easier said than done. Take the example of MusicLab Copenhagen, an event we ran in October. It was a huge undertaking with lots of data and media being collected and recorded by around 20 people. We are still working on organizing the data in meaningful ways. The plan is to release as much as possible as fast as possible, but it takes an astonishing amount of time to pre-process the data and structure it in a meaningful way. After all, if the data does not make sense to the person who collected it, nobody else will be able to use it either.
  2. Data repository to store the data. Once the data is structured, it needs to be stored somewhere that provides the necessary tools, and, in particular, persistent identifiers (such as DOIs). We don’t have our own data repository at UiO, so here we need to rely on other solutions. There are two main types of repositories: (a) “bucket-based” repositories that researchers can use themselves, (b) data archives run by institutions with data curators. That brings me to the third point:
  3. Data wranglers and curators. With some training and the right infrastructures, researchers may take on this role themselves. Tools such as Zenodo, Figshare, and OSF allow researchers to drop in their files and get DOIs, version control, and timestamping (see the sketch after this list). However, in my experience, even though these technical preservation parts are in place, the data may not be (a) good and structured enough in itself, and/or (b) have sufficient metadata and “padding” to be meaningful to others. That is why institutional archives employ professional data wranglers and curators to help with precisely these parts.
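
To show how low the technical barrier of the “bucket-based” repositories is, here is a sketch of creating a deposition and uploading a file with Zenodo’s documented REST API; the token and filename are placeholders:

import requests

TOKEN = "YOUR-ZENODO-TOKEN"  # placeholder; created in Zenodo's settings

# Create an empty deposition
r = requests.post("https://zenodo.org/api/deposit/depositions",
                  params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# Upload a (hypothetical) data file to the deposition's file bucket
with open("standstill_data.zip", "rb") as fp:
    requests.put(deposition["links"]["bucket"] + "/standstill_data.zip",
                 params={"access_token": TOKEN}, data=fp)

print("Created deposition:", deposition["id"])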

More people and institutions have started to realize that data handling is a skill and a profession of its own. Many universities have begun to employ data curators in their libraries to help with the FAIRification process. However, the challenge is that they are too few and too far removed from where the research is happening. In my experience, much trouble later in the data archiving “ladder” can be avoided if the data is handled well from the start.

At RITMO, we have two lab engineers who double as data managers. When we hired them back in 2018, they were among the first local data managers at UiO, and that has proven essential for moving things forward. As lab engineers, they are involved in data collection from the start and can therefore help with data curation long before we get to the archival stage. They also help train new researchers in thinking about data management and follow data from beginning to end.

Incentives and rewards

There are many challenges to solve before we have universal FAIRification in place. Fortunately, many things are moving in the right direction. Policies and recommendations are being made at international, national, and institutional levels; infrastructures are being established; and personnel are being trained.

The biggest challenge now, I think, is to get incentives and rewards in place. Sharing data openly, or at least making the data FAIR, is still seen as costly, cumbersome, and/or unnecessary. That is because of the lack of incentives and rewards for doing so. We are still at a stage where publications “count” the most in the system. Committees primarily look at publication lists and h-indices when making decisions about hiring, promotion, or funding allocations.

Fortunately, there is a lot of focus on research assessment these days. I have been involved in developing the Norwegian Career Assessment Matrix (NOR-CAM) as a model for broadening the focus. I am also pleased to see that research evaluation and assessment is at the forefront at the Paris Open Science European Conference (OSEC) starting today. When researchers get proper recognition for FAIRifying their data, we will see a radical change.