NIME Publication Ecosystem Workshop

During this year’s NIME conference (which was run entirely online due to the coronavirus crisis), I led the NIME Publication Ecosystem Workshop. In this post, I will explain the background of the workshop, describe how it was run in a combined asynchronous and synchronous mode, and reflect on the results.

If you don’t want to read everything below, here is a short introduction video I made to explain the background (shot at my “summer office” up on the Hardangervidda mountain plateau in Norway):

Background for the workshop

The idea of the NIME Publication Ecosystem Workshop was to continue the community discussions started in the successful NIMEHub workshop in Brisbane in 2016 and the Open NIME workshop in Porto Alegre in 2019. Added to this come ongoing discussions about establishing a NIME journal, as well as finding better solutions for archiving the various types of NIME-related activities.

The term “publication” should in this context be understood in a broad sense, meaning different types of output of the community, including but not limited to textual productions. This is particularly important at NIME since this community consists of people designing, building, and performing new musical interfaces.

When I gathered a workshop team and proposed the topic back in January, it was mainly motivated by the increasing focus on Open Research. Please note that I use “open research” here, not “open science”, a significant difference that I have written about in a previous blog post. The focus on more openness in research has recently received a lot of political attention through the Plan S initiative, the Declaration on Research Assessment (DORA), the EU’s Horizon Europe, funders’ requirements of FAIR data principles, and so on.

Of course, the recent coronavirus crisis has made it even more necessary to develop open research strategies and to find alternative ways of communicating our research. This also includes rethinking the format of conferences. The need to travel less will not go away once the coronavirus crisis calms down, however; long-term change is necessary to address climate change. While such a move may feel limiting to those of us who have been able to travel to international conferences every year, it also opens possibilities for many others to participate. The topic of this year’s NIME conference was “accessibility,” and it turned out that the virtual conference format was, indeed, one that promoted accessibility in many ways. I will write more on that in another blog post.

When it comes to openness in research, this is something the NIME community has embraced since the beginning. The paper proceedings, for example, have been freely available online from the start. The database behind the archive has also been made available as a collection of BibTeX files. Some people wonder why we do this, but opening up the archive’s metadata as well makes it much easier to integrate with other data sources, and also to do research on the community’s output.
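As a small example of what open metadata makes possible: a rough count of papers per year becomes a shell one-liner. The file names and the exact field formatting here are assumptions on my part, not a description of the actual archive:

grep -hi "year *=" nime*.bib | grep -o "[0-9]\{4\}" | sort | uniq -c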

Despite these efforts, there are also several things about the NIME conference that we have not been able to make openly available, such as hardware designs, code, empirical data, music performances, installations, and so on. This is not because we don’t want to, but because it has proven hard to find long-term solutions that are maintainable by a volunteer-driven community. People in the community have different interests and skills, so it is essential to find solutions that are both innovative and user-friendly. The longevity of chosen solutions is also important, since NIME is central to an increasing number of people’s careers. Hence, we need to balance the exploration of new solutions with the need for preservation and stability.

In addition to finding solutions for the NIME conference itself, the establishment of a NIME journal has been discussed for several years. This discussion has surfaced again during the testing of a new paper template for the conference. But rather than thinking about the conference proceedings and a journal as two separate projects, one could imagine a broader NIME publication ecosystem that could cover everything from draft manuscripts and complete papers to peer-reviewed proceedings papers and journal papers. This could be thought of as a more “Science 2.0”-like system in which the entire research process is open from the beginning.

The aims of the workshop were therefore to:

  1. discuss how a broader publication ecosystem built around (but not limited to) the annual conference could work
  2. brainstorm and sketch concrete (technical) solutions to support such an idea
  3. agree on some concrete steps on how to proceed with the development of such ideas the coming year

Workshop format

We had initially planned to have a physical workshop in Birmingham but ended up with an online event. To make it as accessible as possible, we decided to run it using a combination of asynchronous and synchronous delivery. This included the preparation of various types of introductory material by the organizing committee and some participants. All of this material was gathered into a pre-workshop padlet, which was sent to the participants some days before the online workshop.

The synchronous part of the workshop was split over two one-hour time slots. We did it like this to allow people from all time zones to participate in at least one of them. Since most of the organizers were located in Europe, and the conference itself was scheduled around UK time, we ended up with one slot in the morning (9-10 UK time) and one in the afternoon (17-18 UK time). The program for the two slots was the same, so that everyone would feel that they participated equally in the event.

Around 30 people showed up for each time slot, with only a few participating in both. Since preparatory material was distributed beforehand, most of the online workshop time consisted of discussions in breakout rooms with 5-6 people in each group. The groups wrote their feedback into separate padlets and also reported back in a short plenary session at the end of the hour-long session.

A post-workshop padlet was created to gather links after the workshop. The topic also generated lively discussion in separate threads in the Slack channel used during the conference. After the conference, and as a result of the workshop, we have established a forum on nime.org, with a separate ecosystem thread.

All the pre- and post-workshop material from the workshop has been archived in Zenodo.

Conclusions

It is, of course, impossible to draw conclusions on such a vast topic after one workshop. But what is clear is that there is interest in the community in establishing a more extensive ecosystem around the NIME conference. The establishment of a forum to continue the discussion is one concrete step ahead. So is the knowledge gained from running a very successful online conference this year, which included pre-recorded talks, written Q&A in Slack channels, plenary sessions, and breakout rooms. A lot of this material can also be archived and become part of an extended ecosystem. All in all, things are moving in the right direction, and I am very excited to see where we end up!

Method chapter freely available

I am a big supporter of Open Access publishing, but for various reasons some of my publications are not openly available by default. This is the case for the chapter Methods for Studying Music-Related Body Motion that I have contributed to the Springer Handbook of Systematic Musicology.

I am very happy to announce that the embargo on the book expired today, which means that a pre-print version of my chapter is finally freely available in UiO’s digital repository. The chapter is a summary of my experiences with music-related motion analysis, and I often recommend it to students, so it is great that it is finally available for download everywhere.

Abstract

This chapter presents an overview of some methodological approaches and technologies that can be used in the study of music-related body motion. The aim is not to cover all possible approaches, but rather to highlight some of the ones that are more relevant from a musicological point of view. This includes methods for video-based and sensor-based motion analyses, both qualitative and quantitative. It also includes discussions of the strengths and weaknesses of the different methods, and reflections on how the methods can be used in connection with other types of data, such as physiological or neurological data, symbolic notation, sound recordings and contextual data.

Pixel array images of long videos in FFmpeg

Continuing my explorations of FFmpeg for video visualization, today I came across this very nice blog post on creating “pixel array” images of videos. The idea is to reduce every single frame to one pixel and to plot these pixels next to each other along a line. Of course, I wanted to try this out myself.

I find that creating motiongrams or videograms is a good way to visualize the content of videos. They are abstract representations, but still reveal some of what is going on. However, for longer videos, motiongrams may be a bit tricky to look at, and they also take a lot of time to generate (hours, or even days). For that reason I was excited to see how pixel array images would work on some of my material.
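The basic recipe is simple enough to sketch as an FFmpeg one-liner: scale every Nth frame down to a single pixel and tile the results into one row. Something along these lines should work, although the filename, the sampling interval of 54 frames, and the final upscaling step (which just stretches the thin strip vertically for easier viewing) are my own assumptions rather than what the original post describes:

ffmpeg -i dance.mp4 -vf "select=not(mod(n\,54)),scale=1:1,tile=640x1,scale=640:360:flags=neighbor" -frames:v 1 pixelarray.png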

First I tried it on my “standard” dance video:

which gives this pixel array image:

Pixel array image (640 pixels wide) of the dance video above.

Yes, that is mainly a blue line, since the average colour of the video is blue throughout.

Then I tried with one of the videos from the AIST Dance Video Database:

This results in the following pixel array image:

Pixel array image (640 pixels wide) of the dance video above.

And, yes, that is mainly a gray line… I realize that this method does not work very well with single-shot videos.

To try something very different, I also decided to make a pixel array image of Bergensbanen, a 7-hour TV production of the train journey between Oslo and Bergen. I made videograms of this recording some years ago, which turned out quite nice. So I was excited to see how a pixel array image would work. The end result looks like this (1920 pixels wide):

Pixel array image (1920 pixels wide) of the 7-hour TV production Bergensbanen

As you see, not much is changing, but that also represents the slowness of the train ride. While I originally thought this would be a smart representation, I still think that my videograms were more informative, such as this one:

Videogram of Bergensbanen

The big difference between the two visualizations is that in the videogram each frame is represented by a full column of vertical information, while the pixel array image only displays a single pixel per frame. That said, it took only a few minutes to generate the pixel array image, whereas I recall spending several days generating the videogram.

To sum up, I think that pixel array images are probably most useful for movies and other video material with lots of changes throughout, which is better suited to such a reduction technique. For my videos, which are always shot with a single, stationary camera, motiongrams and videograms may still be the preferred solution.

Creating different types of keyframe displays with FFmpeg

In some recent posts I have explored the creation of motiongrams and average images, multi-exposure displays, and image masks. In this blog post I will explore different ways of generating keyframe displays using the very handy command line tool FFmpeg.

As in the previous posts, I will use a contemporary dance video from the AIST Dance Video Database as an example:

The first attempt is to create a 3×3 grid image by simply sampling frames from the original video. I spent some time exploring different ways of doing this. It is possible to do it with a one-liner:

ffmpeg -ss 00:00:05 -i dance.mp4 -frames 1 -vf "select=not(mod(n\,200)),scale=495:256,tile=3x3" tile.jpg

The problem with this approach, and many similar ones that I found by googling around, is that it samples frames at a fixed interval. The command above picks every 200th frame, which gives this image:

The problem is that the image only contains information about the first 1600 frames, or more specifically frames 0, 200, 400, 600, 800, 1000, 1200, 1400, 1600. I want to include frames that represent the whole video.

I see that many people create such displays by sampling based on scene changes in the video. There are two problems with this. First, it requires that there are scene changes in the video. That is usually not the case in the videos I study, which are primarily recorded with a stationary camera in which only the “foreground” changes. The second problem with sampling only “salient” frames is that we lose information about the temporal unfolding of the video. From an analysis point of view, it is actually quite useful to know more or less when things happened in the video. That is not so easy if the sampling is uneven.

I was therefore happy to find a nice script made by Martin Sikora, which looks up the duration of the file and uses this to calculate which frames to export. Running this script on the original video gives this image:
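As an aside, the even-sampling idea itself can be sketched in a few lines of shell. This is not Sikora’s actual script, just a minimal sketch assuming the file is called dance.mp4 and that ffprobe and bc are available:

# find the duration in seconds, then grab 9 evenly spaced frames and tile them 3x3
dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 dance.mp4)
for i in $(seq 0 8); do
  t=$(echo "$dur * ($i + 0.5) / 9" | bc -l)
  ffmpeg -loglevel error -ss "$t" -i dance.mp4 -frames:v 1 -vf scale=495:256 frame_$i.jpg
done
ffmpeg -loglevel error -i frame_%d.jpg -vf tile=3x3 -frames:v 1 grid.jpg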

The nine frames in the display above reveal that there is little dance in the first third of the video (you can see the arm of the dancer entering in the third image). It also shows how the dancer moved around in the space. It is possible to get some idea of her spatial distribution, but there is little information about her actual motion throughout the sequence. I was therefore curious to try making such a grid-based display from a history video, which shows more of the actual motion.

It is possible to make (motion) history videos in both the Matlab and Python versions of the Musical Gestures Toolbox, but today I was curious as to whether it could be done simply with FFmpeg. And it turns out to be quite simple using a filter called tmix:

ffmpeg -i dance.mp4 -filter:v tmix=frames=30:weights="10 1 1" dance_tmix30.mp4

I played around for a while with the settings before ending up with these. Here I average over 30 frames (which is half a second for this 60 fps video). I also use the weights feature to give preference to the current frame. This makes it easier to follow the dancer, as the trajectories of past motion become more blurred.
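For longer motion trails, the averaging window can simply be made bigger. I have not tested this particular variant, but averaging over a full second of this 60 fps video should look something like:

ffmpeg -i dance.mp4 -filter:v tmix=frames=60:weights="10 1 1" dance_tmix60.mp4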

Running the above grid-script on this video results in a keyframe display that shows more of the motion happening in the frames in question. This makes it possible to see, for example, in which frames she moved more than in others.

I am quite happy with the result, but it is not particularly fast. Creating the history video is time-consuming, since it has to process every frame of the entire video. I therefore tested speeding up the video 8 times, using this command (the -an flag removes the audio):

ffmpeg -i dance.mp4 -filter:v "setpts=0.125*PTS" -an output8x.mp4

Creating the history video from this sped-up file is then quite a bit faster, and results in this high-speed history video:

Running this through the grid-script gives a keyframe display that is similar to, yet different from, the one above:

It is quite a lot quicker to generate, and also gives more information about the motion sequence.

The conclusion is that it is, indeed, possible to make a lot of interesting video visualizations using “only” FFmpeg. Several of these scripts are also much faster than the scripts I have previously used in Matlab and Python. So I will definitely continue to explore FFmpeg, and look at how it can be integrated with the other toolboxes.

Creating image masks from video file

As part of my exploration in creating multi-exposure keyframe image displays with FFmpeg and ImageMagick, I tried out a number of things that did not help solve the initial problem but could still be interesting for other purposes. Most interesting was the automagic creation of image masks from a video file.

I will use a contemporary dance video from the AIST Dance Video Database as an example:

The first step is to extract keyframes from the video file using this FFmpeg one-liner:

ffmpeg -skip_frame nokey -i *.mp4 -vsync 0 -r 30 -f image2 t%02d.tiff

This uses the keyframes from the MP4 file, which should be faster than doing a new analysis of the file. It would, of course, also be possible to sample the video at regular intervals, but the keyframes seem to work fine for my usage. I also chose to save the exported keyframes as TIFF files to avoid running multiple rounds of compression on them. The end result is a bunch of keyframe images that can be used for further processing.
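If you do want regular sampling instead, something along these lines should work (here one frame per second; the filename and output pattern are just assumptions):

ffmpeg -i dance.mp4 -vf fps=1 -f image2 reg%03d.tiff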

Here we are lucky, because the first frame contains only the background of the scene. So we can use that frame to create “foreground” images by subtracting the background like this:

for i in *.tiff; do
  # strip the file extension to get a base name
  name=`echo $i | cut -d'.' -f1`
  # difference against the background frame (t01.tiff), then threshold and blur to get a mask
  convert t01.tiff $i -compose difference -composite -threshold 5% -blur 0x3 -threshold 20% -blur 0x3 "$name-mask.tiff"
  # multiply the mask with the original frame to keep only the foreground
  convert $i "$name-mask.tiff" -compose multiply -flatten "$name-clean.jpg"
done

The end result is a series of foreground masks:

The final result is then a series of images in which only the foreground is shown. The “glow” around the foreground is caused by the blur effect used when creating the mask:

Adaptive background

There may also be cases in which there is no readily available background image like the one we used above, such as in this hip-hop video from the AIST Dance Video Database:

Then it is possible to create a background image by combining all the keyframes (here by keeping the lightest pixel in each position), hoping that this will “remove” the foreground. Here is a one-liner that does this (assuming that you have exported the individual keyframes as described at the beginning of this post):

convert *.tiff -background black -compose lighten -flatten background.tiff

This works quite well, although we can see that the camera right behind the dancer is a little fainter than the two others:

Background image created by combining all the keyframes.
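If this kind of ghosting is a problem, a per-pixel median might do better than the lighten trick. I have not tried it on this material, but ImageMagick supports it directly:

convert *.tiff -evaluate-sequence median background_median.tiff

For the rest of this post, though, I will use the lighten-based background created above.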

This background image can then be subtracted from the other images, as we did above:

for i in *.tiff; do
  name=`echo $i | cut -d'.' -f1`
  convert background.tiff $i -compose difference -composite -threshold 5% -blur 0x3 -threshold 20% -blur 0x3 "$name-mask.tiff"
  convert $i "$name-mask.tiff" -compose multiply -flatten "$name-clean.jpg"
done

It works very well, except that the camera behind the performer (which wasn’t masked out properly) also shows up in the masked foreground images:

This method works quite well and has the benefit of being very fast. It is possible to get a better result by creating an average image from the entire video (and not only the keyframes), but that would also take much longer.
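One (slow) way to do that could be to export every frame at a reduced resolution and then average them with ImageMagick. This is only a sketch with assumed filenames, and it will produce a large number of temporary images:

# export every frame, scaled down to keep the amount of data manageable
mkdir frames
ffmpeg -i dance.mp4 -vf scale=480:270 frames/f%05d.png
# average all the frames into a single background image (may need a lot of memory for long videos)
convert frames/f*.png -evaluate-sequence mean background_full.tiff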