Musical Gestures Toolbox for Matlab

Yesterday I presented the Musical Gestures Toolbox for Matlab in the late-breaking demo session at the ISMIR conference in Paris.

The Musical Gestures Toolbox for Matlab (MGT) aims to help music researchers import, preprocess, analyze, and visualize video, audio, and motion capture data in a coherent manner within Matlab.

Most of the concepts in the toolbox are based on the Musical Gestures Toolbox that I first developed for Max more than a decade ago. Much of the Matlab coding for the new version was done by Bo Zhou as part of his master's thesis.

The new MGT is available on GitHub, and there is a more or less complete introduction to the main features in the Software Carpentry workshop Quantitative Video Analysis for Qualitative Research.
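To give an idea of the intended workflow, here is a minimal sketch of how an import-preprocess-analyze-visualize pipeline could look. Note that the function names below are illustrative assumptions on my part, not necessarily the actual MGT API; see the GitHub repository for the real function reference.

```matlab
% Hypothetical MGT-style workflow. All function names are
% illustrative assumptions, not necessarily the actual MGT API --
% see the GitHub repository for the real function reference.

v = mgvideoreader('dance.avi');      % import a video file
v = mgcrop(v, [100 100 640 480]);    % preprocess: crop to a region of interest
[motion, qom] = mgmotion(v);         % analyze: motion images and quantity of motion

plot(qom);                           % visualize quantity of motion over time
xlabel('Frame');
ylabel('Quantity of motion (QoM)');
```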

Deciding on author names in publications

Publications are important for researchers. Therefore, deciding who should be named as an author of an academic publication is a topic that often leads to discussion. The ordering of the author names is also a topic for heated debate, particularly when you work in interdisciplinary teams with different traditions, as can be seen in the strip from PhD Comics below.

Here is a task I have developed as a point of departure for discussing this issue in research groups. We have used it successfully at RITMO, and hopefully others can make use of it too.

Publication case

Consider the following scenario:

  • Professor Pia secures funding for a large project with a brilliant overarching research idea.
  • Professor Per leads a sub-project focusing on an empirical investigation of the brilliant research idea. He hires PhD student Siri and Postdoc Palle to work on the experiment.
  • PhD student Siri and Postdoc Palle design and carry out the experiment.
  • Administrator Anton helps recruit all the participants.
  • PhD student Sofie provides all the sound material used in the study, as well as a preliminary analysis of it.
  • Research assistant Anders helps with all the recordings for the experiment, including post-processing all the data.
  • Lab engineer Erik programs the system used for data collection.
  • Statistician Svein helps with the analysis of the data.
  • A large part of the analysis is done using a toolbox made by Postdoc Penelope.
  • Professor Pernille suggests an alternative analysis method in a seminar where preliminary results of the data are presented. This alternative method turns out to be very promising and is therefore included in the paper.
  • PhD student Siri writes the main part of the paper.
  • Postdoc Palle makes all the figures and writes some of the text.
  • Professor Per reads the paper and comments on a few things.

Question:

Who gets on the publication list, and in which order?

Moving to a new building

I have not been very good at blogging recently. This is not because nothing is happening, but rather because so much is happening that I don’t have time to write about it.

One of these things is the startup of RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, which I am co-directing with Anne Danielsen. We got the funding last year, and have spent this year planning, preparing, and now executing the startup. This includes moving to a different building – Harald Schjelderups Hus – on the northwest corner of the Blindern campus.

The fourMs lab is also moving, and right now everything is in boxes while we are waiting for the construction work to be done. Meanwhile, the new MCT master’s programme is moving into the old facilities of the fourMs lab, a very nice re-use of a great lab space.

What is almost in place, however, is my new office. After several days of packing, all my stuff was moved today, and I am looking forward to getting everything set up.

New article: “Correspondences Between Music and Involuntary Human Micromotion During Standstill”

I am happy to announce a new journal article coming out of the MICRO project:

Victor E. Gonzalez-Sanchez, Agata Zelechowska and Alexander Refsum Jensenius
Correspondences Between Music and Involuntary Human Micromotion During Standstill
Front. Psychol., 7 August 2018. https://doi.org/10.3389/fpsyg.2018.01382

Abstract: The relationships between human body motion and music have been the focus of several studies characterizing the correspondence between voluntary motion and various sound features. The study of involuntary movement to music, however, is still scarce. Insight into crucial aspects of music cognition, as well as characterization of the vestibular and sensorimotor systems could be largely improved through a description of the underlying links between music and involuntary movement. This study presents an analysis aimed at quantifying involuntary body motion of a small magnitude (micromotion) during standstill, as well as assessing the correspondences between such micromotion and different sound features of the musical stimuli: pulse clarity, amplitude, and spectral centroid. A total of 71 participants were asked to stand as still as possible for 6 min while being presented with alternating silence and music stimuli: Electronic Dance Music (EDM), Classical Indian music, and Norwegian fiddle music (Telespringar). The motion of each participant’s head was captured with a marker-based, infrared optical system. Differences in instantaneous position data were computed for each participant and the resulting time series were analyzed through cross-correlation to evaluate the delay between motion and musical features. The mean quantity of motion (QoM) was found to be highest across participants during the EDM condition. This musical genre is based on a clear pulse and rhythmic pattern, and it was also shown that pulse clarity was the metric that had the most significant effect in induced vertical motion across conditions. Correspondences were also found between motion and both brightness and loudness, providing some evidence of anticipation and reaction to the music. Overall, the proposed analysis techniques provide quantitative data and metrics on the correspondences between micromotion and music, with the EDM stimulus producing the clearest music-induced motion patterns. The analysis and results from this study are compatible with embodied music cognition and sensorimotor synchronization theories, and provide further evidence of the movement inducing effects of groove-related music features and human response to sound stimuli. Further work with larger data sets, and a wider range of stimuli, is necessary to produce conclusive findings on the subject.
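For readers curious about trying this kind of analysis themselves, the core idea (differencing position data, computing quantity of motion, and cross-correlating motion with a sound feature) can be sketched in a few lines of Matlab. This is only an illustration of the general approach, not the authors' actual analysis code; pos, feature, and fs are assumed variables:

```matlab
% Sketch of the general approach (not the authors' actual code).
% Assumes pos is an N-by-3 matrix of head positions, feature is an
% N-by-1 sound-feature time series (e.g. pulse clarity) resampled to
% the motion data, and fs is the common sampling rate in Hz.

d = diff(pos);                        % differences of instantaneous position
qom = mean(sqrt(sum(d.^2, 2)));       % mean quantity of motion (QoM)

dz = d(:,3) - mean(d(:,3));           % demeaned vertical motion component
f  = feature(2:end) - mean(feature(2:end));  % align length with diff output

[c, lags] = xcorr(dz, f, 'coeff');    % normalized cross-correlation
                                      % (requires the Signal Processing Toolbox)
[~, i] = max(abs(c));
delay = lags(i) / fs;                 % estimated motion-music delay in seconds
```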

Testing Blackmagic Web Presenter

We are rapidly moving towards the start of our new Master's programme Music, Communication & Technology. This is a unique programme in that it is split between two universities (in Oslo and Trondheim), 500 kilometres apart. We are working on setting up a permanent high-quality, low-latency connection that will be used as the basis for our communication. But in addition to this permanent setup, we need solutions for quick and easy communication. We have been (and will be) testing a lot of different software and hardware solutions, and in a series of blog posts I will describe some of their pros and cons.

Today I have been testing the Blackmagic Web Presenter. This is a small box with two video inputs (one HDMI and one SDI) and two audio inputs (one XLR and one stereo RCA). The box functions as a very basic video/audio mixer, but the most interesting thing is that it shows up as a normal web camera on the computer (even in Ubuntu, without drivers!). This means that it can be used in most communication platforms, including Skype, Teams, Hangouts, Appear.in, and Zoom, and can be the centerpiece of a slightly more advanced communication setup.

My main interest in testing it now was to see if I could connect a regular camera (Canon XF105) and a document camera (Lumens DC193) to the device. As you can see in the video below, this worked flawlessly, and I was able to do a quick recording using the built-in video recorder (Cheese) in Ubuntu.
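Since the box enumerates as a standard UVC webcam, it should in principle also be reachable from analysis software, not only from video chat tools. As a small, untested sketch, this is how one might grab a frame from it in Matlab, assuming the MATLAB Support Package for USB Webcams is installed and that the device shows up under a name like 'Blackmagic Web Presenter' (the exact name may differ):

```matlab
% Sketch: grabbing a frame from the Web Presenter in Matlab.
% Requires the MATLAB Support Package for USB Webcams; the device
% name below is an assumption -- check webcamlist for the real one.

webcamlist                                 % list attached cameras
cam = webcam('Blackmagic Web Presenter');  % open the device
img = snapshot(cam);                       % grab a single 720p frame
imshow(img);
clear cam                                  % release the device
```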

So to the verdict:

Positive:

  • No-frills setup, even on Ubuntu!
  • Very positive that it scales the video correctly. My camera was running 1080i and the document camera 720p, and the scaling worked flawlessly. (You need identical input formats for the video transition effects to work, though, but that is not really a problem for my usage.)
  • Hardware encoding makes it easy to connect even to fairly moderate PCs.
  • Nice price tag (~$500).

Negative:

  • Most people have HDMI devices, while SDI is rare, so the SDI input will be of limited use for many. We have a lot of SDI equipment, though, so it works fine for our use.
  • No phantom power for the XLR input. This is perhaps the biggest problem. Without phantom power you can use a dynamic microphone, but I would have preferred a condenser. I ended up connecting a wireless lavalier microphone with a line-level XLR connection in the receiver. It is also possible to use a mixer, but the whole point of this box is to have a small, portable, and easy setup.
  • 720p output is OK for many of the things we will use it for, but it is not particularly future-proof.
  • It has a fan, which makes a little more noise than my laptop fan does when it kicks in, but it is not noticeable if the box is placed a metre away.

Not perfect, but for its intended usage I think it works very well. For meetings and teaching where you need a little more than a plain web camera, it does its job nicely.