New article: “Correspondences Between Music and Involuntary Human Micromotion During Standstill”

I am happy to announce a new journal article coming out of the MICRO project:

Victor E. Gonzalez-Sanchez, Agata Zelechowska and Alexander Refsum Jensenius
Correspondences Between Music and Involuntary Human Micromotion During Standstill
Front. Psychol., 07 August 2018 | https://doi.org/10.3389/fpsyg.2018.01382

Abstract: The relationships between human body motion and music have been the focus of several studies characterizing the correspondence between voluntary motion and various sound features. Research on involuntary movement to music, however, is still scarce. Insight into crucial aspects of music cognition, as well as characterization of the vestibular and sensorimotor systems, could be greatly improved through a description of the underlying links between music and involuntary movement. This study presents an analysis aimed at quantifying involuntary body motion of a small magnitude (micromotion) during standstill, as well as assessing the correspondences between such micromotion and different sound features of the musical stimuli: pulse clarity, amplitude, and spectral centroid. A total of 71 participants were asked to stand as still as possible for 6 min while being presented with alternating silence and music stimuli: Electronic Dance Music (EDM), Classical Indian music, and Norwegian fiddle music (Telespringar). The motion of each participant’s head was captured with a marker-based, infrared optical system. Differences in instantaneous position data were computed for each participant and the resulting time series were analyzed through cross-correlation to evaluate the delay between motion and musical features. The mean quantity of motion (QoM) was found to be highest across participants during the EDM condition. This musical genre is based on a clear pulse and rhythmic pattern, and pulse clarity was also shown to be the metric with the most significant effect on induced vertical motion across conditions. Correspondences were also found between motion and both brightness and loudness, providing some evidence of anticipation and reaction to the music. Overall, the proposed analysis techniques provide quantitative data and metrics on the correspondences between micromotion and music, with the EDM stimulus producing the clearest music-induced motion patterns. The analysis and results from this study are compatible with embodied music cognition and sensorimotor synchronization theories, and provide further evidence of the movement-inducing effects of groove-related music features and human response to sound stimuli. Further work with larger data sets, and a wider range of stimuli, is necessary to produce conclusive findings on the subject.

Testing Blackmagic Web Presenter

We are rapidly moving towards the start of our new Master’s programme Music, Communication & Technology. This is a unique programme in that it is split between two universities (in Oslo and Trondheim), 500 kilometres apart. We are working on setting up a permanent high-quality, low-latency connection that will be used as the basis for our communication. But in addition to this permanent setup we need solutions for quick and easy communication. We have been (and will be) testing a lot of different software and hardware solutions, and in a series of blog posts I will describe some of the pros and cons of these.

Today I have been testing the Blackmagic Web Presenter. This is a small box with two video inputs (one HDMI and one SDI) and two audio inputs (one XLR and one stereo RCA). The box functions as a very basic video/audio mixer, but the most interesting thing is that it shows up as a normal web camera on the computer (even in Ubuntu, without drivers!). This means that it can be used in most communication platforms, including Skype, Teams, Hangouts, Appear.in, Zoom, etc., and can serve as the centerpiece of a slightly more advanced communication setup.
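A quick way to verify this on Ubuntu, assuming the v4l-utils package is installed (device names will vary from system to system):

# List the video devices the system has picked up;
# the Web Presenter should appear alongside any built-in webcam
v4l2-ctl --list-devices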

My main interest in testing it now was to see if I could connect a regular camera (Canon XF105) and a document camera (Lumens DC193) to the device. As you can see in the video below, this worked flawlessly, and I was able to do a quick recording using the built-in video recorder (Cheese) in Ubuntu.
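As a side note, a similar recording could also be made directly with FFmpeg instead of Cheese. A rough sketch (the device path and resolution are assumptions; check them with v4l2-ctl first):

# Grab ten seconds of 720p video from the Web Presenter and encode it to H.264
ffmpeg -f v4l2 -framerate 30 -video_size 1280x720 -i /dev/video0 -c:v libx264 -t 10 test-recording.mp4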

So to the verdict:

Positive:

  • No-frills setup, even on Ubuntu!
  • It scales the video correctly. My camera was running 1080i and the document camera 720p, and the scaling worked flawlessly. (You need identical input formats for the video transition effects to work, but that is not really a problem for my usage.)
  • Hardware encoding makes it easy to connect it even to fairly modest PCs.
  • Nice price tag (~$500).

Negative:

  • Most people have HDMI devices, while SDI is rare, so the single HDMI input may be limiting for some. We have a lot of SDI equipment, so it works fine for our use.
  • No phantom power for the XLR input. This is perhaps the biggest problem. You can use a dynamic microphone, but I would have preferred a condenser. I ended up connecting a wireless lavalier microphone with a line-level XLR connection on the receiver. It is also possible to use a mixer, but the whole point of this box is a small, portable, and easy setup.
  • 720p output is OK for many of the things we will use it for, but it is not particularly future-proof.
  • It has a fan. It makes a little more noise than my laptop fan when it kicks in, but it is not noticeable if the box is moved a metre away.

Not perfect, but for its intended usage I think it works very nicely. For meetings and teaching where a little more than a plain web camera is needed, it does its job well.

Trim video file using FFmpeg

This is a note to self, and hopefully to others, about how to easily and quickly trim videos without recompression.

Often I end up with long video recordings that I want to split or trim. On a side note, some people call this “cropping”, but in my world cropping is cutting out parts of the image, that is, a spatial transformation. Splitting and trimming are temporal transformations.

You can of course both split and trim in most video editing software, but these will typically also recompress the file on export. This reduces the quality of the video, and it takes a long time. A much better solution is to trim losslessly, and fortunately there is a way to do this with the wonder-tool FFmpeg. Being a command-line utility (available on most platforms), it has a ton of different options, and I never remember them. So here it goes; this is what I use (on Ubuntu) to trim out parts of a long video file:
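# Filenames here are placeholders: -ss sets the start time, -to the end time,
# and -c copy copies the streams as-is instead of re-encoding them
ffmpeg -i input.mp4 -ss 01:19:00 -to 02:18:00 -c copy output.mp4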

This will cut out the section from about 1h19min to 2h18min losslessly, and will only take a few seconds to run. Since the streams are copied rather than re-encoded, the cut points snap to the nearest keyframes, which is why the start and end times are approximate.
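For splitting rather than trimming, FFmpeg’s segment muxer can chop a long recording into fixed-length chunks, also without recompression. A minimal sketch, with placeholder filenames and an assumed 30-minute chunk length:

# Split input.mp4 into 30-minute chunks (out000.mp4, out001.mp4, ...),
# keeping all streams (-map 0) and copying them as-is (-c copy)
ffmpeg -i input.mp4 -map 0 -c copy -f segment -segment_time 00:30:00 -reset_timestamps 1 out%03d.mp4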

Nordic Sound and Music Computing Network up and running

I am super excited about our new Nordic Sound and Music Computing Network, which has just started up with funding from the Nordic Research Council.

This network brings together a group of internationally leading sound and music computing researchers from institutions in five Nordic countries: Aalborg University, Aalto University, KTH Royal Institute of Technology, University of Iceland, and University of Oslo. The network covers the field of sound and music from the “soft” to the “hard,” including the arts and humanities, and the social and natural sciences, as well as engineering, and involves a high level of technological competency.

At the University of Oslo we have one open PhD fellowship connected to the network, with application deadline 4 April 2018. We invite PhD proposals that focus on sound/music interaction with periodic/rhythmic human body motion (walking, running, training, etc.). The appointed candidate is expected to carry out observation studies of human body motion in real-life settings, using different types of mobile motion capture systems (full-body suit and individual trackers). Results from the analysis of these observation studies should form the basis for the development of prototype systems for using such periodic/rhythmic motion in musical interaction.

The appointed candidate will benefit from the combined expertise within the NordicSMC network, and is expected to carry out one or more short-term scientific missions to the other partners. At UiO, the candidate will be affiliated with RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This interdisciplinary centre focuses on rhythm as a structuring mechanism for the temporal dimensions of human life. RITMO researchers span the fields of musicology, psychology and informatics, and have access to state-of-the-art facilities in sound/video recording, motion capture, eye tracking, physiological measurements, various types of brain imaging (EEG, fMRI), and rapid prototyping and robotics laboratories.

New Publication: Analyzing Free-Hand Sound-Tracings of Melodic Phrases

We have done several sound-tracing studies at the University of Oslo before, and here is a new one focusing on free-hand sound-tracings of melodies. I am happy to say that this is a gold open access publication, and that all the data are also available. So it is both free and “free”!

Kelkar, Tejaswinee; Jensenius, Alexander Refsum
Analyzing Free-Hand Sound-Tracings of Melodic Phrases
Applied Sciences 2018, 8, 135. (Special Issue Sound and Music Computing)

In this paper, we report on a free-hand motion capture study in which 32 participants ‘traced’ 16 melodic vocal phrases with their hands in the air in two experimental conditions. Melodic contours are often thought of as correlated with vertical movement (up and down) in time, and this was also our initial expectation. We did find an arch shape for most of the tracings, although this did not correspond directly to the melodic contours. Furthermore, representation of pitch in the vertical dimension was but one of a diverse range of movement strategies used to trace the melodies. Six different mapping strategies were observed, and these strategies have been quantified and statistically tested. The conclusion is that metaphorical representation is much more common than a ‘graph-like’ rendering for such a melodic sound-tracing task. Other findings include a clear gender difference for some of the tracing strategies and an unexpected representation of melodies in terms of a small object for some of the Hindustani music examples. The data also show a tendency for participants to move within a shared ‘social box’.