Nordic Sound and Music Computing Network up and running

I am super excited about our new Nordic Sound and Music Computing Network, which has just started up with funding from the Nordic Research Council.

This network brings together a group of internationally leading sound and music computing researchers from institutions in five Nordic countries: Aalborg University, Aalto University, KTH Royal Institute of Technology, University of Iceland, and University of Oslo. The network covers the field of sound and music from the “soft” to the “hard”, spanning the arts and humanities, the social and natural sciences, and engineering, with a high level of technological competence throughout.

At the University of Oslo we have one open PhD fellowship connected to the network, with application deadline 4 April 2018. We invite PhD proposals that focus on sound/music interaction with periodic/rhythmic human body motion (walking, running, training, etc.). The appointed candidate is expected to carry out observation studies of human body motion in real-life settings, using different types of mobile motion capture systems (full-body suit and individual trackers). Results from the analysis of these observation studies should form the basis for the development of prototype systems for using such periodic/rhythmic motion in musical interaction.

The appointed candidate will benefit from the combined expertise within the NordicSMC network, and is expected to carry out one or more short-term scientific missions to the other partners. At UiO, the candidate will be affiliated with RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This interdisciplinary centre focuses on rhythm as a structuring mechanism for the temporal dimensions of human life. RITMO researchers span the fields of musicology, psychology and informatics, and have access to state-of-the-art facilities in sound/video recording, motion capture, eye tracking, physiological measurements, various types of brain imaging (EEG, fMRI), and rapid prototyping and robotics laboratories.

New Publication: Analyzing Free-Hand Sound-Tracings of Melodic Phrases

We have done several sound-tracing studies at the University of Oslo before, and here is a new one focusing on free-hand sound-tracings of melodies. I am happy to say that this is a gold open access publication, and that all the data are also available. So it is both free and “free”!

Kelkar, Tejaswinee; Jensenius, Alexander Refsum
Analyzing Free-Hand Sound-Tracings of Melodic Phrases
Applied Sciences 2018, 8, 135. (Special Issue Sound and Music Computing)

In this paper, we report on a free-hand motion capture study in which 32 participants ‘traced’ 16 melodic vocal phrases with their hands in the air in two experimental conditions. Melodic contours are often thought of as correlated with vertical movement (up and down) in time, and this was also our initial expectation. We did find an arch shape for most of the tracings, although this did not correspond directly to the melodic contours. Furthermore, representation of pitch in the vertical dimension was but one of a diverse range of movement strategies used to trace the melodies. Six different mapping strategies were observed, and these strategies have been quantified and statistically tested. The conclusion is that metaphorical representation is much more common than a ‘graph-like’ rendering for such a melodic sound-tracing task. Other findings include a clear gender difference for some of the tracing strategies and an unexpected representation of melodies in terms of a small object for some of the Hindustani music examples. The data also show a tendency of participants moving within a shared ‘social box’.

Come work with me! Lots of new positions at University of Oslo

I recently mentioned that I have been busy setting up the new MCT master’s programme. But I have been even busier preparing the startup of our new Centre of Excellence, RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This is a large undertaking, and a collaboration between researchers from musicology, psychology and informatics. A visual “abstract” of the centre can be seen in the figure to the right.

Now we are recruiting lots of new people for the centre, so please apply or forward this to anyone you think may be interested.

Come study with me! New master’s programme: Music, Communication and Technology

It has been fairly quiet here on the blog recently. One reason for this is that I am spending quite some time on setting up the new Music, Communication and Technology master’s programme. This is an exciting collaborative project with our colleagues at NTNU. The whole programme is centred on network-based communication, and the students will use, learn about, develop and evaluate technologies for musical communication between the two campuses in Oslo and Trondheim.

Interested? Apply to become a student!

Working with an Arduino Mega 2560 in Max

I am involved in a student project which uses some Arduino Mega 2560 sensor interfaces in an interactive device. It has been a while since I worked with Arduinos myself, as I am mainly working with Belas these days. Also, I have never worked with the Mega before, so I had to look around a little to figure out how to set it up with Cycling ’74’s Max.

I have previously used Maxuino for interfacing Arduinos with Max. This is a general-purpose tool with a step-by-step approach to connecting to the Arduino and retrieving data. This is great when it works, but due to its many options and a somewhat convoluted patching style, I found the patch quite difficult to debug when things did not work out of the box.

I then came across the opposite of Maxuino: a minimal patch showing how to get the data right off the serial port. As can be seen from the screenshot below, it is, in fact, very simple, although not entirely intuitive if you are not into this type of thing.

One thing is connecting; another is parsing the incoming data in a meaningful way. So I decided to fork a patch made by joesanford, which solves some of these problems in an easier-to-understand patching style. For this patch to work, it requires a particular Arduino sketch (both the Max patch and the Arduino sketch are available in my forked version on GitHub). I also added a small sound engine, so that it is possible to control an additive synthesis with the sensors. The steps to make this work are explained below.
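To give an idea of what the Arduino side does, a minimal sketch could look something like the one below. This is only an illustrative sketch, not the one in the repository: the start byte, the downscaling to single bytes, and the baud rate are assumptions here, chosen to match the 0-255 values that the Max patch expects.

```cpp
// Minimal illustrative sketch: read 15 analog inputs on the Mega 2560,
// scale the 10-bit readings down to single bytes, and send them over
// the serial port. The framing (start byte, byte count, baud rate) is
// assumed -- see the forked repository for the actual sketch.

const int NUM_SENSORS = 15;   // number of analog inputs used
const byte START_BYTE = 255;  // marks the beginning of each frame (assumed)

void setup() {
  Serial.begin(57600);        // baud rate must match the serial object in Max
}

void loop() {
  Serial.write(START_BYTE);
  for (int i = 0; i < NUM_SENSORS; i++) {
    int raw = analogRead(i);  // 0-1023 from the Mega's 10-bit ADC
    byte scaled = raw >> 2;   // scale down to 0-255 so it fits in one byte
    if (scaled == START_BYTE) {
      scaled = 254;           // keep the start byte unique (assumed)
    }
    Serial.write(scaled);
  }
  delay(10);                  // roughly 100 frames per second
}
```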

The mapping from sensor data starts by normalizing the values from the 15 analog sensors to a 0.-1. range (by dividing by 255). Since I want to control the amplitude of each partial in the additive synthesis, it makes sense to reduce the amplitudes slightly by multiplying each element by a decreasing factor, as shown here:
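The actual scaling happens in the Max patch, but the arithmetic corresponds to something like the C++ fragment below. The 1/(n+1) attenuation curve is my assumption; the patch simply multiplies each element by a decreasing figure.

```cpp
#include <cstdio>

// Illustration of the arithmetic in the Max patch: each incoming byte
// is normalised to the 0.-1. range, and higher partials are attenuated
// by a decreasing factor (the 1/(n+1) curve is assumed).
const int NUM_SENSORS = 15;

int main() {
  unsigned char raw[NUM_SENSORS] = {200, 180, 160, 140, 120, 100, 90,
                                    80,  70,  60,  50,  40,  30,  20, 10};
  float amps[NUM_SENSORS];

  for (int i = 0; i < NUM_SENSORS; i++) {
    float normalised = raw[i] / 255.0f;  // divide by 255 -> 0.-1. range
    float attenuation = 1.0f / (i + 1);  // decreasing factor (assumed)
    amps[i] = normalised * attenuation;
    printf("partial %2d: amplitude %.3f\n", i + 1, amps[i]);
  }
  return 0;
}
```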

Then the amplitudes are interleaved with the frequency values and sent to an ioscbank~ object to do the additive synthesis.
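In other words, the message sent to the oscillator bank is a flat list of frequency/amplitude pairs. The small illustration below shows that interleaving; the 110 Hz fundamental and the harmonic partial frequencies are assumptions, since the actual frequencies are set in the Max patch.

```cpp
#include <cstdio>

// Builds the interleaved frequency/amplitude list described above.
// The 110 Hz fundamental and the harmonic series are assumptions for
// this illustration -- the actual frequencies are set in the Max patch.
const int NUM_PARTIALS = 15;

int main() {
  float amps[NUM_PARTIALS];
  for (int i = 0; i < NUM_PARTIALS; i++) {
    amps[i] = 1.0f / (i + 1);                        // placeholder amplitudes
  }

  float interleaved[2 * NUM_PARTIALS];
  const float fundamental = 110.0f;                  // assumed base frequency
  for (int i = 0; i < NUM_PARTIALS; i++) {
    interleaved[2 * i]     = fundamental * (i + 1);  // partial frequency
    interleaved[2 * i + 1] = amps[i];                // matching amplitude
  }

  for (int i = 0; i < NUM_PARTIALS; i++) {
    printf("%7.1f Hz  amp %.3f\n", interleaved[2 * i], interleaved[2 * i + 1]);
  }
  return 0;
}
```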

Not a very advanced mapping, but it works for testing the sensors and the concept.