New Publication: Analyzing Free-Hand Sound-Tracings of Melodic Phrases

We have done several sound-tracing studies before at University of Oslo, and here is a new one focusing on free-hand sound-tracings of melodies. I am happy to say that this is a gold open access publication, and that all the data are also available. So it is both free and “free”!

Kelkar, Tejaswinee; Jensenius, Alexander Refsum
Analyzing Free-Hand Sound-Tracings of Melodic Phrases
Applied Sciences 2018, 8, 135. (Special Issue Sound and Music Computing)

In this paper, we report on a free-hand motion capture study in which 32 participants ‘traced’ 16 melodic vocal phrases with their hands in the air in two experimental conditions. Melodic contours are often thought of as correlated with vertical movement (up and down) in time, and this was also our initial expectation. We did find an arch shape for most of the tracings, although this did not correspond directly to the melodic contours. Furthermore, representation of pitch in the vertical dimension was but one of a diverse range of movement strategies used to trace the melodies. Six different mapping strategies were observed, and these strategies have been quantified and statistically tested. The conclusion is that metaphorical representation is much more common than a ‘graph-like’ rendering for such a melodic sound-tracing task. Other findings include a clear gender difference for some of the tracing strategies and an unexpected representation of melodies in terms of a small object for some of the Hindustani music examples. The data also show a tendency of participants moving within a shared ‘social box’.

Come work with me! Lots of new positions at University of Oslo

I recently mentioned that I have been busy setting up the new MCT master’s programme. But I have been even busier preparing the launch of our new Centre of Excellence, RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This is a large undertaking, and a collaboration between researchers from musicology, psychology and informatics. A visual “abstract” of the centre can be seen in the figure to the right.

Now we are recruiting lots of new people for the centre, so please apply or forward to people you think may be interested:

Come study with me! New master’s programme: Music, Communication and Technology

It has been fairly quiet here on the blog recently. One reason for this is that I am spending quite some time on setting up the new Music, Communication and Technology master’s programme. This is an exciting collaborative project with our colleagues at NTNU. The whole programme is focused on network-based communication, and the students will use, learn about, develop and evaluate technologies for musical communication between the two campuses in Oslo and Trondheim.

Interested? Apply to become a student!

Working with an Arduino Mega 2560 in Max

I am involved in a student project which uses some Arduino Mega 2560 sensor interfaces in an interactive device. It has been a while since I worked with Arduinos myself, as I am mainly working with Belas these days. Also, I have never worked with the Mega before, so I had to look around a little to figure out how to set it up with Cycling ’74’s Max.

I have previously used Maxuino for interfacing Arduinos with Max. This is a general-purpose tool, with a step-by-step approach to connecting to the Arduino and retrieving data. This is great when it works, but due to its many options, and a somewhat convoluted patching style, I found the patch quite difficult to debug when things did not work out of the box.

I then came across the opposite of Maxuino: a minimal patch showing how to get the data right off the serial port. As can be seen from the screenshot below, it is, in fact, very simple, although not entirely intuitive if you are not into this type of thing.

Connecting is one thing; parsing the incoming data in a meaningful way is another. So I decided to fork a patch made by joesanford, which had solved some of these problems in an easier-to-understand patching style. For this patch to work, it requires a particular Arduino sketch (both the Max patch and Arduino sketch are available in my forked version on GitHub). I also added a small sound engine, so that it is possible to control an additive synthesis with the sensors. The steps to make this work are explained below.
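Outside Max, the same unpacking logic can be sketched in a few lines of Python. Note that the line format assumed here (comma-separated readings terminated by a newline) is an illustration of the general idea, not necessarily the exact format used by the actual Arduino sketch:

```python
def parse_sensor_line(line: bytes) -> list[int]:
    """Parse one serial line of comma-separated sensor readings into ints.

    Assumed (hypothetical) wire format: b'512,300,7\r\n' per reading.
    """
    return [int(v) for v in line.strip().split(b",") if v]


# Example: one line as it might arrive from the serial port
readings = parse_sensor_line(b"512,300,7\r\n")
print(readings)  # [512, 300, 7]
```

The same split-and-convert step is what the Max patch does with its serial, itoa and fromsymbol objects before the values can be routed to the sound engine.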

The mapping from sensor data starts by normalizing the data from the 15 analog sensors to a 0.–1. range (by dividing by 255). Since I want to control the amplitudes of each of the partials in the additive synthesis, it makes sense to slightly reduce all of the amplitudes by multiplying each element by a decreasing factor, as shown here:

Then the amplitudes are interleaved with the frequency values and sent to an ioscbank~ object to do the additive synthesis.

Not a very advanced mapping, but it works for testing the sensors and the concept.

And we’re off: RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion

I am happy to announce that RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion officially started last week. This is a new centre of excellence funded by the Research Council of Norway.

Even though we have formally taken off, this mainly means that the management group has started to work. Establishing a centre with 50–60 researchers is not done in a few days, so we will more or less spend the coming year getting up to speed. The plan is that the faculty group will begin working together from January, while in parallel recruiting PhD and postdoctoral fellows. We aim to move into our new spaces and have most of the people in place by August 2018, and that is also when we will have the kick-off party.

At least we now have a small web page up and running, and more content will be added as we move along. Here is a short summary of what we will be working on:

RITMO is an interdisciplinary research centre focused on rhythm as a structuring mechanism for the temporal dimensions of human life.
The research will be highly interdisciplinary, combining methods from musicology, psychology and informatics to study rhythm as a fundamental property that shapes and underpins human cognition, behaviour and culture.

Rhythm is omnipresent in human life, as we walk, talk, dance and play; as we tell stories about our past; and as we predict the future. Rhythm is also central to human biology, from the oscillations of our nervous system to our heartbeats, breathing patterns and longer chronobiological cycles. As such, it is a key aspect of human action and perception that is in complex interplay with the various cultural, biological and mechanical rhythms of the world.

RITMO will undertake research on rhythm in human action and perception, using music, motion and audio-visual media as empirical points of departure. Our core idea is that the human ability to experience the world and our actions as rhythmic points to a basic cognitive mechanism that is itself rhythmic in nature. The vision of RITMO is to understand more about this cognitive mechanism, and through this generate ground-breaking knowledge about the ways in which humans structure and understand the temporal dimensions of their life.

The centre is interdisciplinary and will combine perspectives and methods from music and media studies, philosophy and aesthetics, cognitive neuroscience, and informatics, using state-of-the-art technologies for motion capture, neuroimaging, pupillometry and robotics.

RITMO aims to reveal the basic cognitive mechanism(s) underlying human rhythm, using music, motion and audiovisual media as empirical points of departure.