Nordic Sound and Music Computing Network up and running

I am super excited about our new Nordic Sound and Music Computing Network, which has just started up with funding from the Nordic Research Council.

This network brings together a group of internationally leading sound and music computing researchers from institutions in five Nordic countries: Aalborg University, Aalto University, KTH Royal Institute of Technology, University of Iceland, and University of Oslo. The network covers the field of sound and music from the “soft” to the “hard”: the arts and humanities, the social and natural sciences, and engineering, all with a high level of technological competency.

At the University of Oslo we have one open PhD fellowship connected to the network, with application deadline 4 April 2018. We invite PhD proposals that focus on sound/music interaction with periodic/rhythmic human body motion (walking, running, training, etc.). The appointed candidate is expected to carry out observation studies of human body motion in real-life settings, using different types of mobile motion capture systems (full-body suit and individual trackers). Results from the analysis of these observation studies should form the basis for the development of prototype systems for using such periodic/rhythmic motion in musical interaction.

The appointed candidate will benefit from the combined expertise within the NordicSMC network, and is expected to carry out one or more short-term scientific missions to the other partners. At UiO, the candidate will be affiliated with RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This interdisciplinary centre focuses on rhythm as a structuring mechanism for the temporal dimensions of human life. RITMO researchers span the fields of musicology, psychology and informatics, and have access to state-of-the-art facilities in sound/video recording, motion capture, eye tracking, physiological measurements, various types of brain imaging (EEG, fMRI), and rapid prototyping and robotics laboratories.

Come work with me! Lots of new positions at University of Oslo

I recently mentioned that I have been busy setting up the new MCT master’s programme. But I have been even busier preparing the launch of our new Centre of Excellence, RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This is a large undertaking, and a collaboration between researchers from musicology, psychology and informatics. A visual “abstract” of the centre can be seen in the figure to the right.

Now we are recruiting lots of new people for the centre, so please apply or forward to people you think may be interested:

Working with an Arduino Mega 2560 in Max

I am involved in a student project which uses some Arduino Mega 2560 sensor interfaces in an interactive device. It has been a while since I worked with Arduinos myself, as I am mainly working with Belas these days. Also, I have never worked with the Mega before, so I had to look around a little to figure out how to set it up with Cycling ’74’s Max.

I have previously used Maxuino for interfacing Arduinos with Max. It is a general-purpose tool with a step-by-step approach to connecting to the Arduino and retrieving data. This is great when it works, but due to its many options and a somewhat convoluted patching style, I found the patch quite difficult to debug when things did not work out of the box.

I then came across the opposite of Maxuino: a minimal patch showing how to get the data right off the serial port. As can be seen from the screenshot below, it is, in fact, very simple, although not entirely intuitive if you are not into this type of thing.

One thing is the connection; another is to parse the incoming data in a meaningful way. So I decided to fork a patch made by joesanford, which had solved some of these problems in an easier-to-understand patching style. For this patch to work, it requires a particular Arduino sketch (both the Max patch and the Arduino sketch are available in my forked version on GitHub). I also added a small sound engine, so that it is possible to control an additive synthesis with the sensors. The steps to make this work are explained below.

The mapping from sensor data starts by normalizing the data from the 15 analog sensors to a 0.-1. range (by dividing by 255). Since I want to control the amplitudes of each of the partials in the additive synthesis, it makes sense to slightly reduce all of the amplitudes by multiplying each element with a decreasing factor, as shown here:
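Outside Max, the same mapping stage can be sketched in a few lines. This is a hypothetical illustration, not the actual patch: the sensor count (15), byte range (0-255) and the exact shape of the decreasing factor are taken or assumed from the description above.

```python
NUM_SENSORS = 15

def map_amplitudes(raw_values):
    """Map 15 raw sensor bytes (0..255) to partial amplitudes (0.-1.)."""
    amplitudes = []
    for i, v in enumerate(raw_values):
        normalized = v / 255.0                      # normalize to 0.-1.
        factor = (NUM_SENSORS - i) / NUM_SENSORS    # decreasing: 1.0, 14/15, ...
        amplitudes.append(normalized * factor)
    return amplitudes
```

With all sensors at full scale, the first partial gets amplitude 1.0 and the last only 1/15, so higher partials contribute progressively less to the overall sound.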

Then the amplitudes are interleaved with the frequency values and sent to an ioscbank~ object to do the additive synthesis.
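The interleaving step can be sketched like this. The fundamental frequency (100 Hz) and the harmonic-series spacing are assumptions for illustration; the actual patch may use different frequency values.

```python
F0 = 100.0  # assumed fundamental frequency in Hz

def interleave(amplitudes):
    """Build a [freq1, amp1, freq2, amp2, ...] list for an oscillator bank."""
    pairs = []
    for i, amp in enumerate(amplitudes):
        freq = F0 * (i + 1)     # harmonic series: 100, 200, 300, ...
        pairs.extend([freq, amp])
    return pairs
```

The resulting flat list of frequency/amplitude pairs is the kind of message an oscillator-bank object such as ioscbank~ expects.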

Not a very advanced mapping, but it works for testing the sensors and the concept.

And we’re off: RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion

I am happy to announce that RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion officially started last week. This is a new Centre of Excellence funded by the Research Council of Norway.

Even though we have formally taken off, this mainly means that the management group has started to work. Establishing a centre with 50-60 researchers is not done in a few days, so we will more or less spend the coming year getting up to speed. The plan is that the faculty group will begin working together from January, while we in parallel recruit PhD and postdoctoral fellows. We aim to move into our new spaces and have most of the people in place by August 2018, which is also when we will have the kick-off party.

At least we now have a small web page up and running, and more content will be added as we move along. Here is a short summary of what we will be working on:

RITMO is an interdisciplinary research centre focused on rhythm as a structuring mechanism for the temporal dimensions of human life.
The research will be highly interdisciplinary, combining methods from musicology, psychology and informatics to study rhythm as a fundamental property that shapes and underpins human cognition, behaviour and culture.

Rhythm is omnipresent in human life, as we walk, talk, dance and play; as we tell stories about our past; and as we predict the future. Rhythm is also central to human biology, from the oscillations of our nervous system to our heartbeats, breathing patterns and longer chronobiological cycles. As such, it is a key aspect of human action and perception that is in complex interplay with the various cultural, biological and mechanical rhythms of the world.

RITMO will undertake research on rhythm in human action and perception, using music, motion and audio-visual media as empirical points of departure. Our core idea is that the human ability to experience the world and our actions as rhythmic points to a basic cognitive mechanism that is in itself rhythmic in nature. The vision of RITMO is to understand more about this cognitive mechanism, and through this generate ground-breaking knowledge about the ways in which humans structure and understand the temporal dimensions of their lives.

The centre is interdisciplinary and will combine perspectives and methods from music and media studies, philosophy and aesthetics, cognitive neuroscience, and informatics, using state-of-the-art technologies for motion capture, neuroimaging, pupillometry and robotics.

Sverm-Resonans – Installation at Ultima Contemporary Music Festival

I am happy to announce the opening of our new interactive art installation at the Ultima Contemporary Music Festival 2017: Sverm-resonans.

Time and place: Sep. 12, 2017, 12:30 PM to Sep. 14, 2017, 3:30 PM, Sentralen

Conceptual information

The installation is as much haptic as audible.

An installation that gives you access to heightened sensations of stillness, sound and vibration.

Stand still. Listen. Locate the sound. Move. Stand still. Listen. Hear the tension. Feel your movements. Relax. Stand stiller. Listen deeper. Feel the boundary between the known and the unknown, the controllable and the uncontrollable. How does the body meet the sound? How does the sound meet the body? What do you hear?

Approach one of the guitars. Place yourself in front of it and connect to your standstill. Feel free to put your hands on the body of the instrument. Try closing your eyes. From there, allow yourself to open up to the sound-vibrations through the resting touch and listening. Stay as long as you like and follow the development of the sound, and your inner sensations, experience, images, and associations as the sound meets you. As opposed to a traditional instrument, these guitars are “played” by (you) trying to stand still. The living body interacts with an electronic sound system played through the acoustic instrument. In this way, Sverm-Resonans explores the meeting points between the tactile and the kinesthetic, the body and the mind, and between motion and sound.

Technical information

The technical setup of Sverm-Resonans focuses on the meeting point between digital and acoustic sound making. Each of the guitars is equipped with a Bela micro-computer, which produces electronic sound through an actuator placed on the back of the guitar. There are no external speakers; all the sound comes from the vibration of the acoustic guitar itself. Each guitar produces a slowly pulsing sound – based on additive synthesis with a slight randomness on the sine tones – that breathes and gives life to the soundscape. The guitars are also equipped with an infrared sensor that detects the presence of a person standing in front of the guitar, and which inversely controls the amplitude of a pulsating noise signal. That is, the longer you stand still, the more sound you will get.
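The inverse sensor-to-sound mapping described above can be sketched as follows. The function name, value range and linear shape of the mapping are assumptions for illustration; the actual Bela patch may shape the response differently.

```python
def noise_amplitude(ir_reading, ir_max=1.0):
    """Map an infrared presence reading (0..ir_max) to a noise gain (0.-1.).

    The mapping is inverse: a low reading (a person standing still in
    front of the guitar) gives a high noise amplitude, so less motion
    means more sound.
    """
    clamped = max(0.0, min(ir_reading, ir_max))
    return 1.0 - clamped / ir_max
```

A reading of 0 yields full amplitude, a maximal reading silences the noise, and anything in between scales linearly.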

About the installation

Sverm-Resonans at Sentralen

Sverm-Resonans is a new sound installation by Alexander Refsum Jensenius, Kari Anne Vadstensvik Bjerkestrand, Victoria Johnson, Victor Gonzalez Sanchez, Agata Zelechowska, and Charles Martin.

The installation is the result of the ongoing art/science research projects Sverm, MICRO and AAAI, three projects which in different ways explore human micromotion and musical microsound. Supported by the University of Oslo, the Research Council of Norway, Arts Council Norway, The Fund for Performing Artists, The Audio and Visual Fund, and The Nordic Culture Fund.