The Musical Gestures Toolbox for Matlab (MGT) aims to help music researchers import, preprocess, analyze, and visualize video, audio, and motion capture data in a coherent manner within Matlab.
Most of the concepts in the toolbox are based on the Musical Gestures Toolbox that I first developed for Max more than a decade ago. Much of the Matlab coding for the new version was done by Bo Zhou as part of his master’s thesis.
This network brings together a group of internationally leading sound and music computing researchers from institutions in five Nordic countries: Aalborg University, Aalto University, KTH Royal Institute of Technology, University of Iceland, and University of Oslo. The network covers the field of sound and music from the “soft” to the “hard,” including the arts and humanities, and the social and natural sciences, as well as engineering, and involves a high level of technological competency.
At the University of Oslo we have one open PhD fellowship connected to the network, with an application deadline of 4 April 2018. We invite PhD proposals that focus on sound/music interaction with periodic/rhythmic human body motion (walking, running, training, etc.). The appointed candidate is expected to carry out observation studies of human body motion in real-life settings, using different types of mobile motion capture systems (full-body suit and individual trackers). Results from the analysis of these observation studies should form the basis for the development of prototype systems for using such periodic/rhythmic motion in musical interaction.
The appointed candidate will benefit from the combined expertise within the NordicSMC network, and is expected to carry out one or more short-term scientific missions to the other partners. At UiO, the candidate will be affiliated with RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This interdisciplinary centre focuses on rhythm as a structuring mechanism for the temporal dimensions of human life. RITMO researchers span the fields of musicology, psychology and informatics, and have access to state-of-the-art facilities in sound/video recording, motion capture, eye tracking, physiological measurements, various types of brain imaging (EEG, fMRI), and rapid prototyping and robotics laboratories.
I am involved in a student project that uses Arduino Mega 2560 sensor interfaces in an interactive device. It has been a while since I worked with Arduinos myself, as I mainly work with Belas these days. I have also never worked with the Mega before, so I had to look around a little to figure out how to set it up with Cycling ’74’s Max.
I have previously used Maxuino for interfacing Arduinos with Max. It is a general-purpose tool with a step-by-step approach to connecting to the Arduino and retrieving data. This is great when it works, but due to its many options and a somewhat convoluted patching style, I found the patch quite difficult to debug when things did not work out of the box.
I then came across the opposite of Maxuino: a minimal patch showing how to get the data straight off the serial port. As can be seen from the screenshot below, it is, in fact, very simple, although not entirely intuitive if you are not into this type of thing.
Connecting is one thing; parsing the incoming data in a meaningful way is another. So I decided to fork a patch made by joesanford, which had solved some of these problems in an easier-to-understand patching style. For this patch to work, it requires a particular Arduino sketch (both the Max patch and the Arduino sketch are available in my forked version on GitHub). I also added a small sound engine, so that it is possible to control an additive synthesizer with the sensors. The steps to make this work are explained below.
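The Arduino side of such a setup can be very simple: read the analog pins and write the values to the serial port. Here is a minimal sketch of the idea; it is not necessarily identical to the sketch in the repository, and the ASCII framing and the scaling to 0–255 are my assumptions:

```cpp
// Minimal sketch of the Arduino side: read the analog inputs and
// send each 10-bit reading scaled down to a 0-255 value.
// The sensor count and the ASCII line framing are assumptions,
// not necessarily what the sketch in the repository does.

const int numSensors = 15;  // analog pins A0-A14 on the Mega 2560

void setup() {
  Serial.begin(9600);  // must match the baud rate of the serial object in Max
}

void loop() {
  // Send one line per frame: space-separated values, newline-terminated,
  // which is straightforward to split into a list on the Max side.
  for (int i = 0; i < numSensors; i++) {
    Serial.print(analogRead(A0 + i) >> 2);       // 10-bit reading scaled to 0-255
    Serial.print(i < numSensors - 1 ? ' ' : '\n');
  }
  delay(10);  // roughly 100 frames per second
}
```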
The mapping from sensor data starts by normalizing the data from the 15 analog sensors to a 0.–1. range (by dividing by 255). Since I want to control the amplitude of each partial in the additive synthesis, it makes sense to slightly reduce all of the amplitudes by multiplying each element by a decreasing factor, as shown here:
Then the amplitudes are interleaved with the frequency values and sent to an ioscbank~ object to do the additive synthesis.
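In text form, the mapping amounts to something like the following C++ sketch of the arithmetic done in the Max patch. The base frequency (110 Hz) and the exact 1/(i+1) scaling curve are my assumptions; the patch may use different values:

```cpp
#include <vector>

// Sketch of the mapping arithmetic done in the Max patch:
// normalize each sensor value, scale amplitudes down for higher
// partials, and interleave frequency/amplitude pairs for the
// oscillator bank. Base frequency and scaling curve are assumptions.
std::vector<float> mapSensorsToPartials(const int sensors[], int numSensors,
                                        float baseFreq) {
  std::vector<float> freqAmpPairs;
  for (int i = 0; i < numSensors; i++) {
    float amp = sensors[i] / 255.0f;   // normalize to the 0.-1. range
    amp *= 1.0f / (i + 1);             // reduce the higher partials
    float freq = baseFreq * (i + 1);   // harmonic series
    freqAmpPairs.push_back(freq);      // interleave: freq, amp, freq, amp, ...
    freqAmpPairs.push_back(amp);
  }
  return freqAmpPairs;  // the list that is sent on to the oscillator bank
}
```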
Not a very advanced mapping, but it works for testing the sensors and the concept.
I am happy to announce the opening of our new interactive art installation at the Ultima Contemporary Music Festival 2017: Sverm-Resonans.
Time and place: Sep. 12, 2017 12:30 PM – Sep. 14, 2017 3:30 PM, Sentralen
An installation that gives you access to heightened sensations of stillness, sound and vibration.
Stand still. Listen. Locate the sound. Move. Stand still. Listen. Hear the tension. Feel your movements. Relax. Stand stiller. Listen deeper. Feel the boundary between the known and the unknown, the controllable and the uncontrollable. How does the body meet the sound? How does the sound meet the body? What do you hear?
Approach one of the guitars. Place yourself in front of it and connect to your standstill. Feel free to put your hands on the body of the instrument. Try closing your eyes. From there, allow yourself to open up to the sound-vibrations through the resting touch and listening. Stay as long as you like and follow the development of the sound, and your inner sensations, experience, images, and associations as the sound meets you. As opposed to a traditional instrument, these guitars are “played” by (you) trying to stand still. The living body interacts with an electronic sound system played through the acoustic instrument. In this way, Sverm-Resonans explores the meeting points between the tactile and the kinesthetic, the body and the mind, and between motion and sound.
The technical setup of Sverm-Resonans focuses on the meeting point between digital and acoustic sound making. Each of the guitars is equipped with a Bela micro-computer, which produces electronic sound through an actuator placed on the back of the guitar. There are no external speakers; all the sound comes from the vibration of the acoustic guitar itself. Each of the guitars produces a slowly pulsing sound, based on an additive synthesis with a slight randomness on the sine tones, that breathes and gives life to the soundscape. The guitars are also equipped with an infrared sensor that detects the presence of a person standing in front of the guitar, and which inversely controls the amplitude of a pulsating noise signal. That is, the longer you stand still, the more sound you will get.
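Bela programs are written in C++ with setup() and render() callbacks. Below is a hedged sketch of the idea, not the actual installation code: the partial count, base frequency, sensor channel, and the exact inverse mapping of the sensor reading are all assumptions.

```cpp
#include <Bela.h>
#include <cmath>
#include <cstdlib>

// Sketch of the idea running on each guitar's Bela: a bank of slowly
// pulsing, slightly detuned sine tones, plus a pulsating noise signal
// whose amplitude is inversely controlled by the infrared sensor.
// Partial count, frequencies, and sensor channel are assumptions.

const int kNumPartials = 8;
float gPhase[kNumPartials];
float gFreq[kNumPartials];
float gLfoPhase = 0.0f;

bool setup(BelaContext *context, void *userData) {
  for (int i = 0; i < kNumPartials; i++) {
    gPhase[i] = 0.0f;
    // harmonic series with a slight random detuning on each partial
    gFreq[i] = 110.0f * (i + 1) *
               (1.0f + 0.002f * (rand() / (float)RAND_MAX - 0.5f));
  }
  return true;
}

void render(BelaContext *context, void *userData) {
  for (unsigned int n = 0; n < context->audioFrames; n++) {
    // slow "breathing" envelope shared by all partials
    gLfoPhase += 2.0f * (float)M_PI * 0.1f / context->audioSampleRate;
    if (gLfoPhase > 2.0f * (float)M_PI) gLfoPhase -= 2.0f * (float)M_PI;
    float breathe = 0.5f + 0.5f * sinf(gLfoPhase);

    float out = 0.0f;
    for (int i = 0; i < kNumPartials; i++) {
      gPhase[i] += 2.0f * (float)M_PI * gFreq[i] / context->audioSampleRate;
      if (gPhase[i] > 2.0f * (float)M_PI) gPhase[i] -= 2.0f * (float)M_PI;
      out += sinf(gPhase[i]) / (kNumPartials * (i + 1));
    }
    out *= breathe;

    // IR sensor on analog input 0 (assumed channel); the analog inputs
    // run at half the audio rate in the default configuration, hence n/2.
    // The sensor reading inversely scales the pulsating noise.
    float presence = analogRead(context, n / 2, 0);
    float noise = (rand() / (float)RAND_MAX * 2.0f - 1.0f) *
                  (1.0f - presence) * 0.1f * breathe;

    for (unsigned int ch = 0; ch < context->audioOutChannels; ch++)
      audioWrite(context, n, ch, out + noise);
  }
}

void cleanup(BelaContext *context, void *userData) {}
```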
About the installation
Sverm-Resonans is a new sound installation by Alexander Refsum Jensenius, Kari Anne Vadstensvik Bjerkestrand, Victoria Johnson, Victor Gonzalez Sanchez, Agata Zelechowska, and Charles Martin.
The installation is the result of the ongoing art/science research projects Sverm, MICRO, and AAAI, three projects that in different ways explore human micromotion and musical microsound. Supported by the University of Oslo, the Research Council of Norway, Arts Council Norway, The Fund for Performing Artists, The Audio and Visual Fund, and The Nordic Culture Fund.
We have carried out three editions of the Norwegian Championship of Standstill over the years, but it is only with the new resources in the MICRO project that we have finally been able to properly analyze all the data. The first publication coming out of the (growing) data set was published at SMC this year:
Abstract: The paper presents results from an experiment in which 91 subjects stood still on the floor for 6 minutes, with the first 3 minutes in silence, followed by 3 minutes with music. The head motion of the subjects was captured using an infrared optical system. The results show that the average quantity of motion of standstill is 6.5 mm/s, and that the subjects moved more when listening to music (6.6 mm/s) than when standing still in silence (6.3 mm/s). This result confirms the belief that music induces motion, even when people try to stand still.
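For those curious, the quantity of motion here is essentially the average speed of the head marker: the mean Euclidean distance between successive position samples, multiplied by the sampling rate. A minimal sketch of that calculation, assuming 3D positions in millimetres at a fixed sample rate (the paper’s exact definition and any filtering steps may differ):

```cpp
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

// Average quantity of motion (mm/s): mean Euclidean distance between
// successive head positions, multiplied by the sampling rate.
// Assumes positions in millimetres at a fixed sample rate.
double quantityOfMotion(const std::vector<Point3> &pos, double sampleRate) {
  if (pos.size() < 2) return 0.0;
  double total = 0.0;
  for (size_t i = 1; i < pos.size(); i++) {
    double dx = pos[i].x - pos[i - 1].x;
    double dy = pos[i].y - pos[i - 1].y;
    double dz = pos[i].z - pos[i - 1].z;
    total += std::sqrt(dx * dx + dy * dy + dz * dz);
  }
  return total / (pos.size() - 1) * sampleRate;  // mm per second
}
```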