Working with an Arduino Mega 2560 in Max

I am involved in a student project that uses Arduino Mega 2560 sensor interfaces in an interactive device. It has been a while since I worked with Arduinos myself, as I mainly work with Belas these days. I had also never worked with the Mega before, so I had to look around a little to figure out how to set it up with Cycling ’74’s Max.

I have previously used Maxuino for interfacing Arduinos with Max. It is a general-purpose tool with a step-by-step approach to connecting to the Arduino and retrieving data. This is great when it works, but because of its many options and a somewhat convoluted patching style, I found the patch quite difficult to debug when things did not work out of the box.

I then came across the opposite of Maxuino: a minimal patch showing how to read the data straight off the serial port. As can be seen from the screenshot below, it is, in fact, very simple, although not entirely intuitive if you are not into this type of thing.

Connecting is one thing; parsing the incoming data in a meaningful way is another. So I decided to fork a patch made by joesanford, which solved some of these problems in an easier-to-understand patching style. The patch requires a particular Arduino sketch (both the Max patch and the Arduino sketch are available in my forked version on GitHub). I also added a small sound engine, so that it is possible to control an additive synthesis with the sensors. The steps to make this work are explained below.
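The parsing step can be sketched outside Max as well. The snippet below illustrates the general idea in Python, assuming a hypothetical message format in which each serial line contains a pin name and a value, such as "A3 512"; the actual format depends on the Arduino sketch in use.

```python
# Hypothetical parser for sensor messages arriving over the serial port.
# Assumes each line has the form "<pin> <value>", e.g. "A3 512" -- the
# real format is defined by the Arduino sketch, so adjust accordingly.

def parse_sensor_line(line):
    """Split one serial line into a (pin, value) pair."""
    pin, raw = line.strip().split()
    return pin, int(raw)

pair = parse_sensor_line("A3 512\n")
# -> ("A3", 512)
```

In the actual patch, the equivalent unpacking happens with Max objects after the serial object, but the logic is the same: split each message and route the value to the right sensor channel.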

The mapping from sensor data starts by normalizing the data from the 15 analog sensors to a 0.–1. range (by dividing by 255). Since I want to control the amplitudes of each of the partials in the additive synthesis, it makes sense to slightly reduce all of the amplitudes by multiplying each element by a decreasing factor, as shown here:
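The same mapping can be sketched in a few lines of Python. The 1/(n+1) rolloff used here is only an illustrative choice of "decreasing figure"; the actual factors in the patch may differ.

```python
# Sketch of the amplitude mapping described above: normalize byte-scaled
# sensor values to 0.-1. (divide by 255), then scale each partial's
# amplitude by a decreasing factor so higher partials are quieter.
# The 1/(n+1) rolloff is an assumption, not the patch's exact curve.

def map_amplitudes(sensor_values):
    normalized = [v / 255.0 for v in sensor_values]
    return [a / (i + 1) for i, a in enumerate(normalized)]

amps = map_amplitudes([255, 255, 255])
# -> [1.0, 0.5, 0.3333...]: the fundamental keeps full amplitude,
#    later partials are progressively reduced
```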

Then the amplitudes are interleaved with the frequency values and sent to an ioscbank~ object to do the additive synthesis.
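Since ioscbank~ takes interleaved frequency/amplitude pairs, the interleaving step looks roughly like this (the 220 Hz fundamental and the harmonic series are example assumptions):

```python
# Build an interleaved frequency/amplitude list for an oscillator bank,
# in the style of what ioscbank~ expects. The harmonic series and the
# 220 Hz base frequency are illustrative values, not from the patch.

def interleave_partials(base_freq, amplitudes):
    pairs = []
    for i, amp in enumerate(amplitudes):
        pairs.append(base_freq * (i + 1))  # frequency of partial i+1
        pairs.append(amp)                  # its amplitude
    return pairs

interleave_partials(220.0, [1.0, 0.5])
# -> [220.0, 1.0, 440.0, 0.5]
```

In Max the same interleaving is typically done with zl objects before the list reaches the oscillator bank.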

Not a very advanced mapping, but it works for testing the sensors and the concept.

Sverm-Resonans – Installation at Ultima Contemporary Music Festival

I am happy to announce the opening of our new interactive art installation at the Ultima Contemporary Music Festival 2017: Sverm-resonans.

Time and place: Sep. 12, 2017, 12:30 PM to Sep. 14, 2017, 3:30 PM, Sentralen

Conceptual information

The installation is as much haptic as audible.

An installation that gives you access to heightened sensations of stillness, sound and vibration.

Stand still. Listen. Locate the sound. Move. Stand still. Listen. Hear the tension. Feel your movements. Relax. Stand stiller. Listen deeper. Feel the boundary between the known and the unknown, the controllable and the uncontrollable. How does the body meet the sound? How does the sound meet the body? What do you hear?

Approach one of the guitars. Place yourself in front of it and connect to your standstill. Feel free to put your hands on the body of the instrument. Try closing your eyes. From there, allow yourself to open up to the sound-vibrations through the resting touch and listening. Stay as long as you like and follow the development of the sound, and your inner sensations, experience, images, and associations as the sound meets you. As opposed to a traditional instrument, these guitars are “played” by (you) trying to stand still. The living body interacts with an electronic sound system played through the acoustic instrument. In this way, Sverm-Resonans explores the meeting points between the tactile and the kinesthetic, the body and the mind, and between motion and sound.

Technical information

The technical setup of Sverm-Resonans is focused on the meeting point between digital and acoustic sound making. Each of the guitars is equipped with a Bela micro-computer, which produces electronic sound through an actuator placed on the back of the guitar. There are no external speakers; all the sound comes from the vibration of the acoustic guitar itself. Each guitar produces a slowly pulsing sound – based on an additive synthesis with a slight randomness on the sine tones – that breathes and gives life to the soundscape. The guitars are also equipped with an infrared sensor that detects the presence of a person standing in front of the guitar and inversely controls the amplitude of a pulsating noise signal. That is, the longer you stand still, the more sound you will get.
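The two control ideas described above can be sketched in a few lines. Everything here is illustrative, not the actual Bela code: the detune spread, the 30-second ramp, and the function names are all assumptions.

```python
# Minimal sketch of the installation's control logic: each guitar's
# partials get a slight random detune, and the noise amplitude rises
# the longer a person stands still in front of the infrared sensor.
# All constants and names here are hypothetical, not the Bela code.
import random

def detuned_partials(base_freq, n_partials, spread=0.5):
    """Harmonic frequencies, each offset by a small random amount (Hz)."""
    return [base_freq * (i + 1) + random.uniform(-spread, spread)
            for i in range(n_partials)]

def noise_amplitude(stillness_seconds, ramp=30.0):
    """Longer standstill -> louder noise, capped at full amplitude."""
    return min(stillness_seconds / ramp, 1.0)
```

The slight per-partial randomness is what keeps the pulsing sound from feeling static, while the slow amplitude ramp rewards sustained stillness.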

About the installation

Sverm-Resonans at Sentralen

Sverm-Resonans is a new sound installation by Alexander Refsum Jensenius, Kari Anne Vadstensvik Bjerkestrand, Victoria Johnson, Victor Gonzalez Sanchez, Agata Zelechowska, and Charles Martin.

The installation is the result of the ongoing art/science research projects Sverm, MICRO and AAAI, three projects which in different ways explore human micromotion and musical microsound. It is supported by the University of Oslo, the Research Council of Norway, Arts Council Norway, The Fund for Performing Artists, The Audio and Visual Fund, and The Nordic Culture Fund.

SMC paper based on data from the first Norwegian Championship of Standstill

We have carried out three editions of the Norwegian Championship of Standstill over the years, but only with the new resources in the MICRO project have we finally been able to properly analyze all the data. The first publication coming out of the (growing) data set was published at SMC this year:

Reference: Jensenius, Alexander Refsum; Zelechowska, Agata & Gonzalez Sanchez, Victor Evaristo (2017). The Musical Influence on People’s Micromotion when Standing Still in Groups. In Tapio Lokki, Jukka Pätynen & Vesa Välimäki (Eds.), Proceedings of the 14th Sound and Music Computing Conference 2017.

Full text: PDF

Abstract: The paper presents results from an experiment in which 91 subjects stood still on the floor for 6 minutes, with the first 3 minutes in silence, followed by 3 minutes with music. The head motion of the subjects was captured using an infrared optical system. The results show that the average quantity of motion of standstill is 6.5 mm/s, and that the subjects moved more when listening to music (6.6 mm/s) than when standing still in silence (6.3 mm/s). This result confirms the belief that music induces motion, even when people try to stand still.

We are also happy to announce that the dataset is freely available here.


New publication: Sonic Microinteraction in “the Air”

I am happy to announce a new book chapter based on the artistic-scientific research in the Sverm and MICRO projects.

Citation: Jensenius, A. R. (2017). Sonic Microinteraction in “the Air.” In M. Lesaffre, P.-J. Maes, & M. Leman (Eds.), The Routledge Companion to Embodied Music Interaction (pp. 431–439). New York: Routledge.
Abstract: This chapter looks at some of the principles involved in developing conceptual methods and technological systems concerning sonic microinteraction, a type of interaction with sounds that is generated by bodily motion at a very small scale. I focus on the conceptualization of interactive systems that can exploit the smallest possible micromotion that people are able to both perceive and produce. It is also important that the interaction that is taking place allows for a recursive element via a feedback loop from the sound produced back to the performer producing it.

Music Moves on YouTube

We have been running our free online course Music Moves a couple of times on the FutureLearn platform. The course consists of a number of videos, as well as articles, quizzes, etc., all of which help create a great learning experience for the people that take part.

One great thing about the FutureLearn model (similar to Coursera, etc.) is that it focuses on creating a complete course. There are many benefits to such a model, not least that it creates a virtual student group that interacts in a somewhat similar way to campus students. The downside, of course, is that the material is not accessible to others when the course is not running.

We spent a lot of time and effort on making all the material for Music Moves, and we see that some of it could also be useful in other contexts. This semester, for example, I am teaching a course called Interactive Music, in which some of the videos on motion capture would be very relevant for the students.

For that reason we have now decided to upload all the Music Moves videos to YouTube, so that everyone can access them. We still encourage interested people to enroll in the complete course, though. The next run on FutureLearn is scheduled to start in September.