Working with an Arduino Mega 2560 in Max

I am involved in a student project which uses some Arduino Mega 2560 sensor interfaces in an interactive device. It has been a while since I worked with Arduinos myself, as I mainly work with Belas these days. I had also never worked with the Mega before, so I had to look around a little to figure out how to set it up with Cycling ’74’s Max.

I have previously used Maxuino for interfacing Arduinos with Max. This is a general-purpose tool, with a step-by-step approach to connecting to the Arduino and retrieving data. It is great when it works, but due to its many options and somewhat convoluted patching style, I found the patch quite difficult to debug when things did not work out of the box.

I then came across the opposite of Maxuino: a minimal patch showing how to get the data right off the serial port. As can be seen from the screenshot below, it is, in fact, very simple, although not entirely intuitive if you are not into this type of thing.

Getting the connection up is one thing; parsing the incoming data in a meaningful way is another. So I decided to fork a patch made by joesanford, which had solved some of these problems in an easier-to-understand patching style. The patch requires a particular Arduino sketch (both the Max patch and the Arduino sketch are available in my forked version on GitHub). I also added a small sound engine, so that it is possible to control an additive synthesizer with the sensors. The steps to make this work are explained below.
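To give an idea of the parsing step outside of Max, here is a minimal sketch in Python. It assumes the Arduino sketch prints each frame of sensor readings as a comma-separated ASCII line terminated by a newline; the actual protocol in the forked sketch may differ.

```python
def parse_sensor_line(line):
    """Parse one ASCII line of comma-separated sensor readings into ints.

    Assumes a frame like "512,3,1023\n"; empty fields are skipped so a
    trailing comma does not break the parse.
    """
    return [int(v) for v in line.strip().split(",") if v]


# Example: one frame from three hypothetical sensors
print(parse_sensor_line("512,3,1023\n"))  # [512, 3, 1023]
```

In the Max patch the same job is done by unpacking the serial stream into a list, but the logic is equivalent.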

The mapping from sensor data starts by normalizing the data from the 15 analog sensors to a 0.–1. range (by dividing by 255). Since I want to control the amplitudes of each of the partials in the additive synthesis, it makes sense to slightly reduce all of the amplitudes by multiplying each element by a decreasing factor, as shown here:
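In plain code, the normalization and scaling could look like the sketch below. The 1/(n+1) rolloff is an assumption for illustration; the patch may use a different decreasing series.

```python
NUM_PARTIALS = 15


def map_amplitudes(raw):
    """Map raw sensor bytes (0-255) to partial amplitudes.

    First normalize to 0.-1. by dividing by 255, then scale partial n
    by 1/(n+1) so higher partials are progressively softer (the exact
    rolloff used in the patch may differ).
    """
    normalized = [v / 255.0 for v in raw]
    return [a / (i + 1) for i, a in enumerate(normalized)]
```

With all sensors at full value, this yields amplitudes 1., 0.5, 0.333…, and so on down the partials.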

Then the amplitudes are interleaved with the frequency values and sent to an ioscbank~ object to do the additive synthesis.
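The interleaving step can be sketched as follows. Note that the harmonic frequency series and the exact list format are assumptions here; check the ioscbank~ help file for the message format it actually expects.

```python
def interleave(freqs, amps):
    """Interleave frequencies and amplitudes into one flat list,
    [f0, a0, f1, a1, ...], in the style of a Max list message."""
    out = []
    for f, a in zip(freqs, amps):
        out.extend([f, a])
    return out


# Harmonic series on a hypothetical 110 Hz fundamental
freqs = [110.0 * (i + 1) for i in range(3)]
print(interleave(freqs, [1.0, 0.5, 0.25]))  # [110.0, 1.0, 220.0, 0.5, 330.0, 0.25]
```

In the patch, a zl-based construction does the same interleaving before the list is sent on to the oscillator bank.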

Not a very advanced mapping, but it works for testing the sensors and the concept.

And we’re off: RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion

I am happy to announce that RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion officially started last week. This is a new centre of excellence funded by the Research Council of Norway.

Even though we have formally taken off, this mainly means that the management group has started its work. Establishing a centre with 50-60 researchers is not done in a few days, so we will more or less spend the coming year getting up to speed. The plan is that the faculty group will begin working together from January, while we in parallel recruit PhD and postdoctoral fellows. We aim to move into our new spaces and have most of the people in place by August 2018, which is also when we will have the kick-off party.

At least we now have a small web page up and running, and more content will be added as we move along. Here is a short summary of what we will be working on:

RITMO is an interdisciplinary research centre focused on rhythm as a structuring mechanism for the temporal dimensions of human life.
The research will be highly interdisciplinary, combining methods from musicology, psychology and informatics to study rhythm as a fundamental property that shapes and underpins human cognition, behaviour and culture.

Rhythm is omnipresent in human life, as we walk, talk, dance and play; as we tell stories about our past; and as we predict the future. Rhythm is also central to human biology, from the oscillations of our nervous system to our heartbeats, breathing patterns and longer chronobiological cycles. As such, it is a key aspect of human action and perception that is in complex interplay with the various cultural, biological and mechanical rhythms of the world.

RITMO will undertake research on rhythm in human action and perception, using music, motion and audio-visual media as empirical points of departure. Our core idea is that the human ability to experience the world and our actions as rhythmic points to a basic cognitive mechanism that is in itself rhythmic in nature. The vision of RITMO is to understand more about this cognitive mechanism, and through this generate ground-breaking knowledge about the ways in which humans structure and understand the temporal dimensions of their life.

The centre is interdisciplinary and will combine perspectives and methods from music and media studies, philosophy and aesthetics, cognitive neuroscience, and informatics, using state-of-the-art technologies for motion capture, neuroimaging, pupillometry and robotics.

RITMO aims to reveal the basic cognitive mechanism(s) underlying human rhythm, using music, motion and audiovisual media as empirical points of departure.

Sverm-Resonans – Installation at Ultima Contemporary Music Festival

I am happy to announce the opening of our new interactive art installation at the Ultima Contemporary Music Festival 2017: Sverm-resonans.

Time and place: Sep. 12, 2017, 12:30 PM to Sep. 14, 2017, 3:30 PM, Sentralen

Conceptual information

The installation is as much haptic as audible.

An installation that gives you access to heightened sensations of stillness, sound and vibration.

Stand still. Listen. Locate the sound. Move. Stand still. Listen. Hear the tension. Feel your movements. Relax. Stand stiller. Listen deeper. Feel the boundary between the known and the unknown, the controllable and the uncontrollable. How does the body meet the sound? How does the sound meet the body? What do you hear?

Approach one of the guitars. Place yourself in front of it and connect to your standstill. Feel free to put your hands on the body of the instrument. Try closing your eyes. From there, allow yourself to open up to the sound-vibrations through the resting touch and listening. Stay as long as you like and follow the development of the sound, and your inner sensations, experience, images, and associations as the sound meets you. As opposed to a traditional instrument, these guitars are “played” by you trying to stand still. The living body interacts with an electronic sound system played through the acoustic instrument. In this way, Sverm-Resonans explores the meeting points between the tactile and the kinesthetic, the body and the mind, and between motion and sound.

Technical information

The technical setup of Sverm-Resonans focuses on the meeting point between digital and acoustic sound-making. Each of the guitars is equipped with a Bela micro-computer, which produces electronic sound through an actuator placed on the back of the guitar. There are no external speakers; all the sound comes from the vibration of the acoustic guitar itself. Each guitar produces a slowly pulsing sound – based on additive synthesis with slight randomness in the sine tones – that breathes and gives life to the soundscape. The guitars are also equipped with an infrared sensor that detects the presence of a person standing in front of the guitar, and which inversely controls the amplitude of a pulsating noise signal. That is, the longer you stand still, the more sound you will get.
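One hypothetical realization of that presence-to-amplitude logic is sketched below; the ramp shape and rates are assumptions, not the actual Bela code.

```python
def update_amplitude(presence, amp, rise=0.0005, fall=0.01):
    """Update the noise amplitude once per processing block.

    While the IR sensor detects a person (presence=True), the amplitude
    ramps up slowly, so the longer you stand there, the more sound you
    get; when nobody is present it decays quickly back toward silence.
    The rise/fall rates are illustrative assumptions.
    """
    if presence:
        return min(1.0, amp + rise)
    return max(0.0, amp - fall)
```

Running this once per audio block on the Bela would give a fade-in over roughly 2000 blocks and a much faster fade-out, which matches the slow, breathing character of the installation.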

About the installation

Sverm-Resonans at Sentralen

Sverm-Resonans is a new sound installation by Alexander Refsum Jensenius, Kari Anne Vadstensvik Bjerkestrand, Victoria Johnson, Victor Gonzalez Sanchez, Agata Zelechowska, and Charles Martin.

The installation is the result of the ongoing art/science research projects Sverm, MICRO and AAAI, three projects which in different ways explore human micromotion and musical microsound. It is supported by the University of Oslo, the Research Council of Norway, Arts Council Norway, The Fund for Performing Artists, The Audio and Visual Fund, and The Nordic Culture Fund.

SMC paper based on data from the first Norwegian Championship of Standstill

We have carried out three editions of the Norwegian Championship of Standstill over the years, but only with the new resources in the MICRO project have we finally been able to properly analyze all the data. The first publication coming out of the (growing) dataset was published at SMC this year:

Reference: Jensenius, Alexander Refsum; Zelechowska, Agata & Gonzalez Sanchez, Victor Evaristo (2017). The Musical Influence on People’s Micromotion when Standing Still in Groups. In Tapio Lokki, Jukka Pätynen & Vesa Välimäki (eds.), Proceedings of the 14th Sound and Music Computing Conference 2017.

Full text: PDF

Abstract: The paper presents results from an experiment in which 91 subjects stood still on the floor for 6 minutes, with the first 3 minutes in silence, followed by 3 minutes with music. The head motion of the subjects was captured using an infrared optical system. The results show that the average quantity of motion of standstill is 6.5 mm/s, and that the subjects moved more when listening to music (6.6 mm/s) than when standing still in silence (6.3 mm/s). This result confirms the belief that music induces motion, even when people try to stand still.

We are also happy to announce that the dataset is freely available here.


New article: Group behaviour and interpersonal synchronization to electronic dance music

I am happy to announce the publication of a follow-up study to our earlier paper on group dancing to EDM and a technical paper on motion capture of groups of people. In this new study we successfully tracked groups of 9-10 people dancing in a semi-ecological setup in our motion capture lab. We also found many interesting things about how people synchronize to both the music and each other.

Citation:
Solberg, R. T., & Jensenius, A. R. (2017). Group behaviour and interpersonal synchronization to electronic dance music. Musicae Scientiae.

Abstract:
The present study investigates how people move and relate to each other – and to the dance music – in a club-like setting created within a motion capture laboratory. Three groups of participants (29 in total) each danced to a 10-minute-long DJ mix consisting of four tracks of electronic dance music (EDM). Two of the EDM tracks had little structural development, while the two others included a typical “break routine” in the middle of the track, consisting of three distinct passages: (a) “breakdown”, (b) “build-up” and (c) “drop”. The motion capture data show similar bodily responses for all three groups in the break routines: a sudden decrease and increase in the general quantity of motion. More specifically, the participants demonstrated an improved level of interpersonal synchronization after the drop, particularly in their vertical movements. Furthermore, the participants’ activity increased and became more pronounced after the drop. This may suggest that the temporal removal and reintroduction of a clear rhythmic framework, as well as the use of intensifying sound features, have a profound effect on a group’s beat synchronization. Our results further suggest that the musical passages of EDM efficiently lead to the entrainment of a whole group, and that a break routine effectively “re-energizes” the dancing.