Working with an Arduino Mega 2560 in Max

I am involved in a student project that uses Arduino Mega 2560 boards as sensor interfaces in an interactive device. It has been a while since I worked with Arduinos myself, as I am mainly working with Belas these days. Also, I have never worked with the Mega before, so I had to look around a little to figure out how to set it up with Cycling ’74’s Max.

I have previously used Maxuino for interfacing Arduinos with Max. This is a general-purpose tool, with a step-by-step approach to connecting to the Arduino and retrieving data. This is great when it works, but due to its many options and a somewhat convoluted patching style, I found the patch quite difficult to debug when things did not work out of the box.

I then came across the opposite of Maxuino: a minimal patch showing how to get the data straight off the serial port. As can be seen from the screenshot below, it is, in fact, very simple, although not entirely intuitive if you are not into this type of thing.

One thing is the connection, another is parsing the incoming data in a meaningful way. So I decided to fork a patch made by joesanford, which had solved some of these problems in an easier-to-understand patching style. The patch requires a particular Arduino sketch (both the Max patch and the Arduino sketch are available in my forked version on GitHub). I also added a small sound engine, so that it is possible to control an additive synthesis with the sensors. The steps to make this work are explained below.
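
The general idea on the Arduino side is simply to read the analog pins and write the values to the serial port in a format the Max patch can parse. The actual sketch is in the repository linked above; the following is only an illustrative sketch of that idea, with an assumed sensor count, baud rate, and frame marker:

    // Illustrative only -- the actual sketch is in the forked repository.
    // Reads a set of analog inputs and sends each value over serial as a
    // single byte, so the Max patch has fixed-size values to parse.

    const int NUM_SENSORS = 15;        // assumption: 15 analog inputs in use

    void setup() {
      Serial.begin(9600);              // must match the serial object's baud rate in Max
    }

    void loop() {
      for (int i = 0; i < NUM_SENSORS; i++) {
        int raw = analogRead(i);         // 10-bit reading, 0-1023
        int scaled = raw >> 2;           // scale down to 8 bits, 0-255
        if (scaled > 254) scaled = 254;  // reserve 255 for the frame marker
        Serial.write(scaled);            // one byte per sensor
      }
      Serial.write(255);               // hypothetical end-of-frame marker
      delay(10);                       // throttle the update rate
    }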

The mapping from sensor data starts by normalizing the data from the 15 analog sensors to a 0.-1. range (by dividing by 255). Since I want to control the amplitudes of each of the partials in the additive synthesis, it makes sense to reduce all of the amplitudes slightly by multiplying each element by a decreasing factor, as shown here:
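
Expressed outside the patcher, the arithmetic amounts to something like the following (plain C++ just to show the idea; the linear roll-off per partial is my own guess at the decreasing factor used in the patch):

    #include <vector>

    // Map raw 8-bit sensor values (0-255) to partial amplitudes.
    // The per-partial roll-off is an assumed linear ramp; the patch may use
    // a different decreasing factor.
    std::vector<float> sensorsToAmplitudes(const std::vector<int>& raw) {
        std::vector<float> amps(raw.size());
        for (std::size_t i = 0; i < raw.size(); ++i) {
            float normalized = raw[i] / 255.0f;                    // 0.-1. range
            float rolloff = 1.0f - float(i) / float(raw.size());   // higher partials quieter
            amps[i] = normalized * rolloff;
        }
        return amps;
    }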

Then the amplitudes are interleaved with the frequency values and sent to an ioscbank~ object to do the additive synthesis.
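
The interleaving itself is just a matter of pairing each frequency with its amplitude in one flat list. In C++ terms it could look like this (the harmonic series over a 110 Hz fundamental is an assumption, and the exact list format ioscbank~ expects should be checked against the Max reference):

    #include <vector>

    // Pair each partial's frequency with its amplitude in one flat list,
    // here assuming a plain harmonic series over a 110 Hz fundamental.
    std::vector<float> interleaveFreqAmp(const std::vector<float>& amps,
                                         float fundamental = 110.0f) {
        std::vector<float> out;
        out.reserve(amps.size() * 2);
        for (std::size_t i = 0; i < amps.size(); ++i) {
            out.push_back(fundamental * (i + 1));   // frequency of partial i+1
            out.push_back(amps[i]);                 // its amplitude
        }
        return out;
    }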

Not a very advanced mapping, but it works for testing the sensors and the concept.

uOSC

micro-OSC (uOSC) was made public yesterday at NIME:

micro-OSC (uOSC) is a firmware runtime system for embedded platforms designed to remain as small as possible while also supporting evolving trends in sensor interfaces such as regulated 3.3 Volt high-resolution sensors, mixed analog and digital multi-rate sensor interfacing, n > 8-bit data formats.

uOSC supports the Open Sound Control protocol directly on the microprocessor, and the completeness of this implementation serves as a functional reference platform for research and development of the OSC protocol.

The design philosophy of micro-OSC is “by musicians, for musicians”—it is used at CNMAT as a component in prototypes of new sensor-based musical instruments as well as a research platform for the study of realtime protocols and signal-quality issues related to musical gestures.

I have only read through the NIME paper briefly, but an interesting aspect is their focus on implementing OSC bundles with time tags, something that is rarely found in OSC applications. Looking forward to testing this on the CUI.
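
To illustrate why the time tags are interesting: in the OSC 1.0 format a bundle starts with the string "#bundle", followed by a 64-bit NTP-style time tag, followed by the bundled messages, so a receiver can schedule the contents for a specific moment rather than "as soon as possible". A minimal sketch of that framing (not uOSC's actual code) could look like this:

    #include <cstdint>
    #include <vector>

    // Minimal sketch of OSC 1.0 bundle framing -- not uOSC's implementation.
    // A bundle is the 8-byte string "#bundle\0", a 64-bit NTP-style time tag
    // (seconds since 1900 in the upper 32 bits, fractional seconds in the
    // lower 32), then size-prefixed bundle elements.
    std::vector<uint8_t> makeBundleHeader(uint32_t ntpSeconds, uint32_t ntpFraction) {
        std::vector<uint8_t> out;
        const char tag[8] = {'#', 'b', 'u', 'n', 'd', 'l', 'e', '\0'};
        out.insert(out.end(), tag, tag + 8);
        for (int shift = 24; shift >= 0; shift -= 8)      // time tag is big-endian
            out.push_back((ntpSeconds >> shift) & 0xFF);
        for (int shift = 24; shift >= 0; shift -= 8)
            out.push_back((ntpFraction >> shift) & 0xFF);
        return out;   // the size-prefixed bundle elements would follow
    }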

Gumstix and PDa

Another post from the Mobile Music Workshop in Vienna. Yesterday I saw a demo of the Audioscape project by Mike Wozniewski (McGill). He was using the Gumstix, a really small computer running an embedded Linux distribution built with OpenEmbedded. He was running PDa (a fixed-point port of Pure Data) and was able to process sensor data and run audio off the small device.

Sensing Music-related Actions

The web page for our new research project, Sensing Music-related Actions, is now up and running. This is a joint research project between the Departments of Musicology and Informatics, and has received external funding through the VERDIKT program of the Research Council of Norway. The project runs from July 2008 until July 2011.

The focus of the project will be on basic issues of sensing and analysing music-related actions, and on creating various prototypes for testing the control possibilities of such actions in enactive devices.

We are organising a kickoff seminar on Tuesday 6 May with the following program:

  • 10:15-10:30: Rolf Inge Godøy (UiO): The Sensing Music-related Actions project
  • 10:30-11:30: Marcelo M. Wanderley (McGill): Motion capture of music-related actions
  • 11:30-12:30: Ben Knapp (Queen’s, Belfast): Biosensing of music-related actions
  • 13:30-17:00: Workshop with various biosensors and motion capture equipment

Thus we will be able to discuss music-related actions both from an “internal” (i.e. biosignals) and “external” (i.e. movement) point of view. Please come by if you are in Oslo!