Today I added MultiControl to my GitHub account. Initially, I did not intend to release the source code for MultiControl, because it is so old and messy. The whole patch is built on bpatchers and on trying to hide things away, as was necessary in the pre-Max 5 days before presentation view existed.
I originally developed the Max patch back in 2004, mainly so that I could distribute a standalone application for my students to use. I have only incrementally updated it to work with newer versions of Max and OSX, but have never given it a thorough overhaul.
I decided to release the code now because I get so many questions about the program. Even though there are several good alternatives out there, a lot of people download the application each month, and I get plenty of positive feedback from happy users. I also receive bug reports and the occasional feature request. While I do not really have time to update the patch myself, hopefully someone else might pick it up and improve it.
If none of the above made sense, here is a little screencast showcasing some of the functionality of MultiControl:
I was involved in two papers, the first a Jamoma-related paper called “Flexible Control of Composite Parameters in Max/MSP” (PDF), written by Tim Place, Trond Lossius, Nils Peters and myself. Below is a picture of Trond giving the presentation. The main point of the paper is the suggestion that parameters should have properties and methods. This is both a general proposal and a specific one, which we have started implementing in Jamoma using OSC.
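To make the idea concrete, here is a minimal sketch (in Python, not Max/Jamoma, and with illustrative names only) of what it means for a parameter to carry its own properties and to be addressable through an OSC-style namespace: the bare address sets the value, while sub-addresses set properties such as ramp time or range.

```python
class Parameter:
    """A parameter with its own properties, addressable via OSC-style paths.

    This is an illustrative sketch of the paper's idea, not Jamoma's
    actual implementation or syntax.
    """

    def __init__(self, name, value=0.0):
        self.name = name
        self.value = value
        # Property namespace attached to the parameter itself
        self.properties = {"range": (0.0, 1.0), "ramp": 0}

    def dispatch(self, address, *args):
        # "/gain" sets the value; "/gain/ramp" sets the 'ramp' property
        parts = address.strip("/").split("/")
        if len(parts) == 1:
            lo, hi = self.properties["range"]
            self.value = min(max(args[0], lo), hi)  # clip to declared range
        else:
            self.properties[parts[1]] = args[0]


gain = Parameter("gain")
gain.dispatch("/gain/ramp", 200)   # set a property of the parameter
gain.dispatch("/gain", 1.5)        # value is clipped by the range property
```

The point is that behaviours like ramping and range-clipping live with the parameter rather than being re-implemented in every patch that uses it.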
The second paper, “A Multilayered GDIF-Based Setup for Studying Coarticulation in the Movements of Musicians” (PDF), was written by Kristian Nymoen, Rolf Inge Godøy and myself. It presented how we are currently using the Sound Description Interchange Format (SDIF) for the storage of GDIF data. This helps solve a number of the challenges we have previously experienced in synchronising data, audio and video with different (and varying) sampling rates and resolutions.
There are many more pictures from the conference on Flickr.
micro-OSC (uOSC) was made public yesterday at NIME:
micro-OSC (uOSC) is a firmware runtime system for embedded platforms designed to remain as small as possible while also supporting evolving trends in sensor interfaces, such as regulated 3.3 V high-resolution sensors, mixed analog and digital multi-rate sensor interfacing, and data formats wider than 8 bits.
uOSC supports the Open Sound Control protocol directly on the microprocessor, and the completeness of this implementation serves as a functional reference platform for research and development of the OSC protocol.
The design philosophy of micro-OSC is “by musicians, for musicians”—it is used at CNMAT as a component in prototypes of new sensor-based musical instruments as well as a research platform for the study of realtime protocols and signal-quality issues related to musical gestures.
I have only skimmed the NIME paper so far, but one interesting aspect is their focus on implementing OSC bundles with time tags, something rarely found in OSC applications. I am looking forward to testing this on the CUI.
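For readers unfamiliar with bundles: the OSC 1.0 specification wraps messages in a bundle that carries a 64-bit NTP-style time tag, telling the receiver when the enclosed messages should take effect. Here is a minimal Python sketch of that wire layout (my own illustration of the spec, not uOSC's code; the address and value are made up):

```python
import struct

def pad4(b: bytes) -> bytes:
    """Pad a byte string with NULs to a multiple of 4 bytes."""
    return b + b"\x00" * (-len(b) % 4)

def osc_string(s: str) -> bytes:
    # OSC strings are NUL-terminated, then padded to a 4-byte boundary
    return pad4(s.encode("ascii") + b"\x00")

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    typetags = "," + "f" * len(floats)
    body = b"".join(struct.pack(">f", x) for x in floats)
    return osc_string(address) + osc_string(typetags) + body

def osc_bundle(timetag: int, *elements: bytes) -> bytes:
    """Encode an OSC bundle: '#bundle' header, 64-bit time tag,
    then each element preceded by its int32 byte count."""
    out = osc_string("#bundle") + struct.pack(">Q", timetag)
    for e in elements:
        out += struct.pack(">i", len(e)) + e
    return out

# A bundle with the special "immediately" time tag (1) holding one message
msg = osc_message("/sensor/accel", 0.5)
bundle = osc_bundle(1, msg)
```

Because the time tag travels with the data, a receiver can schedule the enclosed messages precisely instead of acting on network arrival time, which is exactly what matters for the signal-quality questions the paper raises.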