I think Kurt Ralske puts it very well in “The Pianist: A Note on Digital Technique”:
“For the classical pianist, the tedium of endless hours of practicing scales takes on an aura of nobility; it’s a virtuous, character-building activity. Instead of practicing scales, the digital artist learns software and hardware, learns programming languages, learns the techniques of creating digital models of sound, image, information, and intelligence.”
I wonder when music technologists will be employed in orchestras as musicians.
The MIT Media Lab’s $100 Laptop project aims to make an affordable laptop for poor countries:
“The proposed $100 machine will be a Linux-based, full-color, full-screen laptop that will use innovative power (including wind-up) and will be able to do most everything except store huge amounts of data. This rugged laptop will be WiFi-enabled and have USB ports galore. Its current specifications are: 500MHz, 1GB, 1 Megapixel.”
I finally got around to downloading and trying ChucK: Concurrent, On-the-fly Audio Programming Language by Ge Wang. It feels a bit strange, but I guess I need to work a little more with it. The readme says something about graphical tools, and I’m looking forward to that.
CMMR, Pisa, Italy 26-28 September 2005
This was a rather small conference, with only about 40 participants, organised at the CNR in lovely Pisa. The topics presented were varied, but here, as at most other computer music conferences these days, there was a high percentage of music information retrieval presentations. I was there to present a short paper on building low-cost music controllers from hacked gamepads and homemade sensors, something I worked on while at McGill in the spring. A summary of things I found interesting:
- Mark Havryliv and Terumi Narushima, University of Wollongong, Australia, presented Metris, a Tetris-like game for music. Unfortunately, I arrived right after their presentation, but the concept seems very interesting.
- Laurent Pottier, GMEM, presented a microsound system implemented in Max/MSP, and different ways of controlling it. I am looking forward to the release of the objects.
- Leonello Tarabella, Pisa, played with his "air piano" system using video analysis. It worked very well considering the obvious problems with resolution and speed of video cameras.
- Carlos Guedes, NYU and Porto, presented a dance piece using the m-objects he presented at ICMC a couple of weeks ago. He has been focusing on the rhythmic aspects of dance movements and their implementation in music. Very nice!
- Philippe Guillemain, CNRS Marseille, presented work on transitions in reed instruments. This is perceptually very relevant, and it is strange that it has not received greater attention earlier.
- Giordano Cabral, Paris 6, presented something Francois Pachet covered very quickly at the S2S^2 summer school, and this time I actually understood more of it. It is about using the Extractor Discovery System (EDS) for recognition. EDS uses a genetic algorithm to automatically build extraction algorithms from a set of basic mathematical and signal-processing operators. For the user, this makes it possible to ask the system to develop different types of equations and have it find the best ones. Seems very interesting, and apparently it works.
- Markus Schedl, Johannes Kepler University, Linz, presented a web-mining paper in which they had been building artist ranking tables within various musical styles based on querying for artist pairs. The novel thing was their penalizing system, which avoids over-ranking artists whose names are similar to common words (Kiss, Prince, Madonna). I find it fascinating that such systems, completely ignorant of any music theory, manage to come up with results that seem very "correct" in terms of human classification.
- Rodrigo Segnini and Craig Sapp, CCRMA and CCARH, Stanford, presented the idea of making scoregrams from notation. This is basically a way of generating "spectrograms" of a symbolic signal, the point being to quickly visualize what is going on at different levels in the music. They have made them so that the analysis window size grows from bottom to top, giving a very detailed image at the bottom and only a single value at the top.
- Snorre Farner, NTNU, presented work on "naturalness" in clarinet playing, and jump-started a discussion on the concepts of naturalness, expressiveness, etc. Definitely a burning topic these days!
- Christophe Rhodes, Goldsmiths, London, had made a system for writing lute tablature. Kind of a niche thing, but it looked very neat!
- Mark Marshall, McGill, presented results from some preliminary tests on the usability of various sensors for musical applications. I think this is a very important topic, and I hope he continues to look into this. At the moment he has been focusing on pitch/melody related issues, but this should be extended to also cover rhythmical and timbral elements.
- Kristoffer Jensen, Aalborg, showed some very interesting examples of the boundaries between noise and tonal sound.
- Cynthia Grund, Odense, called for a panel on interdisciplinarity issues. Coming from music philosophy, she called for more cooperation between the technologies and the "traditional" humanities. Many people coming from a technical side would benefit from looking at recent issues in the humanities, and vice versa. What is quite clear is that most people working in the humanities have not realized the exponential growth of Music Information Retrieval in recent years, driven by strong commercial and application-based (internet queries) interests.
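Cabral's EDS description can be made concrete with a toy sketch of the genetic search. Everything below is my own guesswork, not EDS itself: the operator set, the fitness measure (correlation between the extracted feature and a label on invented data), and the mutation scheme are all invented stand-ins for illustration.

```python
import random
import statistics

random.seed(1)

# Invented toy dataset: noisy signals whose label is the (hidden) amplitude.
def make_signal(amp, n=64):
    return [amp * (random.random() - 0.5) for _ in range(n)]

data = [(make_signal(amp), amp) for amp in [0.1 * i for i in range(1, 21)]]

# A few basic operators the search can chain (stand-ins for EDS's real set).
ELEMENTWISE = {
    "abs":    lambda s: [abs(x) for x in s],
    "square": lambda s: [x * x for x in s],
    "diff":   lambda s: [b - a for a, b in zip(s, s[1:])] or [0.0],
}
REDUCERS = {
    "mean": lambda s: sum(s) / len(s),
    "max":  max,
    "std":  lambda s: statistics.pstdev(s),
}

def random_individual():
    """An individual is a short chain of elementwise ops plus one reducer."""
    chain = [random.choice(list(ELEMENTWISE)) for _ in range(random.randint(1, 3))]
    return chain, random.choice(list(REDUCERS))

def extract(ind, signal):
    chain, red = ind
    for op in chain:
        signal = ELEMENTWISE[op](signal)
    return REDUCERS[red](signal)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def fitness(ind):
    """How well does the extracted feature track the labels?"""
    feats = [extract(ind, s) for s, _ in data]
    labels = [y for _, y in data]
    return abs(pearson(feats, labels))

def mutate(ind):
    chain, red = ind
    chain = list(chain)
    if random.random() < 0.5 and chain:
        chain[random.randrange(len(chain))] = random.choice(list(ELEMENTWISE))
    else:
        red = random.choice(list(REDUCERS))
    return chain, red

def evolve(generations=30, pop_size=20):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the best half
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The search quickly settles on something like abs-then-mean or a standard deviation, which is exactly the kind of amplitude-tracking formula you would write by hand, which is the point of the approach.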
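Schedl's penalization, as I understood it, amounts to weighing an artist's co-occurrence with other artists against how common the name is on its own. A toy sketch with invented page counts, not the paper's actual formula, just the idea:

```python
# Invented page-count data: how often each artist name occurs alone,
# and how often two names co-occur on the same page.
solo = {"Kiss": 9_000_000, "Metallica": 800_000, "Slayer": 400_000}
pair = {
    ("Kiss", "Metallica"): 40_000,
    ("Kiss", "Slayer"): 20_000,
    ("Metallica", "Slayer"): 60_000,
}

def cooc(a, b):
    return pair.get((a, b)) or pair.get((b, a)) or 0

def rank(artists):
    """Score each artist by co-occurrence with the others, divided by its
    solo count: a name that is also a common word (huge solo count, like
    "Kiss") gets pushed down instead of dominating the ranking."""
    scores = {}
    for a in artists:
        together = sum(cooc(a, b) for b in artists if b != a)
        scores[a] = together / solo[a]
    return sorted(scores, key=scores.get, reverse=True)

print(rank(list(solo)))
```

Without the division, "Kiss" would top any raw-count ranking simply because the word is everywhere on the web.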
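The multi-resolution idea behind the scoregrams can be sketched like this. The note data, window scheme, and plain averaging below are my own invented stand-ins, not Segnini and Sapp's actual method:

```python
# Toy note sequence: one MIDI pitch per time step.
pitches = [60, 62, 64, 65, 67, 65, 64, 62, 60, 59, 60, 62]

def scoregram(values, levels=4):
    """Summarize `values` over windows that widen toward the top.

    Bottom row: one value per time step (window = 1, most detail).
    Top row: a single value covering the whole sequence.
    """
    n = len(values)
    rows = []
    for level in range(levels):
        # Window grows exponentially from 1 (bottom) to n (top).
        win = max(1, round(n ** (level / (levels - 1))))
        row = [sum(values[i:i + win]) / len(values[i:i + win])
               for i in range(0, n, win)]
        rows.append(row)
    return rows  # rows[0] = most detailed, rows[-1] = one value

# Print top row first, mirroring the bottom-to-top layout they described.
for row in reversed(scoregram(pitches)):
    print(["%.1f" % v for v in row])
```

Plotting the rows as colored strips instead of printing them would give the "spectrogram of a symbolic signal" look.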
ICMC, Barcelona, Spain 4-10 September 2005
Sunday 4 September
- I attended a workshop on audio mosaicing, which was more like a set of presentations by different people, but still interesting.
- Jason Freeman, PhD from Columbia, now at Georgia Tech, talked about a Java applet that creates a 5-second "thumbnail song" of your iTunes collection.
- Opening concert
- Chris Brown had composed a piece for the Reactable interactive table made at UPF. The table is very nice and responds quickly, but I felt there was a missing link in the relationships between the gestures made, the objects presented, and the sonic output.
- Jose Manuel Berenguer played sounds and visuals. I liked the beginning a lot, with a nice combination of granulated sounds and visual particle swarms.
- Ali Momeni’s installation "un titled" uses the new moother object, which makes it possible to access the Freesound database from within Max/MSP and PD. Ali used it to query for similar files and organize them in two-dimensional "sound spaces". A large mechanical construction controls the parameters via Wacom tablets. Nice concept, and I like the idea of making things bigger and heavier to use, but I had some problems with the mappings and with the concept of having to press the large sticks down into the ground to get new sounds.
Monday 5 September
- Fernando Lopez-Lezcano, CCRMA, Stanford, talked about Planet CCRMA and future issues. On a question on free software, he said something like "to me, free software is definitely not free".
- Norbert Schnell from IRCAM presented FTM, a nice collection of Max-objects for more advanced data handling in Max/MSP.
- Rosemary Mountain, Concordia / Hexagram, showed a setup for testing how people can organize visual and auditory stimuli. She used a wireless barcode reader.
- Ge Wang, Soundlab, Princeton, showed his ChucK programming language, a text-based music language, with some nice graphical add-ons. I’m very sorry I missed his "text-battle" with Nick Collins at the Off-ICMC.
Tuesday 6 September
- Vegard Sandvold, NOTAM, presented some promising results on the use of semantic descriptors of musical intensity. I tried the experiment when it was up and running, and I have some problems with the concept of forcing stimuli into predetermined categories. It would be interesting to do a set of similar experiments using a continuous scale instead. His system is currently used by NRK in the radio-intentiometer.
- Douglas Geers, Minnesota, presented a nice piece in the evening concert, with a violinist wearing glowing thread which he processed with Jitter.
Wednesday 7 September
Thursday 8 September
- Rui Pedro Paiva, University of Coimbra, Portugal, presented a method for extracting the melody from a polyphonic signal. Based on auditory filtering, and with no attempt to make it fast, they obtained an average performance of about 82% on a varied set of music.
- Geoffroy Peeters, IRCAM, presented a method for rhythm detection which seems very promising.
- Nick Collins, Cambridge, presented an overview of different segmentation algorithms.
- Xavier Serra, UPF, presented a nice overview of current music technology research, and called for a roadmap for future research.
Friday 9 September
- Eduardo Reck Miranda, Future Music Lab, Plymouth, showed some of his work using EEG to control music. They still have a long way to go, since the signals are weak and noisy, but they had managed to get people to control simple playback of sequences.
- Carlos Guedes, NYU / Porto, presented his m-tools, a small package of Max-objects developed for controlling musical rhythm from dance movements.
- Perry Cook, Princeton, showed tools.
- Jasch played a nice set at the Off-ICMC.