One thing that has occurred to me in recent years is how the new international trend of developing music controllers and instruments, most notably seen at the annual NIME conferences, challenges many traditional roles in music. The traditional Western view has been one of a clear separation between instrument constructor, composer, and performer: the constructor makes the instrument, the composer writes the score, the performer plays the score on the instrument, and the perceiver experiences the performance, as illustrated in the figure below.
However, as we often see in the community surrounding the NIME conferences, many people take on all three of these roles themselves: they make their own instruments, compose the music, and perform it themselves. This trend also challenges the traditionally separate concepts of instrument and composition. Using various types of neurophysiological, physiological, or biomechanical sensors, performers may themselves become part of the instrument. Similarly, the instrument may become part of the composition through various types of algorithmic processing. The perceivers may also become part of both the instrument and the composition in systems based on audience participation and collaborative performance. As such, the notion of the traditional concert is changing, since many “instruments” and “compositions” may be used as installations in which the perceivers take an active part. In this way perceivers are turned into performers, and the composers end up as perceivers of the performance.
I find this change of roles exciting, but it also challenges traditional (music) institutions that are built around the very idea of separating all these elements. So it is perhaps not too surprising that a lot of NIME activity happens outside traditional music arenas. I have no empirical evidence for this, but my feeling is that more people are developing, composing, and performing with NIMEs in computer science departments, architecture schools, fine art academies, or entirely outside any institution, than within music academies. It will be interesting to see whether this changes over the years, and whether we will see more interdisciplinary work within the musical ecosphere as well.
I have moved my web pages back and forth between many servers and domains over the years, and each time something breaks and/or disappears. This is particularly the case for my various projects, and I realized that more or less all the links on my projects page were broken. Rather than trying to update that page, I have decided to start using my blog as an archive for projects, tagging the posts accordingly. So over the coming months I will slowly start adding historic blog posts, trying to date them to the time when I first published the content.
First out of these historic documents is the project Laser Dance (2001), which was the first project in which I explored interactivity in performance. The interactivity was based on a very simple solution: a single IR sensor pointing in the same direction as a laser beam. Even though it only gave a digital signal (motion on/off), it had a strong visual (and auditory) impact. A good reminder that a simple solution often works best!
Flickr has opened up for uploading videos, or rather, what they call “long photos”. As such, they are not trying to compete with YouTube or Vimeo, but rather making it possible to upload videos that are closer to a photograph than to a movie (i.e. something with a narrative). I like this approach, and it resonates with how I often record video as if it were a photograph.
The difference between what I would call a photo video and a movie video can be seen as analogous to the difference between music composition/production and soundscaping. Composition/production is about organizing sonic events in time; soundscaping and field recording are about capturing and reproducing sound.
Obviously, drawing distinct borders between photo videos and movie videos makes no sense. After all, a video recording uses time as an element, no matter what is being recorded. And the history of photography and film is full of experimentation on the borders between the two. But there are important conceptual differences and practical considerations at play when taking a still picture rather than shooting a movie. It will be interesting to see how people approach this, now that adding “long photos” to Flickr is a reality.
This post, along with several other recent and forthcoming ones, has been lying in the drafts folder of my blog writing software (MarsEdit) for a while (some for more than 4 years…). I am currently going through the drafts one by one, deleting most of them but also posting a few. Here is one I started writing back in 2009:
Alex Payne has published a list of rules for computing happiness. I don’t agree with all of them, but many of them resonate with my own thoughts. Here is a condensed list, based on the things I find most important:
Use as little software as possible.
Use software that does one thing well; do not use software that does many things poorly.
Do not use software that must sync over the internet to function.
Use a plain text editor that you know well. Not a word processor, a plain text editor.
Do not use software that’s unmaintained.
Pay for software that’s worth paying for, but only after evaluating it for no less than two weeks.
Keep as much as possible in plain text. Not Word or Pages documents, plain text.
For tasks that plain text doesn’t fit, store documents in an open standard file format if possible.
The last points in particular, about using plain text files rather than a bunch of proprietary formats, are something I have become more concerned about recently.