Imagine Betong full of people who, simply by being present, help to define both the form and the content of a communion service. This is Interaktiv messe, a non-linear multimedia service arranged by Norges kristelige studentforbund on Sunday 29 August at 21:00 at Betong.
Interaktiv messe joins the series of experimental masses held by Norges kristelige studentforbund. This time, new technology is used to change both the form and the content of the service. The idea is that neither structure nor content is fixed in advance. All the content is stored in a network system reminiscent of the internet. This creates a flexible system that produces a different mass every time it is held. Through a range of sensors and video analysis systems around the room, those present influence the system either passively or actively.
The traditional service liturgy is rooted in a linear structure that may feel alien to people who do not normally attend church, or may lull regular churchgoers into a daze where they no longer follow what is actually happening. The goal has therefore been to focus on the content and allow it to be constructed in several different ways. Instead of planning the order in which things happen, we define the content we want, and the order is then “created” along the way. Because the different elements in the system are weighted relative to each other, nothing happens completely at random, but there is always room for several “solutions”.
It is important that the different elements can exist in parallel and influence each other mutually.
There is much talk about making masses that are inclusive, alternative, and so on, but often only single elements of the mass are challenged. Is it possible to be radical in both content and form? The challenge lies in creating a system that satisfies the formal requirements of the mass while challenging traditional conceptions. “Working on Interaktiv messe has been very different from what I am used to,” says Margrete Hovland, student chaplain in Forbundet. “Instead of working with the traditional structures, I have had to think more about creating opportunities for improvisation. I believe the result may be thought-provoking for many.”
We have worked on breaking down the mass. The result is that the words of institution remain as the central structural element. Based on these, we have compiled a list of words that can be associated with the structural words, and from this list we have in turn extracted a few categories that summarise the message.
The room is filled with different types of sensors that continuously track the movement of the crowd and the actions of individuals. This forms the basis for the structure and content of the mass. The material prepared in advance is stored so that it can be combined in countless different ways, based on signals from the interactive elements. The system is programmed with the graphical programming environment MAX/MSP/Jitter.
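The weighted-selection idea described above can be sketched in a few lines of Python. The element names, weights, and sensor mapping below are invented for illustration; the actual system was built in MAX/MSP/Jitter and is not shown in the text.

```python
import random

# Hypothetical content elements, weighted relative to each other
# (names and weights are illustrative, not from the actual system).
elements = {
    "reading": 3.0,
    "music": 2.0,
    "prayer": 2.5,
    "video": 1.0,
}

def sensor_activity():
    """Stand-in for the sensors tracking movement in the room."""
    return random.random()  # 0 = still room, 1 = much movement

def next_element(weights, activity):
    """Pick the next element; here, crowd movement boosts the 'video' weight."""
    adjusted = dict(weights)
    adjusted["video"] *= 1.0 + activity  # an active crowd makes video more likely
    names = list(adjusted)
    return random.choices(names, weights=[adjusted[n] for n in names])[0]

# Because the choice is weighted but random, every run yields a
# different "order of service" -- nothing is fixed, nothing is arbitrary.
order = [next_element(elements, sensor_activity()) for _ in range(5)]
print(order)
```

The weighting is what keeps the result from being pure chance: frequent elements stay frequent, but the sequence differs every time, just as described above.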
Working with choreographer Mia Habib, I created the piece Laser Dance, which was shown on 30 November and 1 December 2001 at the Norwegian Academy of Ballet and Dance in Oslo.
The theme of the piece was “Light”, and the choreographer wanted to use direct light sources as the point of departure for the interaction. Mia had decided to work with laser beams, one along the back of the stage and one on the diagonal, facing the audience. The idea was to trigger sound when the dancers passed through the laser beams, so that the sound would be an aural representation of the “broken” light. My part was to help with the lasers and the interactive sound.
First of all, we needed to get some lasers. We tested with a normal laser pen, but its thin beam was nearly invisible on stage. Luckily, I happened to have access to a professional laser used for physics experiments. With some smoke on stage, this laser was bright and clear. Since we only had one good laser, we used a narrowed-down spotlight as the light beam at the back of the stage.
It soon became clear that the visible beams were not suited for motion detection. Henrik Sundt at NoTAM suggested using pairs of IR senders/receivers instead. When the sender and receiver are in contact with each other nothing happens, but as soon as the signal is broken the receiver sends a pulse. The sensors were quite cheap consumer electronics, and when we tested the equipment at NoTAM we found that the reaction time was somewhat slow. This made us nervous, because the whole project would not be worth much if the sensors could not detect fast movements from the dancers. When we finally got everything set up on stage, we were happy to find that they worked quite well. A minor problem was the latency before the receiver regained contact with the sender after a dancer moved out of the beam. In the end, the fact that the sound turned on almost immediately mattered more than the second of sound that lingered after passing through the beam.
Øyvind Hammer and Henrik Sundt helped with getting the NoTAM MIDI controller to work with the sensors. The cords from the IR receivers were connected to the inputs of the controller. When a signal was detected from one of the receivers, the controller sent a MIDI message with the corresponding channel (1 or 2) and a note-off message. This signal went into MAX/MSP, and we were finally ready to start experimenting with the sound.
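For illustration, this is roughly how such a channel-tagged note-off could be decoded from raw MIDI bytes. This is a sketch of the general MIDI format, not the controller's firmware; in the actual setup the decoding happened inside MAX/MSP.

```python
def decode_midi(status, data1, data2):
    """Decode a raw 3-byte MIDI message into (type, channel, note, velocity).

    Channels are reported 1-16, as in the text; on the wire they are 0-based
    in the lower nibble of the status byte. Only note messages are handled.
    """
    kind = status & 0xF0           # upper nibble: message type
    channel = (status & 0x0F) + 1  # lower nibble: channel
    if kind == 0x80:
        return ("note-off", channel, data1, data2)
    if kind == 0x90:
        # A note-on with velocity 0 is conventionally treated as a note-off.
        if data2 == 0:
            return ("note-off", channel, data1, 0)
        return ("note-on", channel, data1, data2)
    return ("other", channel, data1, data2)

# A note-off on channel 2, as the controller would send when beam 2 is broken:
print(decode_midi(0x81, 60, 0))  # ('note-off', 2, 60, 0)
```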
From the beginning, Mia knew that she wanted two bright and different sounds. On the one hand, they should be almost as pure as a sine tone, resembling the bright light beams. On the other hand, they needed to be “living” and interesting for the dancers to improvise with while in the beam. Since a side drum would also be involved, the sounds should blend with it.
This way of musical thinking was novel to me. I am used to working with sounds as self-contained aesthetic objects, and often from a linear perspective. Here I was presented with the task of making what could be called “hypertext” sound with clear limitations. The sounds themselves would constitute only a part of the whole performance and would be more a tool for the dancers to work with than aesthetically pleasing objects in themselves. After I had made a couple of samples, we sat down and tweaked the parameters in the patch. Finally, we arrived at two sounds that we both found pleasing, interesting and in line with the initial requirements.
When I started building the patch, it was with the idea that it should be self-contained and easy to use. At the same time, it should be powerful enough to allow rapid changes in the sound. This way Mia could operate the whole system herself during rehearsals with the dancers. The final patch is seen in the picture above. Two boxes labelled ”Laser 1” and ”Laser 2” are turned on and off by the IR sensors. They can also be used on-screen for rehearsals and testing. The gate switch on ”Laser 2” was used at the end of the show, when only this sound should be turned off. Frequencies and random generators can be turned on and off and adjusted easily. As shown in the picture on the next page, the patch hides a world of patch cords and objects, the result of constant alterations. In the following, I will briefly go through the various parts of the patch:
We recall that the MIDI controller sent note-offs when the IR signal was broken. As can be seen from the patch, MIDI is picked up by a ”notein” object. First, the two different signals are separated by the ”select” object. Then an if-test checks whether the beam is on (broken) or off (unbroken). If the beam is broken, the volume is set to 50; otherwise it is set to 0.
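The routing just described can be mirrored in a few lines of Python. This is a sketch of the logic in the notein/select/if chain, not the patch itself.

```python
# Volume state for the two laser sounds, indexed by MIDI channel.
volumes = {1: 0, 2: 0}

def handle_beam(channel, broken):
    """Mirror the patch logic: broken beam -> volume 50, restored beam -> 0."""
    if channel not in volumes:
        return  # like 'select' in the patch, ignore other channels
    volumes[channel] = 50 if broken else 0

handle_beam(1, True)   # a dancer enters beam 1
handle_beam(2, False)  # beam 2 is untouched
print(volumes)  # {1: 50, 2: 0}
```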
Each sound is made from a set of three added sinusoids represented with the ”cycle~” object. All of the frequencies can be adjusted directly on-screen.
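In Python, the same additive computation could be sketched as follows. The frequencies and scaling are placeholders, not the values used in the patch.

```python
import math

def three_sine_sample(t, freqs=(440.0, 445.0, 880.0), amp=1 / 3):
    """One sample of three added sinusoids at time t (seconds).

    The frequencies are illustrative; each partial is scaled by 1/3
    so that the sum stays within [-1, 1].
    """
    return sum(amp * math.sin(2 * math.pi * f * t) for f in freqs)

sr = 44100  # sample rate in Hz
samples = [three_sine_sample(n / sr) for n in range(sr)]  # one second of sound
print(min(samples) >= -1.0 and max(samples) <= 1.0)  # True
```

Closely spaced partials such as the first two above produce slow beating, which is one simple way to make an almost-pure tone feel alive.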
The ”rand~” object is used to make the sound more vibrant, through a random function. To make this even more “lively”, a random generator controlled by a metronome is changing the ”rand~” seed at given intervals.
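The reseeding scheme can be sketched like this; the interval and value range are invented for the example.

```python
import random

def reseeded_stream(n, reseed_every=10):
    """Generate n noise values, picking a fresh seed every reseed_every
    values -- like the metronome periodically changing the ”rand~” seed."""
    out = []
    rng = random.Random()
    for i in range(n):
        if i % reseed_every == 0:
            rng.seed(random.randrange(1_000_000))  # metronome tick: new seed
        out.append(rng.uniform(-1.0, 1.0))
    return out

values = reseeded_stream(30, reseed_every=10)
print(len(values))  # 30
```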
A chaos slider at the left was added to increase the tension in both sounds, controlled by an expression pedal.
A “reset” button is available for quickly regaining the preferred settings.
It was less than a week before the premiere that we finally managed to get everything together. After gathering the equipment and making the patch, it was very exciting to test everything in practice and see how it worked with the dancers and the drummer.
Three dancers (two male, one female) were on stage, “trapped” within the light beams. During the 15-minute performance, they worked their way from silence to a sound climax and back to silence. Spotlights were used to create massive, cascading effects between moments of total darkness. The interactive sounds were used partly in short sequences with the dancers jumping in and out of the beams, and partly in longer sequences where the dancers moved “inside” the beams. The two short videos on the CD give some idea of the show.
Luckily, the Mac did not crash, and the sound turned on and off as it should every night. The only other problem was that the sensors had to be repositioned before each performance, because the stage had to be cleared after every show.
SYNCHRONICITY AND CROSSMODALITY
After working on this project, it is tempting to draw a parallel to Egil Haga’s (1999) Master’s thesis on sounds and actions. He uses the term synchronicity for the perception that a physical action and a sound gesture are generated by the same event. He points out that most films contain soundtracks generated in studios, and he is critical of examples where the synchronicity is imprecise and poor. Further, he mentions that cross-modal perception results in a stronger overall stimulus: for example, it is easier to understand a person talking if you can see the lip movements.
In his last chapter, Haga analyses a dance-music performance, pointing out the excellent synchronicity between the music and the gestures used by the dancers. After working on the Laser Dance project, I have a much better understanding of his comments. Even though he is discussing classical music and dance movements based on sound gestures, I see a clear line to the interactive sound I have been working with. The main difference is that in the Laser Dance project, the sound is more a tool for the dancers to work with than a musical piece.
Interactive music controlled by sensors really puts the dancers in control of the whole performance. Through body gestures, they control the sound themselves, making the synchronicity excellent (at least with efficient sensors…), and the result is a performance where dance and sound blend together in harmony. This is not always the case in other dance performances with “linear” music. No doubt, this project has opened my eyes to a totally new way of thinking about music, and it has also given me a lot of ideas for new projects involving dance and interactive sound.
Last week I performed my master's exam concert at the Department of Music and Theatre, University of Oslo. The program consisted of improvisations for piano and live electronics, using different MIDI, audio, and video processing techniques. Here I describe the different pieces.
I always find it sad that there is no (musical) sound when you arrive at a concert hall. This installation is based on a series of random functions that will, in theory, play “new” sound for years. People passing by interact with the installation through an infrared “switch”.
It is incredible how many exciting sounds one can get from a piano, and mallets are a nice change from playing on the keys. The computer helps with temporal adjustments and background sounds.
An improvisation for piano and reactive video animation.
When I studied in the US, I was asked to play Norwegian folk music in a concert. The best I came up with was an improvisation of Norsk, opus 12 no 6 by Edvard Grieg. This is a version with some MIDI transformations.
Every pianist’s nightmare would be that the keys change position while playing, which happens here, allowing for a different type of improvisation.
This piece was initially inspired by Spain by Chick Corea but has turned into something completely different.
The piece is based on short recorded sound sequences chopped up and played over four speakers.
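The chopping idea can be illustrated as splitting a recorded sequence into short chunks and redistributing them over the speakers. This is a sketch only; the chunk size and the round-robin speaker routing are assumptions, not the piece's actual logic.

```python
import random

def chop_and_route(samples, chunk_size, n_speakers=4):
    """Split a sound (list of samples) into chunks, shuffle them, and
    assign each chunk to one of n_speakers in round-robin fashion."""
    chunks = [samples[i:i + chunk_size]
              for i in range(0, len(samples), chunk_size)]
    random.shuffle(chunks)
    # Return (chunk, speaker) pairs, speakers numbered 1..n_speakers.
    return [(chunk, (i % n_speakers) + 1) for i, chunk in enumerate(chunks)]

sound = list(range(20))            # stand-in for recorded samples
routed = chop_and_route(sound, chunk_size=5)
print([spk for _, spk in routed])  # [1, 2, 3, 4]
```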
Thanks to all my previous piano teachers, and in particular Anne Eline Risnæs (UiO), Misha Alperin (NMH), and Bevan Manson (University of California, Berkeley). Thanks also to my computer music teachers Edmund Campion and David Wessel (CNMAT, UC Berkeley) and Asbjørn Flø (NOTAM). Finally, thanks to Gunnar Flåtten, Rolf Inge Godøy, and Henrik Sundt for various assistance with the concert.