New publication: Performing the Electric Violin in a Sonic Space

I am happy to announce that a paper I wrote together with Victoria Johnson has just been published in Computer Music Journal. The paper is based on the experiences that Victoria and I gained while working on the piece Transformation for electric violin and live electronics (see video of the piece below).

A. R. Jensenius and V. Johnson. Performing the electric violin in a sonic space. Computer Music Journal, 36(4):28–39, 2012.

This article presents the development of the improvisation piece Transformation for electric violin and live electronics. The aim of the project was to develop an “invisible” technological setup that would allow the performer to move freely on stage while still being in full control of the electronics. The developed system consists of a video-based motion-tracking system, with a camera hanging in the ceiling above the stage. The performer’s motion and position on stage are used to control the playback of sonic fragments from a database of violin sounds, using concatenative synthesis as the sound engine. The setup allows the performer to improvise freely together with the electronic sounds being played back as she moves around the “sonic space.” The system has been stable in rehearsal and performance, and the simplicity of the approach has been inspiring to both the performer and the audience.
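The core idea of a “sonic space” — stage position selecting sound fragments — can be sketched in a few lines. The grid size, file names, and class below are illustrative assumptions, not the actual system from the paper:

```python
class SonicSpace:
    """Toy model of a 'sonic space': stage positions mapped to sound fragments.

    Each fragment occupies a cell in a grid laid over the stage; the
    performer's tracked (x, y) position selects which fragment to play.
    Illustrative sketch only, not the system described in the paper.
    """

    def __init__(self, fragments, grid=(4, 4)):
        self.fragments = fragments   # list of fragment identifiers
        self.cols, self.rows = grid

    def fragment_at(self, x, y):
        """Map a normalised stage position (x, y in 0..1) to a fragment."""
        col = min(int(x * self.cols), self.cols - 1)
        row = min(int(y * self.rows), self.rows - 1)
        index = (row * self.cols + col) % len(self.fragments)
        return self.fragments[index]

space = SonicSpace([f"violin_{i:02d}.wav" for i in range(16)])
print(space.fragment_at(0.1, 0.1))  # violin_00.wav (upper-left cell)
print(space.fragment_at(0.9, 0.9))  # violin_15.wav (lower-right cell)
```

In the real setup a concatenative synthesis engine would then stream the selected fragment; here the mapping simply returns an identifier.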

The PDF will be available in the University of Oslo public repository after the six-month embargo. Until then, it is available through either MIT Press or Project MUSE.

BibTeX entry
@article{jensenius2012performing,
  Author = {Jensenius, Alexander Refsum and Johnson, Victoria},
  Journal = {Computer Music Journal},
  Number = {4},
  Pages = {28--39},
  Title = {Performing the Electric Violin in a Sonic Space},
  Volume = {36},
  Year = {2012}}

Video of the piece Transformation.

Performing with the Norwegian Noise Orchestra

Yesterday, I performed with the Norwegian Noise Orchestra at Betong in Oslo, at a concert organised by Dans for Voksne. The orchestra is an ad-hoc group of noisy improvisers, and I immediately felt at home. The performance lasted for 12 hours, from noon to midnight, and I performed for two hours in the afternoon.

For the performance I used my Soniperforma patch, based on the sonifyer technique and the Jamoma module I developed a couple of years ago (jmod.sonifyer~). The technique is based on creating a motion image from the live camera input (the webcam of my laptop in this case), using this to draw a motiongram over time, which in turn is converted to sound through an “inverse FFT” process.
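The pipeline described above — motion image, motiongram, inverse FFT — can be sketched as follows. All parameters (FFT size, random phase, the way the profile is resampled) are my assumptions for illustration, not the actual jmod.sonifyer~ implementation:

```python
import numpy as np

def motiongram_to_sound(frames, frame_len=1024):
    """Sonify video motion, in the spirit of the sonifyer technique.

    frames: (n_frames, height, width) grayscale video as a float array.
    Each motion image (absolute frame difference) is collapsed to a
    vertical profile; the profiles stacked over time form a motiongram,
    and each profile is treated as a magnitude spectrum and turned into
    audio with an inverse FFT (random phase). Illustrative sketch only.
    """
    audio = []
    rng = np.random.default_rng(0)
    for prev, cur in zip(frames[:-1], frames[1:]):
        motion = np.abs(cur - prev)        # motion image
        profile = motion.mean(axis=1)      # collapse rows -> one column
        # Resample the profile onto the bins of a one-sided spectrum
        bins = np.interp(np.linspace(0, 1, frame_len // 2 + 1),
                         np.linspace(0, 1, len(profile)), profile)
        phase = rng.uniform(-np.pi, np.pi, len(bins))
        spectrum = bins * np.exp(1j * phase)
        audio.append(np.fft.irfft(spectrum, n=frame_len))
    return np.concatenate(audio) if audio else np.zeros(0)

# A short synthetic clip: 10 frames of 64x64 noise "video"
clip = np.random.default_rng(1).random((10, 64, 64))
samples = motiongram_to_sound(clip)
print(samples.shape)  # (9216,) -- 9 frame pairs x 1024 samples
```

Vertical position in the image maps to frequency, so motion high in the frame produces high-pitched sound and motion low in the frame produces low-pitched sound.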

In the performance I experimented with how different types of video filters and effects influenced the sonic output. The end result was, in fact, quite noisy, as it should be at a noise performance.

To document my contribution, I have made a quick and dirty edit of some of the video recordings I did during the performance. Unfortunately, the audio recorded by the cameras does not do justice to the excellent noise in the venue, but it gives an impression of what was going on.

Transformation on YouTube

Victoria Johnson has posted a video of the performance of our piece Transformation on YouTube:

The video is from Victoria’s final performance as part of her research fellowship in the arts (PhD-equivalent), which happened Monday 28 March 2011 at the Norwegian Academy of Music.

As I wrote earlier this year:

Transformation is a piece where we use video analysis to control sound selection and spatialisation. We have been developing the setup and the piece over the last couple of years, and have performed variations of it at MIC, the Opera house, and at the music academy last year.

Musical Objects, Action Sound Couplings and Open Form

From the Open Form workshop
My setup at the Open Form workshop

Participating in the Open Form rehearsals and workshops has been very interesting (as previously mentioned here). One thing has puzzled me over the last few days: the lack of focus on musical objects. I use musical object to denote a coherent entity consisting of sounding objects (in a Schaefferian sense) together with all the other modalities (in my case particularly visual and haptic features). The musical object is a rather short entity, typically in the range of 1–5 seconds, but sometimes shorter or longer. After reading Schaeffer, Stern, Godøy, and others, and based on my own work on short-term music recognition, I have come to believe that this is the most interesting unit in terms of both the performance and the perception of music.

This is one of the reasons I find the ideals of the open form movement fascinating. In one way, open form is all about allowing for controlled improvisation with a focus on space and listening. The scores often lay out a palette of effects and musical qualities to be used. As such, each of these entities can be thought of as a musical object. This is also often how musicians conceptualise musical improvisation. What I have found, though, is that people still tend to focus more on the form than on the object. This surprises me, because of all musical styles, I would assume that form aspects would be the least important in open form.

Another thing I have been puzzled about is the breakdown of action-sound couplings. Such couplings can range from natural (as we are used to from the acoustic properties of all the objects surrounding us in nature) to abstract (as we often find in electronic devices). Every object has a certain action-sound characteristic: a palette of possible interaction modes and sounds. This is what governs our listening. However, in experimental music, performers increasingly tend to extend the palette of their instruments. This can be done acoustically, e.g. with a prepared piano, but also electronically, e.g. using various sound effects. In many cases such extensions become standard; the most obvious example is the electric guitar, where distortion pedals and other types of effects have long been part of the standard action-sound repertoire. This also makes it easy for everyone to understand what is happening when a distorted sound appears. Typically, the guitarist will also step on the pedal, so that everyone is prepared for the new sound to appear.

However, in electronic instruments there are few, or no, action-sound characteristics to choose from, leaving both the performer(s) and perceivers trying to look for couplings that work. This is demanding for everyone since it requires a lot of mental effort to continuously organise new action-sound couplings.

Master exam concert

Last week I performed my master exam concert at the Department of Music and Theatre, University of Oslo. The program consisted of improvisations for piano and live electronics. Different MIDI, audio, and video processing techniques were used. Here I describe the different pieces.

Metrosus (installation)

I always find it sad that there is no (musical) sound when you arrive at a concert hall. This installation is based on a series of random functions that will, in theory, play “new” sound for years. People passing by interact with the installation through an infrared “switch”.
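The idea of random functions that can in principle play “new” sound for years might look something like this minimal sketch: a seeded random walk over a pentatonic scale that yields an endless stream of note events. The scale, ranges, and durations are my own illustrative choices, not the original patch:

```python
import random

def endless_melody(seed=None, scale=(0, 2, 4, 7, 9)):
    """Generator yielding an endless stream of (pitch, duration) events.

    A random walk over a pentatonic scale: in theory it can run for
    years without literal repetition. Illustrative sketch of the idea
    behind the installation, not the original implementation.
    """
    rng = random.Random(seed)
    degree, octave = 0, 5
    while True:
        degree = (degree + rng.choice([-2, -1, 1, 2])) % len(scale)
        octave = min(7, max(3, octave + rng.choice([-1, 0, 0, 1])))
        pitch = 12 * octave + scale[degree]      # MIDI note number
        duration = rng.choice([0.25, 0.5, 1.0])  # seconds
        yield pitch, duration

gen = endless_melody(seed=42)
events = [next(gen) for _ in range(4)]  # first four note events
```

An infrared sensor could then simply gate or modulate this stream when someone passes by.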


It is incredible how many exciting sounds one can get from a piano, and mallets are a nice change from playing on the keys. The computer helps with temporal adjustments and background sounds.

Contrasting Thoughts

An improvisation for piano and reactive video animation.

Skeiv Halling

When I studied in the US, I was asked to play Norwegian folk music in a concert. The best I could come up with was an improvisation on Norsk, op. 12 no. 6 by Edvard Grieg. This is a version with some MIDI transformations.

Random Piano

Every pianist’s nightmare is that the keys change position while playing; that is what happens here, allowing for a different type of improvisation.
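One simple way to make the keys “change position” is to randomly permute the incoming MIDI note numbers before they reach the synthesis engine. This is an illustrative sketch of the idea, not the patch used in the concert:

```python
import random

def scrambled_keyboard(seed=None, low=21, high=108):
    """Return a mapping that randomly permutes the piano's MIDI notes.

    Pressing a key then sounds a different, randomly assigned pitch.
    The 88-key range (MIDI 21-108) and seeding are assumptions for
    illustration, not the original concert patch.
    """
    notes = list(range(low, high + 1))
    shuffled = notes[:]
    random.Random(seed).shuffle(shuffled)
    return dict(zip(notes, shuffled))

remap = scrambled_keyboard(seed=1)
# Middle C (MIDI 60) now triggers whatever note it was reassigned to:
print(remap[60])
```

Reshuffling mid-performance (e.g. every few bars) would give the continually shifting keyboard described above.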


This piece was initially inspired by Spain by Chick Corea but has turned into something completely different.


The piece is based on short recorded sound sequences chopped up and played over four speakers.


Thanks to all my previous piano teachers, and in particular Anne Eline Risnæs (UiO), Misha Alperin (NMH), and Bevan Manson (University of California, Berkeley). Thanks also to my computer music teachers Edmund Campion and David Wessel (CNMAT, UC Berkeley) and Asbjørn Flø (NOTAM). Finally, thanks to Gunnar Flåtten, Rolf Inge Godøy, and Henrik Sundt for various assistance with the concert.