New article: Group behaviour and interpersonal synchronization to electronic dance music

I am happy to announce the publication of a follow-up study to our earlier paper on group dancing to EDM and our technical paper on motion capture of groups of people. In this new study we successfully tracked groups of 9–10 people dancing in a semi-ecological setup in our motion capture lab, and found several interesting patterns in how people synchronize with both the music and each other.

Solberg, R. T., & Jensenius, A. R. (2017). Group behaviour and interpersonal synchronization to electronic dance music. Musicae Scientiae.

The present study investigates how people move and relate to each other – and to the dance music – in a club-like setting created within a motion capture laboratory. Three groups of participants (29 in total) each danced to a 10-minute-long DJ mix consisting of four tracks of electronic dance music (EDM). Two of the EDM tracks had little structural development, while the two others included a typical “break routine” in the middle of the track, consisting of three distinct passages: (a) “breakdown”, (b) “build-up” and (c) “drop”. The motion capture data show similar bodily responses for all three groups in the break routines: a sudden decrease and increase in the general quantity of motion. More specifically, the participants demonstrated an improved level of interpersonal synchronization after the drop, particularly in their vertical movements. Furthermore, the participants’ activity increased and became more pronounced after the drop. This may suggest that the temporary removal and reintroduction of a clear rhythmic framework, as well as the use of intensifying sound features, have a profound effect on a group’s beat synchronization. Our results further suggest that the musical passages of EDM efficiently lead to the entrainment of a whole group, and that a break routine effectively “re-energizes” the dancing.


New publication: “How still is still? Exploring human standstill for artistic applications”

I am happy to announce a new publication titled How still is still? Exploring human standstill for artistic applications (PDF of preprint), published in the International Journal of Arts and Technology. The paper is based on the Sverm project, and was written and accepted two years ago. Academic publishing can sometimes take absurdly long, as this example shows, but I am happy that the publication is finally out in the wild.


We present the results of a series of observation studies of ourselves standing still on the floor for 10 minutes at a time. The aim has been to understand more about our own standstill, and to develop a heightened sensitivity for micromovements and how they can be used in music and dance performance. The quantity of motion, calculated from motion capture data of a head marker, reveals remarkably similar results for each person, and also between persons. The best results were obtained with the feet shoulder-width apart, locked knees, and eyes open. No correlation was found between the different types of mental strategies employed and the quantity of motion of the head marker, but we still believe that different mental strategies have an important subjective and communicative impact. The findings will be used in the development of a stage performance focused on micromovements.


Jensenius, A. R., Bjerkestrand, K. A. V., and Johnson, V. (2014). How still is still? exploring human standstill for artistic applications. International Journal of Arts and Technology, 7(2/3):207–222.


    Author = {Jensenius, Alexander Refsum and Bjerkestrand, Kari Anne Vadstensvik and Johnson, Victoria},
    Journal = {International Journal of Arts and Technology},
    Number = {2/3},
    Pages = {207--222},
    Title = {How Still is Still? Exploring Human Standstill for Artistic Applications},
    Volume = {7},
    Year = {2014}}

Laser dance

Working with choreographer Mia Habib, I created the piece Laser Dance, which was shown 30 November and 1 December 2001 at the Norwegian Academy of Ballet and Dance in Oslo.

The theme of the piece was “Light”, and the choreographer wanted to use direct light sources as the point of departure for the interaction. Mia had decided to work with laser beams, one along the back side of the stage and one on the diagonal, facing towards the audience. The idea was to get sound when the dancers went through the laser beams. This way the sound would be an aural representation of the “broken” light. My part was to help with the lasers and the interactive sound.

First of all we needed to get some lasers. We tested with a normal laser pen, but the thin beam was nearly invisible on stage. Luckily, I happened to have access to a professional laser used for physics experiments. With some smoke on stage, this laser was bright and clear. Since we only had one good laser, we used a narrowed-down spotlight as the light beam at the back of the stage.

It soon became clear that the visible beams were not suitable for motion detection. Henrik Sundt at NoTAM suggested using pairs of IR senders/receivers instead. As long as the sender and receiver are in contact with each other nothing happens, but as soon as the signal is broken the receiver sends a pulse. The sensors were quite cheap consumer electronics, and when we tested the equipment at NoTAM we found that the reaction time was somewhat slow. This made us nervous, because the whole project would be worth little if the sensors could not detect the dancers’ fast movements. When we finally got everything set up on stage, we were happy to find that they worked quite well. A minor problem was the time the receiver needed to regain contact with the sender after a dancer moved out of the beam. Still, the fact that the sound turned on almost immediately mattered more than a second of sound lingering after a dancer had passed the beam.
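The gating behaviour described above can be sketched in a few lines of Python. This is purely illustrative: the class name, the one-second release value and the polling interface are my assumptions, not part of the original hardware or patch. The sketch captures the asymmetry we observed: the gate opens the instant the beam is broken, but the sound lingers briefly after the dancer leaves the beam while the receiver regains contact.

```python
class BeamGate:
    """Illustrative model of the IR beam gate: instant attack, slow release.

    The ~1 s release time is taken from the text above; everything else
    is a hypothetical simplification of the real sensor behaviour.
    """

    def __init__(self, release_s: float = 1.0):
        self.release_s = release_s
        self._broken = False
        self._open_until = None  # time (s) until which the gate lingers open

    def update(self, beam_broken: bool, now: float) -> bool:
        """Return True while sound should play, given sensor state at `now`."""
        if beam_broken:
            self._broken = True          # attack: open immediately
            self._open_until = None
        elif self._broken:
            self._broken = False         # dancer just left the beam:
            self._open_until = now + self.release_s  # keep sounding briefly
        if self._broken:
            return True
        return self._open_until is not None and now < self._open_until
```

A dancer breaking the beam at t=0 and leaving at t=0.5 would thus keep the sound on until t=1.5, matching the "second of sound" we heard after each pass.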

Øyvind Hammer and Henrik Sundt helped with getting the NoTAM MIDI controller to work with the sensors. The cords from the IR receivers were connected to the inputs of the controller. When a signal was detected from one of the receivers, the controller sent a MIDI note-off message on the corresponding channel (1 or 2). This signal went into Max/MSP, and we were finally ready to start experimenting with the sound.
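The MIDI routing can be summarized in a short sketch, assuming standard MIDI 1.0 channel-message byte layout (status byte 0x8n for a note-off on channel n+1). The function names are mine; the original mapping lived inside the Max/MSP patch, not in code like this.

```python
# Sketch of the MIDI mapping described above: a note-off on channel 1 or 2
# means the corresponding IR beam has been broken. Function names are
# illustrative assumptions, not part of the original patch.

def parse_midi_message(msg: bytes):
    """Return (channel, is_note_off) for a 3-byte MIDI channel message."""
    status = msg[0]
    message_type = status & 0xF0       # high nibble: message type
    channel = (status & 0x0F) + 1      # low nibble: channel, 1-based
    is_note_off = message_type == 0x80 # 0x80 = note-off status
    return channel, is_note_off


def beam_broken(msg: bytes):
    """Map a note-off on channel 1 or 2 to the matching laser sound, else None."""
    channel, is_note_off = parse_midi_message(msg)
    if is_note_off and channel in (1, 2):
        return channel
    return None
```

For example, the raw bytes `0x80 0x3C 0x00` (note-off, channel 1) would switch on the first laser sound, while any other message is ignored.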


From the beginning, Mia knew that she wanted two bright and distinct sounds. On the one hand they should be almost as pure as a sine tone, resembling the bright light beams. On the other hand they needed to be “living” and interesting for the dancers to improvise with while in the beam. Since a side drum would also be involved, the sounds should blend well with it.

This way of musical thinking was novel to me. I am used to working with sounds as self-contained aesthetic objects, often from a linear perspective. Here I was presented with the task of making what could be called “hypertext” sound with clear limitations. The sounds themselves would constitute only a part of the whole performance, and would serve more as a tool for the dancers to work with than as aesthetically pleasing objects. After making a couple of samples, we sat down and tweaked the parameters in the patch. Finally we arrived at two sounds that we both found pleasing, interesting, and in line with the initial requirements.


When I started building the patch, the idea was that it should be self-contained and easy to use, yet powerful enough to allow rapid changes to the sound. This way Mia could operate the whole system herself during rehearsals with the dancers. The final patch is seen in the picture above. Two boxes labelled “Laser 1” and “Laser 2” are turned on and off by the IR sensors; they can also be triggered on-screen for rehearsals and testing. The gate switch on “Laser 2” was used at the end of the show, when only this sound should be turned off. Frequencies and random generators can easily be turned on and off and adjusted. As shown in the picture on the next page, the patch reveals a hidden world of patch cords and objects, the result of constant alterations. In the following I will briefly go through the various parts of the patch:

  1. We recall that the MIDI controller sent note-offs when the IR signal was broken. As can be seen from the patch, MIDI is picked up by a “notein” object. First, the two signals are separated by the “select” object. Then an if-test checks whether the beam is on (broken) or off (unbroken). For “on” the volume is set to 50, otherwise to 0.
  2. Each sound is made from a set of three added sinusoids, represented with “cycle~” objects. All of the frequencies can be adjusted directly on-screen.
  3. The “rand~” object is used to make the sound more vibrant through a random function. To make this even livelier, a random generator controlled by a metronome changes the “rand~” seed at given intervals.
  4. A chaos slider at the left was added to increase the tension in both sounds, controlled by an expression pedal.
  5. A “reset” button is available for quickly regaining the preferred settings.
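The core of the sound engine (points 1–3) can be sketched in plain Python. This is a minimal stand-in for the Max/MSP signal chain, assuming illustrative frequencies, a gate volume of 50 (out of 127, as is conventional in MIDI-driven patches), and a metronome that re-seeds a random modulator at fixed sample intervals; none of these values are taken from the actual patch except the gate volume.

```python
import math
import random


def render(freqs, duration_s=0.1, sample_rate=44100, gate_volume=50,
           reseed_interval=1024, seed=0):
    """Render samples of three added sinusoids with a re-seeded noise component.

    Mirrors the patch structure: summed "cycle~" oscillators, a "rand~"-style
    noise source whose seed is changed by a metronome, and a gate volume set
    by the beam state. All parameter values are illustrative assumptions.
    """
    rng = random.Random(seed)
    samples = []
    n = int(duration_s * sample_rate)
    for i in range(n):
        if i % reseed_interval == 0:
            rng.seed(seed + i)            # metronome re-seeds the randomness
        # three added sinusoids, normalized so the sum stays within [-1, 1]
        tone = sum(math.sin(2 * math.pi * f * i / sample_rate)
                   for f in freqs) / len(freqs)
        noise = (rng.random() - 0.5) * 0.1  # small "vibrant" random component
        samples.append(gate_volume / 127 * (tone + noise))
    return samples
```

Calling `render([220, 330, 440])` yields a tenth of a second of the summed tone; setting `gate_volume=0` silences it, which is exactly what the if-test in point 1 does when the beam is unbroken.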


It was less than a week before the premiere that we finally got everything together. After gathering the equipment and making the patch, it was very exciting to test everything in practice and see how it worked with the dancers and the drummer.

Three dancers (two male, one female) were on stage, “trapped” within the light beams. During the 15-minute-long performance they worked their way from silence to a sound climax and back to silence. Spotlights were used to create massive, cascading effects in between total darkness. The interactive sounds were used partly in short sequences with the dancers jumping in and out of the beams, partly in longer sequences where the dancers moved “inside” the beams. The two short videos on the CD give some idea of the show.

Laserdance video excerpt

Luckily, the Mac did not crash, and the sound turned on and off as it should, every day. Apart from having to reposition the sensors every night, because the stage had to be cleared after each performance, we did not encounter any other problems.


After working with this project, it is tempting to draw a parallel to Egil Haga’s (1999) Master’s thesis about sounds and actions. He uses the term synchronicity for cases where a physical action and a sound gesture are perceived as being generated by the same event. He points out that most films contain sound tracks generated in studios, and he is critical of examples where the synchronicity is imprecise and poor. Further, he mentions that cross-modal perception produces a stronger stimulus: for example, it is easier to understand a person talking if you can see their lip movements.

In his last chapter, Haga analyses a dance-music performance, pointing out the excellent synchronicity between the music and the gestures used by the dancers. After working on the Laser Dance project I gained a much better understanding of his comments. Even though he is talking about classical music and dance movements based on sound gestures, I see a clear line to the interactive sound I have been working with. The main difference is that in the Laser Dance project, the sound is more a tool for the dancers to work with than a musical piece.

Interactive music controlled by sensors really puts the dancers in control of the whole performance. Through body gestures they control the sound themselves, making the synchronicity excellent (at least with efficient sensors…), and the result is a performance where dance and sound blend together in harmony. This is not always the case in other dance performances with “linear” music. No doubt, this project has opened my eyes to a totally new way of thinking about music, and it has also given me a lot of ideas for new projects involving dance and interactive sound.