One of the most satisfying things about being a researcher is seeing that the ideas, theories, methods, software, and other things you come up with are useful to others. Today I received the master’s thesis of Per Erik Walslag, titled Are you jumping or bouncing? A case-study of jumping and bouncing in classical ballet using the motiongram computer program, in which he has made excellent use of my motiongram technique and my VideoAnalysis software. The thesis was completed at NTNU last year within the Nordic Master’s in Dance (NoMAds) program. That master’s program is in itself a great example of how a group of fairly small departments can create an excellent collaborative study program. I was invited to guest lecture in the program back in 2009, and I am very happy to see that my lecture inspired some thoughts and ideas in the students.
Even has carried out a so-called “practical” master’s thesis. He has done a motion capture (mocap) analysis of how people move while playing computer games with a Kinect device, and has also prototyped several mocap instruments.
Sound is often used as a feedback modality in technological devices. Yet relatively little is known about the relation between sound and motion in interactive systems. This thesis examines what happens in the intersection between human-computer interaction, motion and sonic feedback. From the connection of music and motion, coupled by technology, we can draw the expression “Music Kinection”. A theoretical foundation accounts for the relationships that exist between sound and motion, and the cognitive foundations for these relationships. This study of literature on music and motion, and music cognition theory, shows that there are many aspects that support various relationships between sound and motion. To see if it is possible to detect similarities between users of an interactive system, a user study was performed with 16 subjects playing commercially available video games for the Kinect platform. Motion capture data was recorded and analyzed. The user study showed that there is an overall similarity in the amount of motion performed by the users, but that there is some deviation in the amount of motion performed by body parts important to the gameplay. Many users will choose the same body part for one task, but will apply different tactics when using this limb. Knowledge from the theory and observation study was used in the practical explorations of sound-action relationships. Two installations, Kinect Piano and the Popsenteret Kinect installation, were made, together with two software prototypes, Soundshape and Music Kinection. The practical study showed that working with full-body motion capture and sound in human-computer interaction depends on good motion feature extraction algorithms and good mapping to sound engines.
- Catherine Støver: Freestyle Dressage: an equipage riding to music
Catherine wrote about the importance and influence of music in freestyle dressage. Most of my students work on more music-technology-related topics, and I can honestly say that supervising Catherine was both fun and a great learning experience for me. I now know much more about horses, riding, and music than I did before.
Here is Catherine’s own abstract for the thesis:
This thesis is a study of freestyle dressage as a specific case of music-related movement. Freestyle dressage is performed by horse and rider in competitions, and is ridden to music. The music is a part of the performance, and music and movement are supposed to be related. The aims of the thesis are to shed light on (a) what influence the music has on the equipage, (b) how this affects the audience and judges, and (c) whether the synchronicity between horse and rider is real or imagined. The symbiosis of what we hear and see is what makes the performance spectacular, but it is also the reason why we very quickly sense when something is not synchronized. These strong links between sound and movement are something the audience is aware of, but do we still get spellbound? This thesis tries to reveal to what degree our senses presume that events are synchronous, and at the same time tries to establish whether the music and movements are related. The thesis is divided into three parts: the first part is theoretical, and the two following are both empirical. The methods used are a literature study and an empirical study with qualitative analysis of relationships between motion and sound, together with interviews of a selected group of people with different backgrounds. The thesis concludes that the music does make a difference to the audience and the rider. The rider has to pay attention to the music, and the audience gets a spectacular show when music is part of the freestyle dressage program.
While preparing a lecture for the PhD students at the Norwegian Academy of Music, I came across some of the sound files I created for my MA thesis on salience in (musical) sound perception. While the content of that thesis is now mostly interesting as a historic document, I had a good time listening to the sound examples again. There are three things, in particular, that I still find interesting:
1. Duration of sound
How short can a sound excerpt be and still be musically meaningful? Try, for example, these cuts from the opening of Eric Clapton’s “Tears in Heaven”:
When asking people (mainly student groups), my experience is that some actually manage to recognise the tune after hearing only the first fragment (134 ms), and many recognise it after the second fragment (380 ms). I find this quite remarkable, considering how little (sonic) information is actually there. It is a great example of the fantastic capabilities of our auditory system.
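For readers who want to try this themselves, cutting fixed-duration fragments from the start of a recording is straightforward. Here is a minimal NumPy sketch; the sine tone is just a stand-in for an actual recording, and the `excerpt` helper is my own illustration, not code from the thesis:

```python
import numpy as np

def excerpt(samples, sr, duration_ms, start_ms=0):
    """Cut a fragment of a given duration (in ms) from an audio signal."""
    start = int(sr * start_ms / 1000)
    length = int(sr * duration_ms / 1000)
    return samples[start:start + length]

# A 1-second 440 Hz sine tone at 44.1 kHz stands in for the recording.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

frag_134 = excerpt(tone, sr, 134)   # 134 ms fragment (5909 samples)
frag_380 = excerpt(tone, sr, 380)   # 380 ms fragment (16758 samples)
```

The fragments can then be written to file with any audio I/O library and played back to listeners.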
2. Number of sinusoidal components
Another thing I tested in the thesis was how timbre influences our perception of sound. To test this I created a set of examples of how a different number of sinusoidal components of a saxophone tone (played by [Sonny Rollins](http://en.wikipedia.org/wiki/Sonny_Rollins)) influences the perceived timbral quality:
Only with around 60 sinusoidal components present do we really get to hear all the timbral features of the sound properly, yet the tonal content (the pitch) is preserved with only a few sinusoidal components.
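The general principle can be illustrated with additive synthesis. The sketch below (my own illustration, not the analysis/resynthesis pipeline used in the thesis) builds a harmonic tone from its first N sinusoidal components, using a generic 1/k amplitude roll-off instead of the measured saxophone partials. With 4 components the pitch is clearly there; with 60 the spectrum is much richer:

```python
import numpy as np

def additive_tone(f0, partial_amps, sr=44100, duration=1.0):
    """Sum sinusoidal components at integer multiples of f0."""
    t = np.arange(int(sr * duration)) / sr
    out = np.zeros_like(t)
    for k, amp in enumerate(partial_amps, start=1):
        out += amp * np.sin(2 * np.pi * k * f0 * t)
    return out

# Generic 1/k spectrum (an assumption; a real saxophone tone would use
# amplitudes measured from the recording).
full_amps = [1.0 / k for k in range(1, 61)]   # 60 components: rich timbre
few_amps = full_amps[:4]                      # 4 components: pitch survives

tone_full = additive_tone(220, full_amps)
tone_few = additive_tone(220, few_amps)

# In both versions the strongest spectral peak is still the 220 Hz fundamental.
peak_full = np.argmax(np.abs(np.fft.rfft(tone_full)))
peak_few = np.argmax(np.abs(np.fft.rfft(tone_few)))
```

Because the signal lasts exactly one second, each FFT bin corresponds to one hertz, so both peaks land on bin 220: the pitch percept is carried by very few components, while the timbral detail needs many.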
3. Sound thumbnailing
While the example above focused on reduction in the (timbral) spectral domain, I also tested reduction of sound in the temporal domain. There has been a lot of research on sound thumbnails since I did my experiments. I still find the idea of creating really short sonic summaries of longer musical examples fascinating. Here are my tests of creating different types of thumbnails of Ravel’s Bolero: