Analyzing correspondence between sound objects and body motion

New publication: The article "Analyzing correspondence between sound objects and body motion" by Kristian Nymoen, Rolf Inge Godøy, Alexander Refsum Jensenius and Jim Tørresen has now been published in ACM Transactions on Applied Perception. Abstract: Links between music and body motion can be studied through experiments called sound-tracing. One of the main challenges in such research is to develop robust analysis techniques that are able to deal with the multidimensional data that musical sound and body motion present. The article evaluates four different analysis methods applied to an experiment in which participants moved their hands following perceptual features of short sound objects. Motion capture data have been analyzed and correlated with a set of quantitative sound features using four different methods: (a) a pattern recognition classifier, (b) t-tests, (c) Spearman’s ρ correlation, and (d) canonical correlation. This article shows how the analysis methods complement each other, and that applying several analysis techniques to the same data set can broaden the knowledge gained from the experiment. ...

June 3, 2013 · 2 min · 235 words · ARJ

Performing with the Norwegian Noise Orchestra

Yesterday, I performed with the Norwegian Noise Orchestra at Betong in Oslo, at a concert organised by Dans for Voksne. The orchestra is an ad-hoc group of noisy improvisers, and I immediately felt at home. The performance lasted for 12 hours, from noon to midnight, and I performed for two hours in the afternoon. For the performance I used my Soniperforma patch, based on the sonifyer technique and the Jamoma module I developed a couple of years ago (jmod.sonifyer~). The technique creates a motion image from the live camera input (the webcam of my laptop in this case), uses this to draw a motiongram over time, and converts the motiongram to sound through an “inverse FFT” process. ...
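The pipeline described above can be sketched in code. This is a hypothetical illustration of the approach, not the actual jmod.sonifyer~ implementation; the function name and parameters are my own:

```python
import numpy as np

def motiongram_sonify(frames, fft_size=1024):
    """Toy sketch of motiongram sonification (not the Jamoma code).

    frames: grayscale video as an array of shape (n_frames, height, width).
    Each motion image (absolute frame difference) is collapsed to one
    motiongram column by averaging over width; that column is then
    treated as a magnitude spectrum and turned into audio by an
    inverse FFT.
    """
    audio = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        motion = np.abs(cur.astype(float) - prev.astype(float))
        column = motion.mean(axis=1)          # one motiongram column
        # stretch the column to the spectrum length expected by irfft
        spectrum = np.interp(
            np.linspace(0.0, len(column) - 1, fft_size // 2 + 1),
            np.arange(len(column)), column)
        audio.append(np.fft.irfft(spectrum))  # the "inverse FFT" step
    return np.concatenate(audio)
```

Each video frame thus contributes one short grain of audio, so more motion in the image means more spectral energy in the sound.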

December 13, 2012 · 1 min · 207 words · ARJ

McLaren's Dots

I am currently working on some extensions to my motiongram-sonifyer, and came across this beautiful little film by Norman McLaren from 1940: The sounds heard in the film are entirely synthetic, created by drawing in the sound-track part of the film. McLaren explained this in a 1951 BBC interview: I draw a lot of little lines on the sound-track area of the 35-mm. film. Maybe 50 or 60 lines for every musical note. The number of strokes to the inch controls the pitch of the note: the more, the higher the pitch; the fewer, the lower is the pitch. The size of the stroke controls the loudness: a big stroke will go “boom,” a smaller stroke will give a quieter sound, and the faintest stroke will be just a little “m-m-m.” A black ink is another way of making a loud sound, a mid-gray ink will make a medium sound, and a very pale ink will make a very quiet sound. The tone quality, which is the most difficult element to control, is made by the shape of the strokes. Well-rounded forms give smooth sounds; sharper or angular forms give harder, harsher sounds. Sometimes I use a brush instead of a pen to get very soft sounds. By drawing or exposing two or more patterns on the same bit of film I can create harmony and textural effects. (From Jordan, W. E. (1953). Norman McLaren: His career and techniques. The Quarterly of Film Radio and Television, 8(1), pp. 1–14). ...

September 11, 2012 · 2 min · 248 words · ARJ

Sound files from MA thesis

Edit: These files are now more easily accessible from my UiO page. While preparing a lecture for the PhD students at the Norwegian Academy of Music, I came across some of the sound files I created for my MA thesis on salience in (musical) sound perception. While the content of that thesis is now most interesting as a historical document, I had a good time listening to the sound examples again. There are three things, in particular, that I still find interesting: ...

June 6, 2012 · 2 min · 383 words · ARJ

New screencast on the basics of creating reverb in PD

I have written about my making of a series of screencasts on basic sound synthesis in Pure Data in an earlier blog post. The latest addition to the series is the building of a patch that shows how a simple impulse response, combined with a delay, a feedback loop and a low-pass filter, can be used to simulate reverberation. In fact, depending on the settings, this patch can also be used for making phaser, flanger, chorus and echo effects. It is interesting to see how these concepts are just variations of the same thing. ...
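The same signal chain can be sketched outside PD as a delay line with a lowpass filter inside the feedback loop. This is a minimal Python illustration of the idea; the function name and parameter values are my own, not taken from the patch:

```python
import numpy as np

def comb_reverb(x, sr=44100, delay_ms=50.0, feedback=0.6, damping=0.4):
    """Feedback comb filter with a one-pole lowpass in the loop.

    y[n] = x[n] + feedback * lowpass(y[n - d]): each pass through the
    loop is delayed, attenuated, and darkened a little, which is the
    basic behaviour of a room reflection.
    """
    d = max(1, int(sr * delay_ms / 1000.0))   # delay length in samples
    y = np.zeros(len(x))
    lp = 0.0                                  # one-pole lowpass state
    for n in range(len(x)):
        delayed = y[n - d] if n >= d else 0.0
        lp = (1.0 - damping) * delayed + damping * lp  # lowpass in the loop
        y[n] = x[n] + feedback * lp
    return y
```

With a short delay and high feedback this comb colours the sound toward flanger/chorus territory; with a long delay it becomes an echo; several such combs in parallel with different delay times approximate reverberation, which is why the patch can morph between all of these effects.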

October 31, 2010 · 1 min · 128 words · ARJ

Music is not only sound

After working with music-related movements for some years, and thereby arguing that movement is an integral part of music, I tend to react when people use “music” as a synonym for either “score” or “sound”. I certainly agree that sound is an important part of music, and that scores (if they exist) are related to both musical sound and music in general. But I do not agree that music is sound. To me, sound is one (and an important one) component of music, but not the only one. From the perspective of embodied music cognition, music is truly multimodal, meaning that all our senses and modalities are involved in performance and perception. This is not to mention all the cultural and contextual elements involved in our experience of music. ...

October 25, 2010 · 1 min · 172 words · ARJ

AudioAnalysis v0.5

I am teaching a course in sound theory this semester, and therefore thought it was time to update a little program I developed several years ago, called SoundAnalysis. While there are many excellent sound analysis programs out there (SonicVisualiser, Praat, etc.), they all work on pre-recorded sound material. That is certainly the best approach to sound analysis, but it is not ideal in a pedagogical setting where you want to explain things in realtime. ...

October 11, 2010 · 2 min · 278 words · ARJ

PD introductions in Norwegian on YouTube

I am teaching two courses this semester:

- Sound theory 1 (in English)
- Sound analysis (in Norwegian, together with Rolf Inge Godøy)

In both courses I use Pure Data (PD) for demonstrating various interesting phenomena (additive synthesis, beating, critical bands, etc.), and the students also get various assignments to explore such things themselves. There are several PD introduction videos on YouTube in English, but I found that it could be useful to also have something in Norwegian. So far I have made three screencasts going through the basics of PD and sound synthesis: ...
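As a numerical taste of one of the phenomena mentioned above, beating: two sine tones a few hertz apart sum to a single tone whose loudness pulses at the difference frequency. A short Python sketch (illustrative only; the course demonstrations themselves use PD):

```python
import numpy as np

sr = 8000                          # sample rate in Hz
t = np.arange(sr) / sr             # one second of time stamps
f1, f2 = 440.0, 444.0              # 4 Hz apart -> 4 beats per second
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
# the sum equals 2*sin(pi*(f1+f2)*t)*cos(pi*(f2-f1)*t), so the
# audible envelope is |2*cos(pi*(f2-f1)*t)|, pulsing 4 times a second
envelope = 2 * np.abs(np.cos(np.pi * (f2 - f1) * t))
```

Once the two frequencies move further apart than a critical band, the pulsing is no longer heard and the ear resolves two separate tones instead.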

September 3, 2010 · 1 min · 92 words · ARJ

Actions can be based on both movement and touch

Ok, so I have been discussing the concepts of movement, action and gesture with various people since I posted this entry, and I have come to disagree with myself. Marcelo Wanderley pointed out that an action doesn’t necessarily have to involve a movement, as touch and other types of manipulation should also be considered an action. After all, holding down the keys on a piano after the attack results in no movement, but it is certainly an action. ...

February 21, 2007 · 2 min · 247 words · ARJ

Sound and Timbre

Here, I focus on how we can analyse, visualize and synthesize sound, or more specifically the timbre of instruments. Pitch and Timbre Perception: Our perception of music is based on the grouping of frequencies in time and space. That is why a set of frequencies can be heard as a specific tone with an associated pitch, loudness and timbre. Such grouping is done by relating frequencies that originate close together in spatial location, have similar onset times, and move in the same direction. The problem, however, is that no computational tools can do this in an immediate and straightforward way like the human brain. ...
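The grouping described above can be illustrated with additive synthesis: partials that start together and sit in a harmonic series fuse into one perceived tone at the fundamental, with the amplitude pattern shaping the timbre. A small Python sketch (my own illustration, with arbitrary amplitude weights):

```python
import numpy as np

sr = 16000                          # sample rate in Hz
t = np.arange(sr) / sr              # one second of time stamps
f0 = 220.0                          # fundamental frequency
# four harmonic partials with a shared onset; heard as one tone at f0,
# and the relative amplitudes colour the timbre
amps = [1.0, 0.5, 0.33, 0.25]
tone = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
           for k, a in enumerate(amps))
```

Changing the amplitude weights changes the timbre while the perceived pitch stays at f0; giving the partials staggered onsets or inharmonic frequencies instead makes them segregate into separate auditory streams.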

November 20, 2002 · 13 min · 2670 words · ARJ