Edit: These files are now more easily accessible from my UiO page.
While preparing a lecture for the PhD students at the Norwegian Academy of Music, I came across some of the sound files I created for my MA thesis on salience in (musical) sound perception. While the content of that thesis is now mainly of historical interest, I had a good time listening to the sound examples again. There are three things in particular that I still find interesting:
1. Duration of sound
How short can a sound excerpt be and still be musically meaningful? Try, for example, these cuts from the opening of Eric Clapton’s “Tears in Heaven”:
When asking people (mainly student groups), my experience is that some actually manage to recognise the tune after hearing only the first fragment (134 ms), and many more recognise it after the second fragment (380 ms). I find this quite remarkable, considering how little (sonic) information is actually there. It is a great example of the fantastic capabilities of our auditory system.
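As a side note on how such fragments can be produced: cutting an opening excerpt of a given duration is just a matter of slicing the right number of samples. A minimal numpy sketch (the tone here is a synthetic stand-in, not the actual Clapton recording):

```python
import numpy as np

def cut_fragment(samples, sr, duration_ms):
    """Return the opening fragment of the given duration in milliseconds."""
    n = int(sr * duration_ms / 1000)
    return samples[:n]

# Synthetic stand-in for a recording: one second of a 440 Hz tone at 44.1 kHz.
sr = 44100
t = np.arange(sr) / sr
samples = np.sin(2 * np.pi * 440 * t)

first = cut_fragment(samples, sr, 134)   # the 134 ms fragment
second = cut_fragment(samples, sr, 380)  # the 380 ms fragment
```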
2. Number of sinusoidal components
Another thing I tested in the thesis was how timbre influences our perception of sound. To do this, I created a set of examples showing how the number of sinusoidal components in a saxophone tone (played by [Sonny Rollins](http://en.wikipedia.org/wiki/Sonny_Rollins)) influences the perceived timbral quality:
Only with 60 sinusoidal components present do we really hear all the sound’s timbral features properly. Yet the tonal content (the pitch) is preserved with just a few sinusoidal components.
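The principle behind these examples is additive (re)synthesis: a tone is rebuilt from a limited number of sinusoidal components. As a rough Python/numpy sketch of the idea (the harmonic amplitudes here are invented; the actual examples were based on an analysis of the saxophone recording):

```python
import numpy as np

def resynthesise(f0, n_partials, sr=44100, dur=1.0):
    """Sum the first n_partials harmonics of f0, with a simple 1/k amplitude decay."""
    t = np.arange(int(sr * dur)) / sr
    tone = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        if k * f0 < sr / 2:               # keep components below the Nyquist frequency
            tone += np.sin(2 * np.pi * k * f0 * t) / k
    return tone / np.max(np.abs(tone))    # normalise to [-1, 1]

# The pitch survives with few partials; the timbre needs many more.
sparse = resynthesise(233.0, 3)    # the pitch is there, but the timbre is thin
rich = resynthesise(233.0, 60)     # closer to the full timbral picture
```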
3. Sound thumbnailing
While the above example focused on reducing (timbral) space, I also tested the reduction of sound in the temporal space. There has been a lot of research on sound thumbnails since the time I did my experiments. I still find the idea of creating really short sonic summaries of longer musical examples fascinating. Here are my tests of creating different types of thumbnails of Ravel’s Bolero:
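A naive way to build such a thumbnail is to concatenate a few short, evenly spaced excerpts from the recording. A minimal sketch (the function name is my own, and random noise stands in for an actual recording):

```python
import numpy as np

def thumbnail(samples, sr, n_slices=4, slice_ms=250):
    """Concatenate n_slices short, evenly spaced excerpts into one summary."""
    slice_len = int(sr * slice_ms / 1000)
    hop = (len(samples) - slice_len) // max(n_slices - 1, 1)
    parts = [samples[i * hop : i * hop + slice_len] for i in range(n_slices)]
    return np.concatenate(parts)

sr = 44100
audio = np.random.randn(sr * 60)          # stand-in for a one-minute recording
thumb = thumbnail(audio, sr)
print(len(thumb) / sr)                    # a 1.0 second summary
```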
I have written about my series of screencasts on basic sound synthesis in Pure Data in an earlier blog post. The latest addition to the series shows the building of a patch in which a simple impulse response, combined with a delay, a feedback loop and a low-pass filter, is used to simulate reverberation. In fact, depending on the settings, the same patch can also be used to create phaser, flanger, chorus and echo effects. It is interesting to see how these concepts are just variations of the same basic structure.
The screencast is in Norwegian, but the patching is universal, so I guess it could still be interesting for non-Norwegian speakers. The patch (with some more comments and presets) can be downloaded here.
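For readers who prefer text over patching, the core of the patch can be sketched as a feedback comb filter with a one-pole low-pass filter in the feedback loop, the basic building block of a Schroeder-style reverberator. A minimal Python/numpy sketch (function and parameter names are my own; the actual patch is in Pure Data):

```python
import numpy as np

def lowpass_comb(x, sr, delay_ms=50.0, feedback=0.7, damp=0.4):
    """Feedback comb filter with a one-pole low-pass in the feedback loop."""
    d = int(sr * delay_ms / 1000)
    buf = np.zeros(d)        # circular delay line
    lp = 0.0                 # one-pole low-pass state
    y = np.zeros(len(x))
    idx = 0
    for n in range(len(x)):
        delayed = buf[idx]
        lp = (1 - damp) * delayed + damp * lp   # darken each repetition
        y[n] = x[n] + delayed
        buf[idx] = x[n] + feedback * lp
        idx = (idx + 1) % d
    return y

sr = 44100
impulse = np.zeros(sr)       # one second of silence...
impulse[0] = 1.0             # ...with a single unit impulse at the start
tail = lowpass_comb(impulse, sr)
```

With a very short delay the same structure moves into flanger and chorus territory, and with a long delay and clearly audible repetitions it becomes an echo, which is the point made above about variations of the same thing.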
After working on music-related movement for some years, and arguing that movement is an integral part of music, I tend to react when people use “music” as a synonym for either “score” or “sound”.
I certainly agree that sound is an important part of music, and that scores (when they exist) are related to both musical sound and music in general. But I do not agree that music is sound. To me, sound is one component of music (and an important one), but not the only one. From the perspective of embodied music cognition, music is truly multimodal, meaning that all our senses and modalities are involved in both performance and perception. And that is not to mention all the cultural and contextual elements involved in our experience of music.
From a scientific point of view, it makes sense to try to separate musical sound from all the other sensations and contextual elements, but we should not forget that the magic of music lies in how all the components work together.
I am teaching a course in sound theory this semester, and therefore thought it was time to update a little program I developed several years ago, called SoundAnalysis. While there are many excellent sound analysis programs out there (SonicVisualiser, Praat, etc.), they all work on pre-recorded sound material. That is certainly the best approach to sound analysis, but it is not ideal in a pedagogical setting where you want to explain things in realtime.
There are not many realtime audio analysis programs around, at least none that look and behave the same on both OSX and Windows. One exception worth mentioning is the excellent set of sound tools from Princeton, but they lack some of the analysis features I want to show the students.
So my update of the SoundAnalysis program should hopefully fill a blank spot in the field of realtime sound visualisation and analysis. The new version provides a larger spectrogram view and the option to change various spectrogram settings on the fly. The quantitative features have been moved to a separate window and now also include simple beat tracking.
Below is a screenshot giving an overview of the new version:
Another selling point is a brand new name: I have decided to rename the program AudioAnalysis, so that it harmonises with my AudioVideoAnalysis and VideoAnalysis programs.
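At the heart of any such realtime analyser is the computation of a magnitude spectrum for each incoming block of samples, which then becomes one column of the scrolling spectrogram. A minimal sketch of that step in Python/numpy (an illustration of the principle, not the program's actual code):

```python
import numpy as np

def spectrum_frame(frame, sr):
    """Magnitude spectrum of one windowed audio frame, as a realtime
    analyser would compute it for each new spectrogram column."""
    windowed = frame * np.hanning(len(frame))  # window to reduce spectral leakage
    mags = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    return freqs, mags

sr = 44100
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * 1000 * t)       # a 1 kHz test tone
freqs, mags = spectrum_frame(frame, sr)
peak = freqs[np.argmax(mags)]              # lands close to 1000 Hz
```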
In both courses I use Pure Data (PD) to demonstrate various interesting phenomena (additive synthesis, beating, critical bands, etc.), and the students also get assignments to explore these phenomena themselves. There are several PD introduction videos on YouTube in English, but I thought it could be useful to also have something in Norwegian. So far, I have made three screencasts covering the basics of PD and sound synthesis:
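Beating, one of the phenomena mentioned above, is also easy to illustrate outside PD: summing two sine tones a few hertz apart produces slow loudness fluctuations at the difference frequency. A small numpy sketch of the idea:

```python
import numpy as np

# Two sine tones 4 Hz apart: the sum fluctuates in loudness 4 times per second.
sr = 44100
t = np.arange(2 * sr) / sr
beating = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 444 * t)

# By the sum-to-product identity, this equals a 442 Hz tone multiplied by a
# slow 2 Hz cosine; the ear hears the 4 Hz rhythm of the rectified envelope.
envelope = 2 * np.abs(np.cos(2 * np.pi * 2 * t))
```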