I am currently working on some extensions to my motiongram-sonifyer, and came across this beautiful little film by Norman McLaren from 1940:
The sounds heard in the film are entirely synthetic, created by drawing directly in the sound-track part of the film. McLaren explained this in a 1951 BBC interview:
I draw a lot of little lines on the sound-track area of the 35-mm. film. Maybe 50 or 60 lines for every musical note. The number of strokes to the inch controls the pitch of the note: the more, the higher the pitch; the fewer, the lower is the pitch. The size of the stroke controls the loudness: a big stroke will go “boom,” a smaller stroke will give a quieter sound, and the faintest stroke will be just a little “m-m-m.” A black ink is another way of making a loud sound, a mid-gray ink will make a medium sound, and a very pale ink will make a very quiet sound. The tone quality, which is the most difficult element to control, is made by the shape of the strokes. Well-rounded forms give smooth sounds; sharper or angular forms give harder, harsher sounds. Sometimes I use a brush instead of a pen to get very soft sounds. By drawing or exposing two or more patterns on the same bit of film I can create harmony and textural effects. (From Jordan, W. E. (1953). Norman McLaren: His career and techniques. The Quarterly of Film Radio and Television, 8(1), pp. 1–14).
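McLaren's recipe maps neatly onto simple synthesis parameters. As a rough digital illustration of that mapping (a sketch, not his actual optical technique): generate a train of "strokes", where the stroke rate sets the pitch, the stroke size sets the loudness, and the stroke shape sets the timbre.

```python
import math

SR = 44100  # audio sample rate in Hz (an assumption, not from the film)

def drawn_tone(strokes_per_second, stroke_amplitude, duration, sharp=False):
    """Simulate McLaren's drawn sound as a pulse train.

    strokes_per_second -> pitch; stroke_amplitude -> loudness;
    stroke shape (rounded vs. angular) -> timbre."""
    period = SR / strokes_per_second            # samples between stroke onsets
    samples = []
    for n in range(int(duration * SR)):
        phase = (n % period) / period           # position within one stroke
        if sharp:
            value = 1.0 - 2.0 * phase           # angular stroke: harsher sound
        else:
            value = math.sin(math.pi * phase)   # rounded stroke: smoother sound
        samples.append(stroke_amplitude * value)
    return samples

# 440 strokes per second at half amplitude gives a quiet A4-ish buzz
tone = drawn_tone(440, 0.5, 0.1)
```

Drawing more strokes per inch of film is equivalent to raising `strokes_per_second` here; fainter ink is a smaller `stroke_amplitude`.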
Edit: These files are now more easily accessible from my UiO page.
While preparing a lecture for the PhD students at the Norwegian Academy of Music, I came across some of the sound files I created for my MA thesis on salience in (musical) sound perception. While the content of that thesis is now most interesting as a historical document, I had a good time listening to the sound examples again. There are three things, in particular, that I still find interesting:
1. Duration of sound
How are short sound excerpts musically meaningful? Try, for example, these cuts from the opening of Eric Clapton’s “Tears in Heaven”:
When asking people (mainly student groups), my experience is that some people actually manage to recognise the tune after only listening to the first fragment (134 ms), and a lot of people manage to recognise it after the second fragment (380 ms). This I find quite remarkable, considering how little (sonic) information is actually there. It is a great example of the fantastic capabilities of our auditory system.
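To get a sense of how little data those fragments contain: at CD-quality sample rate (an assumption; the thesis excerpts may have used a different rate), 134 ms is only a few thousand samples. A minimal sketch of cutting such opening fragments from a mono signal:

```python
SR = 44100  # assumed sample rate in Hz

def excerpt(samples, ms):
    """Return the opening `ms` milliseconds of a mono signal."""
    return samples[: int(SR * ms / 1000)]

# a dummy one-second signal stands in for the recording
signal = list(range(SR))
first = excerpt(signal, 134)   # the fragment some listeners recognise
second = excerpt(signal, 380)  # the fragment most listeners recognise
```

The fragment durations are the ones from the text; only the sample rate and signal are placeholders.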
2. Number of sinusoidal components
Another thing I tested in the thesis was how timbre influences our perception of sound. To test this, I created a set of examples of how a different number of sinusoidal components of a saxophone tone (played by [Sonny Rollins](http://en.wikipedia.org/wiki/Sonny_Rollins)) influence the perceived timbral quality:
Only with 60 sinusoidal components present do we really get to hear all the sound’s timbral features properly. Yet the tonal content (the pitch) is preserved with only a few sinusoidal components.
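For readers who want to experiment, the effect is easy to reproduce with simple additive resynthesis: sum the first N harmonic partials and listen to how the timbre fills in as N grows. This is a generic sketch; the fundamental and partial amplitudes are made up, not taken from the Rollins analysis:

```python
import math

SR = 44100  # assumed sample rate in Hz

def additive(f0, partial_amps, duration):
    """Resynthesize a tone from its first N sinusoidal components,
    treated here as harmonics of f0 with the given amplitudes."""
    n_samples = int(duration * SR)
    out = [0.0] * n_samples
    for k, amp in enumerate(partial_amps, start=1):
        for n in range(n_samples):
            out[n] += amp * math.sin(2 * math.pi * k * f0 * n / SR)
    return out

# with only three partials the pitch is clear but the timbre is thin
thin = additive(220, [1.0, 0.5, 0.25], 0.05)
```

Extending `partial_amps` toward 60 entries (ideally with amplitudes measured from the original tone) gradually restores the full timbral character.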
3. Sound thumbnailing
While the above example focused on reducing (timbral) space, I also tested the reduction of sound in the temporal space. There has been a lot of research on sound thumbnails since the time I did my experiments. I still find the idea of creating really short sonic summaries of longer musical examples fascinating. Here are my tests of creating different types of thumbnails of Ravel’s Bolero:
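As a toy illustration of the simplest possible thumbnailing strategy (not the method I used in the thesis): take a handful of short snippets evenly spaced through the piece and concatenate them into one short summary.

```python
def thumbnail(samples, n_snippets, snippet_len):
    """Crude audio thumbnail: concatenate n short snippets
    spaced evenly from the start to the end of the piece."""
    hop = (len(samples) - snippet_len) // (n_snippets - 1)
    out = []
    for i in range(n_snippets):
        start = i * hop
        out.extend(samples[start:start + snippet_len])
    return out

# a dummy signal stands in for the recording
sig = list(range(1000))
thumb = thumbnail(sig, 4, 10)  # 4 snippets of 10 samples each
```

For a piece like the Bolero, which builds gradually, even this naive sampling conveys something of the overall dramatic arc; more sophisticated approaches pick snippets based on repetition or salience instead of fixed positions.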
I have written about my making of a series of screencasts on basic sound synthesis in Pure Data in an earlier blog post. The latest addition to the series is the building of a patch that shows how a simple impulse response, combined with a delay, a feedback loop and a low-pass filter, can be used to simulate reverberation. In fact, depending on the settings, the same patch can also be used to make phaser, flanger, chorus and echo effects. It is interesting to see how these concepts are just variations of the same thing.
The screencast is in Norwegian, but the patching is universal, so I guess it could still be interesting for non-Norwegian speakers. The patch (with some more comments and presets) can be downloaded here.
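For readers who do not patch in Pure Data, the core idea can be sketched in a few lines of code: a feedback comb filter with a one-pole low-pass in the loop. Depending on the delay time and feedback amount, the same structure behaves like a crude reverberator or an echo (the parameter names here are my own, not those in the patch):

```python
def comb_reverb(dry, delay_samples, feedback, damping):
    """Feedback comb filter with a one-pole low-pass in the loop.

    Short delay + high feedback ~ metallic reverb tail;
    long delay + moderate feedback ~ discrete echoes."""
    buf = [0.0] * delay_samples   # circular delay line
    lp = 0.0                      # low-pass filter state
    out = []
    for i, x in enumerate(dry):
        delayed = buf[i % delay_samples]
        lp = (1 - damping) * delayed + damping * lp  # darken each repeat
        y = x + feedback * lp
        buf[i % delay_samples] = y
        out.append(y)
    return out

# feed in a unit impulse and watch the decaying repeats appear
impulse = [1.0] + [0.0] * 2000
tail = comb_reverb(impulse, 441, 0.6, 0.3)
```

With very short, slowly modulated delay times the same loop gives flanger- and chorus-like effects, which is the "variations of the same thing" point of the patch.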
After working with music-related movements for some years, and thereby arguing that movement is an integral part of music, I tend to react when people use “music” as a synonym for either “score” or “sound”.
I certainly agree that sound is an important part of music, and that scores (if they exist) are related to both musical sound and music in general. But I do not agree that music is sound. To me, sound is one (and an important one) component of music, but not the only one. From the perspective of embodied music cognition, music is truly multimodal, meaning that all our senses and modalities are involved in performance and perception. This is not to mention all the cultural and contextual elements involved in our experience of music.
From a scientific point of view it makes sense to try to separate musical sound from all the other sensations and contextual elements, but we should not forget that the magic of music is really based on how all the components work together.
I am teaching a course in sound theory this semester, and therefore thought it was time to update a little program I developed several years ago, called SoundAnalysis. While there are many excellent sound analysis programs out there (SonicVisualiser, Praat, etc.), they all work on pre-recorded sound material. That is certainly the best approach to sound analysis, but it is not ideal in a pedagogical setting where you want to explain things in realtime.
There are not so many realtime audio analysis programs around, at least none that look and behave similarly on both OSX and Windows. One exception worth mentioning is the excellent set of sound tools from Princeton, but they lack some of the analysis features I am interested in showing to the students.
So my update of the SoundAnalysis program should hopefully cover a blank spot in the area of realtime sound visualisation and analysis. The new version provides a larger spectrogram view and the option to change various spectrogram features on the fly. The quantitative features have been moved to a separate window and now also include simple beat tracking.
Below is a screenshot giving an overview of the new version:
Another new selling point is a brand new name: I have decided to rename the program AudioAnalysis, so that it harmonizes with my AudioVideoAnalysis and VideoAnalysis programs.
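As an illustration of the kind of quantitative feature such a program computes frame by frame, here is a sketch of the spectral centroid, a common brightness measure (I am not claiming this is exactly how AudioAnalysis computes its features; a real implementation would also use an FFT rather than this naive DFT):

```python
import math

def spectral_centroid(frame, sr):
    """Spectral centroid of one analysis frame: the magnitude-weighted
    mean frequency, a rough correlate of perceived brightness."""
    n = len(frame)
    # naive O(n^2) DFT magnitudes up to Nyquist, fine for a short demo frame
    mags = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    total = sum(mags)
    if total == 0:
        return 0.0
    return sum(k * sr / n * m for k, m in enumerate(mags)) / total

# a pure 1 kHz sine should have its centroid at 1 kHz
sr = 8000
frame = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(256)]
centroid = spectral_centroid(frame, sr)
```

In a realtime program, this kind of computation runs on each incoming buffer of samples, so the value can be plotted alongside the scrolling spectrogram.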