Sound and Timbre

Here, I focus on how we can analyse, visualize and synthesize sound, or more specifically the timbre of instruments.

Pitch and Timbre Perception

Our perception of music is based on the grouping of frequencies in time and space. That is why a set of frequencies can be heard as a specific tone with an associated pitch, loudness and timbre. Such grouping is done by relating frequencies whose origins are close in spatial location, that have similar onset times, and that move in the same direction. The problem, however, is that no computational tools can do this as immediately and straightforwardly as the human brain. ...

November 20, 2002 · 13 min · 2670 words · ARJ

Synthesis of a Tone

Explaining how one can synthesize a musical tone with Max/MSP.

October 17, 2002 · 4 min · 795 words · ARJ
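The post above builds the tone as a Max/MSP patch; as a rough text-based analogue (my own sketch, not the post's patch), the same additive idea — summing harmonically related sine partials that share an onset so they fuse into one pitched tone — can be written in a few lines of NumPy:

```python
import numpy as np

SR = 44100   # sample rate in Hz
DUR = 1.0    # duration in seconds
F0 = 220.0   # fundamental frequency in Hz (A3)

t = np.arange(int(SR * DUR)) / SR

# Harmonically related partials with a common onset fuse perceptually
# into a single tone whose pitch corresponds to the fundamental.
# (harmonic number, amplitude) pairs are illustrative choices.
partials = [(1, 1.0), (2, 0.5), (3, 0.33), (4, 0.25)]
tone = sum(a * np.sin(2 * np.pi * F0 * h * t) for h, a in partials)
tone /= np.max(np.abs(tone))  # normalize to the range [-1, 1]
```

The resulting array can be written to a sound file or played back with any audio library; in Max/MSP the equivalent would be a bank of cycle~ oscillators summed and scaled before the dac~.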

Introduction to Max/MSP

The intention of Max was to create a graphical programming environment for musicians and composers (Puckette 1985). Originally developed by Miller Puckette at IRCAM in the 1980s, and later commercially developed by David Zicarelli, it soon became popular for controlling MIDI instruments (Puckette 1988). Its unique flexibility, and the possibility for users to extend the capabilities of the environment by writing new code, secured its position in computer music (Puckette and Zicarelli 1990). The novel idea was a graphical environment that runs in real time, allowing the user to interact with the program as it runs. The MSP package (released in 1996) added audio capabilities, and Jitter (released in 2002) allows the manipulation of video. Today Max/MSP/Jitter is a commercially available product that continues to attract a large community of users. ...

October 16, 2002 · 4 min · 808 words · ARJ