I had a quick read of Jordi Janer’s dissertation today: Singing-Driven Interfaces for Sound Synthesizers. The dissertation gives a good overview of voice analysis techniques and suggests several ways of using the voice as a controller for sound synthesis. I am particularly interested in his suggestion of a GDIF namespace for structuring voice control parameters:

/gdif/instrumental/excitation/loudness x
/gdif/instrumental/modulation/pitch x
/gdif/instrumental/modulation/formants x1 x2
/gdif/instrumental/modulation/breathiness x
/gdif/instrumental/selection/phoneticclass x

Here he is using Cadoz’s division of instrumental “gestures” into three types: excitation, modulation and selection, a division that would also make sense for describing other types of instrumental actions.
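
To make the mapping concrete, here is a minimal sketch (my own, not from the dissertation) of how one frame of voice-analysis output could be sent over this namespace as OSC messages. It assumes the third-party python-osc package; the host, port, and the per-frame analysis values are all hypothetical:

```python
# Hedged sketch: sending per-frame voice-analysis values to a synthesizer
# using the GDIF namespace above. Assumes python-osc; the address and the
# frame values below are made up for illustration.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # hypothetical synth address

# Hypothetical analysis results for one frame of singing-voice input.
frame = {
    "loudness": 0.72,              # excitation: how hard the "instrument" is driven
    "pitch": 220.0,                # modulation: fundamental frequency in Hz
    "formants": [700.0, 1220.0],   # modulation: first two formant frequencies in Hz
    "breathiness": 0.15,           # modulation: noise component of the voice
    "phoneticclass": 3,            # selection: discrete phonetic category
}

# Map each parameter onto its GDIF address, following Cadoz's
# excitation / modulation / selection division.
client.send_message("/gdif/instrumental/excitation/loudness", frame["loudness"])
client.send_message("/gdif/instrumental/modulation/pitch", frame["pitch"])
client.send_message("/gdif/instrumental/modulation/formants", frame["formants"])
client.send_message("/gdif/instrumental/modulation/breathiness", frame["breathiness"])
client.send_message("/gdif/instrumental/selection/phoneticclass", frame["phoneticclass"])
```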

I am looking forward to getting back to working on GDIF again soon; I just need to finish this semester’s teaching + administrative work + moving into our new lab first…