I just heard Esteban Maestre from UPF present his project on creating a database of instrumental actions on bowed instruments, for use in the synthesis of score-based material. They have come up with a very interesting solution to recording and synchronising audio with movement data: a VST plugin that records motion capture data from a Polhemus Liberty together with bow sensor data from an Arduino. This makes it possible to load the plugin inside regular audio sequencing software and do the recording from there.
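
The elegance of the approach is that the host drives the plugin's audio callback on its own sample clock, so any motion or bow data polled inside that callback can be stamped with sample-accurate times relative to the audio. Below is a minimal, hypothetical C++ sketch of that idea (not Maestre's actual implementation, and deliberately avoiding any specific VST SDK); the `MocapDevice` and `BowSensor` types stand in for the Polhemus Liberty and Arduino interfaces:

```cpp
// Sketch only: the key point is that process() runs on the host's sample
// clock, so motion data logged here is implicitly synchronised with the audio.
#include <cstdint>
#include <optional>
#include <vector>

struct MocapFrame { float pos[3]; float orient[4]; };  // one position/orientation sample
struct BowFrame   { float force; float position; };    // one bow-sensor sample

struct StampedMocap { std::int64_t sampleTime; MocapFrame frame; };
struct StampedBow   { std::int64_t sampleTime; BowFrame   frame; };

// Hypothetical device interfaces; real code would read from the Liberty's
// driver and from a serial port connected to the Arduino.
struct MocapDevice { std::optional<MocapFrame> poll() { return std::nullopt; } };
struct BowSensor   { std::optional<BowFrame>   poll() { return std::nullopt; } };

class MocapRecorderPlugin {
public:
    // Called by the host once per audio block, like a VST process() call.
    void process(const float* in, float* out, int numSamples) {
        // Pass the audio through unchanged; the plugin only observes.
        for (int i = 0; i < numSamples; ++i) out[i] = in[i];

        // Drain whatever motion/bow samples arrived during this block and
        // stamp them with the current position on the host's sample clock.
        while (auto m = mocap_.poll()) mocapLog_.push_back({samplePos_, *m});
        while (auto b = bow_.poll())   bowLog_.push_back({samplePos_, *b});

        samplePos_ += numSamples;  // advance the shared audio timeline
    }

private:
    std::int64_t samplePos_ = 0;            // samples elapsed since recording start
    MocapDevice mocap_;
    BowSensor   bow_;
    std::vector<StampedMocap> mocapLog_;    // motion data, audio-synchronised
    std::vector<StampedBow>   bowLog_;
};
```

One nice consequence of this design is that the sequencer's transport, editing and playback tools all come for free, since the motion data lives on the same timeline as the recorded audio.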

Esteban played an example of a synthesised version of Pachelbel’s Canon, and it was amazing how much more lively it sounded when the performance actions were also synthesised and used to control the sound. However, as Antonio Camurri noted in the discussion, the performance sounded a bit rushed and lacked a sense of breathing. This is probably because the model so far only covers the recording and synthesis of the instrumental actions (excitation and modification), and does not take into account the various types of ancillary movements (e.g. support and phrasing movements) that would typically create the larger-scale shapes.