NRK has made a short documentary telling the story of my collaboration with a group of medical researchers working on ADHD. Of course, the air guitar is still up front, but since this was broadcast in the weekly science magazine, we actually managed to get through some of the underlying research as well. I found an online version of the TV interview here.
I just heard a presentation by a group of researchers from the TRIL Centre (Technology Research for Independent Living) in Dublin. They have developed Emobius (or EyesWeb Mobius), a set of blocks for various types of biomedical processing, as well as a graphical front-end to the forthcoming EyesWeb XMI. It is fascinating to see how the problems they are working on in applications for older people are so similar to what we are dealing with in music research.
I also got the chance to play with a Shimmer, a 1×3 cm device with a 3D accelerometer, a gyroscope, some extra I/O ports, and ZigBee communication. This is what they expect to be the central hardware device in medical and rehabilitation applications in the near future. Very cool.
I just heard Esteban Maestre from UPF present his project on creating a database of instrumental actions of bowed instruments, for use in the synthesis of score-based material. They have come up with a very interesting solution to recording and synchronising audio with movement data: building a VST plugin that records motion capture data from a Polhemus Liberty together with bow sensing through an Arduino. This makes it possible to load the plugin in regular audio sequencing software and do the recording from there.
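The appeal of recording inside the host is that the sensor streams can be timestamped against the audio sample clock, so everything stays in sync. Here is a minimal sketch of that idea, not Maestre's actual implementation; the sample rates, the jitter model, and the function name `align_mocap_to_audio` are my own assumptions for illustration:

```python
import numpy as np

AUDIO_SR = 44100   # audio sample rate (assumed)
MOCAP_SR = 240     # nominal Polhemus Liberty update rate

def align_mocap_to_audio(mocap_values, mocap_audio_positions, hop=512):
    """Resample irregularly timestamped mocap readings onto a uniform
    grid of audio-frame positions (one value per `hop` audio samples).

    mocap_values: 1D array of sensor readings
    mocap_audio_positions: audio sample index at which each reading arrived
    """
    n_frames = int(mocap_audio_positions[-1] // hop) + 1
    grid = np.arange(n_frames) * hop
    # linear interpolation onto the audio-clock grid
    return grid, np.interp(grid, mocap_audio_positions, mocap_values)

# Simulated recording: one second of a slow bow-position sine,
# timestamped with slight jitter relative to the audio clock.
t = np.arange(MOCAP_SR) / MOCAP_SR
values = np.sin(2 * np.pi * 1.0 * t)
positions = t * AUDIO_SR + np.random.default_rng(0).normal(0, 2, t.size)
positions = np.sort(positions)

grid, aligned = align_mocap_to_audio(values, positions)
```

Once both streams share one clock, the movement data can simply be stored alongside the audio tracks in the sequencer session.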
Esteban played an example of a synthesised version of Pachelbel’s Canon, and it was amazing how much more lively it sounded when the performance actions were also synthesised and used to control the sound. However, as Antonio Camurri noted in the discussion, the performance sounded a bit rushed and without breathing. This is probably because the model is, so far, only based on the recording and synthesising of the instrumental actions (excitation and modification), and does not take into account various types of ancillary movements (e.g. support and phrasing) which typically would create the larger scale shapes.
We had a programming session this morning, and Paolo Coletta implemented a block for creating motiongrams in EyesWeb. It will be available in the new EyesWeb XMI release due at the end of this week. Great!
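The idea behind a motiongram can be sketched in a few lines: take the absolute difference between consecutive video frames (the motion image), collapse each difference image to a one-dimensional strip by averaging across one axis, and stack the strips over time. A minimal NumPy sketch of this reduction, as my own illustration rather than Coletta's EyesWeb block:

```python
import numpy as np

def motiongram(frames, horizontal=True):
    """Reduce a grayscale video to a motiongram.

    frames: array of shape (n_frames, height, width).
    Returns a 2D image with time along the x-axis.
    """
    # motion image: absolute difference between consecutive frames
    motion = np.abs(np.diff(frames.astype(float), axis=0))
    # collapse each motion image to a 1D strip:
    # averaging over width keeps vertical position (horizontal motiongram),
    # averaging over height keeps horizontal position (vertical motiongram)
    collapse_axis = 2 if horizontal else 1
    return motion.mean(axis=collapse_axis).T

# Toy example: a bright horizontal bar moving downwards frame by frame
frames = np.zeros((6, 5, 8))
for i in range(6):
    frames[i, i % 5, :] = 1.0

mg = motiongram(frames)  # shape (5, 5): height x (n_frames - 1)
```

In the toy example the moving bar leaves a diagonal trace in the motiongram, which is exactly the kind of large-scale movement shape the display is meant to reveal.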
I had my first go at restoring a file using Time Machine today. Looking for a Keynote presentation, I realised that I had kept only the PDF of the presentation and not the original presentation file. Not really sure how that happened, but, anyway, the file was lost.
I have had Time Machine running on my computer ever since I upgraded to OS X 10.5, and have been wondering whether it would be worth the extra CPU peaks that appear every hour or so when it activates and copies changed files. After this incident, and after luckily retrieving the deleted file, I have to agree that the solution actually works very well! I could have done without the Star Wars-like effects and looks of the program, though…