Just went to a lecture by Sidney Fels from the Human Communication Technologies lab and MAGIC at the University of British Columbia (interestingly enough located in the Forest Sciences Centre…). He was talking on the topic of intimate control of musical instruments, and presented some different projects:

  • GloveTalkII: “a system that translates hand gestures to speech through an adaptive interface.”
  • Iamascope: a kaleidoscope-like installation where users see themselves on a big screen while controlling a simple sound synthesis. He claimed it was visually responsive, but the lack of any physical reference made the sound control very non-expressive.
  • Tooka: a collaborative instrument where two performers blow into either end of a tube and have to coordinate their fingerings to play scales. The direct relationship between blowing and sound output makes the instrument easy and intuitive to control, and it also allows for a quite high degree of expressiveness.

He ended the presentation by describing a model for characterising NIMEs, but I didn’t really get the main point of that part. I’ll have to see if I can find more information in one of his papers.

One comment I found interesting came up in a discussion of mappings from an egocentric versus exocentric point of view. This is an important consideration when designing interfaces and controllers: should we take the body or the device as the point of departure? This is something I will probably have to devote some pages to in my dissertation.