IMRAD and PICO

Reading the latest issue of the Norwegian researchers' magazine Forskerforum, I learned about PICO as an alternative model to the often-used IMRAD approach to scientific writing. To summarize:

  • IMRAD = Introduction, Methods, Results and Discussion
  • PICO = Problem, Intervention, Comparison, Outcome

It seems like PICO comes from clinical practice in medicine. Not sure if this helps music research that much, but I do like the structured approach to organising paper writing.

Lots of NIME publications

I am getting ready to travel to Sydney for the upcoming NIME 2010 conference, where I am involved in no fewer than five papers:

Glass instruments – from pitch to timbre
Frounberg, I., A. R. Jensenius, and K. T. Innervik (2010)
The paper reports on the development of prototypes of glass instruments. The focus has been on developing acoustic instruments specifically designed for electronic treatment, in which timbral qualities have had priority over pitch. The paper starts with a brief historical overview of glass instruments and their artistic use. Then follows an overview of the glass-blowing process. Finally, the musical use of the instruments is discussed.

Evaluating the subjective effects of microphone placement on glass instruments
Jensenius, A. R., K. T. Innervik, and I. Frounberg (2010)
Abstract: We report on a study of perceptual and acoustic features related to the placement of microphones around a custom-made glass instrument. Different microphone setups were tested: above, inside and outside the instrument, and at different distances. The sounds were evaluated by an expert performer, and further qualitative and quantitative analyses have been carried out. Preference was given to the recordings from microphones placed close to the rim of the instrument, either from the inside or the outside.

Searching for cross-individual relationships between sound and movement features using an SVM classifier
Nymoen, K., K. Glette, S. A. Skogstad, J. Tørresen, and A. R. Jensenius (2010)
In this paper we present a method for studying relationships between features of sound and features of movement. The method has been tested by carrying out an experiment with people moving an object in space along with short sounds. 3D position data of the object was recorded, and several features were calculated from each of the recordings. These features were provided as input to a classifier, which was able to classify the recorded actions satisfactorily, particularly when taking into account that the only link between the actions performed by the different subjects was the sound they heard while making the action.
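To make the pipeline in the abstract more concrete, here is a toy sketch of the same idea: summary features are computed from 3D position recordings and fed to an SVM. The synthetic data, the feature choices and the use of scikit-learn are my own assumptions for illustration, not the authors' actual method:

```python
# Toy sketch: classifying short motion recordings by the sound they
# accompanied, using simple summary features and an SVM classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_recording(sound_class, n_frames=100):
    """Synthesise a 3D position trajectory; each 'sound' biases the motion speed."""
    speed = 0.5 + sound_class              # class-dependent average step size
    steps = rng.normal(0, speed, size=(n_frames, 3))
    return np.cumsum(steps, axis=0)        # random walk in 3D space

def movement_features(pos):
    """Summary features of one recording: mean/std of frame-to-frame speed, position spread."""
    vel = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    return [vel.mean(), vel.std(), pos.std()]

X, y = [], []
for sound_class in range(3):               # three different sounds
    for _ in range(20):                    # twenty recordings per sound
        X.append(movement_features(make_recording(sound_class)))
        y.append(sound_class)

# Cross-validated accuracy: can the SVM recover which sound each action followed?
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

The interesting part of the actual study is of course that the recordings come from different people, so a good score means the sound itself, not individual movement style, drives the classification.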

Using IR optical marker based motion capture for exploring musical interaction
Skogstad, S. A., A. R. Jensenius, and K. Nymoen (2010)
The paper presents a conceptual overview of how optical infrared marker-based motion capture systems (IrMoCap) can be used in musical interaction. First we present a review of related work on using IrMoCap for musical control. This is followed by a discussion of possible features that can be exploited. Finally, the question of mapping movement features to sound features is presented and discussed.

Wireless sensor data collection based on ZigBee communication
Torresen, J., E. Renton, and A. R. Jensenius (2010)
This paper presents a comparison of different configurations of a wireless sensor system for capturing human motion. The system consists of sensor elements which wirelessly transfer motion data to a receiver element. Each sensor element consists of a microcontroller, one or more accelerometers and a radio transceiver. The receiver element consists of a radio receiver connected through a microcontroller to a computer for real-time sound synthesis. The wireless transmission between the sensor elements and the receiver element is based on the low-rate IEEE 802.15.4/ZigBee standard. A configuration with several accelerometers connected by wire to a wireless sensor element is compared to using multiple wireless sensor elements with only one accelerometer in each. The study shows that it would be feasible to connect 5-6 accelerometers in the given setups. Sensor data processing can be done either in the receiver element or in the sensor element. For various reasons it can be reasonable to implement some sensor data processing in the sensor element. The paper also looks at how much time would typically be needed for a simple pre-processing task.
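The "simple pre-processing task" mentioned could be something as basic as smoothing raw accelerometer samples before transmission. A minimal sketch of such a filter (my own illustration, assuming a moving average; not code from the paper):

```python
# Hypothetical pre-processing step: smooth a stream of raw accelerometer
# readings with a moving average over the last few samples.
from collections import deque

def moving_average(window_size):
    """Return a stateful function that smooths a stream of samples."""
    window = deque(maxlen=window_size)   # keeps only the newest samples
    def step(sample):
        window.append(sample)
        return sum(window) / len(window)
    return step

smooth = moving_average(4)
samples = [0.0, 1.0, 1.0, 1.0, 5.0]      # raw readings from one axis
smoothed = [smooth(s) for s in samples]  # each value averages up to 4 samples
print(smoothed)
```

Whether this runs on the sensor element's microcontroller or on the receiver is exactly the kind of trade-off the paper examines: local processing costs CPU time on the sensor but can reduce what has to go over the radio.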

Electronic versions will be available next week.

New GDIF + SpatDIF wiki: xDIF

Today I am starting my post-semester activities. Not all grading is finished yet, nor are all administrative meetings over for a while, but we had the last part of the official teaching program yesterday, so now I at least feel that the university summer has started. This means that I will (finally) start focusing more on doing research again, and I have several papers that I will try to finish over the coming months.

But first of all, I would like to officially thank IRCAM for putting together the GDIF/SpatDIF meeting a couple of weeks ago. We had a great meeting with lots of interesting discussions. The conclusion from the meeting was that many people think that continuing the development of GDIF and SpatDIF is worthwhile, and also that the two formats have many things in common. To create a base for development, I have set up a wiki on our server called xDIF (which can also be read *DIF), so that other DIFs can join in as well.

To avoid spam in the wiki, user registration has been restricted, so anyone who is interested in contributing needs to contact me to get a user account (please do!).

NTNU PhD defense 2

Music technology research is a fairly small field worldwide, and even smaller in Norway. I am therefore very happy that Andreas Bergsland defended his PhD at NTNU last Friday. He has done some great work on voice in electroacoustic music, more specifically on some of Paul Lansky's pieces.

The thesis, software and audio examples are available online.

Abstract:

This dissertation presents a framework for describing and understanding the experience of voices in acousmatic electroacoustic music and related genres. The framework is developed on a phenomenological basis, where the author's own listening experience has been the main object of study. One component of the framework has been to group aspects that can potentially be attended to into experiential domains based on some common feature, relationship or function. Four experiential domains related to the voice are presented along with three domains not directly related to the voice. For each of these domains, a set of concepts is introduced allowing for qualification and description of features of the experience. The second component of the framework, the maximal-minimal model, is partly described through these domains. This model presents maximal and minimal voice as loosely defined poles constituting end points on a continuum on which experienced voices can be localized. Here, maximal voice, which parallels the informative and clearly articulated speaking voice dominant in the radio medium, is described as the converging fulfillment of seven premises. These premises are seen as partly interconnected conditions related to particular aspects or features of the experience of voice. At the other end of the continuum, minimal voice is defined as a boundary zone between voice and non-voice, a zone which is related to the negative fulfillment of the seven premises. A number of factors are presented that can potentially affect an evaluation of experiences according to the premises, along with musical excerpts that exemplify different evaluation categories along the continuum. Finally, both components of the framework are applied in an evaluation and description of the author's experience of Paul Lansky's Six Fantasies on a Poem by Thomas Campion.