New paper: MuMYO – Evaluating and Exploring the MYO Armband for Musical Interaction

Yesterday, I presented my microinteraction paper here at the NIME conference (New Interfaces for Musical Expression), organised at Louisiana State University, Baton Rouge, LA. Today I am presenting a poster based on a paper written together with two of my colleagues at UiO.

Title
MuMYO – Evaluating and Exploring the MYO Armband for Musical Interaction

Authors
Kristian Nymoen, Mari Romarheim Haugen, Alexander Refsum Jensenius

Abstract
The MYO armband from Thalmic Labs is a complete and wireless motion and muscle sensing platform. This paper evaluates the armband’s sensors and its potential for NIME applications. This is followed by a presentation of the prototype instrument MuMYO. We conclude that, despite some shortcomings, the armband has potential of becoming a new “standard” controller in the NIME community.
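For those who want to experiment with the armband themselves, here is a minimal Python sketch of how MYO data could be received over OSC, assuming some bridge application is forwarding the armband's data to localhost. The port and the address patterns ("/myo/emg", "/myo/orientation") are my own assumptions, not a fixed standard:

# Minimal sketch: receiving MYO data in Python over OSC.
# Assumes a bridge application is already streaming armband data to
# localhost:8000 -- the address patterns below are illustrative.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_emg(address, *channels):
    # Eight EMG channels from the muscle sensors around the forearm.
    print(f"EMG: {channels}")

def on_orientation(address, w, x, y, z):
    # Orientation from the inertial sensors, here as a quaternion.
    print(f"Orientation: {w:.2f} {x:.2f} {y:.2f} {z:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/myo/emg", on_emg)
dispatcher.map("/myo/orientation", on_orientation)

server = BlockingOSCUDPServer(("127.0.0.1", 8000), dispatcher)
server.serve_forever()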

BibTeX

@inproceedings{nymoen_mumyo_2015,
    address = {Baton Rouge, LA},
    title = {{MuMYO} - {Evaluating} and {Exploring} the {MYO} {Armband} for {Musical} {Interaction}},
    abstract = {The MYO armband from Thalmic Labs is a complete and wireless motion and muscle sensing platform. This paper evaluates the armband's sensors and its potential for NIME applications. This is followed by a presentation of the prototype instrument MuMYO. We conclude that, despite some shortcomings, the armband has potential of becoming a new ``standard'' controller in the NIME community.},
    booktitle = {Proceedings of the International Conference on New Interfaces For Musical Expression},
    author = {Nymoen, Kristian and Haugen, Mari Romarheim and Jensenius, Alexander Refsum},
    year = {2015}
}

New publication: “To Gesture or Not” (NIME 2014)

This week I am participating at the NIME conference, organised at Goldsmiths, University of London. I am doing some administrative work as chair of the NIME steering committee, and I am also happy to present a paper tomorrow:

Title
To Gesture or Not? An Analysis of Terminology in NIME Proceedings 2001–2013

Links
Paper (PDF)
Presentation (HTML)
Spreadsheet with summary of data (ODS)
OS X shell script used for analysis

Abstract
The term ‘gesture’ has represented a buzzword in the NIME community since the beginning of its conference series. But how often is it actually used, what is it used to describe, and how does its usage here differ from its usage in other fields of study? This paper presents a linguistic analysis of the motion-related terminology used in all of the papers published in the NIME conference proceedings to date (2001–2013). The results show that ‘gesture’ is in fact used in 62% of all NIME papers, which is a significantly higher percentage than in other music conferences (ICMC and SMC), and much more frequently than it is used in the HCI and biomechanics communities. The results from a collocation analysis support the claim that ‘gesture’ is used broadly in the NIME community, and indicate that it ranges from the description of concrete human motion and system control to quite metaphorical applications.
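As a flavour of the kind of counting involved (the actual analysis was done with the shell script linked above), here is a minimal Python sketch that counts how many plain-text papers in a folder mention ‘gesture’. The folder name and the word pattern are illustrative, not the paper's actual pipeline:

# Sketch: count how many papers in a folder of plain-text proceedings
# use "gesture" (or gestures, gestural, ...). Folder name is illustrative.
import re
from pathlib import Path

papers = list(Path("nime_papers_txt").glob("*.txt"))
pattern = re.compile(r"\bgestur\w*", re.IGNORECASE)

hits = sum(1 for p in papers if pattern.search(p.read_text(errors="ignore")))
print(f"{hits} of {len(papers)} papers ({hits / len(papers):.0%}) use the term")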

Reference
Jensenius, A. R. (2014). To gesture or not? An analysis of terminology in NIME proceedings 2001–2013. In Proceedings of the International Conference on New Interfaces For Musical Expression, pages 217–220, London.

BibTeX

@inproceedings{Jensenius:2014c,
    Address = {London},
    Author = {Jensenius, Alexander Refsum},
    Booktitle = {Proceedings of the International Conference on New Interfaces For Musical Expression},
    Pages = {217--220},
    Title = {To Gesture or Not? {A}n Analysis of Terminology in {NIME} Proceedings 2001--2013},
    Year = {2014}}

New publication: An Action-Sound Approach to Teaching Interactive Music

My paper titled An action–sound approach to teaching interactive music has recently been published by Organised Sound. The paper is based on some of the theoretical ideas on action-sound couplings developed in my PhD, combined with an account of how I designed the course Interactive Music around such an approach to music technology.

Abstract
The conceptual starting point for an ‘action-sound approach’ to teaching music technology is the acknowledgment of the couplings that exist in acoustic instruments between sounding objects, sound-producing actions and the resultant sounds themselves. Digital music technologies, on the other hand, are not limited to such natural couplings, but allow for arbitrary new relationships to be created between objects, actions and sounds. The endless possibilities of such virtual action-sound relationships can be exciting and creatively inspiring, but they can also lead to frustration among performers and confusion for audiences. This paper presents the theoretical foundations for an action-sound approach to electronic instrument design and discusses the ways in which this approach has shaped the undergraduate course titled ‘Interactive Music’ at the University of Oslo. In this course, students start out by exploring various types of acoustic action-sound couplings before moving on to designing, building, performing and evaluating both analogue and digital electronic instruments from an action-sound perspective.
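As a small illustration of the difference (my own, not from the paper): a coupling-like mapping lets the sound follow the energy of the action, as in an acoustic instrument, while a digital instrument can just as easily impose an arbitrary relationship:

# Illustrative sketch (not from the paper): two ways of mapping a
# sound-producing action to sound parameters.
def coupled_mapping(strike_velocity):
    # Loudness follows the energy of the action, as on a piano.
    return {"amplitude": min(1.0, strike_velocity), "pitch_hz": 440.0}

def arbitrary_mapping(strike_velocity):
    # Pitch follows the action instead; nothing in the physics demands this.
    return {"amplitude": 0.8, "pitch_hz": 220.0 + 880.0 * strike_velocity}

for v in (0.2, 0.9):
    print(v, coupled_mapping(v), arbitrary_mapping(v))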

Reference
Jensenius, A. R. (2013). An action–sound approach to teaching interactive music. Organised Sound, 18(2):178–189.

BibTeX

@article{Jensenius:2013b,
 Author = {Jensenius, Alexander Refsum},
 Journal = {Organised Sound},
 Number = {2},
 Pages = {178--189},
 Title = {An Action--Sound Approach to Teaching Interactive Music},
 Volume = {18},
 Year = {2013}}
 

Definitions: Motion, Action, Gesture

I have been discussing definitions of the terms motion/movement, action and gesture several times before on this blog (for example here and here). Here is a summary of my current take on these three concepts:

Motion: displacement of an object in space over time. This object could be a hand, a foot, a mobile phone, a rod, whatever. Motion is an objective entity, and can be recorded with a motion capture system. A motion capture system could be anything from a simple slider (1-dimensional), to a mouse (2-dimensional), to a camera-based tracking system (3-dimensional) or an inertial system (6-dimensional: 3D position and 3D orientation). I have previously also discussed the difference between motion and movement. Since motion is a continuous phenomenon, it does not make sense to talk about it in the plural form “motions”. It makes more sense to talk about one or more motion sequences, and in most cases it makes even more sense to talk about individual actions.
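To make this concrete, here is a minimal sketch of what a single 6-dimensional motion sample might look like, and how speed can be computed from successive samples. The class and field names are my own illustration, not any particular motion capture format:

# Sketch: motion as displacement of an object in space over time.
# A 6-dimensional (position + orientation) sample, as an inertial
# system might deliver; names and units are illustrative.
from dataclasses import dataclass
import math

@dataclass
class MotionSample:
    t: float                            # time in seconds
    pos: tuple[float, float, float]     # 3D position (metres)
    orient: tuple[float, float, float]  # 3D orientation (Euler angles, radians)

def speed(a: MotionSample, b: MotionSample) -> float:
    # Displacement between two samples divided by elapsed time.
    return math.dist(a.pos, b.pos) / (b.t - a.t)

s0 = MotionSample(0.00, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
s1 = MotionSample(0.01, (0.002, 0.0, 0.001), (0.0, 0.0, 0.0))
print(f"speed: {speed(s0, s1):.3f} m/s")  # objective, directly measurable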

Action: a goal-directed motion (or force) sequence, for example picking up a stone from the ground or playing a piano tone. Actions may have a clear beginning and end, but they may also overlap due to coarticulation, such as when playing a series of tones on the piano. This uncertainty as to how actions should be segmented (or chunked) is what makes them subjective entities. As such, I do not think it is possible to measure an action directly, since there is no objective measure for when an action begins or ends, or how it is organised in relation to other actions. But, based on knowledge about human cognition, it is possible to create systems that estimate various action features from measurements of motion, as the sketch below illustrates.
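Here is a naive sketch of such an estimation: segmenting a stream of speed values into candidate actions wherever the speed stays above a threshold. The threshold value and the approach itself are my own illustration, chosen for simplicity:

# Sketch: estimating action segments from motion measurements.
# A naive strategy -- treat any run of samples whose speed exceeds a
# threshold as one candidate action. Real segmentation is harder,
# precisely because action boundaries are subjective and actions can
# overlap through coarticulation.
def segment_actions(speeds, threshold=0.05):
    # Return (start_index, end_index) pairs for runs above the threshold.
    segments, start = [], None
    for i, s in enumerate(speeds):
        if s > threshold and start is None:
            start = i
        elif s <= threshold and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(speeds)))
    return segments

speeds = [0.0, 0.1, 0.3, 0.2, 0.0, 0.0, 0.4, 0.1, 0.0]
print(segment_actions(speeds))  # -> [(1, 4), (6, 8)]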

Gesture: the meaning being expressed through an action or motion. A gesture is not the same as an action or motion, although it is related to both. As such, a gesture can be seen as a semiotic sign, in which the meaning is conveyed through an action, but this meaning is highly subjective and dependent on the cultural context in which the action is carried out. Also, the same meaning can be conveyed through different types of physical actions. For example, the meaning you convey when you wave “good-bye” to someone may be independent of whether you do it with the left or the right arm, of the size of the action, etc.

Unfortunately, with the popularity of motion and gesture studies in recent years, I see that many people use the term gesture more or less synonymously with action or motion. This is particularly the case in the field of “gesture recognition” in various branches of human-computer interaction (HCI). I think this is unfortunate, because we lose the precision with which we can describe the three different phenomena. If we track continuous motion in time and space, it is “motion tracking”. If we aim at recognising certain physical patterns in time and space, I would call it “action recognition”, unless we are looking for meanings attached to those actions. “Gesture recognition” I would only use if we actually recognise the meaning attached to some actions or motion. An example would be recognising the emotional quality of a violinist’s performance. That, however, is something very different from tracking the bowing style.

[Figure: An illustration of my definition of the difference between motion and action]