New paper: MuMYO – Evaluating and Exploring the MYO Armband for Musical Interaction

Yesterday, I presented my microinteraction paper here at the NIME conference (New Interfaces for Musical Expression), organised at Louisiana State University, Baton Rouge, LA. Today I am presenting a poster based on a paper written together with two of my colleagues at UiO.

Title
MuMYO – Evaluating and Exploring the MYO Armband for Musical Interaction

Authors
Kristian Nymoen, Mari Romarheim Haugen, Alexander Refsum Jensenius

Abstract
The MYO armband from Thalmic Labs is a complete and wireless motion and muscle sensing platform. This paper evaluates the armband’s sensors and its potential for NIME applications. This is followed by a presentation of the prototype instrument MuMYO. We conclude that, despite some shortcomings, the armband has the potential of becoming a new “standard” controller in the NIME community.

BibTeX

@inproceedings{nymoen_mumyo_2015,
    address = {Baton Rouge, LA},
    title = {{MuMYO} - {Evaluating} and {Exploring} the {MYO} {Armband} for {Musical} {Interaction}},
    abstract = {The MYO armband from Thalmic Labs is a complete and wireless motion and muscle sensing platform. This paper evaluates the armband's sensors and its potential for NIME applications. This is followed by a presentation of the prototype instrument MuMYO. We conclude that, despite some shortcomings, the armband has potential of becoming a new ``standard'' controller in the NIME community.},
    booktitle = {Proceedings of the International Conference on New Interfaces For Musical Expression},
    author = {Nymoen, Kristian and Haugen, Mari Romarheim and Jensenius, Alexander Refsum},
    year = {2015}
}

ICMC 2006 proceedings details

A colleague of mine recently asked if I could help her find the bibliographic details of the ICMC 2006 proceedings. Apparently this information is not easily available online, and she had spent a great deal of research time trying to track it down.

I was lucky enough to participate in this wonderful event at Tulane University, and still have the paper version of the proceedings in my office. So here is the relevant information, in case anyone else also wonders about these details:

  • Editors (Paper chairs): Georg Essl and Ichiro Fujinaga
  • Dates: November 6–11, 2006
  • Publisher: International Computer Music Association, San Francisco, CA & The Music Department, Tulane University, New Orleans, LA
  • ISBN: 0-9713192-4-3

NIME 2013

Back from a great NIME 2013 conference in Daejeon + Seoul! For Norwegian readers out there, I have written a blog post about the conference on my head of department blog. I would have loved to write some more about the conference in English, but I think these images from my Flickr account will have to do for now:

[Photo gallery from NIME 2013 in Daejeon, 26–27 May 2013, on Flickr]

At the end of the conference it was also announced that next year’s conference will be held in London, hosted by the Embodied AudioVisual Interaction Group at Goldsmiths. Future chair Atau Tanaka presented this teaser video:

Kinectofon: Performing with shapes in planes

Yesterday, Ståle presented a paper on mocap filtering at the NIME conference in Daejeon. Today I presented a demo on using Kinect images as input to my sonomotiongram technique.

Title
Kinectofon: Performing with shapes in planes

Abstract
The paper presents the Kinectofon, an instrument for creating sounds through free-hand interaction in a 3D space. The instrument is based on the RGB and depth image streams retrieved from a Microsoft Kinect sensor device. These two image streams are used to create different types of motiongrams, which, in turn, are used as the source material for a sonification process based on inverse FFT. The instrument is intuitive to play, allowing the performer to create sound by “touching” a virtual sound wall.
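
The paper itself contains no code, but the core idea of inverse-FFT sonification can be sketched in a few lines. The following is my own minimal illustration, not the actual sonomotiongram implementation: each column of a motiongram is treated as a magnitude spectrum, given a random phase, inverse-FFT’d into a short audio frame, and overlap-added into a signal. All function and parameter names are illustrative.

```python
import numpy as np

def sonify_motiongram(motiongram, frame_len=1024, hop=512):
    """Sonify a motiongram via inverse FFT.

    `motiongram` is a 2D array (frequency_bins x time_frames) with values
    in [0, 1]; low image rows are mapped onto low frequencies. Returns a
    mono audio signal normalised to [-1, 1].
    """
    n_bins, n_frames = motiongram.shape
    out = np.zeros(hop * (n_frames - 1) + frame_len)
    window = np.hanning(frame_len)
    for t in range(n_frames):
        # Resample the image column onto the rfft bin grid.
        spectrum = np.interp(np.linspace(0, 1, frame_len // 2 + 1),
                             np.linspace(0, 1, n_bins),
                             motiongram[:, t])
        # Random phase avoids a buzzy, perfectly periodic result.
        phase = np.exp(1j * np.random.uniform(0, 2 * np.pi, spectrum.shape))
        # Inverse FFT turns the spectral frame into a short audio frame.
        frame = np.fft.irfft(spectrum * phase, n=frame_len)
        # Windowed overlap-add into the output signal.
        out[t * hop : t * hop + frame_len] += frame * window
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

With this kind of mapping, motion in a given image region directly excites the corresponding frequency band, which is what makes the “touching a virtual sound wall” metaphor work.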

Reference
Jensenius, A. R. (2013). Kinectofon: Performing with shapes in planes. In Proceedings of the International Conference on New Interfaces For Musical Expression, pages 196–197, Daejeon, Korea.

BibTeX

@inproceedings{Jensenius:2013e,
   Address = {Daejeon, Korea},
   Author = {Jensenius, Alexander Refsum},
   Booktitle = {Proceedings of the International Conference on New Interfaces For Musical Expression},
   Pages = {196--197},
   Title = {Kinectofon: Performing with Shapes in Planes},
   Year = {2013}
}

kinectofon_poster

Musikkteknologidagene 2012

Keynote
Alexander holding a keynote lecture at Musikkteknologidagene 2012 (Photo: Nathan Wolek).

Last week I gave a keynote lecture at the Norwegian music technology conference Musikkteknologidagene, organised by (and held at) the Norwegian Academy of Music and NOTAM. The talk was titled “Embodying the human body in music technology”, and was an attempt at explaining why I believe we need to put more emphasis on human-friendly technologies, and why the field of music cognition is very much connected to that of music technology. I got a comment that it would have been better to exchange “embodying” with “embedding” in my title, and I totally agree. So now I already have a title for my next talk!

Sverm demo
One of the “pieces” we did for the Sverm demo at Musikkteknologidagene 2012: three performers standing still and controlling a sine tone each based on their micromovements.

Besides my talk, we also did a small performance of parts of the Sverm project that I am working on together with an interdisciplinary group of sound, movement and light artists. We showed three parts: (1) very slow movement with changing lights, (2) sonification of the micromovements of people standing still, and (3) micromovement interaction with granular synthesis. This showcase was based on the work we have done since the last performance and seminar.
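
The actual Sverm pieces were realised with live electronics and granular synthesis, but the basic mapping in the standstill sonification can be illustrated with a rough sketch. This is my own illustration, not the project’s implementation; the scaling constant and all names are made up for the example. The idea is to reduce a stream of marker positions to a single quantity of motion, and let that micromovement control the amplitude of a sine tone.

```python
import numpy as np

def micromovement_to_sine(positions, sr=44100, duration=1.0, freq=440.0):
    """Map a performer's micromovement to the amplitude of a sine tone.

    `positions` is an (N, 3) array of marker positions (in metres) sampled
    while the performer stands still; more movement gives a louder tone.
    """
    # Quantity of motion: mean frame-to-frame displacement.
    qom = np.mean(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    # Standstill micromovements are on the order of millimetres, so scale
    # by an illustrative constant and clip to a usable amplitude range.
    amp = np.clip(qom / 0.01, 0.0, 1.0)
    # Synthesise one block of the sine tone at that amplitude.
    t = np.arange(int(sr * duration)) / sr
    return amp * np.sin(2 * np.pi * freq * t)
```

In a real-time setting the same computation would run block by block over the incoming motion capture stream, with each performer controlling their own oscillator.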

Besides the things I was involved in myself during Musikkteknologidagene, I was very happy about being “back” at the conference after a couple of years of “absence” (I had my hands full organising NIME last year). It is great to find that the conference is still alive and manages to gather people doing interesting things in and with music technology in Norway.

Sverm talking
Alexander talking about the Sverm project and fourMs motion capture lab at Musikkteknologidagene 2012 (Photo: Nathan Wolek).

When I started up the conference series back in 2005, the idea was to create a meeting place for music technology people in Norway. Fortunately, NOTAM has taken on the responsibility of finding and supporting local organisers that can host it every year. So far it has been bouncing back and forth between Oslo, Trondheim and Bergen, and I think it is now time that it moves on to Kristiansand, Tromsø and Stavanger. All these cities now have small active music technology communities, and some very interesting festivals (Punkt, Insomnia, Numusic) that it could be connected to.

As expected, the number of people attending the conference has been going up and down over the years. In general I find that it is always difficult to get people from Oslo to attend, something that I find slightly embarrassing, but which can probably be explained by the overwhelming amount of interesting things happening in this comparatively small capital at any point in time.

Snow
We had the first snow this year during Musikkteknologidagene, a good time to stay indoors at NOTAM listening to presentations.

In the first years of Musikkteknologidagene, we mainly spent the time informing each other of what we were all doing, really just getting to know each other. Over the years the focus has shifted more towards “real” presentations, and all the presentations I heard this year were very interesting and inspiring. This is a good sign that the field of music technology has matured in Norway. Several institutions have been able to start up research and educational programmes in fields related to music technology, and I think we are about to reach a critical mass of groups of people involved in the field, not just a bunch of individual researchers and artists trying to survive. This year we agreed to make a communal effort of building up a database of all institutions and individuals involved in the field, and to develop a roadmap along the lines of what was made in the S2S2 project.

All in all, this year’s Musikkteknologidagene was a fun experience, and I am already looking forward to next year’s edition.