Testing reveal.js for teaching

I was at NTNU in Trondheim today, teaching a workshop on motion capture methodologies for the students in the Choreomundus master’s programme. This is an Erasmus Mundus Joint Master Degree (EMJMD) investigating dance and other movement systems (ritual practices, martial arts, games and physical theatre) as intangible cultural heritage. I am really impressed by this programme! It was a very nice and friendly group of students from all over the world, and they are experiencing a truly unique education run by the four partner universities. This is an even more complex organisational structure than that of the MCT programme I am involved in myself.

In addition to running a workshop with the Qualisys motion capture system they have (similar to the one in our fourMs Lab at RITMO), I was also asked to present an introduction to motion capture in general, as well as some video-based methods. I have previously made the more technically oriented tutorial Quantitative Video analysis for Qualitative Research, which describes how to use the Musical Gestures Toolbox for Matlab. Since Matlab was outside the scope of this session, I decided to create a non-technical presentation focusing more on the concepts.

Most of my recent presentations have been made in Google Presentation, a tool that really shows the potential of web-based applications (yes, I think it has matured to the point where we can actually talk about an application in the browser). The big benefit of a web-based presentation solution is that I can share links to the presentation both before and after it is held, and I avoid the hassle of moving large video files around.

Even though Google Presentation has been working fine, I would prefer to move to an open source solution. I have also long wanted to try out markdown-based presentation solutions, since I use markdown for most of my other writing. I have tried a few different solutions, but haven’t really found anything that works smoothly enough. Many of them add too much complexity to the way you need to write your markdown, which removes some of the weightlessness of the approach. The easiest and best-looking solution so far seems to be reveal.js, but I haven’t really found a way to integrate it into my workflow.

Parallel to my presentation experimentation, I have also been exploring Jupyter Notebook for analysis. The nice thing about this approach is that you can write cells of code that are evaluated on the fly and shown seamlessly in the browser. This is great for developing, sharing, and teaching code, and also for moving towards more Open Research practices.

One cool thing I discovered is that Jupyter Notebook has built-in support for reveal.js! This means that you can export a complete notebook as a nice presentation. This is definitely something I am going to explore more in my coding tutorials, but for today’s workshop I ended up using it with markdown cells only.
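The export itself is a one-step conversion, either from the command line with jupyter nbconvert --to slides or through the nbconvert Python API. Here is a minimal sketch of the latter; the notebook name mocap_intro.ipynb is just a placeholder, not one of my actual files:

    import nbformat
    from nbconvert import SlidesExporter

    # Read an existing notebook (placeholder file name).
    nb = nbformat.read("mocap_intro.ipynb", as_version=4)

    # Convert it to a reveal.js HTML slide deck.
    body, _ = SlidesExporter().from_notebook_node(nb)

    with open("mocap_intro.slides.html", "w", encoding="utf-8") as f:
        f.write(body)

The resulting HTML file opens directly in the browser and can be shared like any other web page.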

I created three notebooks, one for each topic I was talking about, and exported them as presentations.

A really cool feature in reveal.js is the ability to move in two dimensions: you can keep track of the main sections of the presentation horizontally, while filling in more content vertically. By hitting the Escape key, you can “zoom” out and look at the entire presentation, as shown below:

The overview mode in reveal.js presentations.

The tricky part of using Jupyter Notebook for plain markdown presentations is that you need to make an individual cell for each part of the presentation. This works, but it would make even more sense if I had some Python code in between. That is for next time, though.
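To give an idea of what this looks like, here is a small sketch (the outline content and file name are made up) that uses nbformat to build such a notebook programmatically. Each markdown cell is tagged with the slideshow metadata that the reveal.js export uses: “slide” starts a new horizontal slide, “subslide” a vertical one, and “fragment” reveals content step by step.

    import nbformat
    from nbformat.v4 import new_notebook, new_markdown_cell

    # Hypothetical outline: each (text, slide_type) pair becomes one markdown cell.
    outline = [
        ("# Motion capture", "slide"),                 # new horizontal slide
        ("## Optical systems", "subslide"),            # vertical slide below it
        ("- marker-based\n- markerless", "fragment"),  # revealed step by step
    ]

    nb = new_notebook()
    for text, slide_type in outline:
        cell = new_markdown_cell(text)
        cell.metadata["slideshow"] = {"slide_type": slide_type}
        nb.cells.append(cell)

    nbformat.write(nb, "mocap_intro.ipynb")

The same metadata can also be set by hand in the notebook interface via the cell toolbar (View > Cell Toolbar > Slideshow in the classic notebook).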

Musical Gestures Toolbox for Matlab

Yesterday I presented the Musical Gestures Toolbox for Matlab in the late-breaking demo session at the ISMIR conference in Paris.

The Musical Gestures Toolbox for Matlab (MGT) aims at assisting music researchers with importing, preprocessing, analyzing, and visualizing video, audio, and motion capture data in a coherent manner within Matlab.

Most of the concepts in the toolbox are based on the Musical Gestures Toolbox that I first developed for Max more than a decade ago. Much of the Matlab coding for the new version was done by Bo Zhou as part of a master’s thesis.

The new MGT is available on GitHub, and there is a more or less complete introduction to the main features in the software carpentry workshop Quantitative Video analysis for Qualitative Research.

Nordic Sound and Music Computing Network up and running

I am super excited about our new Nordic Sound and Music Computing Network, which has just started up with funding from the Nordic Research Council.

This network brings together a group of internationally leading sound and music computing researchers from institutions in five Nordic countries: Aalborg University, Aalto University, KTH Royal Institute of Technology, University of Iceland, and University of Oslo. The network covers the field of sound and music from the “soft” to the “hard,” including the arts and humanities, and the social and natural sciences, as well as engineering, and involves a high level of technological competency.

At the University of Oslo we have one open PhD fellowship connected to the network, with an application deadline of 4 April 2018. We invite PhD proposals that focus on sound/music interaction with periodic/rhythmic human body motion (walking, running, training, etc.). The appointed candidate is expected to carry out observation studies of human body motion in real-life settings, using different types of mobile motion capture systems (full-body suit and individual trackers). Results from the analysis of these observation studies should form the basis for the development of prototype systems for using such periodic/rhythmic motion in musical interaction.

The appointed candidate will benefit from the combined expertise within the NordicSMC network, and is expected to carry out one or more short-term scientific missions to the other partners. At UiO, the candidate will be affiliated with RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This interdisciplinary centre focuses on rhythm as a structuring mechanism for the temporal dimensions of human life. RITMO researchers span the fields of musicology, psychology and informatics, and have access to state-of-the-art facilities in sound/video recording, motion capture, eye tracking, physiological measurements, various types of brain imaging (EEG, fMRI), and rapid prototyping and robotics laboratories.

New article: Group behaviour and interpersonal synchronization to electronic dance music

I am happy to announce the publication of a follow-up study to our previous paper on group dancing to EDM, and a technical paper on motion capture of groups of people. In this new study we successfully managed to track groups of 9–10 people dancing in a semi-ecological setup in our motion capture lab. We also found several interesting patterns in how people synchronize with both the music and each other.

Citation:
Solberg, R. T., & Jensenius, A. R. (2017). Group behaviour and interpersonal synchronization to electronic dance music. Musicae Scientiae.

Abstract:
The present study investigates how people move and relate to each other – and to the dance music – in a club-like setting created within a motion capture laboratory. Three groups of participants (29 in total) each danced to a 10-minute-long DJ mix consisting of four tracks of electronic dance music (EDM). Two of the EDM tracks had little structural development, while the two others included a typical “break routine” in the middle of the track, consisting of three distinct passages: (a) “breakdown”, (b) “build-up” and (c) “drop”. The motion capture data show similar bodily responses for all three groups in the break routines: a sudden decrease and increase in the general quantity of motion. More specifically, the participants demonstrated an improved level of interpersonal synchronization after the drop, particularly in their vertical movements. Furthermore, the participants’ activity increased and became more pronounced after the drop. This may suggest that the temporal removal and reintroduction of a clear rhythmic framework, as well as the use of intensifying sound features, have a profound effect on a group’s beat synchronization. Our results further suggest that the musical passages of EDM efficiently lead to the entrainment of a whole group, and that a break routine effectively “re-energizes” the dancing.


New publication: Sonic Microinteraction in “the Air”

I am happy to announce a new book chapter based on the artistic-scientific research in the Sverm and MICRO projects.

Citation: Jensenius, A. R. (2017). Sonic Microinteraction in “the Air.” In M. Lesaffre, P.-J. Maes, & M. Leman (Eds.), The Routledge Companion to Embodied Music Interaction (pp. 431–439). New York: Routledge.
Abstract: This chapter looks at some of the principles involved in developing conceptual methods and technological systems concerning sonic microinteraction, a type of interaction with sounds that is generated by bodily motion at a very small scale. I focus on the conceptualization of interactive systems that can exploit the smallest possible micromotion that people are able to both perceive and produce. It is also important that the interaction that is taking place allow for a recursive element via a feedback loop from the sound produced back to the performer producing it.