From Basic Music Research to Medical Tool

The Research Council of Norway is currently evaluating the research being done in the humanities, and all institutions were asked to submit cases of societal impact. Obviously, basic research is by definition not aiming at societal impact in the short run, and my research definitely falls into that category. Still, it is interesting to see that some of my basic research is, indeed, on the verge of making a societal impact in the sense that policy makers like to think about. So I submitted the impact case “From Music to Medicine”, based on the system Computer-based Infant Movement Assessment (CIMA).

Musical Gestures Toolbox

CIMA is based on the Musical Gestures Toolbox, which started its life in the early 2000s, and which (in different forms) has been shared publicly since 2005.

My original aim in developing the MGT was to study musicians’ and dancers’ motion in a simple and holistic way. The focus was always on capturing as much relevant information as possible from a regular video recording, with a particular eye on the temporal development of human motion.

The MGT was first developed as standalone modules in the graphical programming environment Max, and was merged into the Jamoma framework in 2006. Jamoma is a modular system developed and used by a group of international artists, led by Timothy Place and Trond Lossius. The video analysis tools have since been used in a number of music/dance productions worldwide and are also actively used in arts education.

Studying ADHD

In 2006, I presented this research at the annual celebration of Norwegian research in the Oslo concert hall, after which Professor Terje Sagvolden asked to test the video analysis system in his research on ADHD/ADD at Oslo University Hospital. This eventually led to a collaboration in which the Musical Gestures Toolbox was used to analyse recordings from 16 rat cages in his lab. The system was also tested in the large-scale clinical ADHD study at Ullevål University Hospital in 2008 (1000+ participants). The collaboration ended abruptly with Sagvolden’s death in 2011.

Studying Cerebral Palsy

The unlikely collaboration between researchers in music and medicine was featured in a newspaper article and a TV documentary in 2008, after which physiotherapist Lars Adde from the Department of Laboratory Medicine, Women’s and Children’s Health at the Norwegian University of Science and Technology (NTNU) called me to ask whether the tools could also be used to study infants. This has led to a long and fruitful collaboration and to the development of the prototype Computer-based Infant Movement Assessment (CIMA), which is currently being tested in hospitals in Norway, the USA, India, China and Turkey. A pre-patent has been filed, and the aim is to provide a complete video-based solution for screening infants for the risk of developing cerebral palsy (CP).

It is documented that up to 18% of surviving infants who are born extremely preterm develop cerebral palsy (CP), and the total rate of neurological impairments is up to 45%. Specialist examination may be used to detect infants at risk of developing CP, but this resource is only available at some hospitals. CIMA aims to offer a standardised and affordable computer-based screening solution, so that a much larger group of infants can be screened at an early stage, and those who fall in the risk zone can receive further specialist examination. Early intervention is critical to improving the motor capacities of these infants. The success of the CIMA methods developed on the MGT framework is to a large extent based on the original focus on studying human motion through a holistic, simple and time-based approach.

The unlikely collaboration was featured in a new TV documentary in 2014.

References

  • Valle, S. C., Støen, R., Sæther, R., Jensenius, A. R., & Adde, L. (2015). Test–retest reliability of computer-based video analysis of general movements in healthy term-born infants. Early Human Development, 91(10), 555–558. http://doi.org/10.1016/j.earlhumdev.2015.07.001
  • Jensenius, A. R. (2014). From experimental music technology to clinical tool. In K. Stensæth (Ed.), Music, health, technology, and design. Oslo: Norwegian Academy of Music. Retrieved from http://urn.nb.no/URN:NBN:no-46186
  • Adde, L., Helbostad, J., Jensenius, A. R., Langaas, M., & Støen, R. (2013). Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings. Physiotherapy Theory and Practice, 29(6), 469–475. http://doi.org/10.3109/09593985.2012.757404
  • Jensenius, A. R. (2013). Some video abstraction techniques for displaying body movement in analysis and performance. Leonardo, 46(1), 53–60. http://urn.nb.no/URN:NBN:no-38076
  • Adde, L., Langaas, M., Jensenius, A. R., Helbostad, J. L., & Støen, R. (2011). Computer Based Assessment of General Movements in Young Infants using One or Two Video Recordings. Pediatric Research, 70, 295–295. http://doi.org/10.1038/pr.2011.520
  • Adde, L., Helbostad, J. L., Jensenius, A. R., Taraldsen, G., Grunewaldt, K. H., & Støen, R. (2010). Early prediction of cerebral palsy by computer-based video analysis of general movements: a feasibility study. Developmental Medicine & Child Neurology, 52(8), 773–778. http://doi.org/10.1111/j.1469-8749.2010.03629.x
  • Adde, L., Helbostad, J. L., Jensenius, A. R., Taraldsen, G., & Støen, R. (2009). Using computer-based video analysis in the study of fidgety movements. Early Human Development, 85(9), 541–547. http://doi.org/10.1016/j.earlhumdev.2009.05.003
  • Jensenius, A. R. (2007). Action–Sound: Developing Methods and Tools to Study Music-Related Body Movement (PhD thesis). University of Oslo. http://urn.nb.no/URN:NBN:no-18922

MultiControl on GitHub

Screenshot of MultiControl v0.6.2

Today I have added MultiControl to my GitHub account. Initially, I did not intend to release the source code for MultiControl, because it is so old and messy. The whole patch is built on bpatchers and on various tricks for hiding things away from the pre-Max 5 days, when presentation view did not exist.

I originally developed the Max patch back in 2004, mainly so that I could distribute a standalone application for my students to use. I have only incrementally updated it to work with newer versions of Max and OS X, and have never given it a proper overhaul.

I decided to release the code now because I get so many questions about the program. Even though there are several other good alternatives out there, a lot of people download the application each month, and I get lots of positive feedback from happy users. I also get bug reports, and occasionally some feature requests. While I do not really have time to update the patch myself, hopefully someone else will pick it up and improve it.

Happy multicontrolling!

If you did not understand anything about the above, here is a little screencast showcasing some of the functionality of MultiControl:

New publication: Non-Realtime Sonification of Motiongrams

Today I will present the paper Non-Realtime Sonification of Motiongrams at the Sound and Music Computing Conference (SMC) in Stockholm. The paper is based on a new implementation of my sonomotiongram technique, optimised for non-realtime use. I presented a realtime version of the sonomotiongram technique at ACHI 2012 and a Kinect version, the Kinectofon, at NIME earlier this year. The new paper presents the ImageSonifyer application and a collection of videos showing how it works.

Title
Non-Realtime Sonification of Motiongrams


Abstract
The paper presents a non-realtime implementation of the sonomotiongram method, a method for the sonification of motiongrams. Motiongrams are spatiotemporal displays of motion from video recordings, based on frame-differencing and reduction of the original video recording. The sonomotiongram implementation presented in this paper is based on turning these visual displays of motion into sound using FFT filtering of noise sources. The paper presents the application ImageSonifyer, accompanied by video examples showing the possibilities of the sonomotiongram method for both analytic and creative applications.

Reference
Jensenius, A. R. (2013). Non-realtime sonification of motiongrams. In Proceedings of Sound and Music Computing, pages 500–505, Stockholm.

BibTeX

 @inproceedings{Jensenius:2013f,
    Address = {Stockholm},
    Author = {Jensenius, Alexander Refsum},
    Booktitle = {Proceedings of Sound and Music Computing},
    Pages = {500--505},
    Title = {Non-Realtime Sonification of Motiongrams},
    Year = {2013}}
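
As a rough illustration of the two steps described in the abstract above (building a motiongram by frame differencing and reduction, and sonifying it by filtering noise), here is a small Python sketch using OpenCV, NumPy and SciPy. It is not the ImageSonifyer implementation; the file names and frame parameters are only placeholders:

    # Rough sketch of the sonomotiongram idea; not the ImageSonifyer code.
    import cv2
    import numpy as np
    from scipy.io import wavfile

    def motiongram(video_path):
        """Frame-differencing and reduction: one column of motion per frame."""
        cap = cv2.VideoCapture(video_path)
        prev, columns = None, []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            if prev is not None:
                diff = np.abs(gray - prev)         # motion image
                columns.append(diff.mean(axis=1))  # collapse each row to one value
            prev = gray
        cap.release()
        return np.array(columns).T                 # rows = image rows, columns = time

    def sonify(mgram, sr=44100, frame_dur=0.05):
        """Use each motiongram column as a spectral envelope for noise."""
        n = int(sr * frame_dur)
        bins = n // 2 + 1
        frames = []
        for col in mgram.T:
            env = np.interp(np.linspace(0, len(col) - 1, bins),
                            np.arange(len(col)), col)
            phase = np.exp(2j * np.pi * np.random.rand(bins))
            frames.append(np.fft.irfft(env * phase, n=n))
        audio = np.concatenate(frames)
        audio /= np.abs(audio).max() + 1e-9
        return (audio * 32767).astype(np.int16)

    mg = motiongram("recording.mov")               # placeholder file name
    wavfile.write("sonomotiongram.wav", 44100, sonify(mg))

In this sketch, vertical position in the image maps to frequency and video time maps to audio time, which is the basic idea behind treating the motiongram as a spectrogram-like control surface.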

Timelapser

I have recently started moving my development efforts over to GitHub, to keep everything in one place. Now I have also uploaded a small application I developed for a project by my mother, Norwegian sculptor Grete Refsum. She wanted to create a timelapse video of her making a new sculpture, “Hommage til kaffeselskapene”, for her installation piece Tante Vivi, fange nr. 24 127 Ravensbrück.

There is a lot of timelapse software available, but none of it fitted my needs. So I developed a small Max patch called TimeLapser. TimeLapser takes an image from a webcam at a regular interval (1 minute). Each image is saved with the time code as the file name, making it easy to use the images for documentation purposes or to assemble them into timelapse videos. The application was originally developed for an art project, but can probably be useful for other timelapse applications as well.

The application only stores separate image files, which can easily be assembled into timelapse movies using, for example, QuickTime.
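
For those curious about the mechanics, the behaviour described above can be sketched in a few lines of Python with OpenCV. This is only an illustration, not the actual Max patch; the capture interval and file naming simply mirror the description:

    # Minimal timelapse capture sketch: grab a webcam frame at a regular
    # interval and save it with a timestamped file name (illustration only).
    import time
    from datetime import datetime

    import cv2

    cap = cv2.VideoCapture(0)              # default webcam
    try:
        while True:
            ok, frame = cap.read()
            if ok:
                name = datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".png"
                cv2.imwrite(name, frame)   # e.g. 2013-08-01_14-32-00.png
            time.sleep(60)                 # one image per minute
    finally:
        cap.release()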

Below is a video showing the final timelapse of my mother’s sculpture:

KinectRecorder

I am currently working on a paper describing some further explorations of the sonifyer technique and module that I have previously published on. The new thing is that I am now using the input from a Kinect device as the source material for the sonification, which also opens up for using the depth image as an element in the process.

To be able to create figures for the paper, I needed to record the input from a Kinect to a regular video file. For that reason I have created a small Max patch called KinectRecorder, which allows for easy recording of one combined video file from the two Kinect inputs (the regular video image and the depth image). As the screenshot below shows, there is not much more to the patch than starting the video input from the Kinect and then starting the recording. Files will be stored with MJPEG compression and named with the current date and time.

KinectRecorder
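
To illustrate what the patch does, here is a rough Python/OpenCV sketch that writes two image streams side by side to an MJPEG-compressed file with a timestamped name. It is not the Max patch itself, and the Kinect grabbing is replaced by a synthetic placeholder; in a real setup the frames would come from a Kinect driver such as libfreenect:

    # Sketch of combined RGB + depth recording with MJPEG compression and a
    # timestamped file name. get_kinect_frames() is a synthetic placeholder.
    from datetime import datetime

    import cv2
    import numpy as np

    W, H, FPS = 640, 480, 30

    def get_kinect_frames():
        """Placeholder: fake RGB frame and fake 11-bit depth frame."""
        rgb = np.zeros((H, W, 3), dtype=np.uint8)
        depth = np.random.randint(0, 2048, (H, W), dtype=np.uint16)
        return rgb, depth

    filename = datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".avi"
    writer = cv2.VideoWriter(filename, cv2.VideoWriter_fourcc(*"MJPG"),
                             FPS, (2 * W, H))

    for _ in range(FPS * 5):                                      # ~5 seconds
        rgb, depth = get_kinect_frames()
        depth8 = cv2.convertScaleAbs(depth, alpha=255.0 / 2047)   # 11-bit -> 8-bit
        depth_bgr = cv2.cvtColor(depth8, cv2.COLOR_GRAY2BGR)
        writer.write(np.hstack([rgb, depth_bgr]))                 # side-by-side frame

    writer.release()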

The patch is not particularly fancy, but I imagine that it could be useful for other people interested in recording video from the Kinect, either for analytical applications or for testing performance setups when a Kinect device is not at hand. So here it is:

Below is a short video recorded with the patch, showing some basic movement patterns. This video is not particularly interesting in itself, but I can reveal that it actually leads to some interesting sonic results when run through my sonifyer technique. More on that later…