Tag Archives: jamoma

From Basic Music Research to Medical Tool

The Research Council of Norway is evaluating research in the humanities these days, and all institutions were asked to submit cases of societal impact. Obviously, basic research is by definition not aiming at societal impact in the short run, and my research definitely falls into that category. Still, it is interesting to see that some of my basic research is, indeed, on the verge of making a societal impact in the sense that policy makers like to think about. So I submitted the impact case “From Music to Medicine”, based on the system Computer-based Infant Movement Assessment (CIMA).

Musical Gestures Toolbox

CIMA is based on the Musical Gestures Toolbox, which started its life in the early 2000s, and which (in different forms) has been shared publicly since 2005.

My original aim in developing the MGT was to study musicians’ and dancers’ motion in a simple and holistic way. The focus was always on capturing as much relevant information as possible from a regular video recording, with a particular eye on the temporal development of human motion.

The MGT was first developed as standalone modules in the graphical programming environment Max, and was merged into the Jamoma framework in 2006. Jamoma is a modular system developed and used by a group of international artists, led by Timothy Place and Trond Lossius. The video analysis tools have since been used in a number of music/dance productions worldwide and are also actively used in arts education.

Studying ADHD

In 2006, I presented this research at the annual celebration of Norwegian research in the Oslo concert hall, after which professor Terje Sagvolden asked to test the video analysis system in his research on ADHD/ADD at Oslo University Hospital. This eventually led to a collaboration in which the Musical Gestures Toolbox was used to analyse 16 rat cages in his lab. The system was also tested in the large-scale clinical ADHD study at Ullevål University Hospital in 2008 (1000+ participants). The collaboration ended abruptly with Sagvolden’s death in 2011.

Studying Cerebral Palsy

The unlikely collaboration between researchers in music and medicine was featured in a newspaper article and a TV documentary in 2008, after which physiotherapist Lars Adde from the Department of Laboratory Medicine, Women’s and Children’s Health at the Norwegian University of Science and Technology (NTNU) called me to ask whether the tools could also be used to study infants. This has led to a long and fruitful collaboration and the development of the prototype Computer-based Infant Movement Assessment (CIMA), which is currently being tested in hospitals in Norway, the USA, India, China and Turkey. A pre-patent has been filed, and the aim is to provide a complete video-based solution for screening infants for the risk of developing cerebral palsy (CP).

It is documented that up to 18% of surviving infants who are born extremely preterm develop cerebral palsy (CP), and the total rate of neurological impairments is up to 45%. Specialist examination can be used to detect infants at risk of developing CP, but this resource is only available at some hospitals. CIMA aims to offer a standardised and affordable computer-based screening solution so that a much larger group of infants can be screened at an early stage, and those who fall in the risk zone can receive further specialist examination. Early intervention is critical to improving the motor capacities of these infants. The success of the CIMA methods built on the MGT framework is to a large part based on the original focus on studying human motion through a holistic, simple and time-based approach.

The unlikely collaboration was featured in a new TV documentary in 2014.


  • Valle, S. C., Støen, R., Sæther, R., Jensenius, A. R., & Adde, L. (2015). Test–retest reliability of computer-based video analysis of general movements in healthy term-born infants. Early Human Development, 91(10), 555–558. http://doi.org/10.1016/j.earlhumdev.2015.07.001
  • Jensenius, A. R. (2014). From experimental music technology to clinical tool. In K. Stensæth (Ed.), Music, health, technology, and design. Oslo: Norwegian Academy of Music. Retrieved from http://urn.nb.no/URN:NBN:no-46186
  • Adde, L., Helbostad, J., Jensenius, A. R., Langaas, M., & Støen, R. (2013). Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings. Physiotherapy Theory and Practice, 29(6), 469–475. http://doi.org/10.3109/09593985.2012.757404
  • Jensenius, A. R. (2013). Some video abstraction techniques for displaying body movement in analysis and performance. Leonardo, 46(1), 53–60. http://urn.nb.no/URN:NBN:no-38076
  • Adde, L., Langaas, M., Jensenius, A. R., Helbostad, J. L., & Støen, R. (2011). Computer Based Assessment of General Movements in Young Infants using One or Two Video Recordings. Pediatric Research, 70, 295–295. http://doi.org/10.1038/pr.2011.520
  • Adde, L., Helbostad, J. L., Jensenius, A. R., Taraldsen, G., Grunewaldt, K. H., & Støen, R. (2010). Early prediction of cerebral palsy by computer-based video analysis of general movements: a feasibility study. Developmental Medicine & Child Neurology, 52(8), 773–778. http://doi.org/10.1111/j.1469-8749.2010.03629.x
  • Adde, L., Helbostad, J. L., Jensenius, A. R., Taraldsen, G., & Støen, R. (2009). Using computer-based video analysis in the study of fidgety movements. Early Human Development, 85(9), 541–547. http://doi.org/10.1016/j.earlhumdev.2009.05.003
  • Jensenius, A. R. (2007). Action–Sound: Developing Methods and Tools to Study Music-Related Body Movement (PhD thesis). University of Oslo.
Performing with the Norwegian Noise Orchestra

Yesterday, I performed with the Norwegian Noise Orchestra at Betong in Oslo, at a concert organised by Dans for Voksne. The orchestra is an ad-hoc group of noisy improvisers, and I immediately felt at home. The performance lasted for 12 hours, from noon to midnight, and I performed for two hours in the afternoon.

For the performance I used my Soniperforma patch, based on the sonifyer technique and the Jamoma module I developed a couple of years ago (jmod.sonifyer~). The technique creates a motion image from the live camera input (the webcam of my laptop in this case), uses it to draw a motiongram over time, and then converts the motiongram to sound through an “inverse FFT” process.
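The “inverse FFT” idea can be sketched in a few lines of Python. This is my own illustration of the general principle, not the actual jmod.sonifyer~ code: each motiongram column is treated as a magnitude spectrum (low rows mapped to low frequencies), given random phases, and inverse-transformed into a short grain of audio.

```python
# Sketch of the "inverse FFT" sonification principle behind jmod.sonifyer~
# (an illustration under my own assumptions, not the Jamoma implementation).
import numpy as np

def sonify_column(column, n_fft=1024, rng=None):
    """Turn one motiongram column (vertical motion profile) into an audio grain."""
    rng = rng or np.random.default_rng(0)
    bins = n_fft // 2 + 1
    # Resample the column onto the positive-frequency FFT bins.
    mags = np.interp(np.linspace(0, 1, bins),
                     np.linspace(0, 1, len(column)), column)
    phases = rng.uniform(0, 2 * np.pi, bins)   # random phase per bin
    spectrum = mags * np.exp(1j * phases)
    grain = np.fft.irfft(spectrum, n=n_fft)    # real-valued audio grain
    peak = np.max(np.abs(grain))
    return grain / peak if peak > 0 else grain # normalise to [-1, 1]

def sonify_motiongram(motiongram, n_fft=1024):
    """Concatenate one grain per motiongram column into an audio signal."""
    return np.concatenate([sonify_column(c, n_fft) for c in motiongram.T])

# Example: a fake 64-row, 10-column motiongram with motion in the middle rows,
# which should produce energy around the corresponding frequency band.
mg = np.zeros((64, 10))
mg[28:36, :] = 1.0
audio = sonify_motiongram(mg)
```

In the real-time patch this happens column by column as the motiongram is drawn, so the sound follows the motion as it unfolds.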

In the performance I experimented with how different types of video filters and effects influenced the sonic output. The end result was, in fact, quite noisy, as it should be at a noise performance.

To document my contribution, I have made a quick and dirty edit of some of the video recordings I did during the performance. Unfortunately, the audio recorded by the cameras does not do justice to the excellent noise in the venue, but it gives an impression of what was going on.

Paper #1 at SMC 2012: Evaluation of motiongrams

Today I presented the paper Evaluating how different video features influence the visual quality of resultant motiongrams at the Sound and Music Computing conference in Copenhagen.


Motiongrams are visual representations of human motion, generated from regular video recordings. This paper evaluates how different video features may influence the generated motiongram: inversion, colour, filtering, background, lighting, clothing, video size and compression. It is argued that the proposed motiongram implementation is capable of visualising the main motion features even with quite drastic changes in all of the above mentioned variables.
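The motiongram idea summarised in the abstract can be sketched as follows. This is a minimal Python illustration of the general technique, not the Max implementation evaluated in the paper: a motion image is the difference between consecutive frames, and collapsing each motion image to a single column, then stacking the columns over time, yields the motiongram.

```python
# Minimal motiongram sketch (an illustration of the general idea,
# not the Max/Jamoma implementation evaluated in the paper).
import numpy as np

def motiongram(frames, threshold=0.1):
    """frames: array of shape (time, height, width), grayscale in [0, 1].

    Returns a (height, time-1) image: one column per consecutive frame pair.
    """
    # Motion images: absolute frame-to-frame differences.
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    diffs[diffs < threshold] = 0.0        # simple noise filtering
    # Collapse each motion image to one column (mean over width),
    # then stack the columns left to right over time.
    return diffs.mean(axis=2).T

# Example: a bright horizontal bar moving downwards frame by frame.
frames = np.zeros((5, 8, 8))
for t in range(5):
    frames[t, t + 1, :] = 1.0
mg = motiongram(frames)   # the moving bar traces a diagonal in the motiongram
```

Because each column keeps the full vertical axis of the frame, vertical motion stays readable in the motiongram even when the variables listed in the abstract (colour, filtering, compression, and so on) change.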


  • Full paper [PDF]
  • Poster [PDF]


Jensenius, A. R. (2012). Evaluating how different video features influence the visual quality of resultant motiongrams. In Proceedings of the 9th Sound and Music Computing Conference, pages 467–472, Copenhagen.


@inproceedings{jensenius2012smc,
   Address = {Copenhagen},
   Author = {Jensenius, Alexander Refsum},
   Booktitle = {Proceedings of the 9th Sound and Music Computing Conference},
   Pages = {467--472},
   Title = {Evaluating How Different Video Features Influence the Visual Quality of Resultant Motiongrams},
   Year = {2012}}

Record videos of sonification

I got a question the other day about how it is possible to record a sonified video file based on my sonification module for Jamoma for Max. I wrote about my first experiments with the sonifyer module here, and also published a paper at this year’s ACHI conference about the technique.

It is quite straightforward to record a video file with the original video + audio using the jit.vcr object in Max. Below is a screenshot from a patch (sonifyer-recorder.maxpat) doing this:

The most important part here is to remember to input a four-plane matrix to jit.vcr, otherwise it will complain. For this I use the little jcom.luma2rgb% component, which will automagically convert the video stream from one to four planes, if needed. Here I have also combined the original video, the motion image and the motiongram into one image that I record, alongside the sonification of the motion. The output from this patch looks something like this:
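Conceptually, the plane conversion works like this. The sketch below is my own numpy illustration of the idea (not the actual jcom.luma2rgb% code): a one-plane luma matrix is expanded into the four-plane ARGB layout that jit.vcr expects, while four-plane input is passed through untouched.

```python
# Conceptual sketch of a luma -> ARGB plane conversion, illustrating what
# jcom.luma2rgb% does for jit.vcr (not the actual Jamoma component).
import numpy as np

def luma_to_argb(matrix):
    """matrix: (height, width) luma, or (height, width, 4) ARGB, uint8."""
    if matrix.ndim == 3 and matrix.shape[2] == 4:
        return matrix                        # already four-plane: pass through
    alpha = np.full_like(matrix, 255)        # fully opaque alpha plane
    # Copy the luma plane into R, G and B to get a grey ARGB image.
    return np.stack([alpha, matrix, matrix, matrix], axis=-1)

# Example: a 4x4 grey-scale gradient becomes a four-plane matrix.
luma = np.arange(16, dtype=np.uint8).reshape(4, 4)
argb = luma_to_argb(luma)    # shape (4, 4, 4): A, R, G, B planes
```

This is why the conversion is safe to leave in the patch permanently: colour streams flow through unchanged, and grey-scale streams (like the motion image) are promoted just before recording.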