From Basic Music Research to Medical Tool

The Research Council of Norway is currently evaluating research in the humanities, and all institutions were asked to submit cases demonstrating societal impact. Obviously, basic research is by definition not aiming at societal impact in the short run, and my research definitely falls into that category. Still, it is interesting to see that some of my basic research is, indeed, on the verge of making a societal impact in the sense that policy makers like to think about. So I submitted the impact case “From Music to Medicine”, based on the system Computer-based Infant Movement Assessment (CIMA).

Musical Gestures Toolbox

CIMA is based on the Musical Gestures Toolbox, which started its life in the early 2000s, and which (in different forms) has been shared publicly since 2005.

My original aim in developing the MGT was to study musicians’ and dancers’ motion in a simple and holistic way. The focus was always on capturing as much relevant information as possible from a regular video recording, with a particular eye on the temporal development of human motion.

The MGT was first developed as standalone modules in the graphical programming environment Max, and in 2006 it was merged into the Jamoma framework, a modular system developed and used by a group of international artists under the lead of Timothy Place and Trond Lossius. The video analysis tools have since been used in a number of music/dance productions worldwide and are also actively used in arts education.

Studying ADHD

In 2006, I presented this research at the annual celebration of Norwegian research in the Oslo concert hall, after which Professor Terje Sagvolden asked to test the video analysis system in his research on ADHD/ADD at Oslo University Hospital. This eventually led to a collaboration in which the Musical Gestures Toolbox was used to analyse 16 rat cages in his lab. The system was also tested in the large-scale clinical ADHD study at Ullevål University Hospital in 2008 (1000+ participants). This collaboration ended abruptly with Sagvolden’s death in 2011.

Studying Cerebral Palsy

The unlikely collaboration between researchers in music and medicine was featured in a newspaper article and a TV documentary in 2008, after which physiotherapist Lars Adde from the Department of Laboratory Medicine, Women’s and Children’s Health at the Norwegian University of Science and Technology (NTNU) called me to ask whether the tools could also be used to study infants. This has led to a long and fruitful collaboration and the development of the prototype Computer-based Infant Movement Assessment (CIMA), which is currently being tested in hospitals in Norway, the USA, India, China, and Turkey. A pre-patent has been filed, and the aim is to provide a complete video-based solution for screening infants for the risk of developing cerebral palsy (CP).

It is documented that up to 18% of surviving infants born extremely preterm develop cerebral palsy (CP), and the total rate of neurological impairments is up to 45%. Specialist examination can detect infants at risk of developing CP, but this resource is only available at some hospitals. CIMA aims to offer a standardised and affordable computer-based screening solution so that a much larger group of infants can be screened at an early stage, and those that fall in the risk zone can be referred for further specialist examination. Early intervention is critical to improving the motor capacities of these infants. The success of the CIMA methods built on the MGT framework is to a large part based on the original focus on studying human motion through a holistic, simple and time-based approach.

The unlikely collaboration was featured in a new TV documentary in 2014.

References

  • Valle, S. C., Støen, R., Sæther, R., Jensenius, A. R., & Adde, L. (2015). Test–retest reliability of computer-based video analysis of general movements in healthy term-born infants. Early Human Development, 91(10), 555–558. http://doi.org/10.1016/j.earlhumdev.2015.07.001
  • Jensenius, A. R. (2014). From experimental music technology to clinical tool. In K. Stensæth (Ed.), Music, health, technology, and design. Oslo: Norwegian Academy of Music. Retrieved from http://urn.nb.no/URN:NBN:no-46186
  • Adde, L., Helbostad, J., Jensenius, A. R., Langaas, M., & Støen, R. (2013). Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings. Physiotherapy Theory and Practice, 29(6), 469–475. http://doi.org/10.3109/09593985.2012.757404
  • Jensenius, A. R. (2013). Some video abstraction techniques for displaying body movement in analysis and performance. Leonardo, 46(1), 53–60. http://urn.nb.no/URN:NBN:no-38076
  • Adde, L., Langaas, M., Jensenius, A. R., Helbostad, J. L., & Støen, R. (2011). Computer based assessment of general movements in young infants using one or two video recordings. Pediatric Research, 70, 295–295. http://doi.org/10.1038/pr.2011.520
  • Adde, L., Helbostad, J. L., Jensenius, A. R., Taraldsen, G., Grunewaldt, K. H., & Støen, R. (2010). Early prediction of cerebral palsy by computer-based video analysis of general movements: a feasibility study. Developmental Medicine & Child Neurology, 52(8), 773–778. http://doi.org/10.1111/j.1469-8749.2010.03629.x
  • Adde, L., Helbostad, J. L., Jensenius, A. R., Taraldsen, G., & Støen, R. (2009). Using computer-based video analysis in the study of fidgety movements. Early Human Development, 85(9), 541–547. http://doi.org/10.1016/j.earlhumdev.2009.05.003
  • Jensenius, A. R. (2007). Action–Sound: Developing Methods and Tools to Study Music-Related Body Movement (PhD thesis). University of Oslo. http://urn.nb.no/URN:NBN:no-18922

Simple video editing in Ubuntu

I have been using Ubuntu as my main OS for the past year, but have often relied on my old MacBook for various things that I haven’t easily figured out how to do in Linux. One of those things is to trim video files non-destructively. This is quite simple to do in QuickTime, although Apple now forces you to save the file in a QuickTime container (.mov), even though the video inside is still MPEG-4 compressed (H.264).

There are numerous Linux video editors available, but most of them offer far too many features and hence force you to re-compress the files. I have found two solutions that work well.

The first one, ffmpeg, should be obvious, although I hadn’t realised that it could also do trimming. However, I often prefer GUI software, and I have found that Avidemux does what I need very easily: open a file, set start and stop markers for the section to keep, and click save. Unlike QuickTime, it also allows saving directly to MPEG-4 files (.mp4) without re-encoding.

There was only one thing I had to look up: the trim section needs to start on a keyframe in the video. This is quite obvious when you want to avoid re-encoding the file, but unfortunately Avidemux doesn’t explain this; it only gives an error message. The trick was to use the >> arrows to jump to the next keyframe, after which the file saved nicely.
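For the ffmpeg route, a lossless trim can be done with stream copying along these lines (file names and timestamps here are placeholders, and exact keyframe behaviour can vary between ffmpeg versions):

```shell
# Copy the audio/video streams untouched (-c copy), so no re-encoding.
# With -ss placed before -i, ffmpeg seeks by keyframe when stream-copying,
# so the cut snaps to a keyframe rather than failing mid-GOP.
ffmpeg -ss 00:01:30 -i input.mp4 -t 00:00:45 -c copy trimmed.mp4
```

Here -t gives the duration of the excerpt; use -to instead if you prefer to specify an absolute end time.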

New department video

As I have mentioned previously, life has been quite hectic over the last year, as I became Head of Department at the same time as my second daughter was born. So my research activities have slowed down considerably, as has the activity on this blog.

When it comes to blogging, I have focused on building up my Head of Department blog (in Norwegian), which I use to comment on things happening in the Department as well as relevant (university) political issues. My long-term plan, though, is also to write some posts about being Head of Department on this English-language blog.

Today I would like to point to our new department video, targeted at recruiting new students:

The video is made by video journalist Camilla Smaadal, who is also responsible for a set of video presentations of our faculty. Most of these are in Norwegian, but we are planning to add English subtitles through YouTube.

The new video aims to give students an impression of all the cool things happening in our Department. A lot of new music education programs are popping up everywhere these days, so we realise that we need to be more active in promoting the qualities of our university education. This video is one little step towards that goal.

New publication: Non-Realtime Sonification of Motiongrams

Today I will present the paper Non-Realtime Sonification of Motiongrams at the Sound and Music Computing Conference (SMC) in Stockholm. The paper is based on a new implementation of my sonomotiongram technique, optimised for non-realtime use. I presented a realtime version of the sonomotiongram technique at ACHI 2012 and a Kinect version, the Kinectofon, at NIME earlier this year. The new paper presents the ImageSonifyer application and a collection of videos showing how it works.

Title
Non-Realtime Sonification of Motiongrams

Abstract
The paper presents a non-realtime implementation of the sonomotiongram method, a method for the sonification of motiongrams. Motiongrams are spatiotemporal displays of motion from video recordings, based on frame-differencing and reduction of the original video recording. The sonomotiongram implementation presented in this paper is based on turning these visual displays of motion into sound using FFT filtering of noise sources. The paper presents the application ImageSonifyer, accompanied by video examples showing the possibilities of the sonomotiongram method for both analytic and creative applications.
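To give a rough idea of the two steps described in the abstract, here is a minimal NumPy sketch: frame-differencing a video into a motiongram, then using each motiongram column as a spectral envelope for filtering noise. This is my own illustrative reconstruction, not the ImageSonifyer implementation; all function and parameter names are made up for the example.

```python
import numpy as np

def motiongram(frames):
    """Frame-difference a grayscale video (time, height, width) and
    collapse each difference image over width, giving a vertical
    motiongram of shape (time-1, height)."""
    frames = np.asarray(frames, dtype=float)
    motion = np.abs(np.diff(frames, axis=0))  # frame differencing
    return motion.mean(axis=2)                # reduce width -> one column per frame

def sonify(mg, sr=44100, slice_dur=0.05, rng=None):
    """Sonomotiongram-style sonification sketch: each motiongram column
    becomes a spectral envelope (vertical position -> frequency) that
    filters a short burst of white noise via the FFT."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = int(sr * slice_dur)                   # samples per motiongram column
    out = []
    for col in mg:
        spec = np.fft.rfft(rng.standard_normal(n))
        # stretch the column across the rFFT bins
        env = np.interp(np.linspace(0, 1, spec.size),
                        np.linspace(0, 1, col.size), col)
        out.append(np.fft.irfft(spec * env, n))
    return np.concatenate(out)
```

With this mapping, image regions with no motion produce silence, while motion at a given height excites the corresponding frequency band of the noise.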

Reference
Jensenius, A. R. (2013). Non-realtime sonification of motiongrams. In Proceedings of Sound and Music Computing, pages 500–505, Stockholm.

BibTeX

 @inproceedings{Jensenius:2013f,
    Address = {Stockholm},
    Author = {Jensenius, Alexander Refsum},
    Booktitle = {Proceedings of Sound and Music Computing},
    Pages = {500--505},
    Title = {Non-Realtime Sonification of Motiongrams},
    Year = {2013}}