Interdisciplinarity

I am happy to see that the first point in the new UiO strategy plan is interdisciplinarity, or more specifically: “Et grensesprengende universitet” (“a boundary-breaking university”). Interdisciplinarity is always easier in theory than in practice, and this is something I discuss in a feature article in the latest issue (pages 32-33) of Forskerforum, the journal of the Norwegian Association of Researchers (Forskerforbundet).

I have written about interdisciplinarity on this blog several times before (here, here and here). In the new article I use interdisciplinarity to refer not only to adjacent scientific disciplines, but to boundary-crossing in a more general sense. I use some of my own work as the point of departure: the video analysis work that ended up as the Musical Gestures Toolbox started out as an artistic project, was later developed within my scientific PhD work, and is now being used in artistic projects (e.g. by Victoria Johnson), in research on ADHD (Terje Sagvolden’s group), and in clinical analysis of children with cerebral palsy (Lars Adde).

Unfortunately, getting support (economic, administrative, etc.) for such interdisciplinary research (including both scientific and artistic research) is currently not possible in Norway. In fact, the Norwegian Research Council does not fund artistic research at all, and the Research fellowship in the arts program does not fund scientific research.

At the end of my feature article I suggest three measures to the Norwegian universities and the Norwegian Research Council for improving the conditions for interdisciplinary research in Norway:

  1. Set up truly interdisciplinary committees for all research funding
  2. Allow projects that combine scientific and artistic research
  3. Set aside 10% of all research funding (in all disciplines) to be used for artistic work


GDIF recording and playback

Kristian Nymoen has updated the Jamoma modules for recording and playing back GDIF data in Max 5. The modules are based on the FTM library (beta 12; betas 13-15 do not work), and can be downloaded here.

We have also made three use cases available in the (soon to be expanded) fourMs database: a simple mouse recording, a sound saber example, and a short piano example. See the video below for a quick demonstration of how it works:

New motiongram features

Inspired by the work Static no. 12 by Daniel Crooks, which I saw at the Sydney Biennale a couple of weeks ago, I have added the option of scanning a single column in the jmod.motiongram% module in Jamoma. Here is a video that shows how this works in practice:

About motiongrams

A motiongram is a way of displaying motion (e.g. human motion) in the time domain, somewhat similar to how we are used to working with time representations of audio (e.g. waveform displays and sonograms/spectrograms). The method is based on creating a motion image, doing a matrix reduction on it, and plotting the resultant 1×n or n×1 matrices over time, either horizontally or vertically.
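To make the method concrete, here is a minimal NumPy sketch of the idea (the function and variable names are my own, not part of the Jamoma module):

```python
import numpy as np

def motiongram(frames):
    """Horizontal motiongram from grayscale frames of shape (time, height, width)."""
    # Motion image: absolute difference between consecutive frames
    motion = np.abs(np.diff(frames.astype(float), axis=0))
    # Matrix reduction: collapse each motion image to an n x 1 column
    # by averaging across the width
    columns = motion.mean(axis=2)   # shape (time-1, height)
    # Plotting the columns side by side over time gives the motiongram
    return columns.T                # shape (height, time-1)
```

The actual module does more than this (e.g. filtering of the motion image), but this is the core of the technique.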

Comparison of motiongrams

Below is an image showing three different types of motiongrams:

  1. Single line scan based on regular image
  2. Average scan based on regular image
  3. Average scan based on motion image

All three are interesting in their own way, so which one to use will have to be decided according to the type of material you are working with; a sketch of how they differ follows below.
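Continuing the NumPy sketch from above, the three variants differ only in what is scanned and from which image (again, the function names and the `column` parameter are my own):

```python
import numpy as np

def single_line_scan(frames, column):
    """1. Single column scan of the regular image: sample one pixel column per frame."""
    return frames[:, :, column].T

def average_scan(frames):
    """2. Average scan of the regular image: mean across the width of each frame."""
    return frames.mean(axis=2).T

def average_motion_scan(frames):
    """3. Average scan of the motion image: mean across the width of each frame difference."""
    motion = np.abs(np.diff(frames.astype(float), axis=0))
    return motion.mean(axis=2).T
```

All three return an image with the original image height on the vertical axis and time on the horizontal axis.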


Quantity of motion of an arbitrary number of inputs

In video analysis I have been working with what is often referred to as “quantity of motion” (which should not be confused with momentum, the product of mass and velocity, p = mv), i.e. the sum of all active pixels in a motion image. In this sense, QoM is 0 if there is no motion, and takes a positive value if there is motion in any direction.
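In code, the video-based version could look something like this sketch (the threshold for counting a pixel as “active” is an arbitrary value of mine):

```python
import numpy as np

def video_qom(frames, threshold=0.1):
    """Quantity of motion per frame: the sum of active pixels in each motion image."""
    # Motion image: absolute difference between consecutive frames
    motion = np.abs(np.diff(frames.astype(float), axis=0))
    # A pixel is "active" if it changed more than the threshold
    active = motion > threshold
    # 0 when nothing moves, positive for motion in any direction
    return active.sum(axis=(1, 2))
```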

Working with various types of sensors and motion capture systems, I see the same need: to know how much motion there is in the system, independent of the number of variables and dimensions in the system studied. Thus, whether we use a single 1-dimensional MIDI slider or 32 6-dimensional sensors in a motion capture system, we still need to be able to say whether there is any movement in the system, and approximately how much.

So I have made a small abstraction in Max that sums up all incoming values, divides by the number of values, finds the first derivative, and takes the absolute value of the result.
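The abstraction itself is a Max patch, but the computation it performs is easy to write down; here is the same idea as a NumPy sketch (the names are mine, not those of the abstraction):

```python
import numpy as np

def sensor_qom(samples):
    """QoM for an arbitrary number of inputs.

    samples: array of shape (time, channels), e.g. (time, 1) for a single
    MIDI slider or (time, 192) for 32 six-dimensional sensors.
    """
    mean = samples.mean(axis=1)     # sum all values, divide by their number
    return np.abs(np.diff(mean))    # first derivative, then absolute value
```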

I had two optimization questions while working on the patch:

  1. Does it matter whether differentiation is done before or after summing up the values?
  2. Is it more efficient to use Max objects than Jitter objects?

Answers:

  1. No, it does not matter: differentiation and averaging are both linear operations, so they commute (as long as the absolute value is taken at the end).
  2. Max objects are ~3 times faster.

A screenshot of the efficiency test patch is shown below, together with a zip-file of the patches.