GDIF recording and playback

Kristian Nymoen has updated the Jamoma modules for recording and playing back GDIF data in Max 5. The modules are based on the FTM library (beta 12; betas 13-15 do not work), and can be downloaded here.

We have also made available three use cases in the (soon to be expanded) fourMs database: simple mouse recording, sound saber and a short piano example. See the video below for a quick demonstration of how it works:

New motiongram features

Inspired by the work Static no. 12 by Daniel Crooks, which I saw at the Sydney Biennale a couple of weeks ago, I have added the option of scanning a single column in the jmod.motiongram% module in Jamoma. Here is a video that shows how this works in practice:

About motiongrams

A motiongram is a way of displaying motion (e.g. human motion) in the time domain, somewhat similar to how we are used to working with time representations of audio (e.g. waveform displays and sono/spectrograms). The method is based on creating a motion image, doing a matrix reduction on it, and plotting the resultant 1xn or nx1 matrices over time, either horizontally or vertically.
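The method can be sketched in a few lines of numpy. This is not the Jitter implementation, just a minimal illustration of the idea: frame differencing to get motion images, then a matrix reduction that collapses each motion image to a single row, stacked over time. The function name and the toy input are my own.

```python
import numpy as np

def motiongram(frames, axis=0):
    """Compute a motiongram from a sequence of grayscale frames.

    frames: array of shape (time, height, width), values in [0, 1].
    axis=0 collapses the height -> one row per frame (horizontal motiongram);
    axis=1 collapses the width -> one column per frame (vertical motiongram).
    """
    # Motion image: absolute difference between consecutive frames
    motion = np.abs(np.diff(frames, axis=0))
    # Matrix reduction: average each motion image down to a single vector
    reduced = motion.mean(axis=axis + 1)  # shape (time-1, width) or (time-1, height)
    # The stack of reduced vectors over time IS the motiongram
    return reduced

# Tiny example: a bright dot moving one pixel per frame across a 4x4 image
frames = np.zeros((3, 4, 4))
for t in range(3):
    frames[t, 1, t] = 1.0
mg = motiongram(frames, axis=0)  # shape (2, 4): one row per frame transition
```

Plotting `mg` with time on one axis then gives the familiar motiongram display.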

Comparison of motiongrams

Below is an image showing three different types of motiongrams:

  1. Single line scan based on regular image
  2. Average scan based on regular image
  3. Average scan based on motion image

I think all of them are interesting, so which one to use will depend on the type of material you are working with.
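The three variants differ only in the per-frame reduction step. A small numpy sketch of the three reductions (the frame data and the scan column are arbitrary placeholders):

```python
import numpy as np

# Stand-ins for one grayscale video frame and its motion image
# (absolute frame difference), both of shape (height, width).
rng = np.random.default_rng(0)
frame = rng.random((120, 160))
motion = np.abs(rng.random((120, 160)) - frame)

scan_column = 80  # which column to scan; an arbitrary choice here

# 1. Single-line scan of the regular image: one column, copied unchanged
single_scan = frame[:, scan_column]   # shape (height,)

# 2. Average scan of the regular image: mean across all columns
average_scan = frame.mean(axis=1)     # shape (height,)

# 3. Average scan of the motion image: mean across all columns
motion_scan = motion.mean(axis=1)     # shape (height,)
```

Each reduction yields one column per frame; appending those columns over time produces the three motiongram types listed above.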


Quantity of motion of an arbitrary number of inputs

In video analysis I have been working with what is often referred to as “quantity of motion” (which should not be confused with momentum, the product of mass and velocity p=mv), i.e. the sum of all active pixels in a motion image. In this sense, QoM is 0 if there is no motion, and has a positive value if there is motion in any direction.
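In code, this definition amounts to counting the pixels in the motion image that exceed some noise threshold. A minimal sketch, where the function name and the threshold value are my own choices:

```python
import numpy as np

def quantity_of_motion(prev_frame, frame, threshold=0.05):
    """Sum of active pixels in the motion image.

    The threshold filters out sensor noise; 0.05 is an arbitrary
    value for frames with pixel values in [0, 1].
    """
    motion = np.abs(frame - prev_frame)   # motion image
    active = motion > threshold           # active pixels
    return int(active.sum())

# No motion between identical frames -> QoM is 0
a = np.zeros((4, 4))
print(quantity_of_motion(a, a))  # 0
```

Any change in any direction makes some pixels active, so the value is 0 only when the image is static, exactly as described above.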

Working with various types of sensor and motion capture systems, I see the same need to know how much motion there is in the system, independent of the number of variables and dimensions in the system studied. Thus, whether we use a single 1-dimensional MIDI slider or 32 6-dimensional sensors in a motion capture system, we still need to be able to say whether there is any movement in the system, and approximately how much movement there is.

So I have made a small abstraction in Max that sums all incoming values, divides by the number of values, finds the first derivative, and takes its absolute value.
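A rough Python equivalent of that chain of operations, to make the steps concrete (the class name is mine; the Max abstraction itself is not shown in this post):

```python
class MotionLevel:
    """Mean of all inputs, then absolute first derivative.

    Mirrors the abstraction described above: sum the incoming values,
    divide by their number (i.e. take the mean), difference against
    the previous mean, and take the absolute value.
    """
    def __init__(self):
        self.prev = None

    def __call__(self, values):
        mean = sum(values) / len(values)
        if self.prev is None:
            out = 0.0  # no derivative exists for the first sample
        else:
            out = abs(mean - self.prev)
        self.prev = mean
        return out

qom = MotionLevel()
qom([0.0, 0.0])   # 0.0 (first sample)
qom([1.0, 1.0])   # 1.0: the mean jumped from 0 to 1
qom([1.0, 1.0])   # 0.0: no change, so no motion
```

Because the input is averaged first, the same abstraction works whether it is fed one MIDI slider or 32 six-dimensional sensor streams.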

I had two optimization questions while working on the patch:

  1. Does it matter whether derivation is done before or after summing up the values?
  2. Is it more efficient to use Max objects than Jitter objects?


  1. No, it does not matter.
  2. Max objects are roughly three times faster.
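The first answer follows from linearity: both differencing and summing (or averaging) are linear operations, so their order does not change the result, as long as the absolute value is taken last. A quick numpy check of this, on made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random((32, 100))  # 32 channels, 100 time steps

# Derivative after averaging across channels...
a = np.diff(x.mean(axis=0))
# ...versus averaging the per-channel derivatives
b = np.diff(x, axis=1).mean(axis=0)

print(np.allclose(a, b))  # True: the two orders agree
```

Note that the absolute value is nonlinear, so it must stay as the final step in either ordering.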

A screenshot of the efficiency test patch is shown below, together with a zip file of the patches.


i.e. and e.g.

A quick observation from this morning, as I was brushing up on a couple of grammatical points over at Grammar Girl while finishing a book chapter: concerning the abbreviations i.e. (that is) and e.g. (for example), most American English dictionaries seem to suggest that they should be followed by a comma, while in British English it is fine to leave the comma out.