KinectRecorder

I am currently working on a paper describing some further exploration of the sonifyer technique and module that I have previously published on. What is new is that I am now using the input from a Kinect device as the source material for the sonification, which opens up the possibility of also using the depth information in the image as an element in the process.

To be able to create figures for the paper, I needed to record the input from a Kinect to a regular video file. For that reason I have created a small Max patch called KinectRecorder, which allows for easy recording of one combined video file from the two inputs (regular video image and depth image) of the Kinect. As the screenshot below shows, there is not much more to the patch than starting the video input from the Kinect and then starting the recording. Files are stored with MJPEG compression and named with the current date and time.

[Screenshot of the KinectRecorder patch]
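
For readers who want to do something similar outside Max, here is a rough Python sketch of the same recording idea, assuming the libfreenect Python bindings and OpenCV are available. The side-by-side layout, the depth scaling and the frame rate are choices of mine, not necessarily how the Max patch does it.

import datetime

import cv2
import freenect
import numpy as np

# Name the file with the current date and time, as the Max patch does.
filename = datetime.datetime.now().strftime("kinectrecorder_%Y-%m-%d_%H-%M-%S.avi")
fourcc = cv2.VideoWriter_fourcc(*"MJPG")      # MJPEG compression
writer = None

try:
    while True:
        rgb, _ = freenect.sync_get_video()    # 640x480 RGB image
        depth, _ = freenect.sync_get_depth()  # 640x480 11-bit depth image

        # Convert to something a video file can hold: BGR colour plus an
        # 8-bit grayscale rendering of the depth image.
        colour = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)
        depth8 = cv2.convertScaleAbs(depth, alpha=255.0 / 2047.0)
        depth_bgr = cv2.cvtColor(depth8, cv2.COLOR_GRAY2BGR)

        combined = np.hstack((colour, depth_bgr))   # one combined frame
        if writer is None:
            height, width = combined.shape[:2]
            writer = cv2.VideoWriter(filename, fourcc, 30.0, (width, height))
        writer.write(combined)
except KeyboardInterrupt:
    pass
finally:
    if writer is not None:
        writer.release()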

The patch is not particularly fancy, but I imagine it could be useful for others interested in recording video from the Kinect, either for analytical applications or for testing performance setups later, when a Kinect device is not at hand. So here it is:

Below is a short video recorded with the patch, showing some basic movement patterns. This video is not particularly interesting in itself, but I can reveal that it actually leads to some interesting sonic results when run through my sonifyer technique. More on that later…

MultiControl v.0.6.2

MultiControl is by far the most popular software application I have created, as can be seen in the web traffic here on my site, and also on the download site at the University of Oslo where the app resides. This is a tiny application that forwards data from a human interface device (mouse, game controller) as either OSC or MIDI. When I first created it back in 2004, there were not many other options. Today, however, users will typically find more features in an application like OSCulator or STEIM’s Junxion. Still, MultiControl is downloaded hundreds of times per month, which suggests that some people still find it interesting and useful.
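
To illustrate the general idea (MultiControl itself is built in Max, so this is not its actual implementation), here is a minimal Python sketch that reads game controller axes with pygame and forwards them as OSC messages with python-osc. The OSC address pattern and port are arbitrary choices of mine.

import time

import pygame
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)   # OSC destination (arbitrary)

pygame.init()
pygame.joystick.init()
joystick = pygame.joystick.Joystick(0)        # first connected controller
joystick.init()

while True:
    pygame.event.pump()                       # refresh controller state
    for axis in range(joystick.get_numaxes()):
        value = joystick.get_axis(axis)       # value between -1.0 and 1.0
        client.send_message(f"/controller/axis/{axis}", value)
    time.sleep(0.01)                          # roughly 100 updates per second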

Unfortunately, I do not have much time for development these days, so I will probably never get around to implementing all the cool and exciting features I once wished for in MultiControl. But since it is my most popular application, I feel bad about abandoning it altogether. So I will try to keep it updated for the latest operating systems.

I just made a fresh build of the application using the latest version of Max. It works fine here on my Mountain Lion system, and I would imagine that it should also work on Lion (but perhaps not on earlier versions). Since I have received some feedback about problems with opening zip-files, I have now created a dmg-file instead. To avoid problems with broken links in the future, I will simply point to the folder in which the latest version can be found.

Have fun, and let me know if you experience any problems.

Performing with the Norwegian Noise Orchestra

Yesterday, I performed with the Norwegian Noise Orchestra at Betong in Oslo, at a concert organised by Dans for Voksne. The orchestra is an ad-hoc group of noisy improvisers, and I immediately felt at home. The performance lasted for 12 hours, from noon to midnight, and I performed for two hours in the afternoon.

For the performance I used my Soniperforma patch, based on the sonifyer technique and the Jamoma module I developed a couple of years ago (jmod.sonifyer~). The technique is based on creating a motion image from the live camera input (the webcam of my laptop in this case) and using it to draw a motiongram over time, which in turn is converted to sound through an “inverse FFT” process.
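
For those curious about the signal flow, here is a simplified numpy sketch of the general idea. It is not the actual jmod.sonifyer~ implementation; the row-averaging used to reduce each motion image to a motiongram column, the FFT size and the random phases are all assumptions of mine.

import numpy as np

def motion_image(prev_frame, frame):
    """Absolute frame difference of two grayscale frames."""
    return np.abs(frame.astype(float) - prev_frame.astype(float))

def motiongram_column(motion):
    """Collapse a motion image to a single column by averaging each row."""
    return motion.mean(axis=1)

def column_to_audio(column, n_bins=512):
    """Treat a motiongram column as a magnitude spectrum and inverse-FFT it."""
    magnitudes = np.interp(np.linspace(0, 1, n_bins),
                           np.linspace(0, 1, len(column)), column)
    phases = np.exp(1j * np.random.uniform(0, 2 * np.pi, n_bins))
    return np.fft.irfft(magnitudes * phases)

def sonify(frames, n_bins=512):
    """Turn a sequence of grayscale video frames into one audio signal."""
    chunks = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        column = motiongram_column(motion_image(prev, cur))
        chunks.append(column_to_audio(column, n_bins))
    return np.concatenate(chunks) if chunks else np.array([])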

In the performance I experimented with how different types of video filters and effects influenced the sonic output. The end result was, in fact, quite noisy, as it should be at a noise performance.

To document my contribution, I have made a quick and dirty edit of some of the video recordings I made during the performance. Unfortunately, the audio recorded by the cameras does not do justice to the excellent noise in the venue, but it gives an impression of what was going on.

Teaching in Aldeburgh

I am currently in beautiful Aldeburgh, a small town on the east coast of England, teaching at the Britten-Pears Young Artist Programme together with Rolf Wallin and Tansy Davies. This post mainly summarises the topics I have covered and provides links to various resources.

Theoretical stuff

My introductory lectures went through some of the theory behind an embodied understanding of musical experience. One aspect of this theory that I find very relevant for the development of interactive works is what I call action-sound relationships. By this I mean that we have an intuitive understanding of how certain actions may produce certain sounds. This is the cognitive basis for the fact that we can “hear” an action we only see, and “see” the action of a sound we can only hear. These ideas are presented and discussed more thoroughly in my PhD dissertation.

Examples of realtime motion capture

I went through a number of examples of how to use motion capture in musical contexts. Here are but a few of the examples:

Transformation is an improvisation piece for electric violin and live electronics. It is based on the idea of letting the performer control a large collection of sound fragments while moving around on stage. The technical setup for the piece is based on a video-based motion tracking system developed in Jamoma, coupled to the playback of sounds using concatenative synthesis in CataRT. Transformation is described more thoroughly in a paper in the upcoming issue of Computer Music Journal (winter 2012).

The SoundSaber motion capture instrument tracks the position of a rod in space using an infrared marker-based motion capture system. The setup is described in more detail in this NIME 2011 paper.

Dance Jockey is a setup/piece in which the Xsens MVN inertial motion capture suit is used to control both sample playback and synthesis, using a combination of posture and action recognition of the performer’s full body. It is described in this NIME 2012 paper.

Technical resources

The Max patches used in the course are available here (Aldeburgh-patches.zip). Here are pointers to other useful things:

  • Maxobjects.com is a database with pointers to a number of third-party externals for Max. It is a great resource for finding whatever you need.
  • Jamoma is a large collection of modules and externals, including video analysis and mapping tools.
  • CNMAT depot is a large collection of externals, tutorials, and example patches.

Wii

The Wii controllers are a great way to get started with accelerometer data in Max. They are wireless (Bluetooth), and once you figure out the pairing with your computer they work quite well. There are several ways of using them with Max, amongst others:

  • ajh.wiimote is an external for getting data from the Wii controllers into Max.
  • OSCulator is a multipurpose control and mapping tool that works with different types of controllers, including the Wii. Even though it may be a little more work to pass OSC messages into Max from a separate application, OSCulator is what I find to be the most stable way of working with Wii controllers in Max (see the sketch after this list for what the incoming OSC messages can look like).
  • Junxion is another multipurpose control and mapping tool, developed at STEIM. It works with lots of controllers and also does video tracking.
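
As a rough illustration of what arrives on the receiving end, here is a small python-osc sketch that listens for such forwarded messages. The address pattern and port are assumptions of mine and must match whatever the sending application (OSCulator, Junxion or similar) is configured to use.

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def print_handler(address, *args):
    """Print every incoming OSC message, e.g. accelerometer values."""
    print(address, args)

dispatcher = Dispatcher()
dispatcher.map("/wii/*", print_handler)       # hypothetical address pattern
dispatcher.set_default_handler(print_handler) # catch everything else as well

server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
server.serve_forever()                        # stop with Ctrl-C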

Phidgets

The Phidgets kits are an inexpensive, user-friendly and soldering-free way of getting started with musical electronics. The kits come with a sensor interface and a number of different types of sensors to test out (a small code sketch of reading such sensor data is included after the list below).

  • The driver is necessary to get the data into your computer.
  • Then use the Phidgets Max externals to get data into Max.
  • Phidgets2MIDI is a small Max application developed for working more easily with data from the Phidgets.
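
As a small illustration of reading such sensor data in code rather than in Max, here is a minimal Python sketch using the Phidget22 library. The VoltageRatioInput class and the channel number are assumptions that depend on the actual sensor and interface kit.

import time

from Phidget22.Devices.VoltageRatioInput import VoltageRatioInput

def on_change(channel, voltage_ratio):
    """Called whenever the sensor reading changes (values from 0.0 to 1.0)."""
    print("Sensor value: {:.3f}".format(voltage_ratio))

sensor = VoltageRatioInput()
sensor.setChannel(0)                          # assumed sensor port
sensor.setOnVoltageRatioChangeHandler(on_change)
sensor.openWaitForAttachment(5000)            # wait up to 5 s for the device

time.sleep(30)                                # print readings for 30 seconds
sensor.close()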

Kinect

The MS Kinect controller is a great solution for getting started with full-body motion capture.

  • The jit.freenect object lets you grab the video image or depth image from the Kinect into Max. You can then use e.g. the Jamoma video modules or any other Jitter tools in Max.
  • Synapse is a standalone application that tracks a skeleton model. Once set up, it sends the skeleton data as OSC messages that you can receive in Max.
  • Descriptions of more advanced uses of the Kinect can be found at the fourMs wiki.

Paper #1 at SMC 2012: Evaluation of motiongrams

Today I presented the paper Evaluating how different video features influence the visual quality of resultant motiongrams at the Sound and Music Computing conference in Copenhagen.

Abstract

Motiongrams are visual representations of human motion, generated from regular video recordings. This paper evaluates how different video features may influence the generated motiongram: inversion, colour, filtering, background, lighting, clothing, video size and compression. It is argued that the proposed motiongram implementation is capable of visualising the main motion features even with quite drastic changes in all of the above mentioned variables.

Downloads

  • Full paper [PDF]
  • Poster [PDF]


Reference

Jensenius, A. R. (2012). Evaluating how different video features influence the visual quality of resultant motiongrams. In Proceedings of the 9th Sound and Music Computing Conference, pages 467–472, Copenhagen.

BibTeX

@inproceedings{Jensenius:2012h,
   Address = {Copenhagen},
   Author = {Jensenius, Alexander Refsum},
   Booktitle = {Proceedings of the 9th Sound and Music Computing Conference},
   Pages = {467--472},
   Title = {Evaluating How Different Video Features Influence the Visual Quality of Resultant Motiongrams},
   Year = {2012}}