Tag Archives: mocap

Filtering motion capture data for real-time applications

We have three papers from our fourMs group at this year’s NIME conference in Daejeon. The first one was presented today by Ståle Skogstad, and is based on his work on trying to minimize the delay when filtering motion capture data.

Title
Filtering motion capture data for real-time applications

Abstract:
In this paper we present some custom-designed filters for real-time motion capture applications. Our target application is motion controllers, i.e. systems that interpret hand motion for musical interaction. In earlier research we found effective methods to design nearly optimal filters for real-time applications. However, to be able to design suitable filters for our target application, it is necessary to establish the typical frequency content of the motion capture data we want to filter. This in turn allows us to determine a reasonable cutoff frequency for the filters. We have therefore conducted an experiment in which we recorded the hand motion of 20 subjects. The frequency spectra of these data, together with a method similar to residual analysis, were then used to determine reasonable cutoff frequencies. Based on this experiment, we propose three cutoff frequencies for different scenarios and filtering needs: 5, 10 and 15 Hz, which correspond to heavy, medium and light filtering, respectively. Finally, we propose a range of real-time filters applicable to motion controllers, in particular low-pass filters and low-pass differentiators of degrees one and two, which in our experience are the most useful filters for our target application.

Reference:
Skogstad, S. A., Nymoen, K., Høvin, M., Holm, S., and Jensenius, A. R. (2013). Filtering motion capture data for real-time applications. In Proceedings of the International Conference on New Interfaces for Musical Expression, pages 196–197, Daejeon, Korea.

BibTeX:

@inproceedings{Skogstad:2013,
   Address = {Daejeon, Korea},
   Author = {Skogstad, St{\aa}le A. and Nymoen, Kristian and H{\o}vin, Mats and Holm, Sverre and Jensenius, Alexander Refsum},
   Booktitle = {Proceedings of the International Conference on New Interfaces For Musical Expression},
   Pages = {196--197},
   Title = {Filtering Motion Capture Data for Real-Time Applications},
   Year = {2013}
}
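
For readers who want to experiment with this kind of filtering, here is a minimal Python sketch. Note that the paper proposes custom-designed filters; a standard causal Butterworth low-pass is used below purely as a stand-in to illustrate the three proposed cutoff frequencies, and the sampling rate and data are made up:

```python
import numpy as np
from scipy import signal

FS = 100.0  # assumed mocap frame rate in Hz

def lowpass(data, cutoff_hz, fs=FS, order=2):
    """Causal low-pass filter, i.e. usable in real time (unlike filtfilt)."""
    b, a = signal.butter(order, cutoff_hz, btype="low", fs=fs)
    return signal.lfilter(b, a, data)

# Hypothetical noisy 1-D position trace
t = np.arange(0, 5, 1 / FS)
pos = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)

heavy = lowpass(pos, 5.0)    # heavy filtering
medium = lowpass(pos, 10.0)  # medium filtering
light = lowpass(pos, 15.0)   # light filtering

# Rough first-degree "low-pass differentiator": filter, then difference
vel = np.diff(medium) * FS  # velocity estimate in units per second
```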


Mocap workshop in Trondheim

I will participate in a motion capture workshop at the Norwegian University of Science and Technology (NTNU) tomorrow. My contribution will consist of the following:

  • Lecture: Introduction to motion capture (in music analysis and performance)
  • Demo 1: Working with video analysis using the Musical Gestures Toolbox
  • Demo 2: The Xsens MVN BioMech mobile mocap suit
  • Workshop: Analysis and performance with Wii controllers, Phidgets accelerometers and Kinect

Below are various resources.

Introduction

PDF of the presentation.

Demo 1: Musical Gestures Toolbox

The Musical Gestures Toolbox is a collection of modules and abstractions developed in and for the graphical programming environment Max. The toolbox is currently being developed within the Jamoma open platform for interactive art-based research and performance.

Download: Jamoma + UserLib and Max

The toolbox is probably most useful for people who are already familiar with Max programming. People looking for easier-to-use solutions can check out some of my standalone applications on the fourMs software page. These applications should work on most versions of OS X, as well as on Windows XP. I know that there are various issues with Windows Vista and Windows 7, and I will try to get these problems ironed out as soon as possible.

Demo 2: Xsens

Xsens MVN BioMech is a mobile mocap suit based on inertial sensors (accelerometers, gyroscopes and magnetometers). Its main advantage over camera-based systems is that it is portable and allows for mobile motion capture. I will show how the system can be used both for analysis and performance:

  • Analysis: mocap recordings will be made with the internal Xsens software and exported to C3D files, which will then be imported into Matlab using the MoCap Toolbox (a Python alternative is sketched below the list).

  • Performance: I will also show how the system can be used in real time, streaming data to Max, where the packets are parsed and used to control sound.
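
For those who prefer Python over Matlab, the C3D-loading step could look something like the sketch below. It uses the ezc3d library rather than the MoCap Toolbox, and the file name is hypothetical:

```python
import ezc3d

# Load a C3D file exported from the Xsens software (hypothetical file name)
c3d = ezc3d.c3d("xsens_recording.c3d")

points = c3d["data"]["points"]  # shape (4, n_markers, n_frames): x, y, z, 1
labels = c3d["parameters"]["POINT"]["LABELS"]["value"]  # marker/segment names
rate = c3d["parameters"]["POINT"]["RATE"]["value"][0]   # frame rate in Hz

print(f"{points.shape[1]} markers, {points.shape[2]} frames at {rate} Hz")
```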

Workshop

During the hands-on workshop, participants will be able to try out the above-mentioned tools and systems, as well as:

  • Phidgets USB motion sensors.

  • Nintendo Wii controllers, which allow for wireless inertial motion sensing. Several tools are available for working with them:

    • OSCulator, a general-purpose tool for reading data from various human interface devices.
    • WiiDataCapture, which can record data coming from OSCulator and format it so that it can easily be imported into the MoCap Toolbox (see the sketch below the list).
    • junXion, an application for passing on either OSC or MIDI from Wii controllers and other human interface devices.
  • Microsoft Kinect sensor, which allows for inexpensive “3D” motion capture using depth cameras.

    • Wiki page describing how to work with Kinect in Max
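
As a hint of what the OSCulator-to-analysis route involves, here is a minimal Python sketch that logs incoming OSC messages to a text file using the python-osc library. The OSC address and port are hypothetical and depend entirely on how OSCulator is configured:

```python
import time
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

logfile = open("wii_accel_log.txt", "w")

def log_accel(address, *values):
    # One line per OSC message: timestamp, address, accelerometer values
    logfile.write(f"{time.time():.4f} {address} " + " ".join(map(str, values)) + "\n")

dispatcher = Dispatcher()
dispatcher.map("/wii/1/accel/pry", log_accel)  # hypothetical address pattern

server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
server.serve_forever()  # stop with Ctrl-C (and remember to close the file)
```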

Audio recordings as motion capture

I spend a lot of time walking around the city with my daughter these days, and have been wondering how much I move and how the movement is distributed over time. To answer these questions, and to try out a method for easy and cheap motion capture, I decided to record today’s walk to the playground.

I could probably have recorded the accelerometer data in my phone, but I wanted to try an even more low-tech solution: an audio recorder.

While cleaning up some old electronics boxes the other day, I found an old Creative ZEN Nano MP3 player. I had totally forgotten about it, and I cannot even remember ever using it. But when I found it, I remembered that it has a built-in microphone and audio recording functionality. The recording quality is horrible, but that does not really matter for what I want to use it for. The good thing is that it can record for hours on the 1 GB of built-in memory, using an odd compressed audio format (DVI ADPCM).

Since I am mainly interested in recording motion, I decided to put it in my sock to see whether that would be a good way of recording the motion of my foot. I imagined that the sound of my footsteps would be loud enough to be easily detected. This is, of course, a heavily reduced recording of my overall motion, but I was curious to see whether it would be useful at all.

The result: a 35 MB audio file with 2.5 hours of foot sounds! In case you are interested, here is a 2-minute sample of regular walking. While it is possible to hear a little bit of environmental sound, the footsteps are very loud and clear.

Now, what can you do with a file like this? To make the file usable for analysis, I started by converting it to a standard AIFF file using Perian in QuickTime 7. After that I loaded it into Matlab using the wonderful MIRToolbox, resampling it from 8 kHz down to 100 Hz. It could probably be resampled at an even lower sampling rate for this type of data, but I will look more into that later.
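
For those without a Matlab/MIRToolbox setup, roughly the same preprocessing can be sketched in Python. This is only an approximation of the MIRToolbox route; the file name is made up, and the 10 Hz envelope smoothing cutoff is my own choice:

```python
import numpy as np
import soundfile as sf
from scipy import signal

audio, sr = sf.read("foot_recording.aiff")  # expecting sr == 8000
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # mix down to mono if needed

# Full-wave rectify, then low-pass at 10 Hz for a smoothed amplitude envelope
b, a = signal.butter(2, 10.0, btype="low", fs=sr)
envelope = signal.filtfilt(b, a, np.abs(audio))

# Downsample the envelope from 8 kHz to 100 Hz (factor 80)
env_100 = signal.resample_poly(envelope, up=1, down=sr // 100)
print(f"{env_100.size / 100 / 3600:.2f} hours of envelope data at 100 Hz")
```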

The waveform of the 2.5-hour recording looks like this, and reveals some of the structure:

But calculating the smoothed envelope of the signal gives a clearer representation of the motion:

Here we can clearly identify some of the structure of what I (or at least my right foot) was doing for those 2.5 hours. Not bad at all, and definitely relevant for macro-level motion capture.

Based on the finding of a 2 Hz motion peak in the data reported by MacDougall and Moore, I was curious to see whether I could find the same in my data. Taking the FFT of the signal gives this overall spectrum:

Clearly, my foot motion shows the strongest peaks at 4 and 5 Hz. I will have to dive into the material a bit more to understand these numbers better.
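
Continuing the Python sketch from above (and assuming the 100 Hz envelope signal env_100 from that snippet), the spectrum and its strongest peaks could be computed like this:

```python
from scipy.signal import find_peaks

# Magnitude spectrum of the (mean-removed) envelope signal
spectrum = np.abs(np.fft.rfft(env_100 - env_100.mean()))
freqs = np.fft.rfftfreq(env_100.size, d=1 / 100.0)

# Only consider frequencies below 10 Hz, where foot motion lives
low = freqs < 10.0
peaks, _ = find_peaks(spectrum[low])
top = peaks[np.argsort(spectrum[low][peaks])[-3:][::-1]]
print("Strongest peaks (Hz):", freqs[top])
```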

The conclusion so far, though, is that this approach may actually be quite a good, cheap and easy method for recording long-term movement data. And with the 8 kHz sampling rate, this method may also allow for studying micro-movement in more detail. More about that later.