Tag Archives: motiongram

Are you jumping or bouncing?

One of the most satisfying things about being a researcher is seeing that ideas, theories, methods, software and other things you come up with are useful to others. Today I received the master’s thesis of Per Erik Walslag, titled Are you jumping or bouncing? A case-study of jumping and bouncing in classical ballet using the motiongram computer program, in which he has made excellent use of my motiongram technique and my VideoAnalysis software. The thesis was completed at NTNU last year within the Nordic Master’s in Dance programme (NoMAds). That programme is itself a great example of how a group of fairly small departments can put together an excellent collaborative study programme. I was invited to guest lecture in the programme back in 2009, and am very happy to see that my lecture inspired some thoughts and ideas in the students.

Performing with the Norwegian Noise Orchestra

Yesterday, I performed with the Norwegian Noise Orchestra at Betong in Oslo, at a concert organised by Dans for Voksne. The orchestra is an ad-hoc group of noisy improvisers, and I immediately felt at home. The performance lasted for 12 hours, from noon to midnight, and I performed for two hours in the afternoon.

For the performance I used my Soniperforma patch, based on the sonifyer technique and the Jamoma module (jmod.sonifyer~) I developed a couple of years ago. The technique creates a motion image from the live camera input (in this case the webcam of my laptop), uses it to draw a motiongram over time, and then converts the motiongram to sound through an “inverse FFT” process.
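As a rough illustration of the idea (not the actual Jamoma implementation), the pipeline can be sketched in Python: each motiongram column is treated as a magnitude spectrum and resynthesised with an inverse FFT, with the frames overlap-added into an audio signal. The function name, window choice and random-phase assignment are my own assumptions here.

```python
import numpy as np

def sonify_motiongram(motiongram, n_fft=1024, hop=256):
    """Sketch of inverse-FFT sonification: `motiongram` is a
    (height x time) matrix of motion intensities. Each column is
    stretched to a magnitude spectrum, given random phase,
    transformed back to a waveform, and overlap-added."""
    height, n_frames = motiongram.shape
    audio = np.zeros(n_frames * hop + n_fft)
    window = np.hanning(n_fft)
    for t in range(n_frames):
        # Interpolate the column onto the n_fft/2+1 frequency bins
        mag = np.interp(np.linspace(0, height - 1, n_fft // 2 + 1),
                        np.arange(height), motiongram[:, t])
        phase = np.exp(2j * np.pi * np.random.rand(n_fft // 2 + 1))
        frame = np.fft.irfft(mag * phase, n=n_fft)
        audio[t * hop : t * hop + n_fft] += frame * window
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio
```

Because vertical position in the motiongram maps to frequency, motion in the upper part of the image ends up as high-frequency content in the sound, which is what makes the technique playable as a live instrument.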

In the performance I experimented with how different types of video filters and effects influenced the sonic output. The end result was, in fact, quite noisy, as it should be at a noise performance.

To document my contribution, I have made a quick and dirty edit of some of the video recordings I made during the performance. Unfortunately, the cameras’ audio recordings do not do justice to the excellent noise in the venue, but they give an impression of what was going on.

Hi-speed guitar recording

I was in Hamburg last week, teaching at the International Summer School in Systematic Musicology (ISSSM). While there, I was able to test a newly acquired high-speed video camera (Phantom V711) at the Department of Musicology.

The beautiful building of the Department of Musicology in Hamburg
They have some really cool drawings in the ceiling at the entrance of the Department of Musicology in Hamburg.

Master’s student Niko Plath was kind enough to set aside some time to set up the system and do a test recording. Niko has been doing some fascinating work on measuring the motion of individual piano strings with the high-speed camera. For this type of study, a camera-based approach makes it possible to measure the vibrations of individual strings without having to attach anything to the string or the soundboard.

Niko Plath setting up the high-speed camera system.

While Niko has recorded the piano strings at a very high speed (500 kHz!) and low resolution (124 x 8 pixels), I was interested in seeing how the camera worked at its maximum resolution (1280 x 800 pixels). At this resolution, the maximum speed is 7 500 frames per second, and the maximum recording duration is 1.1 seconds.

Even though the recording is short, processing and exporting the file (21 GB) takes quite some time. So I only had time to make one recording to try things out: a single strum across all the (open) strings of a guitar, filming the vibrating strings over the soundboard.
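The file size is easy to sanity-check with back-of-envelope arithmetic. Assuming roughly 16 bits stored per pixel (an assumption on my part; the camera’s actual file format may store more per-pixel data plus container overhead), the raw data volume comes out in the same ballpark as the reported 21 GB:

```python
# Back-of-envelope data volume for the maximum-resolution recording.
# bytes_per_pixel = 2 is an assumption (16-bit storage).
width, height = 1280, 800
fps = 7500
duration = 1.1          # seconds
bytes_per_pixel = 2

frames = int(fps * duration)                       # 8250 frames
raw_bytes = width * height * frames * bytes_per_pixel
print(f"{frames} frames, {raw_bytes / 1e9:.1f} GB raw")  # ~16.9 GB before overhead
```

At these data rates it is no surprise that a 1.1-second recording takes minutes to export.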

The setup used for the recording: guitar, two LED lamps and the high-speed camera.

This was just a quick test, so there are several minor problems with the recording. One is that the guitar was placed upside down, so the lower strings are at the bottom of the image. I also did not hit the upper string very well, so it only resonates a little in the beginning and decays quickly. Still, there is nothing as beautiful as watching high-speed recordings in slow motion. Here you can see a version of the recording played back at 100 frames per second:

Of course, I was interested in creating a motiongram of the recording. Rather than using the regular averaging technique, I used a slit-scan approach, selecting a single pixel column in the middle of the guitar’s sound hole. This was done with a few Jamoma modules in Max, and the patch looked like this:

The Max patch used to generate the motiongram from the high-speed video recording.
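For readers who do not use Max, the slit-scan idea itself is tiny: extract the same pixel column from every frame and stack the columns side by side, so that time runs left to right. Here is a minimal Python sketch (my own function name and interface, not the Jamoma module):

```python
import numpy as np

def slitscan_motiongram(frames, column):
    """Build a slit-scan motiongram: take pixel column `column` from
    each video frame (a 2-D greyscale array, height x width) and
    stack the columns along a new time axis (left to right)."""
    return np.stack([frame[:, column] for frame in frames], axis=1)
```

With 8 250 frames of 800-pixel columns this yields an image of roughly 8 250 x 800 pixels; the published motiongram is wider because it was rendered at more than one column per frame.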

The full motiongram is available here (TIFF, 11 069 x 800 pixels), and below is a JPEG version in a more screen-friendly format. Even though the recording is only a little more than one second long, it is still possible to see the decay of the string vibrations, particularly for the first strings (from the top).

Motiongram of the entire high-speed guitar recording.

Below is a version showing only the beginning of the motiongram, and how the individual strings were strummed. Notice how the “cut-off” shape of each string’s wave differs.

The first 1000 frames of the recording, showing how the strings were strummed.

No new scientific insights here, but it is always fun to see periodic motion with the naked eye. It is a good reminder that auditory and visual phenomena (and the perception of them) are related. Thanks to Niko for helping out, and to his supervisor Rolf Bader for letting me try the system.

Paper #1 at SMC 2012: Evaluation of motiongrams

Today I presented the paper Evaluating how different video features influence the visual quality of resultant motiongrams at the Sound and Music Computing conference in Copenhagen.

Abstract

Motiongrams are visual representations of human motion, generated from regular video recordings. This paper evaluates how different video features may influence the generated motiongram: inversion, colour, filtering, background, lighting, clothing, video size and compression. It is argued that the proposed motiongram implementation is capable of visualising the main motion features even with quite drastic changes in all of the above mentioned variables.

Downloads

  • Full paper [PDF]
  • Poster [PDF]


Reference

Jensenius, A. R. (2012). Evaluating how different video features influence the visual quality of resultant motiongrams. In Proceedings of the 9th Sound and Music Computing Conference, pages 467–472, Copenhagen.

BibTeX

@inproceedings{Jensenius:2012h,
   Address = {Copenhagen},
   Author = {Jensenius, Alexander Refsum},
   Booktitle = {Proceedings of the 9th Sound and Music Computing Conference},
   Pages = {467--472},
   Title = {Evaluating How Different Video Features Influence the Visual Quality of Resultant Motiongrams},
   Year = {2012}}