High-speed guitar recording

I was in Hamburg last week, teaching at the International Summer School in Systematic Musicology (ISSSM). While there, I was able to test a newly acquired high-speed video camera (Phantom V711) at the Department of Musicology.

The beautiful building of the Department of Musicology in Hamburg.
They have some really cool drawings on the ceiling at the entrance of the Department of Musicology in Hamburg.

Master's student Niko Plath was kind enough to set aside some time to set up the system and do a test recording. Niko has been doing some fascinating work on measuring the motion of individual piano strings using the high-speed camera. For this type of study, a camera-based approach makes it possible to measure the vibrations of individual strings without having to attach anything to the string or the soundboard.

Niko Plath setting up the high-speed camera system.

While Niko has recorded the piano strings at a very high speed (500 kHz!) and low resolution (124 x 8 pixels), I was interested in seeing how the camera worked at its maximum resolution (1280 x 800 pixels). At this resolution, the maximum speed is 7 500 frames per second, and the maximum recording duration is 1.1 seconds.

Even though the recording is short, processing and exporting the file (21 GB) takes quite some time; no wonder, as 7 500 frames per second for 1.1 seconds at 1280 x 800 pixels adds up to roughly 8.4 billion pixel values. So I only had time to make one recording to try things out: a single strum across all the (open) strings of a guitar, filming the vibrating strings over the soundboard.

The setup used for the recording: guitar, two LED lamps and the high-speed camera.

This was just a quick test, so there are several minor problems with the recording: one is that the guitar was placed upside down, so the lower strings appear at the bottom of the recording. Also, I did not hit the upper string very well, so it only resonates a little in the beginning and decays quickly. Still, there is nothing as beautiful as watching high-speed recordings in slow motion. Here you can see a version of the recording played back at 100 frames per second:

Of course, I was interested in creating a motiongram of the recording. Rather than using the regular averaging technique, I used a slit-scan approach, selecting a single pixel column in the middle of the soundhole of the guitar. This was done with a few Jamoma modules in Max, and the patch looked like this:

The Max patch used to generate the motiongram from the high-speed video recording.
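For those without Max and Jamoma at hand, the same slit-scan idea can be sketched in a few lines of Python with OpenCV and numpy. This is a hedged approximation of what the patch does, not the patch itself; the file name and the choice of column are placeholders:

import cv2
import numpy as np

# Hypothetical file name; any video the OpenCV build can decode works.
cap = cv2.VideoCapture("guitar_7500fps.avi")

columns = []
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Keep a single one-pixel-wide column per frame. In the original
    # patch the column was placed over the soundhole; here we simply
    # take the middle of the frame.
    x = frame.shape[1] // 2
    columns.append(frame[:, x, :])
cap.release()

# Stack the columns side by side, so time runs along the x-axis.
motiongram = np.stack(columns, axis=1)
cv2.imwrite("motiongram.png", motiongram)

Since each frame contributes exactly one pixel column, the width of the resulting image equals the number of frames in the recording.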

The full motiongram is available here (TIFF, 11 069 x 800 pixels), and below is a JPEG version in a more screen-friendly format. Even though the recording is only a little more than one second long, it is still possible to see the decay of the string vibrations, particularly for the first strings (from the top).

Motiongram of the entire high-speed guitar recording.

Below is a version showing only the beginning of the motiongram, and how the individual strings were strummed. Notice the difference in the “cut-off” shape of each string’s wave.

The first 1000 frames of the recording, showing how the strings were strummed.

No new scientific insights here, but it is always fun to see periodic motion with the naked eye. It is a good reminder that auditory and visual phenomena (and the perception of them) are related. Thanks to Niko for helping out, and to his supervisor Rolf Bader for letting me try the system.

Paper #1 at SMC 2012: Evaluation of motiongrams

Today I presented the paper “Evaluating how different video features influence the visual quality of resultant motiongrams” at the Sound and Music Computing Conference in Copenhagen.

Abstract

Motiongrams are visual representations of human motion, generated from regular video recordings. This paper evaluates how different video features may influence the generated motiongram: inversion, colour, filtering, background, lighting, clothing, video size and compression. It is argued that the proposed motiongram implementation is capable of visualising the main motion features even with quite drastic changes in all of the above-mentioned variables.
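For readers unfamiliar with motiongrams, the general idea behind an average-based implementation can be sketched as follows in Python. The frame differencing, noise thresholding, and row averaging mirror the overall approach, but the function and the threshold value are my own illustration, not the implementation evaluated in the paper:

import cv2
import numpy as np

def motiongram(path, threshold=10):
    """Average-based horizontal motiongram: one vertical strip of
    mean motion-image values per frame. The threshold is an
    arbitrary choice for this sketch."""
    cap = cv2.VideoCapture(path)
    prev, strips = None, []
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Motion image: absolute frame difference, thresholded
            # to suppress camera noise.
            motion = cv2.absdiff(gray, prev)
            motion[motion < threshold] = 0
            strips.append(motion.mean(axis=1))  # collapse each row
        prev = gray
    cap.release()
    # Stack the strips side by side: time runs along the x-axis.
    return np.stack(strips, axis=1)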

Downloads

  • Full paper [PDF]
  • Poster [PDF]


Reference

Jensenius, A. R. (2012). Evaluating how different video features influence the visual quality of resultant motiongrams. In Proceedings of the 9th Sound and Music Computing Conference, pages 467–472, Copenhagen.

BibTeX

@inproceedings{Jensenius:2012h,
   Address = {Copenhagen},
   Author = {Jensenius, Alexander Refsum},
   Booktitle = {Proceedings of the 9th Sound and Music Computing Conference},
   Pages = {467--472},
   Title = {Evaluating How Different Video Features Influence the Visual Quality of Resultant Motiongrams},
   Year = {2012}}

Record videos of sonification

I got a question the other day about how to record a sonified video file based on my sonification module for Jamoma for Max. I wrote about my first experiments with the sonifyer module here, and also published a paper at this year’s ACHI conference about the technique.

It is quite straightforward to record a video file with the original video + audio using the jit.vcr object in Max. Below is a screenshot from a patch (sonifyer-recorder.maxpat) doing this:

The most important part here is to remember to input a 4-plane matrix to jit.vcr; otherwise it will complain. For this I use the little jcom.luma2rgb% component, which will automagically convert the video stream from 1 to 4 planes, if needed. Here I have also combined the original video, the motion image, and the motiongram into one image that I record, alongside the sonification of the motion. The output from this patch looks something like this:
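For those working outside Max, the 1-plane to 4-plane expansion that jcom.luma2rgb% performs can be approximated with numpy. The ARGB plane order follows Jitter's 4-plane char matrices, while the function name is my own:

import numpy as np

def luma_to_argb(luma):
    """Expand a 1-plane (grayscale) frame to a 4-plane ARGB layout,
    analogous to what jcom.luma2rgb% does when jit.vcr needs a
    4-plane input."""
    h, w = luma.shape
    argb = np.empty((h, w, 4), dtype=np.uint8)
    argb[..., 0] = 255                      # alpha plane
    argb[..., 1:] = luma[..., np.newaxis]   # copy luma into R, G, B
    return argb

# Example: a random grayscale frame, such as a motion image.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print(luma_to_argb(frame).shape)  # (480, 640, 4)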

Sonification of motiongrams

A couple of days ago I presented the paper “Motion-sound Interaction Using Sonification based on Motiongrams” at the ACHI 2012 conference in Valencia, Spain. The paper is actually based on a Jamoma module that I developed more than a year ago, but due to other activities it took a while before I managed to write it up as a paper.

See below for the full paper and video examples.

The Paper

Abstract: The paper presents a method for sonification of human body motion based on motiongrams. Motiongrams show the spatiotemporal development of body motion by plotting average matrices of motion images over time. The resultant visual representation resembles spectrograms, and is treated as such by the new sonifyer module for Jamoma for Max, which turns motiongrams into sound by reading a part of the matrix and passing it on to an oscillator bank. The method is surprisingly simple, and has proven to be useful for analytical applications and in interactive music systems.
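In essence the module does additive synthesis: each row of the current motiongram column sets the amplitude of one oscillator in the bank. Below is a minimal offline sketch of that idea in Python; the sampling rate, frequency range, and row-to-frequency mapping are placeholder assumptions, not values taken from the paper:

import numpy as np

def sonify(motiongram, sr=44100, dur=5.0, fmin=100.0, fmax=5000.0):
    """Treat a motiongram (height x frames, values 0..255) as a
    spectrogram and resynthesise it with a bank of sine
    oscillators, one per image row."""
    height, frames = motiongram.shape
    n = int(sr * dur)
    t = np.arange(n) / sr
    # Image row 0 is at the top, so it gets the highest frequency.
    freqs = np.linspace(fmax, fmin, height)
    # Interpolate each row's amplitude envelope across the clip.
    frame_times = np.linspace(0, dur, frames)
    out = np.zeros(n)
    for row in range(height):
        amp = np.interp(t, frame_times, motiongram[row] / 255.0)
        out += amp * np.sin(2 * np.pi * freqs[row] * t)
    return out / max(np.abs(out).max(), 1e-9)  # normalise to -1..1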

Full reference: A. R. Jensenius. Motion-sound interaction using sonification based on motiongrams. In ACHI 2012: The Fifth International Conference on Advances in Computer-Human Interactions, pages 170–175. IARIA, 2012.

@inproceedings{Jensenius:2012d,
    Author = {Jensenius, Alexander Refsum},
    Booktitle = {ACHI 2012: The Fifth International Conference on Advances in Computer-Human Interactions},
    Pages = {170--175},
    Publisher = {IARIA},
    Title = {Motion-sound Interaction Using Sonification based on Motiongrams},
    Year = {2012}}

Video examples

Video 1: A screencast demonstrating the jmod.sonifyer~ module.

Video 2: Examples of sonification of some basic movement patterns: up-down, sideways, diagonal and circular.

Video 3: One attempt at sonifying the two axes at the same time. Here both horizontal and vertical motiongrams are created from the same video recording, and the sonifications of the two motiongrams are mapped to the left and right audio channels, respectively.

Video 4: Examples showing how filtering and thresholding of the motion image influence the final sounding result. The recordings were done at high speed (200 fps) and played back at 25 fps.

Video 5: Sonification of a short violin improvisation (courtesy of Victoria Johnson).

Video 6: Sonification of a piece by a French-Canadian fiddler (courtesy of Erwin Schoonderwaldt).

Video 7: Sonification of free dance to music.

Video 8: Soniperforma: Performing with the sonifyer at Biermannsgården in Oslo on 18 December 2010. The performance was improvised and based on applying only video effects to change the sonic quality.