All posts by alexarje

Alexander Refsum Jensenius is a music researcher and research musician living in Oslo, Norway.

New publication: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music

After several years of hard work, we are very happy to announce a new publication coming out of the MICRO project that I am leading: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music (Frontiers in Psychology).

From the setup of the experiment in which we tested the effects of listening to headphones and speakers.

This is the first journal article of my PhD student Agata Zelechowska, and it reports on a standstill study conducted a couple of years ago. It is slightly different from the paradigm we have used for the Championships of Standstill. While the latter is based on single markers on the heads of multiple people, Agata’s experiment was conducted with full-body motion capture of individuals.

The most exciting thing about this new study is that we have investigated whether there are any differences in people’s micromotion when they listen through either headphones or speakers. Is there a difference? Yes, there is! People move (a little) more when listening through headphones.

Want to know more? The article is Open Access, so you can read the whole thing here. The short summary is here:

Previous studies have shown that music may lead to spontaneous body movement, even when people try to stand still. But are spontaneous movement responses to music similar if the stimuli are presented using headphones or speakers? This article presents results from an exploratory study in which 35 participants listened to rhythmic stimuli while standing in a neutral position. The six different stimuli were 45 s each and ranged from a simple pulse to excerpts from electronic dance music (EDM). Each participant listened to all the stimuli using both headphones and speakers. An optical motion capture system was used to calculate their quantity of motion, and a set of questionnaires collected data about music preferences, listening habits, and the experimental sessions. The results show that the participants on average moved more when listening through headphones. The headphones condition was also reported as being more tiresome by the participants. Correlations between participants’ demographics, listening habits, and self-reported body motion were observed in both listening conditions. We conclude that the playback method impacts the level of body motion observed when people are listening to music. This should be taken into account when designing embodied music cognition studies.

Method chapter freely available

I am a big supporter of Open Access publishing, but for various reasons some of my publications are not openly available by default. This is the case for the chapter Methods for Studying Music-Related Body Motion that I have contributed to the Springer Handbook of Systematic Musicology.

I am very happy to announce that the embargo on the book ran out today, which means that a pre-print version of my chapter is finally freely available in UiO’s digital repository. The chapter is a summary of my experiences with music-related motion analysis, and I often recommend it to students, so it is great that it can now be downloaded from anywhere.

Abstract

This chapter presents an overview of some methodological approaches and technologies that can be used in the study of music-related body motion. The aim is not to cover all possible approaches, but rather to highlight some of the ones that are more relevant from a musicological point of view. This includes methods for video-based and sensor-based motion analyses, both qualitative and quantitative. It also includes discussions of the strengths and weaknesses of the different methods, and reflections on how the methods can be used in connection with other relevant data, such as physiological or neurological data, symbolic notation, sound recordings, and contextual data.

Pixel array images of long videos in FFmpeg

Continuing my explorations of FFmpeg for video visualization, today I came across this very nice blog post on creating “pixel array” images of videos. The idea is to reduce every single frame to a single pixel and to plot these pixels next to each other on a line. Of course, I wanted to try this out myself.

I find that creating motiongrams or videograms is a good way to visualize the content of videos. They are abstract representations, but still reveal some of what is going on. However, for longer videos, motiongrams may be a bit tricky to look at, and they also take a lot of time to generate (hours, or even days). For that reason I was excited to see how pixel array images would work on some of my material.
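Before getting to my own material, here is a minimal sketch of how such an image can be put together with FFmpeg’s fps, scale, and tile filters. The filenames and numbers are only examples: for a roughly 300-second video, the fps filter samples it down to about 640 evenly spaced frames, scale reduces each frame to a single pixel, and tile lines the pixels up in one row:

ffmpeg -i dance.mp4 -vf "fps=640/300,scale=1:1,tile=640x1" -frames:v 1 pixelarray.png

The output is a strip of 640×1 pixels, so adding a final scaling step (for example scale=640:120:flags=neighbor) stretches it into something easier to look at.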

First I tried on my “standard” dance video:

which gives this pixel array image:

Pixel array image (640 pixels wide) of the dance video above.

Yes, that is mainly a blue line, since the average colour of the frames is blue throughout the entire video.

Then I tried with one of the videos from the AIST Dance Video Database:

which results in this pixel array image:

Pixel array image (640 pixels wide) of the dance video above.

And, yes, that is mainly a gray line… I realize that this method does not work very well with single-shot videos.

To try something very different, I also decided to make a pixel array image of Bergensbanen, a 7-hour TV production of the train between Oslo and Bergen. I made videograms of this recording some years ago, which turned out to be quite nice. So I was excited to see how a pixel array image would work. The end result looks like this (1920 pixels wide):

Pixel array image (1920 pixels wide) of the 7-hour TV production Bergensbanen

As you can see, not much changes, but that also reflects the slowness of the train ride. While I originally thought this would be a smart representation, I still think that my videograms were more informative, such as this one:

Videogram of Bergensbanen

The big difference between the two visualizations is that in the videogram each frame is represented by a whole column of vertical information, whereas the pixel array image only displays a single pixel per frame. That said, it took only a few minutes to generate the pixel array image, while I recall spending several days generating the videogram.

To sum up, I think that pixel array images are probably more useful for movies and other video material with lots of changes throughout, since such content is better suited to this kind of reduction. For my videos, which are always recorded with single-shot, stationary cameras, motiongrams and videograms may still be the preferred solution.

Convert MPEG-2 files to MPEG-4

Canon XF105

This is a note to self, and could potentially also be useful to others in need of converting “old-school” MPEG-2 files into more modern MPEG-4 files using FFmpeg.

In the fourMs lab we have a bunch of Canon XF105 video cameras that record .MXF files with MPEG-2 compression. This is not a very useful format for other things we are doing, so I often have to recompress them to something else.

Inspecting one of the files, I also discovered that they record the audio as two mono channels:

Stream #0:0: Video: mpeg2video (4:2:2), yuv422p(tv, bt709, top first), 1920x1080 [SAR 1:1 DAR 16:9], 50000 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc

Stream #0:1: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s

Stream #0:2: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s
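For reference, this stream overview is simply what FFmpeg prints when you give it the file as input without specifying an output; ffprobe shows the same information:

ffprobe input.mxf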

So I also want to merge these two mono tracks (which are the left and right inputs of the camera) into a stereo track. FFmpeg comes in handy (as always), and I figured out that this little one-liner will do the trick:

ffmpeg -i input.mxf -vf yadif -vcodec libx264 -q:v 3 -filter_complex "[0:a:0][0:a:1]amerge,channelmap=channel_layout=stereo[st]" -map 0:v -map "[st]" output.mp4

An explanation of some of these settings:

  • yadif: this is for deinterlacing the video
  • libx264: this is probably unnecessary, but it forces FFmpeg to use the better H.264 (MPEG-4) encoder
  • q:v 3: I find this to be a good setting for the video compressor
  • filter_complex: this complex string (courtesy of reddit) merges the two mono sources into one stereo track
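Since there are usually many files to convert at once, a small shell loop around the same one-liner also comes in handy. This is just a sketch, assuming that the .MXF files sit in the current folder and use the uppercase extension:

# convert every .MXF file in the current folder, writing an .mp4 next to each
for f in *.MXF; do
  ffmpeg -i "$f" -vf yadif -vcodec libx264 -q:v 3 \
    -filter_complex "[0:a:0][0:a:1]amerge,channelmap=channel_layout=stereo[st]" \
    -map 0:v -map "[st]" "${f%.MXF}.mp4"
done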

I will probably try to add it to MGT-terminal at some point, but this blog post will suffice for now.

Simple tips for better video conferencing


Many people are currently moving to video-based meetings. For that reason I have written up some quick advice on how to improve your setup. This is based on my interview advice, but grouped differently.

Network


The first important thing is to have as good a network connection as you can. Video conferencing requires a lot of bandwidth, so even if your e-mail and regular browsing work fine, your connection may still not be sufficient for good video transmission.

  • Cabled network: If you are able to connect to your router with an Ethernet cable, that is usually the best and most stable solution.
  • Wireless network: If a cable won’t work for you (it is also difficult logistically in my own apartment), try to get as close as possible to your wi-fi router.

Audio


I would argue that improving the audio is more important than improving the video for video conferencing. Most video conferencing systems (Skype, Zoom, etc.) will prioritize the audio channel, which means that the video may stutter while the audio still passes through fine.

The main trick is to aim for separating the “foreground” as much as possible from the “background”. There are some very basic audio principles to follow:

  • Use a headset: The best way to get decent sound for video conferencing is to move the microphone as close as possible to your mouth. A headset with a microphone boom in front of your face is best, but a regular mobile phone headset (the one that came with your phone, for example) is still better than nothing.
  • Use headphones: If you for some reason do not have a headset with a built-in microphone, using a regular pair of headphones is still better than using the speakers on your computer. With this setup you use the microphone on the computer, which may not be ideal, but at least you won’t get feedback problems.
  • Avoid reverberant rooms: If you aim for clarity in conversation, it is usually better to sit in a smaller, more damped room than in a large one. That means that a bedroom is typically better than a larger living room. If you use a headset this matters less, but if you only use the built-in microphone and speakers on a laptop, it can make a huge difference in how your voice comes through.
  • Mute yourself: In most systems there is a button to mute yourself. If you are not talking all the time, it helps to mute yourself from the discussion. Just remember to unmute when you want to say something!

Video


The same principle of separating “foreground” from “background” applies to the video.

  • Lighting: To obtain the best possible video image, think about your placement with respect to lighting. It is, for example, not ideal to sit in front of a window, since a bright light in the background will make it difficult to see your face.
  • Background: It is best to sit in front of a plain wall. If that is not possible, consider whether the background of your image is what you want to show to your fellow students/colleagues.
  • Video angle: If you are using the built-in camera on your computer, you may not have many options for placing the camera, but you can still consider shifting its position so that you and your surroundings look as good as possible.

Summing up

There are, of course, many ways to improve your video conferencing setup. Many people believe that you need to invest in expensive equipment to get good results. But even cheap consumer products are very capable of producing decent results these days. So it is more a matter of optimizing what you have. Good luck!