Testing simple camera and microphone setups for quick interviews

We just started a new run of our free online course Music Moves. Here we have a tradition of recording wrap-up videos every Friday, in which some of the course educators answer questions from the learners. We have recorded these in many different ways over the years, from using high-end cameras and microphones to just using a handheld phone. We have found that using multiple cameras and microphones is simply too time-consuming, both in terms of setup and editing. Using only a mobile phone is extremely easy to set up, but we have had challenges with the audibility of the speech. Before recording this semester’s wrap-up videos, I therefore decided to test some solutions based on equipment I had lying around:

  • GoPro Hero 7 (with and without the audio adapter)
  • Sony RX100 V
  • Zoom Q8
  • Samsung Galaxy Note 8
  • Røde Smartlav+ lavalier microphone
  • DPA Core 4060 lavalier microphone

In the following I will show some of the results of the testing. I decided to skip the Sony camera in this write-up, because it doesn’t have the option of connecting a separate microphone.

Testing various devices in my office.

GoPro Hero 7

The first example is of a GoPro Hero 7 with just the built-in microphone. This worked much better than expected. The audio is quite clear and it is easy to hear what I am saying. The colours of the video are vivid, but the image is compressed quite a bit. The video is very wide-angled, which is super-practical for such an interview setting, although it looks a bit skewed on the edges. But overall this was a positive surprise.

Connecting a Røde Smartlav+ to the GoPro results in a very clean sound. In fact, this could have been a very nice setup, had it not been for some challenges with placing the camera. That is because the audio dongle for the GoPro (1) is bent downwards and (2) makes it impossible to use the housing needed to put the camera on a tripod (as can be seen in the picture to the right). This makes it super-clumsy to use this setup in a real-life situation. I hear rumours about a new audio add-on for newer GoPro cameras, which may be worth checking out.

Zoom Q8

My next device is the Zoom Q8. This is actually a sound recorder with a built-in camera, so one would expect the audio to be the main priority. This is also the case. The video is quite noisy, but the sound quality is much better than with the GoPro. Still, I find that the microphone picks up quite a bit of the room. This is good for music recordings, but not so good when the focus is on speech quality.

Hooking up a DPA 4060 lavalier microphone to the Zoom Q8 definitely helps. This is a high-quality microphone, and it needs phantom power (which the Zoom Q8 can deliver). As expected, this gives great sound, very loud and clear. The downside is that it requires bringing an extra XLR cable together with the microphone and camera, since the cable of the DPA is too short for such an interview setup. I like the wide-angle of the video, but the quality of the video is not very good.

Samsung Galaxy Note 8

Mobile phones are becoming increasingly powerful, and I also had to try the camera of my Samsung Galaxy Note 8. I have a small Manfrotto mobile phone stand which makes it possible to place it on a tripod at a suitable distance. After recording, I realized how much narrower the phone’s image is than those of the GoPro and Zoom cameras, leaving my head cut off in the shots. This doesn’t matter for the testing here, however. The first video uses the built-in microphone of the mobile phone. I am very positively surprised by how crisp and clear my voice comes through here. In fact, it is quite similar to the GoPro. The video quality is also very good, and clearly the best of the three devices being compared here (the Sony camera has much better video, but it was discarded due to the lack of a microphone input).

And, finally, I connected the SmartLav+ lavalier microphone to the Samsung phone. Here the sound is, of course, very similar to the GoPro recordings.

Conclusion

It is not entirely straightforward to draw conclusions from this very rapid and not particularly systematic testing, but here are some of my thoughts:

  • Using on-body microphones (lavalier) greatly improves the audibility as compared to using built-in microphones.
  • The DPA 4060 is great, but the Smartlav+ is more than good enough for interviews.
  • The GoPro could have been a great device for such interviews, had it not been for the skewed image and the clumsiness of the audio adaptor.
  • The Zoom Q8 is the best audio device (as it should be!), but unfortunately its video quality is too poor.
  • All in all, I think that the easiest and best solution is the Samsung phone with Smartlav+.

Testing Blackmagic Web Presenter

We are rapidly moving towards the start of our new Master’s programme Music, Communication & Technology. This is a unique programme in that it is split between two universities (in Oslo and Trondheim), 500 kilometres apart. We are working on setting up a permanent high-quality, low-latency connection that will be used as the basis for our communication. But in addition to this permanent setup we need solutions for quick and easy communication. We have been (and will be) testing a lot of different software and hardware solutions, and in a series of blog posts I will describe some of the pros and cons of these.

Today I have been testing the Blackmagic Web Presenter. This is a small box with two video inputs (one HDMI and one SDI), and two audio inputs (one XLR and one stereo RCA). The box functions as a very basic video/audio mixer, but the most interesting thing is that it shows up as a normal web camera on the computer (even in Ubuntu, without drivers!). This means that it can be used in most communication platforms, including Skype, Teams, Hangouts, Appear.in, Zoom, etc., and be the centerpiece of slightly more advanced communication.

My main interest in testing it now was to see if I could connect a regular camera (Canon XF105) and a document camera (Lumens DC193) to the device. As you can see in the video below, this worked flawlessly, and I was able to do a quick recording using the built-in video recorder (Cheese) in Ubuntu.
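As a side note, the device can also be captured directly from the command line on Ubuntu. Here is a minimal sketch (not something I used in the test above), assuming the Web Presenter shows up as /dev/video0 and that the v4l-utils and FFmpeg packages are installed. The first command lists the available video devices, the second records the 720p signal to a file:

v4l2-ctl --list-devices

ffmpeg -f v4l2 -framerate 30 -video_size 1280x720 -i /dev/video0 -c:v libx264 -preset veryfast capture.mkv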

So to the verdict:

Positive:

  • No-frills setup, even on Ubuntu!
  • Very positive that it scales the video correctly. My camera was running 1080i and the document camera 720p, and the scaling worked flawlessly (you need matching inputs for the video transition effects to work, though, but that is not really a problem for my usage).
  • Hardware encoding makes it easy to connect it even to fairly modest PCs.
  • Nice price tag (~$500).

Negative:

  • Most people have HDMI devices, while SDI is rare, and the box only has one input of each. We have a lot of SDI equipment, so it works fine for our use.
  • No phantom power for the XLR input. This is perhaps the biggest problem. You can use a dynamic microphone, but I would have preferred a condenser. I ended up connecting a wireless lavalier microphone, with a line-level XLR connection from the receiver. It is also possible to use a mixer, but the whole point of this box is to have a small, portable and easy setup.
  • 720p output is ok for many things we will use it for, but is not particularly future-proof.
  • It has a fan. It makes a little more noise than my laptop fan when it kicks in, but it is not noticeable if the box is moved a metre away.

Not perfect, but for its intended usage I think it works very well. For meetings and teaching where a little more than a plain web camera is needed, it does its job nicely.

Trim video files using FFmpeg

This is a note to self, and hopefully to others, about how to easily and quickly trim videos without recompressing the file.

I often have long video recordings that I want to split or trim (side note: sometimes people call this “cropping”, but in my world cropping is to cut out parts of the image, that is, a spatial transformation. Splitting and trimming are temporal transformations).

You can split and trim files in most video editing software, but these will typically also recompress the file on export. This reduces the quality of the video, and it also takes a long time. A much better solution is to perform “lossless” trimming, and fortunately there is a way to do this with the wonder-tool FFmpeg. Being a command line utility (available on most platforms), it has a ton of different options, which I never remember. So here goes: this is what I use (on Ubuntu) to trim out parts of a long video file:

ffmpeg -i input.mp4 -ss 01:19:27 -to 02:18:51 -c:v copy -c:a copy output.mp4

This will cut out the section from about 1h19min to 2h18min, and will only take a few seconds to run. If you instead want to specify a fixed duration, you can use:

ffmpeg -i input.mp4 -ss 00:01:10 -t 00:01:05 -c:v copy -c:a copy output.mp4

This will extract 1min5sec starting from 1min10sec in the file.
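If you instead want to split a long recording into several files of equal length, the same lossless stream-copy approach works with FFmpeg’s segment muxer. A sketch that cuts the file into ten-minute chunks:

ffmpeg -i input.mp4 -c copy -map 0 -f segment -segment_time 00:10:00 -reset_timestamps 1 output%03d.mp4

Note that with stream copy the cuts can only happen at keyframes, so the actual cut points (both here and in the trimming commands above) may end up slightly off from the requested times.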

Visualisations of a timelapse video

Yesterday, I posted a blog entry on my TimeLapser application, and how it was used to document the process of making the sculpture Hommage til kaffeselskapene by my mother. The final timelapse video looks like this:

Now I have run this timelapse video through my VideoAnalysis application, to see what types of analysis material can come out of such a video.

The average image displays a “summary” of the entire video recording, somewhat similar to an “open shutter” in traditional photography. This image makes it possible to see what has been moving and what has remained still throughout the entire sequence.

Average image

The motion average image is somewhat similar to the average image, but it summarises the motion images throughout the entire sequence, that is, only the parts of the image that changed from frame to frame.

Motion average image
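For those who want to experiment without VideoAnalysis, a rough approximation of such motion images can be made with FFmpeg’s tblend filter, which subtracts each frame from the previous one so that only the changing parts remain. This is just a frame-differencing sketch (the input filename is a placeholder), not the actual processing done in VideoAnalysis:

ffmpeg -i timelapse.mp4 -vf "tblend=all_mode=difference" motion.mp4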

What I call a motion history image is the motion average image overlaid on a single frame from the original video. I typically create such motion history images using both the first and last frames of the video, as can be seen below.

Motion history image, based on the first video frame

Motion history image, based on the last video frame
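A crude way of making a similar overlay without VideoAnalysis is to extract a frame from the video and blend it with a motion average image in FFmpeg. The filenames below are placeholders, and the screen blend mode is only one of several ways such an overlay could be done:

ffmpeg -i timelapse.mp4 -frames:v 1 first_frame.png

ffmpeg -i first_frame.png -i motion_average.png -filter_complex "blend=all_mode=screen" -frames:v 1 motion_history.png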

Finally, I have also created both horizontal and vertical motiongrams of the timelapse video. The horizontal motiongram displays the vertical motion, which in this case is how the sculptor moved back and forth when sitting at the table. The edge of the table can be seen as the “stripe” running throughout the image.

Horizontal motiongram, displaying vertical motion

The vertical motiongram, on the other hand, displays horizontal motion, that is, how the artist moved sideways throughout the process. Here it is very interesting to note the rhythmic swaying, as the sculptor moved back and forth in what seems to be a periodic pattern.

Vertical motiongram, displaying horizontal motion

I also have some more motion data, which it will be interesting to study in more detail in Matlab.

Timelapser

I have recently started moving my development efforts over to GitHub, to keep everything in one place. Now I have also uploaded a small application I developed for a project by my mother, Norwegian sculptor Grete Refsum. She wanted to create a timelapse video of her making a new sculpture, “Hommage til kaffeselskapene”, for her installation piece Tante Vivi, fange nr. 24 127 Ravensbrück.

There is lots of timelapse software available, but none of it fitted my needs. So I developed a small Max patch called TimeLapser. TimeLapser takes an image from a webcam at a regular interval (1 minute). Each image is saved with the time code as the name of the file, making it easy to use the images for documentation purposes or to assemble them into timelapse videos. The application was originally developed for an art project, but can probably be useful for other timelapse purposes as well.
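For those who do not use Max, a similar interval capture can also be done from the command line. Here is a small sketch with FFmpeg, assuming a webcam on /dev/video0, grabbing one frame per minute and naming each file with a timestamp:

ffmpeg -f v4l2 -i /dev/video0 -vf fps=1/60 -strftime 1 "timelapse_%Y-%m-%d_%H-%M-%S.jpg"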

The application will only store separate image files, which can easily be assembled into timelapse movies using, for example, QuickTime.
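If QuickTime is not at hand, FFmpeg can do the same assembly. A sketch that takes all JPEG files in the current folder (sorted by their timestamped names) and turns them into a 30 fps video:

ffmpeg -framerate 30 -pattern_type glob -i "*.jpg" -c:v libx264 -pix_fmt yuv420p timelapse.mp4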

Below is a video showing the final timelapse of my mother’s sculpture: