Testing simple camera and microphone setups for quick interviews

We just started a new run of our free online course Music Moves. Here we have a tradition of recording wrap-up videos every Friday, in which some of the course educators answer questions from the learners. We have recorded these in many different ways over the years, from using high-end cameras and microphones to just using a handheld phone. We have found that using multiple cameras and microphones is just too time-consuming, both in terms of setup and editing. Using only a mobile phone is extremely easy to set up, but we have had challenges with the audibility of the speech. Before recording this semester’s wrap-up videos I therefore decided to test out some solutions based on equipment I had lying around:

  • GoPro Hero 7 w/o audio connector
  • Sony RX100 V
  • Zoom Q8
  • Samsung Galaxy Note 8
  • Røde Smartlav+ lavalier microphone
  • DPA Core 4060 lavalier microphone

In the following I will show some of the results of the testing. I decided to skip the Sony camera in this write-up, because it doesn’t have the option of connecting a separate microphone.

Testing various devices in my office.

GoPro Hero 7

The first example is of a GoPro Hero 7 with just the built-in microphone. This worked much better than expected. The audio is quite clear and it is easy to hear what I am saying. The colours of the video are vivid, but the image is compressed quite a bit. The video is very wide-angled, which is super-practical for such an interview setting, although it looks a bit skewed on the edges. But overall this was a positive surprise.

Connecting a Røde Smartlav+ to the GoPro results in a very clean sound. In fact, this could have been a very nice setup, had it not been for some challenges with placing the camera. The audio dongle for the GoPro is bent downwards, which makes it impossible to use the housing needed to put the camera on a tripod (as can be seen in the picture to the right). This makes it super-clumsy to use this setup in a real-life situation. I hear rumours about a new audio add-on for newer GoPro cameras, and that may be very interesting to check out.

Zoom Q8

My next device is the Zoom Q8. This is actually a sound recorder with a built-in camera, so one would expect that the audio is the main priority. This is also the case. The video is quite noisy, but the sound quality is much better than with the GoPro. Still I find that the microphone picks up quite a bit of the room. This is good for music recordings, but not so good when the focus is on speech quality.

Hooking up a DPA 4060 lavalier microphone to the Zoom Q8 definitely helps. This is a high-quality microphone, and it needs phantom power (which the Zoom Q8 can deliver). As expected, this gives great sound, very loud and clear. The downside is that it requires bringing an extra XLR cable together with the microphone and camera, since the cable of the DPA is too short for such an interview setup. I like the wide-angle of the video, but the quality of the video is not very good.

Samsung Galaxy Note 8

Mobile phones are becoming increasingly powerful, so I also had to try the camera of my Samsung Galaxy Note 8. I have a small Manfrotto mobile phone stand, which makes it possible to place the phone on a tripod at a suitable distance. After recording I realized how much narrower the phone’s field of view is than that of the GoPro and Zoom cameras, leaving my head cut off in the shots. This doesn’t matter for the testing here, however. The first video uses the built-in microphone of the mobile phone. I am very positively surprised by how crisp and clear my voice comes through; in fact, it is quite similar to the GoPro. The video quality is also very good, and clearly the best of the three devices compared here (the Sony camera has much better video, but it was discarded due to the lack of a microphone input).

And, finally, I connected the SmartLav+ lavalier microphone to the Samsung phone. Here the sound is, of course, very similar to the GoPro recordings.

Conclusion

It is not entirely straightforward to draw conclusions here, but these are some of my thoughts after this very rapid and not very systematic testing:

  • Using on-body microphones (lavalier) greatly improves the audibility as compared to using built-in microphones.
  • The DPA 4060 is great, but the Smartlav+ is more than good enough for interviews.
  • The GoPro could have been a great device for such interviews, had it not been for the skewed image and the clumsiness of the audio adaptor.
  • The Zoom Q8 is the best audio device (as it should be!), but its video quality is unfortunately too poor.
  • All in all, I think that the easiest and best solution is the Samsung phone with Smartlav+.

Motiongram of high-speed violin bowing

I came across a high-speed recording of bowing on a violin string today, and thought it would be interesting to try to analyze it with the new version of the Musical Gestures Toolbox for Python. This is inspired by results from the creation of motiongrams of a high-speed guitar recording that I did some years ago.

Here is the original video:

From this I generated the following motion video:

And from this we get the following motiongram showing the vertical motion of the string (time running from left to right):

This motiongram shows the horizontal motion of the string (time running downwards):
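Conceptually, a motiongram collapses each motion image (the absolute difference between successive frames) into a single column or row, and stacks these over time. A minimal NumPy sketch of the idea, independent of the toolbox’s own implementation:

```python
import numpy as np

def motiongram(frames, direction="vertical"):
    """Simple motiongram from grayscale frames of shape (time, height, width).

    The motion image is the absolute difference between successive
    frames; averaging it across one spatial axis per frame and
    stacking the results over time yields the motiongram.
    """
    frames = np.asarray(frames, dtype=float)
    motion = np.abs(np.diff(frames, axis=0))   # (time-1, height, width)
    if direction == "vertical":
        # Average over width: one column per frame, time left to right.
        return motion.mean(axis=2).T           # (height, time-1)
    # Average over height: one row per frame, time running downwards.
    return motion.mean(axis=1)                 # (time-1, width)
```

With direction="vertical" this corresponds to the first motiongram above (vertical string motion, time left to right), and with direction="horizontal" to the second one (time running downwards).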

Great example of a sound-producing action!

Podcast on Open Research

I was in Tromsø a month ago to give a keynote lecture at the Munin conference, and was asked to contribute to a podcast they are running called Open Science Talk. Now it is out, and I am happy to share:

In this episode, we talk about Music Research, and how it is to practice open research within this field. Our guest is Alexander Jensenius, Associate Professor at the Department of Musicology (IMV) and the Centre for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo. He is also behind MusicLab, an event-based project where data is collected during a musical performance and analyzed on the fly.

Thanks to Erik Lieungh and the rest of the team at the University Library at UiT The Arctic University of Norway. They are doing a great job in developing Open Science tools and strategies!

Teaching with a document camera

How does an “old-school” document camera work for modern-day teaching? Remarkably well, I think. Here are some thoughts on my experience over the last few years.

The reason I got started with a document camera was because I felt the need for a more flexible setup for my classroom teaching. Conference presentations with limited time are better done with linear presentation tools, I think, since the slides help with the flow. But for classroom teaching, in which dialogue with students is at the forefront, such linear presentation tools do not give me the flexibility that I need.

Writing on a blackboard or whiteboard could have been an option, but in many modern classrooms these have been replaced by projector screens. I also find that writing on a board is much trickier than writing with pen on paper. So a document camera, which is essentially a modernized “overhead projector”, is a good solution.

After a little bit of research some years back, I ended up buying a Lumens Ladibug DC193. The reason I went for this one was that it had the features I needed, combined with being the only nice-looking document camera I could find (aesthetics are important!). A nice feature is the built-in light, which helps create a better image when the room lighting is not very bright.

My Lumens Ladibug DC193 document camera is red and has a built-in light.

One very useful feature of the document camera is the ability to connect my laptop to the HDMI input on the Ladibug, and then connect the Ladibug’s HDMI output to the screen. The built-in “video mixer” makes it possible to switch between the document camera and the computer screen. This is a feature I have been using much more than expected, allowing me to switch between slides shown on the PC, handwriting on paper, and parts of web pages.

When I first got the document camera, I thought that I was going to use the built-in recording functionality a lot. It is possible to connect a USB drive directly to the camera, and make recordings. Unfortunately, the video quality is not very good, and the audio quality from the built-in mono microphone is horrible.

One of the best things about a document camera is that it can be used for other things than just showing text on paper. This is particularly useful when I teach with small devices (instruments and electronics) that are difficult to see at a distance. Placing them on the table below the camera makes them appear large and clear on the screen. One challenge, however, is that the document camera is optimized for text on white paper. So I find that it is best to place a white paper sheet under what I want to show.

Things became a little more complicated when I started to teach in the MCT programme, where all teaching happens in the Portal, which connects the two campuses in Oslo and Trondheim. We use Zoom for the basic video communication, with a number of different computers connected to make it all work together. I was very happy to find that the Ladibug shows up as a regular “web camera” when connected to my PC with a USB cable. This makes it possible to send it as a video source to one of the Zoom screens in our setup.

When teaching in the MCT Portal, I connect the Ladibug with USB to my PC, and then send the video to Zoom from my laptop.

The solution presented above works well in the Portal, where we already have a bunch of other cameras and computers that handle the rest of the communication. For streaming setups outside of the Portal I have previously shown how it is possible to connect the document camera to the Blackmagic web presenter, which allows for also connecting a regular video camera to the SDI input.

More recently I have also explored using a video mixer (Sony MCX-500), which allows for connecting several video cameras and microphones at once. Since the video mixer cannot be connected directly to a PC, it is necessary to also add the Blackmagic web presenter to the mix. This makes for a quite large and complex setup. I used it for one remote lecture, and even though it worked, it was not as streamlined as I had hoped. So I will need to find an easier solution in the future.

Exploring a more complex remote teaching setup, including a video mixer in addition to document camera and web presenter.

What is clear, however, is that a document camera is very useful for my teaching style. The Ladibug has served me well for some time, but I will soon start to look for a replacement. I particularly miss having full HD, better calibration of the image, as well as better recording functionality. I hope manufacturers are still developing this type of niche product, ideally also nice-looking ones!

Creating individual image files from presentation slides

How do you create full-screen images from each of the slides of a Google Slides presentation without too much manual work? For the previous blog post on my Munin keynote, I wanted to include some pictures from my 90-slide presentation. There is probably a point-and-click solution to this problem, but it is even more fun to use some command line tools to help out. These commands have been tested on Ubuntu 19.10, but should probably work on many other systems as well, as long as you have installed pdfseparate (from poppler-utils) and convert (from ImageMagick).

After exporting a PDF from Google Slides, I made a separate PDF file for each slide using this command:

pdfseparate input.pdf output%d.pdf

This creates a bunch of PDF files with a running number. Then I ran this little for loop:

for i in *.pdf; do name=`echo $i | cut -d'.' -f1`; convert -density 200 "$i" "$name.png"; done

And voilà, then I had nice PNG files of all my slides. I found that the trick is to use the “-density 200” setting (choose the density that suits your needs), since the default resolution and quality are too low.
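As a side note, the filename stripping can also be done with bash parameter expansion instead of cut, which avoids problems with filenames that contain more than one dot. A sketch, assuming the same per-slide PDFs and that convert is installed:

```shell
# Render each single-page PDF as a PNG; ${i%.pdf} strips only the
# trailing extension, so names like "output1.v2.pdf" work correctly too.
for i in *.pdf; do
  [ -e "$i" ] || continue              # nothing to do if no PDFs matched
  convert -density 200 "$i" "${i%.pdf}.png"
done
```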