How to work with plug-in-power microphones

I have never thought about how so-called plug-in-power microphones actually work. Over the years, I have used several of them for various applications, including small lavalier microphones for cameras and mobile phones. The nice thing about plug-and-play devices is that they are, well, plug and play. The challenge, however, is when they don’t work. Then it is time to figure out what is actually going on. This is the story of how I managed to use a Røde SmartLav+ lavalier microphone with a Zoom Q8 recorder.

Powered microphones

The Shure SM58 is a classic dynamic microphone, which doesn’t require any power to function.

When speaking about large (normal) microphones, we typically differentiate between two types: dynamic and condenser. Dynamic microphones are often used for singing and talking and don’t require any power. You can plug them into a mixer or sound card, and they will work. Dynamic microphones are very versatile, and they rarely lead to much feedback. The downside is that they don’t pick up soft sounds very well, so you need to speak/sing quite loudly directly into them to get a good signal.

AKG condenser microphone with XLR cable.

Condenser microphones are much more sensitive and allow for picking up more details than dynamic ones. However, to make them work, they need to be supplied with 48-volt power, often called phantom power. Most mixers and sound cards can supply phantom power over an XLR connection, so it is usually no problem to get a good signal from a condenser microphone. Since there is only one connection type (XLR) and one type of power (48 volts), things are fairly straightforward (yes, I know, there are some exceptions here, but it holds for most cases).

Lavalier microphones

As I have been doing more video recording and streaming in recent years, I have also gotten used to working with lavalier microphones. These are the tiny microphones you can place on your shirt to get good sound quality when speaking on video. Over the years, I have worked with various microphones that come bundled as part of wireless packages. You have a transmitter to which you attach the microphone and a receiver to plug into your video camera. The transmitter and receiver run on batteries, but I had always thought that the power was only used for the wireless transmission. Now I have learned that these microphones also receive power from the transmitter. That is quite obvious when you think about it. After all, they pick up sound like a large condenser microphone. But I never really thought about them as powered microphones before.

Nowadays, I often use my mobile phone for quick and dirty video recordings. This works well for many things, and, as they say, the best camera is the one you bring with you. The sound, however, is less than optimal. I, therefore, wanted to use a lavalier microphone with my phone. Then the problems started.

It turns out that the world of lavalier microphones is much more complex than I would have imagined. To start with, there are numerous connectors for such microphones, including minijack plugs of different sizes (2.5 and 3.5 mm) and with different numbers of rings (TS, TRS, and TRRS), mini-XLR plugs with different numbers of pins (TA-3, TA-4, TA-5), in addition to Hirose, LEMO, and so on.

The Røde SmartLav+ lavalier microphone.

Looking through my own collection of lavalier microphones, and the ones we have in the lab, I found that none of them had a 3.5 mm minijack connector that I could plug straight into my phone (yes, I still have a minijack plug on my phone!). So I quickly gave up and looked around on the web. Many people recommended the Røde SmartLav+, so I decided to get one to try out.

I liked the SmartLav+ so much (a comparison with some other devices) that I bought another one, some extender cables, and a small adapter to connect two of them to my phone simultaneously. Voilà, I have a nice and small kit for recording two people at a time. I have been using this to record many educational videos this last year, and it has worked very well. So if you want a small, simple, and (comparatively) cheap setup to improve audio on your mobile phone recordings, you should get something like this. I should say that I have no particular reason for recommending the Røde SmartLav+ over other options. Now I see that many people also recommend the Shure MVL, which is probably equally good.

Connecting the SmartLav+ to a GoPro camera

I had been using the SmartLav+ with my phone for a while when I decided to try it with a GoPro 8. With the MediaMod accessory, it is possible to connect a microphone with a minijack plug. But plugging in the SmartLav+ directly did not work. This was when I started thinking more about the fact that the SmartLav+ has a so-called TRRS plug (as opposed to TRS and TS plugs).

Differences between TS, TRS, and TRRS connectors.

In many consumer products, these three types are used for mono signals, stereo signals, and headsets (mono microphone + stereo output), respectively (although things are not always that easy).

A common way of thinking about how the different plugs are used in consumer devices.
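If it helps to see the convention written out, here is a minimal, purely illustrative Python sketch of how these plug types are commonly (but not universally) wired in consumer gear. The labels and pin orders are my own simplifications, not a formal standard:

# Illustrative only: a common (but not universal) convention for how
# 3.5 mm plugs are used in consumer devices. Real products sometimes deviate,
# and the TRRS pin order varies between the CTIA and OMTP wiring schemes.
PLUG_CONVENTIONS = {
    "TS":   {"contacts": 2, "typical_use": "mono signal (tip = signal, sleeve = ground)"},
    "TRS":  {"contacts": 3, "typical_use": "stereo (tip = left, ring = right, sleeve = ground)"},
    "TRRS": {"contacts": 4, "typical_use": "headset (stereo out + mono mic + ground)"},
}

for plug, info in PLUG_CONVENTIONS.items():
    print(f"{plug}: {info['contacts']} contacts - {info['typical_use']}")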
The Røde SC3 TRRS-TRS adaptor is designed with a grey side. Practical!

To work with a regular audio input (like the one on the GoPro), the SmartLav+ signal needs to be converted from TRRS to TRS. Fortunately, there are adaptors for this, and it turned out that I had a few lying around in my office. I still decided to buy a Røde SC3 because it has a grey colour on the TRRS side, making it easier to see the connector type.

When I plugged the microphone (with the adaptor) into the GoPro, it worked nicely right out of the box. I, therefore, didn’t think much about the need to power the microphone. I later learned from DC Rainmaker that the GoPro actually has a setting for choosing between different types of microphone inputs:

The settings available on the GoPro with a MediaMod.

The list above says that the GoPro defaults to non-powered mics, but my camera defaults to plug-in power. They might have changed things along the way.

Connecting the SmartLav+ to a Zoom Q8 recorder

When I tried to connect the SmartLav+ to a Zoom Q8 recorder, I started having problems. First, I connected with a minijack-to-jack adaptor (with the TRRS-TRS adaptor in between). This resulted in no sound input on the Q8. I then switched to an XLR adaptor, but still no sound. Finally, I took out a dynamic microphone to check that the Q8 input actually worked.

This was when I realized that the SmartLav+ is actually a powered microphone. After reading up more on this and other lavalier microphones, I understand that I have had a big gap in my microphone knowledge. This is slightly embarrassing. After all, a professor of music technology should know about such things. In my defence, perhaps, I would argue that lavalier microphones are not something that music technologists typically deal with. Most of the time, we work with large microphones and XLR cables. Such small microphones are typically used more for video recording and media production.

Embarrassments aside, I am primarily interested in finding a solution to my problem. How do I connect the SmartLav+, or any other powered minijack microphone, to a sound recorder?

Solution 1

It turned out that Røde actually has a solution to the problem in the form of the minijack-to-XLR adapter VXLR+. This is not just a passive device converting from one connector to the other (I already had some of those lying around). No, this one actually converts the 48-volt power coming from the XLR cable to the 2.7 volts required by the SmartLav+. To complicate things, though, the adapter takes a TRS minijack as input, so it is also necessary to add the TRRS-TRS adapter in between. After hooking it all up and turning on phantom power, I finally have loud and clear sound on the Q8. The sound is not as good as with microphones like the DPA 4060, of course, but not bad for voice recordings.
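To spell out the logic of the chain, here is a small, purely illustrative Python sketch. The device names are taken from the text above, but the voltage check is my own simplification (plug-in power is typically a few volts, and around 2.7 volts for the SmartLav+), not an electrical model:

# Illustrative sketch of why the SmartLav+ stays silent without the VXLR+:
# it is a plug-in-power microphone, so something in the chain must supply
# a few volts at its plug. The numbers are approximate.

def mic_gets_power(volts_at_plug: float) -> bool:
    """Very rough check: plug-in-power mics typically need roughly 2-5 volts."""
    return 2.0 <= volts_at_plug <= 5.0

# Passive minijack-to-XLR adapter: the 48-volt phantom power never reaches the mic.
print(mic_gets_power(0.0))   # False -> no signal on the Q8

# Røde VXLR+ with phantom power enabled: steps 48 volts down to about 2.7 volts.
print(mic_gets_power(2.7))   # True -> loud and clear sound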

One of the reasons I wanted to connect the SmartLav+ to the Zoom recorder in the first place was to have a simple and portable setup for recording conversations with multiple people (4-8). Of course, I could set up an omnidirectional microphone or a stereo pair, but that wouldn’t give the type of intimate sound that I am looking for. In the lab, I could always set up many large microphones on stands, but that is not a very portable solution. So I was thinking about connecting multiple lavalier microphones to a multichannel sound recorder instead. Now I have found that this could actually work well. For example, a Zoom H8 with many lavalier microphones could be a nice and portable setup. While searching for such a setup, however, a different solution came to my attention.

Solution 2

Given that more and more people are using lavalier microphones these days, I was curious about the market for minijack-based mixers. Strangely enough, there aren’t many around, and none from the big manufacturers. But one mixer kept popping up in various webshops: the Just MIC IV by Maker Hart. It features four minijack inputs, and, most importantly, it can provide power to the microphones. In fact, it can provide both 48 and 1.5 volts.

The Just MIC IV is a small mini mixer for minijack-based microphones.

This mixer looked like the perfect solution for my needs, so I decided to give it a try. After playing with it for a little while, I have found it to be almost exactly what I need. The functionality is great. It supplies power to the microphones; they should ideally get 2.7 volts, but the 1.5 volts supplied by the mixer seems to work fine. The panning is a rudimentary left-middle-right switch, which is not ideal but can place people in a stereo image. It only has a 2-channel output, so no multi-channel recording here. But it will suffice for quick recordings of four people.

The biggest problem with the Just MIC IV is that it picks up electrical interference very easily. I often get an annoying buzz when it is connected to a wall socket, so I have ended up running it from a USB battery pack instead. Not ideal, but better than nothing.

Conclusion

After a lot of searching and testing, I now know a lot more about lavalier microphones, different minijack configurations, and interfacing possibilities. I still do not have an optimal solution for my needs, but I am getting closer. Given that so many people are getting into sound recording these days, from podcasts to teaching, I think there is a potential market here for easy-to-use solutions. Products like the SmartLav+ have made it much easier to make good audio recordings on a mobile phone. I wish there were a decent, small, and simple mixer for such microphones. The Just MIC IV is almost there but is too noisy. Any company out there that can make a small, solid, high-quality 8-channel mini-mixer?

New paper: Who Moves to Music? Empathic Concern Predicts Spontaneous Movement Responses to Rhythm and Music

A few days after Agata Zelechowska defended her PhD dissertation, we got the news that her last paper was finally published in Music & Science. It is titled Who Moves to Music? Empathic Concern Predicts Spontaneous Movement Responses to Rhythm and Music and was co-authored by Victor Gonzalez Sanchez, Bruno Laeng, Jonna Vuoskoski, and myself.

The paper is based on Agata’s headphones-speakers experiment. We have previously published a paper showing that people move more when listening on headphones. This time, however, the focus was on the data gathered on individual differences. Many variables were tested, but only empathic concern turned out to be a predictor of motion.

Here is a short video teaser about the article:

And here is the abstract:

Moving to music is a universal human phenomenon, and previous studies have shown that people move to music even when they try to stand still. However, are there individual differences when it comes to how much people spontaneously respond to music with body movement? This article reports on a motion capture study in which 34 participants were asked to stand in a neutral position while listening to short excerpts of rhythmic stimuli and electronic dance music. We explore whether personality and empathy measures, as well as different aspects of music-related behaviour and preferences, can predict the amount of spontaneous movement of the participants. Individual differences were measured using a set of questionnaires: Big Five Inventory, Interpersonal Reactivity Index, and Barcelona Music Reward Questionnaire. Liking ratings for the stimuli were also collected. The regression analyses show that Empathic Concern is a significant predictor of the observed spontaneous movement. We also found a relationship between empathy and the participants’ self-reported tendency to move to music.

And the full reference is:

Zelechowska, A., Gonzalez Sanchez, V. E., Laeng, B., Vuoskoski, J. K., & Jensenius, A. R. (2020). Who Moves to Music? Empathic Concern Predicts Spontaneous Movement Responses to Rhythm and Music. Music & Science, 3, 2059204320974216. https://doi.org/10.1177/2059204320974216

The different parts of the experiment (from left to right): preparation, first listening session, first set of questionnaires, second listening session, second set of questionnaires.

We are preparing the data from the experiment for sharing. It will be released as part of the Oslo Standstill Database.

Running a Hybrid Disputation

Yesterday, I wrote about Agata Zelechowska’s disputation. We decided to run it as a hybrid production, even though there was no audience present. It would, of course, have been easier to run it as an online-only event. However, we expect that hybrid is the new “normal” for such events, and therefore thought that it would be good to get started exploring the hybrid format right away. In this blog post, I will write up some of our experiences.

The setup in the hall, with the candidate presenting to the left and the disputation leader to the right.

The disputation was run in Forsamlingssalen, a nice lecture room in Harald Schjelderups hus, where RITMO is located. We had seen the need for recording lectures in the hall and had, even before corona, installed two PTZ cameras and a video mixer there. This setup was primarily intended for recording lectures and secondarily for streaming on YouTube. We never actually got around to using the new system before corona closed down the university in the spring. So the disputation was a good chance to get the system up and running.

Forsamlingssalen, seen from the lecture podium. One PTZ camera is placed on the left wall and one in the back next to the projector. LED lights help illuminate the speakers.

The most challenging part of the setup turned out to be figuring out how to interface the system with Zoom. We quickly decided to use a Zoom Webinar instead of a Zoom Room. The Webinar solution is better for public events where you want to control the “production”. It is also safer, since only invited panellists are allowed to show their camera and speak.

Zoom (both the Webinar solution and regular Rooms) is in many ways a small video production tool in its own right. However, we realised that it is quite challenging to use in a more traditional multi-camera setup. Its strength lies in allowing multiple people with single-camera setups to interact. We did make it work in the end, but it was quite a puzzle to get right.

People and PCs

There were four people visible in the disputation: the candidate and the disputation leader, who were present in Forsamlingssalen, and the two opponents, who joined remotely. The two opponents used the normal Zoom client on their PCs, so their part was easy enough (of course, we ensured that they had good audio and video quality).

The second opponent projected on the wall during the disputation.

For the setup in Forsamlingssalen, the candidate was standing at the podium with a desktop PC, two screens, and a cabled Ethernet connection. Her presentation was shown on the right screen and her notes on the left. We played with the idea of showing her presentation as a video stream through the video mixer but ended up using Zoom’s screen-sharing function. That meant that we had to run Zoom on the PC, and then we could also add her image from a web camera sitting on top of the left screen.

The speaker desk, with two screens, a desk microphone, and a web camera above the screen to the left.

The image from that PC is also what goes to the projector screen in the hall. That was not so important this time, since there was no audience, but for future hybrid disputations (and other events) we also need to think about the people in the hall.

The hall has a nice microphone setup, with a gooseneck microphone next to the PC screens and several wireless microphones. We ended up equipping both the candidate and the disputation leader with wireless clip-on mics to ensure that they had good sound coming through. The mixed microphone signal was then fed into the lecture PC to be shared through Zoom.

Two wireless microphones were used, one for the candidate and the other for the disputation leader. These microphones were connected to the PA system in the hall and sent as inputs to the desk PC to be included in the Zoom stream.

The disputation leader only had an empty lecturer’s desk. His image was captured from one of the PTZ cameras in the back of the hall, and his sound was captured through a clip-on mic connected to the sound mixer.

Cabling

The cabling in the room is set up so that a combined HDMI signal is sent from the podium to the video mixer. This signal contains the main image from the PC, which is also projected on the wall, as well as the combined audio signal from the microphones and the PC, which is played over the hall’s loudspeakers. As such, we can easily tap the same signal that an audience in the hall would experience. The signals from the two PTZ cameras in the hall also go into separate channels on the video mixer. Below is a sketch of the cabling in the room.

The AV routing in the hall.

The original plan was to run the production from the video mixer, which can stream directly to various servers and also record on an SD card. Since we used Zoom for the disputation, however, we hooked up a separate PC in the control room, to which we fed the mixed video signal through a video grabber. This PC was then connected to Zoom, and we could switch between the two PTZ cameras in the live stream.
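To summarize the chain in compact form, here is a rough, purely illustrative Python sketch of the signal path as I understand it. The labels are my own shorthand, and it is a simplification of the actual cabling:

# Simplified, illustrative description of the production chain used for the
# disputation. Labels are my own shorthand, not official device names.
SIGNAL_PATH = [
    ("Podium PC (slides + mixed mic audio)", "combined HDMI", "video mixer"),
    ("PTZ camera 1", "video feed", "video mixer"),
    ("PTZ camera 2", "video feed", "video mixer"),
    ("Video mixer (program out)", "video grabber", "control-room PC"),
    ("Control-room PC", "Zoom Webinar", "remote opponents and viewers"),
]

for source, link, destination in SIGNAL_PATH:
    print(f"{source} --[{link}]--> {destination}")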

Research assistant Aleksander Tidemann controlling the two PTZ cameras, video mixer, and Zoom PC in the control room.

Lessons learned

Many things worked well, but we also learned some lessons.

  1. Running a hybrid event is (much) more difficult than doing a physical or online-only event. It is challenging to create something that works well both in the room and with an online audience.
  2. Having good audio is imperative. This is particularly tricky in a hybrid situation, in which you can easily get into feedback problems. Fortunately, we have a very robust PA system available with high-quality clip-on microphones.
  3. Combining Zoom with a multi-camera production pipeline is challenging. Zoom is good for connecting multiple people with one PC+camera+microphone each. Adding a multi-camera video feed into one Zoom channel worked, but it is difficult to mix video for a window size you don’t know. In Zoom, the viewer can choose the size and position of windows. Doing picture-in-picture on the video mixer, for example, may lead to video images that are too small to watch if the viewer is using “gallery” mode.
  4. It is challenging to be the main “producer” of a hybrid Zoom event. I have run many Zoom Room meetings and also several Webinars. But this was the first time I tried a large-scale Webinar event with a hybrid setup. It worked OK in the end, but it was tricky because I did not have access to the necessary “tools” to actually control what was going on. For example, as a host, you are allowed to turn people’s video and audio off, but you cannot turn them on again. I had originally planned to turn the camera and microphone on the lecture podium off during breaks and then turn them back on again when we started each session. The candidate should not have to think about such things, but this meant that I had to physically go over to the machine to turn things on when we were about to start. That also meant that I had to sit in the hall, because it would be too far to get to the control room one floor up.

A view of the hall from the control room, one floor up from the hall.

I think the final result was fine, as can be seen in a recording of the event:

In the future, however, I would probably not run such an event as a Zoom Webinar. Given that we have a nice multi-camera setup in the hall, it would be better to run it as a regular video stream. Then we would have full control over the production. The candidate could stand at the podium and focus on giving her presentation, and we could mix the audio and video in the back.

However, the challenging part with such a setup would be to figure out how best to add the opponents into the mix. I would probably opt to connect them on a separate PC (through a Zoom Room) that would be shown separately from the presentation. Exactly how to do that will be an experiment for our next disputation!

PhD disputation of Agata Zelechowska

I am happy to announce that Agata Zelechowska yesterday successfully defended her PhD dissertation during a public disputation. The dissertation is titled Irresistible Movement: The Role of Musical Sound, Individual Differences and Listening Context in Movement Responses to Music and has been carried out as part of my MICRO project at RITMO.

A “zoom-style” disputation photo. From top left: Maria Witek (second opponent), Rolf Inge Godøy (committee leader), Marc Leman (first opponent), Agata Zelechowska (candidate), Peter Edwards (disputation leader), me (main supervisor), Bruno Laeng (co-supervisor), and Jonna Vuoskoski (co-supervisor, “photo-shopped” in).

The dissertation is composed of five papers and an extended introduction. The abstract reads:

This dissertation examines the phenomenon of spontaneous movement responses to music. It attempts to grasp and illustrate the complexity of this behaviour by viewing it from different perspectives. Unlike most previous studies on music and body movement, this dissertation places the focus on barely visible manifestations of movement, such as those that may occur when listening to music while standing still. The point of departure is a reflection on movement responses to music and why such responses are considered universal among humans. This is followed by a discussion on the different approaches to studying how music ‘inspires’ movement, and an overview of the different factors that can potentially contribute to the emergence of movement responses to music. The first goal of the empirical research was to verify the common conception that ‘music makes us move’ and examine whether such movement responses can be involuntary. Three of the five included papers show that music can, indeed, make people move, even when they try to stand as still as possible. The second goal is to explore different factors that contribute to movement responses to music. Throughout the included papers, several topics are examined, including rhythmic complexity, tempo, music genres, individual differences and playback systems. The theoretical chapters show how these topics fit into three broader components of the music experience: music, listener and context. Overall, the results suggest that several factors seem to increase movement responses to music: the clear underlying pulse in the sound stimuli, the rhythmic complexity, a tempo of around 120 beats per minute, listening on headphones rather than speakers and high empathy of the listener. All in all, this dissertation contributes to bridging several gaps in the literature on music-related body movement. It also broadens the perspective on why, how and when music moves us.

Abstract of Agata Zelechowska’s dissertation

Congratulations to Agata on the dissertation and the defense!

Opportunities and Challenges with Citizen Science

Citizen Science is on everyone’s lips these days, at least on the lips of people working in research administration, funding agencies, and institutional leadership. As a member of the EUA Expert Group on Open Science/Science 2.0, I am also involved in ongoing discussions on the topic.

Yesterday, I took part in the workshop Citizen Science in an institutional context organized by EUA and OpenAire. A recording of my talk is available here:

Video is good for many things, but textual information may be easier to search for and skim through, so in this blog post, I will summarize some of the points from my talk.

As always, I started by briefly explaining my reasoning for talking about Open Research instead of Open Science. This is particularly important for people working within the arts and humanities, for whom “science” may not be the best term.

Defining Citizen Science

There are lots of definitions of citizen science, such as this one:

Citizen science […] is scientific research conducted, in whole or in part, by amateur (or nonprofessional) scientists.

Wikipedia

That is fine on a general level, but it is less clear what it means in practice. In my experience, many people think of citizen science as primarily involving citizens in the data collection. Seen that way, citizen science is just one building block in the (open) research ecosystem:

A narrow definition of citizen science.

Another, more open, definition of citizen science focuses on the inclusion of citizens in all parts of the research process. This is the approach that I think is most interesting, and it is the one I am focusing on.

A broader definition of citizen science.

Opportunities of Citizen Science

I have never thought of myself as a “citizen science researcher”. Some people build all their research on such an approach. For me, it has been more of an add-on to other research activities. Still, I have done several citizen science-like projects over the years. In the talk, I presented two of these: Hjernelæring and MusicLab. I will describe them briefly in the following.

Case 1: Hjernelæring

The first case is my collaboration with Hjernelæring, a Norwegian company producing educational material for schools. I was challenged to create an exercise that could be used in classrooms and that could also be used to collect research data. My main research project at the moment (MICRO) focuses on music-related micromotion, so it was natural to build an exercise around this. We have done several studies in the lab over the years, including the Norwegian Championship of Standstill. The latter has been an efficient way of attracting many participants to the study. We also try to give something back: all participants get a chance to download plots of their own micromotion, and we make the data available in the Oslo Standstill Database, so they are free to use and analyze the data if they wish.

Since we couldn’t rely on any particular technology in the classrooms, I ended up making an exercise where the kids would stand still with and without music, and then draw their experiences on a piece of paper. The teacher would then scan these drawings and send them to me for analysis. I think this is a nice example of how to get involved with schools. It is research dissemination, because the kids learn about the research we are doing, why we do it, and what can come out of it. And it is data collection, since the teachers provide us with research data.

Case 2: MusicLab

The second case I presented was MusicLab. This is an innovation project at the University of Oslo, where we at RITMO collaborate with the Science Library in exploring an extreme version of Open Research. Each MusicLab event is organized around a concert. The idea is to collect various types of data (motion capture, physiology, audio, video, etc.) from both performers and audience members that can be used in studying the event. There is usually also a workshop on a topic related to the concert, a panel discussion, and a data jockeying session in which some of the data is analyzed on the fly. As such, we try to open the entire research process to the public, and we include everyone in the data collection and analysis.

The main parts of a MusicLab event.

Challenges of Citizen Science

The last part of the presentation was devoted to some of the challenges of Citizen Science, with a particular focus on institutional challenges. There are, obviously, also many research challenges. Still, at the moment, I think it is important to help institutions develop policies and tools that help researchers run Citizen Science projects.

My list of challenges includes the need for (more):

  • technical infrastructure for data collection, handling, storage, and archiving. Many institutions have built up internal systems for data flows, but it is usually difficult to share data openly. I also see that IT departments are usually involved in handling storage solutions, while libraries are involved in archiving. This creates an unfortunate gap between the two (storage and archiving).
  • channels for connecting to citizens. Working with an external partner is usually a good strategy for connecting with citizens. Still, it also means that the researchers (and institutions) have to rely on a third party in communication with citizens. Some universities have built up their own Citizen Science centres, which may help with facilitating communication.
  • legal support (privacy+copyright). All the “normal” challenges of GDPR, copyright, etc., become even more difficult when involving citizens at all stages of the research process. Clearly, there is a need for more support to solve all the legal issues involved.
  • data management support. It is both a skill and a craft to collect data, handle data, equip data with metadata, store it, and archive it properly. Researchers need to learn all of these to some extent, but we also need more professional data managers to help at all stages. I think libraries’ future will largely be connected to data management of various kinds.
  • strategies for avoiding bias and pressure from citizens. One of the big criticisms/scepticisms of Citizen Science is that it may lead to all sorts of unfortunate effects. Research is under pressure in many places, and involving more people in the research process may also lead to several challenges. I believe that more openness is the answer to this problem. Transparency at all levels will help expose whatever goes on in the data collection and analysis phases. This may mitigate potential challenges arising from people trying to push the research in one way or another. This, of course, requires the development of solid infrastructures, proper metadata, persistent IDs, version control, etc.
  • incentives and rewards for researchers (and institutions). Citizen Science is still new to many. As with anything else, if we want to make a change, it is necessary to support people interested in trying things out.

A sketch of how our new MusicLab app allows for secure data collection to UiO servers. The challenging part is to figure out how to give citizens access to the data, and how to handle the data archiving.

Conclusion

In sum, I believe there is a huge potential in citizen science. After thinking about it for a while, I think that more focus on Citizen Science is a natural extension of current Open Science initiatives. For that to happen, however, we need to solve all of the above. A good starting point is to develop policies from a top-down perspective. Equally important is to give researchers time (and some money) to set up pilot projects to try things out. After all, there will be no Citizen Science if there are no researchers to initiate it in the first place.