Experiences with running a hub-based conference

For the last couple of days, I have participated in the NordicSMC conference. It was organized by a team of Ph.D. fellows from Aalborg University Copenhagen, supported by the Nordic Sound and Music Computing network. UiO is happy to be a partner in this network, together with colleagues in Copenhagen (AAU), Stockholm (KTH), Helsinki (Aalto), and Reykjavik (UoI).

Choosing a conference format

When we began discussing the conference earlier this year, it quickly became apparent that it was unrealistic to meet in person. Restrictions have been lifted in most Nordic countries, but the pandemic is still ongoing. One option could have been to run it as an online-only conference. But since we are an adventurous group of researchers, we wanted to explore running it as a hub-based hybrid conference. Hybrid means that it was run both in-person and online. Hub-based means that there were multiple physical locations. This makes sense, given that there are five partners in the network.

Over the last few years, we have gained much experience running hybrid events, such as a hybrid disputation and a hybrid conference, and most RITMO events have been hybrid during this period. We also have much experience with hub-based teaching in the MCT master’s programme, with daily teaching between Oslo and Trondheim for three years. All of these experiences have made us aware of the technical details that need to be in place to master the different “formats,” but also of the social aspects of handling various constellations of people in local and remote locations.

NordicSMC2021 was our first attempt at running a multi-hub conference. The idea is not new, and several attempts at running hub-based conferences have been made over the last few years. For example, Richard Parncutt has shared interesting reflections on running ESCOM/ICMPC conferences in hubs on different continents. In that case, timezone issues may be the biggest challenge for a successful event.

In comparison, NordicSMC is a small conference without timezone problems. That also makes it easier to experiment with the setup. Here are some thoughts on what we did and how it worked.

Benefits of hub-based conferences

The easiest option would, of course, have been to run an online-only conference using the Zoom Webinar functionality. Most people know this format well, and it is technically easy to set up and run smoothly. I say easy, but you still need to pay attention to many details to run such events well.

However, many of us are tired of sitting alone in our offices, particularly when we can meet on campus. The nice thing about a hub-based conference is that people can meet in various interconnected hubs. This means that you get some of the social aspects of being together in your local hub while at the same time taking part in something larger.

Setting up for a hub-based conference is not particularly difficult. Most places have decent audiovisual equipment these days. Still, there are many ways of making things work well.

Camera

Camera choice and placement are essential elements of such a setup. We used a seminar room with a Crestron UC-SB1-CAM system containing a Huddly camera placed below the TV. That is a very wide-angle camera, which has the benefit of capturing the whole space. The downside is that people appear tiny in the image. I tested setting up a Logitech web camera on top of the screen instead. I think the image quality was better there, but the bird’s-eye view didn’t work too well. So we decided to use the Huddly camera.

One nice feature of the Huddly camera is that it can automatically “zoom” and “pan” in on people in the room. This is a software feature based on a computer vision algorithm running in the background. It only partly worked, though, so we ended up having to move the image around manually. Not ideal; we should try to figure out why the auto-tracking didn’t work correctly.

Another option could have been to use one wide-angle camera (like the Huddly) to capture the whole room and a second camera for close-ups of people talking. We have taken this approach for large-scale hybrid events, but it requires more equipment and a small production team. We wanted to explore what can quickly be done in a typical seminar room with video conferencing equipment.

The video setup also requires attention to the physical organization of the room. From the MCT programme, we have experienced that sitting in a V-shape works well from a communicative point of view. Such a setup allows the local participants to see each other while at the same time seeing the screen. The aim is to create a sense of being together both locally and remotely.

Sound

Since this is a conference on sound and music, we care about sound quality. We know from other activities that the Crestron sound system below the TV is OK for short meetings. However, that system sounds like, eh, a video conferencing sound system. Given the size of the speaker elements, it sounds “thin” and does not project music very well. I struggle to sit through hours of meetings with the semi-poor audio quality found in regular video conferencing systems. Therefore, we decided to use the B&W speaker setup in the room instead. Of course, a good sound system doesn’t save people’s bad microphone quality. But it makes it a joy to listen to those who care about their sound, such as the excellent keynote lecture by Ludvig Elblaus. He also used the stereo sound functionality in Zoom (usually the sound is mono only) for his sound examples, which came through beautifully.

We chose to play sound over the B&W speaker system in the room instead of the Crestron panel. A Catchbox provided good sound from our side.

One of the benefits of using a combined video conferencing solution is that it typically has sound feedback cancellation built-in. This saves a lot of trouble when it comes to handling unwanted feedback issues. So when we decided to move to the Hi-Fi sound system in the room, we had to figure out another microphone solution.

We have for some time used a Catchbox microphone in meetings at RITMO. It is a wireless microphone embedded in a softbox that can be thrown around. And the best thing is that it has some very smart anti-feedback and anti-noise circuitry. While testing the setup, we found that the microphone array in the Crestron panel could pick up talking in the room. However, when you sit some meters from a microphone, you will sound relatively muffled. Having the Catchbox close by results in much better sound quality.

Another nice thing about the Catchbox is that it helps clarify who is talking. Remote participants could see who was holding the box, and thus who had the microphone, even when we didn’t zoom in on people. Having to wait for the microphone also adds some discipline. The downside of using such a semi-directional microphone is that it does not capture the sonic ambiance of the room. That is something to explore more later.

The need for two screens

We started with only one connected TV screen in the room. That worked well for presentations, but we quickly realized it was challenging to keep track of the chat and Q&A windows at the same time. This is also a problem when sitting at your own computer, and I don’t understand why Zoom didn’t solve these multi-window problems a long time ago.

The image quickly became cluttered with multiple windows.

Since we had a second TV on another wall in the room, we connected that one to move the chat and Q&A windows there. Not ideal to have them on separate walls, though. This made me realize that “old-style” video conferencing setups with two TVs may be a better solution for such events.

As for running presentations, our default option nowadays is always to connect and run these from a separate laptop. It is much easier to have a dedicated Zoom machine to handle the communication part. It is less risky from a technical point of view and makes it easier to arrange images in the way you want.

We ended up connecting two TVs to get enough screen-estate in the room.

In-person or online chairing

During the conference, we explored different solutions for chairing the sessions. Most chairs did it online, which made it easier for them to use their local multi-view setups. However, my colleague Stefano Fasciani decided to chair his session from our hub.

Stefano Fasciani chairing a session from the Oslo hub.

This worked well, I think. The Catchbox provided good sound, and the zooming from the Huddly camera made it possible to see him in the image when speaking. This was before we connected the second TV, so he monitored the chat and Q&A through a Zoom session running on his own laptop.

Also, from a conceptual point of view, I think it was nice to have the session chair in the room. It felt more like a “normal” conference. For that reason, I also decided to be present in the hub for the final panel discussion.

Handling interactions

As always, juggling multiple platforms is a challenge for such events. There was audiovisual communication happening in Zoom, together with the chat and Q&A windows. Then we used Discord as a social channel in between. I didn’t want to keep a separate Zoom window running on my laptop for the entire two days, which meant that I couldn’t easily follow or contribute to the chat and Q&A windows. For such events, I think there should be an option to turn off incoming video in Zoom. It feels like a waste of bandwidth that everyone in a room has to run a separate Zoom instance just to communicate in text.

Many conferences use Slack for in-between discussions. This time we used Discord, which had the same functionality. Still, it is a challenge to figure out where and how to interact. In online-only events, written communication in various channels has worked well. But when we were trying to create a hub-based setup, it was more challenging. When you sit next to people, you naturally talk to them instead of sending a message.

When it came to the Q&A sessions, I realized that I ended up asking questions orally rather than in writing. This was because I didn’t have Zoom running on my laptop, and I had access to a microphone. Of course, the ability to ask questions live was limited to those of us in the hubs. The online participants had to interact through the written channels.

For such a small conference, we could have used a regular Zoom meeting instead of a Webinar. That would have allowed everyone to show their faces and talk. However, there are risks involved in having relatively large Zoom meetings, so it is much safer on the organizing side to run a Webinar. I generally think that running a Webinar with always-on hub rooms worked well. That gave the presenters the feeling that someone was present, and the local hub hosts were in control of the technical and social communication.

It would be interesting to hear how such an asymmetric communication form was experienced by those attending remotely. They, obviously, got a very different experience than those of us present in the hubs. I would imagine that they didn’t feel as connected as those in the hubs, but this may also be what they preferred. One could say that the ability to interact more directly could be a motivating factor to join a hub (for those who can and want to).

Hub-based conferences are the future

All in all, I think this year’s NordicSMC conference was a great success. Many engaging presentations showed the breadth of activities in sound and music computing in our region.

Technically speaking, I also think things went well. The AAU hosts ran things smoothly. The conference was built on what has become the “normal” setup: a Zoom webinar with pre-recorded presentations, panel-based Q&A, and written communication through both Zoom and Discord.

What was new was that we tried out a hub-based approach. We ended up only having hubs in Oslo, Stockholm, and Copenhagen, and we all set them up slightly differently. Still, we managed to create a sense of being “together”. We weren’t that many people in the Oslo hub, but people came in and out during the conference. As such, it felt like being at a regular conference.

Aleksander Tidemann got into a lively discussion after his presentation.

The experience was different from sitting alone in my office. I noticed that I followed the presentations more carefully than I would otherwise have done. And it was great chatting with colleagues and students during the breaks.

There are lots of small details that can be improved, both technically and conceptually. On the technical side, I think camera and microphone placement are critical factors. Often the equipment is OK, but it is set up and used sub-optimally. One challenge is that you are often not in complete control of the systems in university seminar rooms. For example, we did not have admin privileges on the Zoom computer, making it difficult to reach some settings.

I also think it is essential to pay attention to social interaction. One often says that running a hybrid event is more complex than running an in-person or online-only one. That is because you need to think about two different social groups at once. In that respect, running a hub-based and hybrid conference is triply difficult. You need to cater to the well-being of the people in all the hubs and online at the same time. The best solution here is to assign “hub hosts” responsible for the social interaction in their own hubs. It is also vital that these hub hosts interact with each other. If done well, I think this can make such hub-based events successful. They would not be the same as in-person events, but they can capture the feeling of being together.

Workshop: Open NIME

This week I led the workshop “Open Research Strategies and Tools in the NIME Community” at NIME 2019 in Porto Alegre, Brazil. We had a very good discussion, which I hope can lead to more developments in the community in the years to come. Below is the material that we wrote for the workshop.

Workshop organisers

  • Alexander Refsum Jensenius, University of Oslo
  • Andrew McPherson, Queen Mary University of London
  • Anna Xambó, NTNU Norwegian University of Science and Technology
  • Dan Overholt, Aalborg University Copenhagen
  • Guillaume Pellerin, IRCAM
  • Ivica Ico Bukvic, Virginia Tech
  • Rebecca Fiebrink, Goldsmiths, University of London
  • Rodrigo Schramm, Federal University of Rio Grande do Sul

Workshop description

The development of more openness in research has been in progress for a fairly long time, and has recently received much more political attention through the Plan S initiative, the Declaration on Research Assessment (DORA), the EU’s Horizon Europe, and so on. The NIME community has been positive towards openness since the beginning, but has still not been able to fully explore this within the community. We call for a workshop to discuss how we can move forward in making the NIME community (even) more open throughout all its activities.

The Workshop

The aim of the workshop is to:

  1. Agree on some goals as a community.
  2. Showcase best practice examples as a motivation for others.
  3. Promote existing solutions for NIME researchers’ needs.
  4. Consider developing new solutions, where needed.
  5. Agree on a set of recommendations for future conferences, to be piloted in 2020.

Workshop Programme

  • 11:30 Welcome, introduction of participants, and introduction to the topic (Alexander Refsum Jensenius)
  • 11:45 Open Publication perspectives (Alexander Refsum Jensenius, Dan Overholt, Rodrigo Schramm)
  • 12:15 Group-based discussion: How can we improve the NIME publication template? Should we think anew about the reviewing process (open review)? Should we open for a “lean publishing” model? How do we handle the international nature of NIME?
  • 12:45 Plenary discussion
  • 13:00 Lunch break
  • 14:30 Open Research perspectives (Guillaume Pellerin, Anna Xambó, Andrew McPherson, Ivica Ico Bukvic)
  • 15:00 Group-based discussion: What are some best practice Open NIME examples? What tools/solutions/systems should be promoted at NIME? Who should do the job?
  • 15:30 Final discussion
  • 16:00 End of workshop

Background information

The following sections present some more information about the topic, including the current state of affairs in the field.

What is Open Research?

There are numerous definitions of what Open Research constitutes. The FOSTER initiative has created a taxonomy with these overarching branches:

  • Open Access: online, free of cost access to peer reviewed scientific content with limited copyright and licensing restrictions.
  • Open Data: online, free of cost, accessible data that can be used, reused and distributed provided that the data source is attributed.
  • Open Reproducible Research: the practice of Open Science together with offering users free access to the experimental elements needed to reproduce the research.
  • Open Science Evaluation: an open assessment of research results, not limited to peer-reviewers, but requiring the community’s contribution.
  • Open Science Policies: best practice guidelines for applying Open Science and achieving its fundamental goals.
  • Open Science Tools: refers to the tools that can assist in the process of delivering and building on Open Science.

Not all of these are equally relevant to the NIME community, while other aspects that matter to NIME are missing from the taxonomy.

Openness in the NIME Community

The only aspect that has been institutionalized in the NIME community is the conference proceedings repository. This has been publicly available from the start at nime.org, and in later years CC-BY licensing has also been enforced for all publications.

Other approaches to openness are also encouraged, and NIME community members are using various types of open platforms and tools (see the appendix for details):

  • Source code repositories
  • Experiment data repositories
  • Music performance repositories
  • MIR-type repositories
  • Hardware repositories

The question is how we can proceed in making the NIME community more open. This includes the conferences themselves, but also other activities in the community. A workshop on making hardware designs openly available was held in connection with NIME 2016, and the current proposal may be seen as a natural extension of that discussion.

The Problem with the Term “Open Science”

Many of the initiatives driving the development of more openness in research refer to this as “Open Science”. In a European context, this is particularly driven by some of the key players, including the European Union (EU), the European Research Council (ERC), and the European University Association (EUA). Consequently, a number of smaller institutions and individuals also use the term, often without thinking very much about the wording.

The main problem with using Open Science as a general term is that it sounds like this is not something for researchers working in the arts and humanities. This was never the intention, of course, but rather the result of the movement developing from the sciences, and it is difficult to change a term once it has gained momentum.

NIME is—and is striving to continue to be—an inclusive community of researchers and practitioners coming from a variety of backgrounds. Many people at NIME would not consider that they work (only) in “science”, but would perhaps feel more comfortable under the umbrella “research”. This term can embrace “scientific research”, but also “artistic research” and the R&D found outside of academic institutions. Thus, the term “Open Research” fits the NIME community better than “Open Science”.

Free

The question of freedom is also connected to that of openness. In the world of software development, one often talks about “free as in speech” (libre) or “free as in beer” (gratis). This also relates to issues of licensing, copyright, and reuse. Many people in the community are not affiliated with institutions and make a living from their work. Open research may also have a close connection to open source, open hardware, and open patents. This modern context for the research and development of new musical technologies extends beyond academia and must be planned well in order to attract industry partners. How can this be balanced with the need for openness?

FAIR Principles

Another term that is increasingly used in the community is the FAIR principles, which stand for Findable, Accessible, Interoperable, and Reusable. It is important to point out that FAIR is not the same as Open. Even though openness is an overarching aim, there is an understanding that privacy and copyright issues prevent everything from being made fully open. Still, the aim is to make data “as open as possible, as closed as necessary”. By applying the FAIR principles, it is possible to make metadata available so that it is openly known what types of data exist, and how to ask for access, even when the data themselves have to remain closed.

General Repositories

There are various “bucket-based” repositories that may be used, such as:

What is positive about such repositories is that you can store anything of (more or less) any size. The challenge, however, is the lack of specific metadata, specialized tools (such as visualization methods), and a community.

There are also more specific solutions, such as GitHub for code sharing.

In 2018, a new repository titled COMPEL was introduced, aiming to couple the benefits of the aforementioned “bucket-based” approach with a robust metadata framework. It seeks to provide a convergence point for the diverse NIME-related communities and a means of linking their research output.

Openness in the Music Technology community

Compared to many other disciplines, the music technology community has embraced open perspectives for many years. A number of conferences make their archives publicly available, such as:

There are also various types of open repositories and tools, including:

Best Practice Examples

  • CompMusic as a best practice project in the music technology field 
  • COMPEL focuses on the preservation of reproducible interactive art and more specifically interactive music
  • Bela platform

New paper: MuMYO – Evaluating and Exploring the MYO Armband for Musical Interaction

Yesterday, I presented my microinteraction paper here at the NIME conference (New Interfaces for Musical Expression), organised at Louisiana State University, Baton Rouge, LA. Today I am presenting a poster based on a paper written together with two of my colleagues at UiO.

Title
MuMYO – Evaluating and Exploring the MYO Armband for Musical Interaction

Authors
Kristian Nymoen, Mari Romarheim Haugen, Alexander Refsum Jensenius

Abstract
The MYO armband from Thalmic Labs is a complete and wireless motion and muscle sensing platform. This paper evaluates the armband’s sensors and its potential for NIME applications. This is followed by a presentation of the prototype instrument MuMYO. We conclude that, despite some shortcomings, the armband has potential of becoming a new “standard” controller in the NIME community.

Files

BibTeX

@inproceedings{nymoen_mumyo_2015,
    address = {Baton Rouge, LA},
    title = {{MuMYO} - {Evaluating} and {Exploring} the {MYO} {Armband} for {Musical} {Interaction}},
    abstract = {The MYO armband from Thalmic Labs is a complete and wireless motion and muscle sensing platform. This paper evaluates the armband's sensors and its potential for NIME applications. This is followed by a presentation of the prototype instrument MuMYO. We conclude that, despite some shortcomings, the armband has potential of becoming a new ``standard'' controller in the NIME community.},
    booktitle = {Proceedings of the International Conference on New Interfaces For Musical Expression},
    author = {Nymoen, Kristian and Haugen, Mari Romarheim and Jensenius, Alexander Refsum},
    year = {2015}
}

ICMC 2006 proceedings details

A colleague of mine recently asked if I could help her find the bibliographic details of the ICMC 2006 proceedings. Apparently, this information is not easily available online, and she had spent a great deal of research time trying to find it.

I was lucky enough to participate in this wonderful event at Tulane University, and still have the paper version of the proceedings in my office. So here is the relevant information, in case anyone else also wonders about these details:

  • Editors (Paper chairs): Georg Essl and Ichiro Fujinaga
  • Dates: November 6-11, 2006
  • Publisher: International Computer Music Association, San Francisco, CA & The Music Department, Tulane University, New Orleans, LA
  • ISBN: 0-9713192-4-3
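
In case it is helpful for citation purposes, here is a sketch of how these details could be written up as a BibTeX @proceedings entry. The citation key, the field selection, and the New Orleans address are my own suggestions; the bibliographic details themselves are those listed above.

@proceedings{icmc2006,
    title = {Proceedings of the {International} {Computer} {Music} {Conference}},
    editor = {Essl, Georg and Fujinaga, Ichiro},
    address = {New Orleans, LA},
    publisher = {International Computer Music Association and The Music Department, Tulane University},
    isbn = {0-9713192-4-3},
    month = nov,
    year = {2006}
}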

NIME 2013

Back from a great NIME 2013 conference in Daejeon + Seoul! For Norwegian readers out there, I have written a blog post about the conference on my head of department blog. I would have loved to write some more about the conference in English, but I think these images from my Flickr account will have to do for now:

[Photo gallery from the conference, 26-27 May 2013.]

At the end of the conference, it was also announced that next year’s conference will be held in London, hosted by the Embodied AudioVisual Interaction Group at Goldsmiths. Future chair Atau Tanaka presented this teaser video: