Workshop: Open NIME

This week I led the workshop “Open Research Strategies and Tools in the NIME Community” at NIME 2019 in Porto Alegre, Brazil. We had a very good discussion, which I hope can lead to more developments in the community in the years to come. Below is the material that we wrote for the workshop.

Workshop organisers

  • Alexander Refsum Jensenius, University of Oslo
  • Andrew McPherson, Queen Mary University of London
  • Anna Xambó, NTNU Norwegian University of Science and Technology
  • Dan Overholt, Aalborg University Copenhagen
  • Guillaume Pellerin, IRCAM
  • Ivica Ico Bukvic, Virginia Tech
  • Rebecca Fiebrink, Goldsmiths, University of London
  • Rodrigo Schramm, Federal University of Rio Grande do Sul

Workshop description

The development of more openness in research has been in progress for a fairly long time, and has recently received a lot more political attention through the Plan S initiative, the Declaration on Research Assessment (DORA), the EU's Horizon Europe, and so on. The NIME community has been positive towards openness since the beginning, but has still not been able to fully explore this within the community. We call for a workshop to discuss how we can move forward in making the NIME community (even) more open throughout all its activities.

The Workshop

The aim of the workshop is to:

  1. Agree on some goals as a community.
  2. Showcase best practice examples as a motivation for others.
  3. Promote existing solutions for NIME researchers’ needs.
  4. Consider developing new solutions, where needed.
  5. Agree on a set of recommendations for future conferences, to be piloted in 2020.

Workshop Programme

  • 11:30 Welcome: introduction of participants and introduction to the topic (Alexander Refsum Jensenius)
  • 11:45 Open Publication perspectives (Alexander Refsum Jensenius, Dan Overholt, Rodrigo Schramm)
  • 12:15 Group-based discussion: How can we improve the NIME publication template? Should we think anew about the reviewing process (open review)? Should we open for a “lean publishing” model? How do we handle the international nature of NIME?
  • 12:45 Plenary discussion
  • 13:00 Lunch break
  • 14:30 Open Research perspectives (Guillaume Pellerin, Anna Xambó, Andrew McPherson, Ivica Ico Bukvic)
  • 15:00 Group-based discussion: What are some best practice Open NIME examples? What tools/solutions/systems should be promoted at NIME? Who should do the job?
  • 15:30 Final discussion
  • 16:00 End of workshop

Background information

The following sections present more information about the topic, including the current state of affairs in the field.

What is Open Research?

There are numerous definitions of what Open Research constitutes. The FOSTER initiative has made a taxonomy with these overarching branches:

  • Open Access: online, free of cost access to peer reviewed scientific content with limited copyright and licensing restrictions.
  • Open Data: online, free of cost, accessible data that can be used, reused and distributed provided that the data source is attributed.
  • Open Reproducible Research: the act of practicing Open Science and the provision of offering to users free access to experimental elements for research reproduction.
  • Open Science Evaluation: an open assessment of research results, not limited to peer-reviewers, but requiring the community’s contribution.
  • Open Science Policies: best practice guidelines for applying Open Science and achieving its fundamental goals.
  • Open Science Tools: refers to the tools that can assist in the process of delivering and building on Open Science.

Not all of these are equally relevant to the NIME community, and some aspects that matter to NIME are missing from the taxonomy.

Openness in the NIME Community

The only aspect that has been institutionalized in the NIME community is the conference proceedings repository. This has been publicly available from the start at nime.org, and in recent years CC-BY licensing has also been enforced for all publications.

Other approaches to openness are also encouraged, and NIME community members are using various types of open platforms and tools (see the appendix for details):

  • Source code repositories
  • Experiment data repositories
  • Music performance repositories
  • MIR-type repositories
  • Hardware repositories

The question is how we can proceed in making the NIME community more open. This includes the conferences themselves, but also other activities in the community. A workshop on making hardware designs openly available was held in connection with NIME 2016, and the current workshop may be seen as a natural extension of that discussion.

The Problem with the Term “Open Science”

Many of the initiatives driving the development of more openness in research refer to this as “Open Science”. In a European context this is particularly driven by some of the key players, including the European Union (EU), the European Research Council (ERC), and the European University Association (EUA). Consequently, a number of smaller institutions and individuals also use the term, often without thinking much about the wording.

The main problem with using Open Science as a general term is that it sounds like it is not meant for researchers working in the arts and humanities. This was never the intention, of course, but rather a result of the movement developing out of the sciences, and it is difficult to change a term once it has gained momentum.

NIME is, and strives to continue to be, an inclusive community of researchers and practitioners from a variety of backgrounds. Many people at NIME would not say that they work (only) in “science”, but would perhaps feel more comfortable under the umbrella of “research”. This term can embrace “scientific research”, but also “artistic research” and the R&D found outside academic institutions. Thus the term “Open Research” fits the NIME community better than “Open Science”.

Free

The question of freedom is also connected to that of openness. In the world of software development, one often talks about “free as in speech” (libre) versus “free as in beer” (gratis). This distinction also relates to issues of licensing, copyright, and reuse. Many people in the community are not affiliated with institutions and earn income from their work. Open research has close connections to open source, open hardware, and open patents. This modern context for research and development of new musical technologies also extends beyond academia and must be well planned in order to attract industry partners. How can this be balanced with the need for openness?

FAIR Principles

Another term that is increasingly used in the community is the FAIR principles, which stands for Findable, Accessible, Interoperable and Reusable. It is important to point out that FAIR is not the same as Open. Even though openness is an overarching aim, there is an understanding that privacy matters and copyright issues prevent general openness of everything. Still, the aim is to make data “as open as possible, as closed as necessary”. By applying the FAIR principles, it is possible to make metadata available so that it is openly known what types of data exist, and how to ask for access, even when the data themselves have to remain closed.
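To make the “open metadata, closed data” idea concrete, here is a minimal sketch of a FAIR-style metadata record for a dataset that itself stays access-restricted. The field names, DOI, and contact address are hypothetical (loosely inspired by DataCite-style metadata), not a prescribed schema:

```python
# A minimal, hypothetical sketch of a FAIR metadata record for a
# closed dataset: the metadata are openly published, while the data
# files themselves remain restricted for privacy reasons.

def make_fair_record(title, creators, doi, contact_email):
    """Build a metadata record describing a restricted dataset."""
    return {
        "title": title,
        "creators": creators,
        "identifier": doi,              # Findable: persistent identifier
        "access_rights": "restricted",  # Accessible: state how to ask
        "access_procedure": f"Request access via {contact_email}",
        "format": "text/csv",           # Interoperable: standard format
        "license": "CC-BY-4.0 (metadata only)",  # Reusable: clear terms
    }

record = make_fair_record(
    title="Motion capture data from NIME performances",
    creators=["A. Researcher"],
    doi="10.1234/example-doi",
    contact_email="data-manager@example.org",
)
print(record["access_rights"])  # the data are closed, the metadata open
```

The point of the sketch is simply that every FAIR dimension can be satisfied through metadata alone; the record tells others that the data exist and how to request them, without opening the data themselves.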

General Repositories

There are various “bucket-based” repositories that may be used, such as:

What is positive about such repositories is that you can store anything of (more or less) any size. The challenge, however, is the lack of specific metadata, specialized tools (such as visualization methods), and a community.

There are also specific solutions, such as GitHub for code sharing.

In 2018 a new repository, COMPEL, was introduced, aiming to couple the benefits of the aforementioned “bucket-based” approach with a robust metadata framework. It seeks to provide a convergence point for the diverse NIME-related communities and a means of linking their research output.

Openness in the Music Technology community

Compared with many other disciplines, the music technology community has embraced open perspectives for many years. A number of the conferences make their archives publicly available, such as:

There are also various types of open repositories and tools, including:

Best Practice Examples

  • CompMusic as a best practice project in the music technology field 
  • COMPEL focuses on the preservation of reproducible interactive art and more specifically interactive music
  • Bela platform

Towards Convergence in Research Assessment

I have a short article in the latest edition of LINK, the magazine of the European Association of Research Managers and Administrators.

You can find my text on page 14 of the magazine, and for convenience here is the text version:

Open Science is on everyone’s lips these days. There are many reasons why this shift is necessary and desirable, but there are also several hurdles. One big challenge is the lack of incentives and rewards. Underlying this is the question of what we want to incentivize and reward, which ultimately boils down to the way we assess research and researchers. This is no small thing. After all, we are talking about the cornerstone of people’s careers: whether an aspiring academic gets a job, a promotion, and project funding.

Most research institutions and funding bodies have clear criteria in place for research assessment. Some of these are more qualitative, typically based on some kind of peer review. Others are more quantitative, often based on metrics related to publications. With increasing time pressure, the latter is often the easiest solution. This is why simplified metrics have become popular, of which citation counts (h-index, etc.) and journal impact factors (JIF) are the most widely used. The problem is that such simple numbers do not even try to capture the complex, multidimensional reality of academic practice. This type of metric also actively discourages academic activities beyond journal publications, including a large part of Open Science activities, such as code and data sharing, education, and so on.

From my own perspective as a researcher, research leader, and former head of department, I have been involved in numerous committees assessing both researchers and research over the last decade. I know the Norwegian landscape best, but I have also sat on committees in several other countries in Europe and North America, as well as for the European Research Council. My experience is that all these institutions have clear criteria in place, but the criteria differ largely in naming, interpretation, and weighting.

What is the solution? In my opinion we need to work towards convergence on assessment criteria. There are currently several initiatives being undertaken, of which I am fortunate to be involved in the one coordinated by the European University Association (EUA). Inspiration is coming from the Open Science Career Evaluation Matrix (OS-CAM), which was proposed by the “Working Group on Rewards under Open Science” to the European Commission in 2017. The OS-CAM is organized into six main topics: (1) Research Output, (2) Research Process, (3) Service and Leadership, (4) Research Impact, (5) Teaching and Supervision, (6) Professional Experience. Each of these has a set of criteria, giving a total of 23 criteria to assess, several of which target Open Science practices directly.

Many people will recognize the OS-CAM criteria from their own research assessment experience. The proposed criteria may not be the ones we end up with. But if European institutions can agree on certain common topics and criteria for assessing researchers and research, we will have taken a giant step forward in acknowledging the richness of academic activity, including Open Science practices.