Towards Convergence in Research Assessment

I have a short article in the latest edition of LINK, the magazine of the European Association of Research Managers and Administrators.

You can read my text on page 14 of the magazine, and for convenience here is the text version:

Open Science is on everyone’s lips these days. There are many reasons why this shift is necessary and desirable, but also several hurdles. One big challenge is the lack of incentives and rewards. Underlying this is the question of what we want to incentivize and reward, which ultimately boils down to the way we assess research and researchers. This is no small thing. After all, we are talking about the cornerstone of people’s careers: whether an inspiring academic gets a job, a promotion, and project funding.

Most research institutions and funding bodies have clear criteria in place for research assessment. Some of these are more qualitative, typically based on some kind of peer review. Others are more quantitative, often based on metrics related to publications. With increasing time pressure, the latter is often the easiest solution. This is why simplified metrics have become so widespread, of which citation counts (the h-index, etc.) and journal impact factors (JIF) are the most common. The problem is that such simple numbers do not even try to capture the complex, multidimensional reality of academic practice. This type of metric also actively discourages academic activities beyond journal publications, including a large part of Open Science activities, such as code and data sharing, education, and so on.

From my own perspective as a researcher, research leader, and former head of department, I have served on numerous committees assessing both researchers and research over the last decade. I know the Norwegian landscape best, but I have also sat on committees in numerous other countries in Europe and North America, as well as for the European Research Council. My experience is that all the institutions have clear criteria in place, but they differ considerably in naming, interpretation, and weighting.

What is the solution? In my opinion, we need to work towards convergence on assessment criteria. Several initiatives are currently underway, and I am fortunate to be involved in the one coordinated by the European University Association (EUA). Inspiration comes from the Open Science Career Evaluation Matrix (OS-CAM), which was proposed to the European Commission in 2017 by the “Working Group on Rewards under Open Science”. The OS-CAM is organized into six main topics: (1) Research Output, (2) Research Process, (3) Service and Leadership, (4) Research Impact, (5) Teaching and Supervision, and (6) Professional Experience. Each of these has a set of criteria, for a total of 23 criteria to assess, several of which target Open Science practices directly.

Many people will recognize the OS-CAM criteria from their own research assessment experience. The proposed criteria may not be the ones we end up with. But if European institutions can agree on certain common topics and criteria to use when assessing researchers and research, we will have taken a giant step forward in acknowledging the richness of academic activity, including Open Science practices.

Micro-education is the future

I have a commentary published in the Norwegian academic newspaper Khrono today with the title “Micro-education is the future”. I ended up writing the piece out of frustration with working “against” the Norwegian system when it comes to exploring new educational strategies.

As I have written about here on the blog before, I have tested a number of different educational methods and formats over the last years, including Music Moves, Carpentry-style workshops, and, of course, our joint master’s programme Music, Communication & Technology. With all of these, I have experienced difficulties getting them registered in our course system (Felles studentsystem (FS)). For the master’s programme, we have solved this by splitting up the courses between the two universities involved. This makes it possible to run the programme, but it creates some unfortunate side effects, such as making it difficult for non-programme students to sign up for the courses. I am not going to write more about these issues here today, as I am quite confident that we will find “in-house” solutions to these problems.

For Music Moves and the workshops, however, we have not been able to find proper workarounds. The end result is that people do not get credits for following these courses. Hopefully, they take the courses because they want to learn, and not because they need credits. But I see that other universities are able to provide credits for MOOCs and workshops, so why should we not be able to do this at the University of Oslo?

Since no credits are awarded to the students, no money is paid out to the university for the courses. In Norway we have a model in which part of a university’s funding is based on the number of credits “produced” every year. We have the same reward system for “research points”, which researchers care a lot about. Much less attention is given to study points, but a lot more money is paid out in this category. Hence, getting students through courses is a big incentive for the institutions.

Since no money comes in from these courses, there is little institutional interest in spending time on such things. We have quite an elaborate way of counting our working hours, at least in the part of the position that is set aside for teaching. I have been head of department myself, so I know that these things matter when you are talking to people about what they should spend their (limited) working hours on. Choosing between teaching a for-credits course and a not-for-credits MOOC or workshop is an easy question for any head of department. Now that I am back in the teacher role myself, the choice is not so obvious. My main motivation for teaching is not to generate study points, but to disseminate knowledge and start academic discussions. Here I see that the knowledge-per-person ratio is much higher for a MOOC attracting, say, 1000 people than for a course with 30 students.

As I write in the Khrono commentary, we need to think anew about how we incentivize higher education. In Norway, we currently have a committee (Ekspertutvalg for etter- og videreutdanning, an expert committee on continuing education) working on how to improve solutions for life-long learning. MOOCs and Carpentry-style workshops are, in my opinion, an obvious solution for how people outside of universities can learn new things. There are a couple of ways to improve the system:

  1. We need to open up for awarding credits for MOOCs and workshops, provided, of course, that they follow proper university education guidelines. For a MOOC with a workload of around 40 hours, this could typically be 1 ECTS, while for shorter workshops it could be 0.1-0.2 ECTS. I know that most study officers would probably say that such credit values are too small to handle. And that is exactly my point. Our current system is set up for handling full study programmes and semester-long courses, most of which are 10 ECTS. We need to revise the system so that it is practically possible to handle smaller credit increments.
  2. Our system is currently set up so that you need to have study rights at the University of Oslo. It is possible to apply for access to individual courses, but this is a time-consuming process that was made for people following semester-long courses. For MOOCs and workshops with lots of participants, it is not feasible (for either the university or the learners) to go through the process as it works today. We need a way of securing student “mobility” in the digital age. This is not something we can solve in Norway alone; it needs to be an international initiative.
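The credit arithmetic in the first point can be sketched as a simple conversion. This is a hypothetical illustration only, assuming a linear ratio of 40 workload hours per ECTS credit (the ratio implied by the MOOC example above); the function name and rounding are my own:

```python
# Hypothetical ECTS calculator, assuming a linear ratio of
# 40 workload hours per credit, as implied by the MOOC example.
HOURS_PER_ECTS = 40

def ects_for_workload(hours):
    """Return the ECTS value for a given workload in hours, rounded to one decimal."""
    return round(hours / HOURS_PER_ECTS, 1)

print(ects_for_workload(40))  # a typical MOOC: 1.0 ECTS
print(ects_for_workload(4))   # a short workshop: 0.1 ECTS
print(ects_for_workload(8))   # a full-day workshop: 0.2 ECTS
```

Whether the conversion should be linear, and what the smallest increment an administrative system can handle is, are of course exactly the questions raised above.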

Hopefully, the committee will address some of these issues. As for developing an international solution, I hope the European Commission or the European University Association can push for a change.

Testing reveal.js for teaching

I was at NTNU in Trondheim today, teaching a workshop on motion capture methodologies for the students in the Choreomundus master’s programme. This is an Erasmus Mundus Joint Master Degree (EMJMD) investigating dance and other movement systems (ritual practices, martial arts, games, and physical theatre) as intangible cultural heritage. I am really impressed by this programme! It was a very nice and friendly group of students from all over the world, and they are receiving a truly unique education run by the four partner universities. This is an even more complex organisational structure than that of the MCT programme that I am involved in myself.

In addition to running a workshop with the Qualisys motion capture system that they have (similar to the one in our fourMs Lab at RITMO), I was asked to present an introduction to motion capture in general, as well as some video-based methods. I have previously made the more technically oriented tutorial Quantitative Video Analysis for Qualitative Research, which describes how to use the Musical Gestures Toolbox for Matlab. Since Matlab was outside the scope of this session, I decided to create a non-technical presentation focusing more on the concepts.

Most of my recent presentations have been made in Google Presentation, a tool that really shows the potential of web-based applications (yes, I think it has matured to the point where we can actually talk about an application in the browser). The big benefit of using a web-based presentation solution is that I can share links to the presentation both before and after it is held, and I avoid all the hassle of moving large video files around.

Even though Google Presentation has been working fine, I would prefer moving to an open-source solution. I have also long wanted to try out markdown-based presentation solutions, since I use markdown for most of my other writing. I have tried a few different solutions, but haven’t really found anything that works smoothly enough. Many of them add too much complexity to the way you need to write your markdown code, which removes some of the weightlessness of the approach. The easiest and best-looking solution so far seems to be reveal.js, but I hadn’t really found a way to integrate it into my workflow.

Parallel to my presentation experimentation, I have also been exploring Jupyter Notebook for analysis. The nice thing about this approach is that you can write cells of code that can be evaluated on the fly and shown seamlessly in the browser. This is great for developing code, sharing code, teaching code, and also for moving towards more Open Research practices.

One cool thing I discovered is that Jupyter Notebook has built-in support for reveal.js! This means that you can export your complete notebook as a nice presentation. This is definitely something I am going to explore more with my coding tutorials, but for today’s workshop I ended up using it with only markdown content.
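As a sketch of how this works: each notebook cell carries a `slideshow` metadata field that tells the exporter where slides begin, and a command like `jupyter nbconvert --to slides notebook.ipynb` renders the notebook with reveal.js. The snippet below builds a minimal notebook file by hand with plain dictionaries, so it needs no Jupyter installation; the cell contents are hypothetical examples:

```python
import json

# Minimal sketch of a notebook containing only markdown cells. Each cell's
# "slideshow" metadata tells nbconvert how to place it in reveal.js:
# "slide" starts a new horizontal section, "subslide" stacks vertically.
def markdown_slide(source, slide_type="slide"):
    return {
        "cell_type": "markdown",
        "metadata": {"slideshow": {"slide_type": slide_type}},
        "source": source,
    }

notebook = {
    "nbformat": 4,
    "nbformat_minor": 2,
    "metadata": {},
    "cells": [
        markdown_slide("# Motion capture\nAn introduction"),
        markdown_slide("## Marker-based systems", "subslide"),
        markdown_slide("# Video-based methods"),
    ],
}

# Written to disk, this file can be converted with:
#   jupyter nbconvert --to slides presentation.ipynb
with open("presentation.ipynb", "w") as f:
    json.dump(notebook, f, indent=1)
```

The `slide` and `subslide` types map directly onto reveal.js’s two-dimensional navigation, with main sections running horizontally and supporting content stacked vertically.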

I created three notebooks, one for each topic I was talking about, and exported them as presentations.

A really cool feature of reveal.js is the ability to move in two dimensions: you can keep track of the main sections of the presentation horizontally, while filling in more content vertically. Hitting the escape key zooms out to an overview of the entire presentation, as shown below:

The overview mode in reveal.js presentations.

The tricky part of using Jupyter Notebook for plain markdown presentations is that you need to make an individual cell for each part of the presentation. This works, but it would make even more sense if I had some Python code in between. That is for next time, though.

Music Moves #4 has started

We have just kicked off the fourth round of Music Moves, the free online course we have developed at the University of Oslo. The course introduces many of the core theories, concepts, and methodologies that we work with at RITMO. This time around we also have participants from both the MCT master’s programme and the NordicSMC Winter School taking the course as an introduction to further on-campus studies.

To help with running the course, we have recruited Ruby Topping, who is currently an exchange student at the University of Oslo. She was a learner in the first round of Music Moves, and it is great to have her on board as a mentor for the other learners.

Why join the course?

Music is movement. A bold statement, but one that we will explore together in this free online course.

Together we will study music through different types of body movement. This includes everything from the sound-producing keyboard actions of a pianist to the energetic dance moves in a club.

You will learn about the theoretical foundations for what we call embodied music cognition and why body movement is crucial for how we experience the emotional moods in music. We will also explore different research methods used at universities and conservatories. These include advanced motion capture systems and sound analysis methods.

You will be guided by a group of music researchers from the University of Oslo, with musical examples from four professional musicians. The course is rich in high-quality text, images, video, audio and interactive elements.

Join us to learn more about terms such as entrainment and musical metaphors, and why it is difficult to sit still when you experience a good groove.

Why I am positive about Plan S

Plan S has been the biggest political topic in the research community here in Norway this fall. Several researchers have voiced their concerns. I am among the positive ones, and I will here try to explain why.

Just to rewind a little first: Coalition S is a group of national research funding organisations, with the support of the European Commission and the European Research Council (ERC). On 4 September 2018, they announced Plan S, an initiative to make full and immediate Open Access to research publications a reality:

“By 2020 scientific publications that result from research funded by public grants provided by participating national and European research councils and funding bodies, must be published in compliant Open Access Journals or on compliant Open Access Platforms.”

There has been a political shift towards making research openly available over the last decade. In many countries this has also been added to various types of policy documents. Here in Norway it is part of the Government’s long-term research plan. The difference with Plan S is that it is not just “fluffy” words; it is a concrete plan with a concrete deadline.

The 10 Principles

Plan S is based on 10 principles:

  1. Authors retain copyright of their publication with no restrictions. All publications must be published under an open license, preferably the Creative Commons Attribution Licence CC BY. In all cases, the license applied should fulfil the requirements defined by the Berlin Declaration;
  2. The Funders will ensure jointly the establishment of robust criteria and requirements for the services that compliant high quality Open Access journals and Open Access platforms must provide;
  3. In case such high quality Open Access journals or platforms do not yet exist, the Funders will, in a coordinated way, provide incentives to establish and support them when appropriate; support will also be provided for Open Access infrastructures where necessary;
  4. Where applicable, Open Access publication fees are covered by the Funders or universities, not by individual researchers; it is acknowledged that all scientists should be able to publish their work Open Access even if their institutions have limited means;
  5. When Open Access publication fees are applied, their funding is standardised and capped (across Europe);
  6. The Funders will ask universities, research organisations, and libraries to align their policies and strategies, notably to ensure transparency;
  7. The above principles shall apply to all types of scholarly publications, but it is understood that the timeline to achieve Open Access for monographs and books may be longer than 1 January 2020;
  8. The importance of open archives and repositories for hosting research outputs is acknowledged because of their long-term archiving function and their potential for editorial innovation;
  9. The ‘hybrid’ model of publishing is not compliant with the above principles;
  10. The Funders will monitor compliance and sanction non-compliance.

My reasons for supporting Plan S

I have both idealistic and practical reasons for supporting Plan S.

I work at a public university, and I think it is obvious that the results of what I am doing should be available to anyone. That is why I have long been uploading my publications to our institutional archive (DUO), making my source code available on GitHub, uploading educational material to YouTube, and so on.

I have published my research in a number of different channels over the years. Several of these, and particularly all the conference proceedings, are freely available online. In my field we are also lucky to have some high-quality open access journals, such as Empirical Musicology Review, Music & Science, and TISMIR. There are also the more general open access publishers, such as PLOS ONE, Frontiers, etc. All of these are still fairly new, but they have been helped by the political push for open access.

Many critics of Plan S argue that they will have nowhere to publish if they cannot use their traditional channels. The answer to that is that now is the time for traditional publishers to change their business model. The ones that do so today will be the winners tomorrow. If they do not, new alternatives will be developed, either by the researchers themselves (as has happened in my field) or by the newer commercial players (PLOS ONE, Frontiers, PeerJ, etc.).

The challenging thing right now, and the reason why it is important that Plan S has a short deadline, is that it is tricky to live with two different publication systems at the same time. Currently, most of the money in the system is spent on paying for old-school, expensive subscriptions to a few commercial publication giants. That means there is little money left for those of us who want to pay so-called article processing charges (APCs). Changing the model quickly is therefore important, as it frees up money for a pay-to-publish model.

An important point in the discussion is that the individual researcher should not suffer. Changing the publication system, and the underlying payment system, needs to be done at an institutional and national level. That has been very difficult up until now, because the players have been too small and uncoordinated. Plan S changes this, since many of the major European players are on board, and more are joining from the rest of the world as we speak.

Plan S is disruptive. That is the point, and that is why it will succeed!