Towards Convergence in Research Assessment

I have a short article in the latest edition of LINK, the magazine of the European Association of Research Managers and Administrators.

You can find my text on page 14 of the magazine, and for convenience here is the text version:

Open Science is on everyone’s lips these days. There are many reasons why this shift is necessary and desirable, and also several hurdles. One big challenge is the lack of incentives and rewards. Underlying this is the question of what we want to incentivize and reward, which ultimately boils down to the way we assess research and researchers. This is not a small thing. After all, we are talking about the cornerstone of people’s careers: whether an aspiring academic gets a job, a promotion, and project funding.

Most research institutions and funding bodies have clear criteria in place for research assessment. Some of these are more qualitative, typically based on some kind of peer review. Others are more quantitative, often based on metrics related to publications. With increasing time pressure, the latter is often the easiest solution. This is why simplified metrics have become so widespread, of which citation counts (the h-index, etc.) and journal impact factors (JIF) are the most popular. The problem is that such simple numbers do not even begin to capture the complex, multidimensional reality of academic practice. This type of metric also actively discourages academic activities beyond journal publications, including a large part of Open Science activities, such as code and data sharing, education, and so on.

From my own perspective as a researcher, research leader, and former head of department, I have served on numerous committees assessing both researchers and research over the last decade. I know the Norwegian landscape best, but I have also sat on committees in several other countries in Europe and North America, as well as for the European Research Council. My experience is that all of these institutions have clear criteria in place, but the criteria differ greatly in naming, interpretation, and weighting.

What is the solution? In my opinion, we need to work towards convergence on assessment criteria. Several initiatives are currently under way, and I am fortunate to be involved in the one coordinated by the European University Association (EUA). Inspiration is coming from the Open Science Career Evaluation Matrix (OS-CAM), which was proposed to the European Commission in 2017 by the “Working Group on Rewards under Open Science”. The OS-CAM is organized into six main topics: (1) Research Output, (2) Research Process, (3) Service and Leadership, (4) Research Impact, (5) Teaching and Supervision, (6) Professional Experience. Each of these has a set of criteria, giving a total of 23 criteria to assess, several of which target Open Science practices directly.

Many people will recognize the OS-CAM criteria from their own research assessment experience. The proposed criteria may not be the ones we end up with. But if European institutions can agree on certain common topics and criteria to use when assessing researchers and research, we will have taken a giant step forward in acknowledging the richness of academic activity, including that of Open Science practices.

Open Research vs Open Science

Open Science is on everyone’s lips these days. But why don’t we use Open Research more?

This is a question I have been asking regularly since I was named the Norwegian representative in the EUA’s Expert Group on Science 2.0 / Open Science earlier this year. For those who don’t know, the European University Association (EUA) represents more than 800 universities and national rectors’ conferences in 48 European countries. It is thus a very interesting organization when it comes to influencing the European higher education and research environment.

The problem with the term Open Science

It appears that the EUA has adopted the term Open Science because it is used by the European Commission. I understand that there has been a lot of political investment (branding, if you like) in the term over the last few years, but I still think it is unfortunate.

My biggest problem with using Open Science as a general term in European academia is that it suggests this is something researchers in the arts and humanities need not think about. Of course, that was never the intention. I have yet to meet anyone who thinks that Open Science is only meant for people working in the sciences. The result is that you sometimes see strange sentences like “… the sciences (including the arts and humanities) …”.

All this confusion could easily be resolved by using Open Research as the general term. This is more inclusive: it makes arts and humanities researchers feel involved, and it also covers researchers working outside academia. They too may be interested in opening up their research, even though they would not call themselves “scientists”.

Usage

I have not had time to do proper research on this, but some quick googling reveals around 3.3 million hits for “open science” and 2.5 million for “open research”. So Open Research is clearly used a lot, at least outside official European channels. Searching in books, however, reveals that “open research” is used considerably more than “open science”, as shown in the ngram below:

On a side note, it is interesting to see that Open Research, and even Open Access Research, is used by UK Research and Innovation.

The situation in Norway

We had some very interesting discussions about open research during the Universities Norway conference earlier this year. As expected, there was a lot of confusion about the terms “open science” (“åpen vitenskap”) and “open research” (“åpen forskning”). The Minister of Research and Higher Education even managed to use both terms interchangeably in her opening speech.

Fortunately, the CEO of the Research Council of Norway, John-Arne Røttingen, was very clear in saying that they only use the term “åpen forskning” (“open research”) in their communication.

Sitting in different national committees, I am now trying to be careful to always talk about Open Research, and it seems like this will end up being the “official” Norwegian term.

As a researcher (from the arts and humanities!), I know that terminology matters for a discussion. I therefore hope that more people will rethink their usage of the term Open Science. Why not try Open Research instead?

Participating in the opening of The Guild

I participated in the opening of the Guild of Research Universities in Brussels yesterday. The Guild is

a transformative network of research-led universities from across the European continent, formed to strengthen the voice of universities in Europe, and to lead the way through new forms of collaboration in research, innovation and education.

The topic of the opening symposium was Open Innovation, a hot topic these days, and something the European Commission is pushing hard for. I was invited to present an example of how open research can lead to innovation and to participate in a panel discussion. Below is an image of the setting, in the lovely Solvay Library in the heart of Brussels (and great to see that the 360-degree plugin works in WordPress!):


Ole Petter Ottersen, Chair of The Guild and Rector of the University of Oslo, opened the symposium (click and drag to rotate image).

From basic music research to hospital application

In the symposium I showed a shortened version of the TV documentary that tells the unlikely story of how my basic music research has led to medical innovation. In 2005 I developed a method for visualizing the movements of dancers – motiongrams – with a set of accompanying software tools. As an open source advocate, I made these software tools freely available online, and witnessed how my code was picked up by artists, designers, hackers, and researchers. Now my method is at the core of the system Computer-based Infant Movement Assessment (CIMA). This is a clinical system currently being tested in hospitals around the world, with the aim of detecting preterm infants’ risk of developing cerebral palsy.
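The core idea behind a motiongram can be sketched in a few lines: compute the frame-by-frame motion of a video, collapse each motion image along one spatial axis, and stack the resulting strips over time. The following is a minimal illustrative sketch using NumPy and synthetic data; it is an approximation of the general technique, not the actual software mentioned above.

```python
import numpy as np

def motiongram(frames):
    """Horizontal motiongram: rows = vertical image position, columns = time.

    frames: ndarray of shape (time, height, width), grayscale values in [0, 1].
    """
    # 1. Motion images: absolute difference between consecutive frames.
    motion = np.abs(np.diff(frames.astype(float), axis=0))
    # 2. Collapse each motion image over the width, leaving one column
    #    per frame that shows where in the image vertical motion occurred.
    columns = motion.mean(axis=2)   # shape: (time - 1, height)
    # 3. Stack the columns left to right so time runs along the x-axis.
    return columns.T                # shape: (height, time - 1)

# Synthetic example: a small bright block (a stand-in for a dancer)
# moving steadily downwards through the frame.
T, H, W = 10, 64, 64
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, 5 * t:5 * t + 8, 28:36] = 1.0

mg = motiongram(frames)
print(mg.shape)  # (64, 9): height x (frames - 1)
```

Rendering `mg` as an image (e.g. with a plotting library) gives a diagonal streak: the vertical position of the motion drifting downwards over time, which is exactly the kind of trace that makes movement patterns easy to read at a glance.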

Panel discussion

The panel discussion centered mainly on policy, and it was great to see that both European university leaders and the Commission embrace openness in its entirety. Head of Cabinet Antonio Vicente effectively argued that Europe started late, but is quickly catching up in pushing for openness (access, data, research, innovation). The question now is how we actually do it.

I think the EU deserves a lot of credit for its brave move towards open research, but individual universities need to push for the same kind of openness throughout their institutions. Perhaps the biggest challenge is to change the mentality of peers, who ultimately are the key decision-makers on who gets project funding, appointments, and promotions. I see that we often fail to recruit young researchers with an inclination towards open research: such applicants are consistently evaluated as “weaker” than researchers following more traditional academic pathways.

Moving forwards, we need to continue with an (inter)national push, but we should not forget about the need for a culture change among individuals. This is something we need to work on at an institutional level.


A view from my panel position during the symposium (click and drag to rotate image).