NIME publication: “NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing”

The MCT master’s programme has been running for a year now, and everyone involved has learned a lot. In parallel with developing and teaching the programme, we are also running the research project SALTO. The idea is to systematically reflect on our educational practice, which in turn feeds back into the development of the MCT programme.

One outcome of the SALTO project is a paper that we presented at the NIME conference in Porto Alegre this week:

Xambó, Anna, Sigurd Saue, Alexander Refsum Jensenius, Robin Støckert, and Øyvind Brandtsegg. “NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing.” In Proceedings of the International Conference on New Interfaces for Musical Expression. Porto Alegre, 2019.

Anna Xambó presents the paper “NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing” at NIME 2019.

Abstract:

In this paper, we present a workshop of physical computing applied to NIME design based on science, technology, engineering, arts, and mathematics (STEAM) education. The workshop is designed for master students with multidisciplinary backgrounds. They are encouraged to work in teams from two university campuses remotely connected through a portal space. The components of the workshop are prototyping, music improvisation and reflective practice. We report the results of this course, which show a positive impact on the students on their intention to continue in STEM fields. We also present the challenges and lessons learned on how to improve the teaching and delivery of hybrid technologies in an interdisciplinary context across two locations, with the aim of satisfying both beginners and experts. We conclude with a broader discussion on how these new pedagogical perspectives can improve NIME-related courses.

RaveForce: A Deep Reinforcement Learning Environment for Music Generation

My PhD student Qichao Lan is at SMC in Malaga this week, presenting the paper:

Lan, Qichao, Jim Tørresen, and Alexander Refsum Jensenius. “RaveForce: A Deep Reinforcement Learning Environment for Music Generation.” In Proceedings of the Sound and Music Computing Conference. Malaga, 2019.

The framework that Qichao has developed runs nicely, with a bridge between Jupyter Notebook and SuperCollider. This opens up lots of interesting experiments in the years to come.

Abstract:

RaveForce is a programming framework designed for a computational music generation method that involves audio sample level evaluation in symbolic music representation generation. It comprises a Python module and a SuperCollider quark. When connected with deep learning frameworks in Python, RaveForce can send the symbolic music representation generated by the neural network as Open Sound Control messages to the SuperCollider for non-realtime synthesis. SuperCollider can convert the symbolic representation into an audio file which will be sent back to the Python as the input of the neural network. With this iterative training, the neural network can be improved with deep reinforcement learning algorithms, taking the quantitative evaluation of the audio file as the reward. In this paper, we find that the proposed method can be used to search new synthesis parameters for a specific timbre of an electronic music note or loop.
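The Python–SuperCollider bridge described in the abstract rides on plain Open Sound Control (OSC) messages over UDP. As a rough sketch of what that wire format looks like, here is a minimal hand-rolled OSC message encoder using only the Python standard library. This is not RaveForce’s own code, and the address `/raveforce/step` is made up for illustration; 57120 is SuperCollider’s default language port.

```python
import struct

def _pad(s: str) -> bytes:
    """Null-terminate a string and pad it to a 4-byte boundary, as OSC requires."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message with int32, float32, and string arguments."""
    tags, payload = ",", b""
    for arg in args:
        if isinstance(arg, bool):
            raise TypeError("booleans are ambiguous in this minimal encoder")
        if isinstance(arg, int):
            tags += "i"
            payload += struct.pack(">i", arg)   # 32-bit big-endian integer
        elif isinstance(arg, float):
            tags += "f"
            payload += struct.pack(">f", arg)   # 32-bit big-endian float
        elif isinstance(arg, str):
            tags += "s"
            payload += _pad(arg)
        else:
            raise TypeError(f"unsupported OSC argument type: {type(arg)!r}")
    return _pad(address) + _pad(tags) + payload

# Sending to SuperCollider over UDP (hypothetical address, default sclang port):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message("/raveforce/step", 440.0, "saw"), ("127.0.0.1", 57120))
```

In practice one would use an OSC library rather than encoding by hand, but the sketch shows why the round trip is cheap: each training step is just a small UDP datagram to SuperCollider, which renders the audio file in non-realtime mode and hands it back for evaluation.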

Rotate lots of images on Ubuntu

I often find myself with a bunch of images that are not properly rotated. Many cameras write the rotation information to the EXIF header of the image file, but the file itself is not actually rotated. Some photo editors do this automagically when you import the files, but I prefer to copy files manually to my drive.

I therefore have a little one-liner that can rotate all the files in a folder:

find . -name "*.jpg" -exec jhead -autorot {} \;

It works recursively and is very quick!
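Some cameras write uppercase or `.jpeg` extensions. A case-insensitive variant of the same one-liner (assuming GNU find, with jhead installed as above) would catch those too:

```shell
# Match .jpg, .JPG, .jpeg, .JPEG anywhere below the current directory
# and auto-rotate them according to their EXIF orientation tag.
find . -type f \( -iname "*.jpg" -o -iname "*.jpeg" \) -exec jhead -autorot {} +
```

The `{} +` ending batches many filenames into a single jhead invocation, which is faster than spawning one process per file with `\;`.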

Towards Convergence in Research Assessment

I have a short article in the latest edition of LINK, the magazine of the European Association of Research Managers and Administrators.

You can find my text on page 14 of the magazine; for convenience, here is the text version:

Open Science is on everyone’s lips these days. There are many reasons why this shift is both necessary and desirable, and also several hurdles. One big challenge is the lack of incentives and rewards. Underlying this is the question of what we want to incentivize and reward, which ultimately boils down to the way we assess research and researchers. This is no small thing. After all, we are talking about the cornerstone of people’s careers: whether an aspiring academic gets a job, a promotion, and project funding.

Most research institutions and funding bodies have clear criteria in place for research assessment. Some of these are more qualitative, typically based on some kind of peer review. Others are more quantitative, often based on metrics related to publications. With increasing time pressure, the latter is often the easiest solution. This is why simplified metrics have become popular, of which citation counts (H-index, etc.) and journal impact factors (JIF) are the most common. The problem is that such simple numbers do not even try to reveal the complex, multidimensional reality of academic practice. This type of metric also actively discourages academic activities beyond journal publications, including a large part of Open Science activities, such as code and data sharing, education, and so on.

From my own perspective as a researcher, research leader, and former head of department, I have been involved in numerous committees assessing both researchers and research over the last decade. I know the Norwegian landscape best, but I have also sat on committees in numerous other countries in Europe and North America, as well as for the European Research Council. My experience is that all of these institutions have clear criteria in place, but they differ widely in naming, interpretation, and weighting.

What is the solution? In my opinion, we need to work towards convergence on assessment criteria. There are currently several initiatives being undertaken, of which I am fortunate to be involved in the one coordinated by the European University Association (EUA). Inspiration is coming from the Open Science Career Evaluation Matrix (OS-CAM), which was proposed by the “Working Group on Rewards under Open Science” to the European Commission in 2017. The OS-CAM is organized into six main topics: (1) Research Output, (2) Research Process, (3) Service and Leadership, (4) Research Impact, (5) Teaching and Supervision, (6) Professional Experience. Each of these topics has a set of criteria, for a total of 23 criteria to assess, several of which target Open Science practices directly.

Many people will recognize the OS-CAM criteria from their own research assessment experience. The proposed criteria may not be the ones we end up with. But if European institutions can agree on certain common topics and criteria to use when assessing researchers and research, we will have taken a giant step forward in acknowledging the richness of academic activity, including Open Science practices.

Micro-education is the future

I have a commentary published in the Norwegian academic newspaper Khrono today with the title “Micro-education is the future”. The reason I ended up writing the piece was my frustration with working “against” the Norwegian system when it comes to exploring new educational strategies.

As I have written about here on the blog before, I have tested a number of different educational methods and formats over the last few years, including Music Moves, Carpentry-style workshops, and, of course, our joint master’s programme Music, Communication & Technology. With all of these, I have experienced difficulties getting them registered in our course system (Felles studentsystem (FS)). For the master’s programme, we have solved this by splitting the courses between the two universities involved. This makes it possible to run the programme, but it creates some unfortunate side effects, such as making it difficult for non-programme students to sign up for the courses. I am not going to write more about these issues today, as I am quite confident that we will find “in-house” solutions to these problems.

For Music Moves and the workshops, however, we have not been able to find proper workarounds. The end result is that people do not get credits for following these courses. Hopefully, they take the courses because they want to learn, and not because they need credits. But I see that other universities are able to provide credits for MOOCs and workshops, so why should we not be able to do this at the University of Oslo?

Since no credits are awarded to the students, no money is paid out to the university for these courses. In Norway we have a model in which part of a university’s funding is based on the number of credits “produced” every year. We have a similar reward system for “research points”, which researchers care a lot about. Much less attention is given to study points, but far more money is paid out in that category. Hence, getting students through courses is a big incentive for the institutions.

Since no money comes in from these courses, there is little institutional interest in spending time on them. We have a quite elaborate way of counting our working hours, at least in the part of the position that is set aside for teaching. I have been head of department myself, so I know that these things matter when you are talking to people about what they should spend their (limited) working hours on. Whether to teach a for-credit course or a not-for-credit MOOC or workshop is an easy question for any head of department. Now that I am back in the teacher role myself, the choice is not so obvious. My main motivation for teaching is not to generate study points, but to disseminate knowledge and start academic discussions. And the reach per teaching hour is much greater with a MOOC attracting, say, 1,000 people than with a course of 30 students.

As I write in the Khrono commentary, we need to think anew about how we incentivize higher education. In Norway, we currently have a committee (Ekspertutvalg for etter- og videreutdanning, an expert committee on continuing and further education) working on how to improve solutions for life-long learning. MOOCs and Carpentry-style workshops are, in my opinion, an obvious way for people outside universities to learn new things. There are a couple of ways to improve the system:

  1. We need to open up for awarding credits for MOOCs and workshops, provided, of course, that they follow proper university education guidelines. For a MOOC with a workload of around 40 hours, this could typically be 1 ECTS, while for shorter workshops it could be 0.1-0.2 ECTS. I know that most study officers would probably say that such credit values are too small to handle. And that is exactly my point. Our current system is set up for handling full study programmes and semester-long courses, most of which are 10 ECTS. We need to revise the system so that it is practically possible to handle smaller credit amounts.
  2. Our system currently requires that you have study rights at the University of Oslo. It is possible to apply for access to individual courses, but this is a time-consuming process designed for people following semester-long courses. For MOOCs and workshops with lots of participants, it is not feasible (for either the university or the learners) to go through the process as it works today. We need a way of securing student “mobility” in the digital age. This is not something we can solve in Norway alone; it needs to be an international initiative.

Hopefully, the committee will address some of these issues. As for developing an international solution, I hope the European Commission or the European University Association can push for a change.