Keynote: Experimenting with Open Research Experiments

Yesterday I gave a keynote lecture at the Munin Conference on Scholarly Publishing in Tromsø. This is an annual conference that gathers librarians, research administrators, and publishers, but also some researchers and students. This was my first time at the conference, and I found it to be a very diverse, interesting, and welcoming group of people.

A poster tweet from the Munin conference team.

Most of the other presenters talked about issues related to publishing academic texts, and with a particular focus on the transition to open access (OA). My presentation was focused on MusicLab, an open research pilot project we are running at the University of Oslo.

MusicLab is a collaboration between RITMO and the University Library, and it is a great example of how cool things can happen when progressive librarians work together with cutting-edge researchers. If you have never heard about it before, here is a 42-second introduction to what MusicLab is all about:

Since lots of people talked about Open Science at the conference, I started out by arguing for why I believe that Open Research is a more inclusive term than Open Science. I then went on to identify some of the parts that people think about when talking about Open Research:

Some of the building blocks of an Open Research ecosystem.

As can be seen from the slide above, Open Access (which should probably be called Open Publication instead, since many people mistake it to mean Open Research) is just one part of the whole picture. I also think of these building blocks as being placed on a “timeline” going from left to right, although there may certainly be recursive parts of the model as well.

For a researcher, the publication part typically happens fairly late in the process, so I always try to remind people that the actual research happens before it is published. The writing process, for example, is also something that I think should be treated as an open process, and I mentioned some of my explorations into using various tools for writing Open Manuscripts:

None of these are perfect, however, and for some upcoming projects I am thinking about exploring Authorea and Jupyter Notebook as writing tools. After my talk I also got a recommendation for Bookdown, which I would like to look into as well (although I have long avoided getting into R, since I am currently investing some time in moving my code from Matlab to Python).

MusicLab

After the fairly long introduction, I finally got to the main point of the talk, which is that of MusicLab. Here are some of the slides from that part:

A MusicLab event is built around a concert, but also typically contains a workshop, panel discussion, data collection, and data jockeying.
Some photos from MusicLab vol. 1, which was focused on muscles, and with a performance by Marco Donnarumma (Photos: Simen Kjellin, UiO).
The MusicLab events are part of a pilot project which is aimed at discovering new ways of doing research, education, and dissemination in open ways.

Challenges

One of the points of MusicLab is to jump in and do something that everyone says is “impossible”… We do, of course, have our set of challenges, particularly related to:

  • Privacy (GDPR)
  • Copyright and licenses
  • Storage
  • Archive

I will write more about all of these later, but here are some slides summarizing a few points:

Dividing the people at a MusicLab event into three groups helps when it comes to identifying and solving issues of privacy.
We have not solved the problem of copyright in relation to Open Research yet, but we are starting to get an overview of all the problems…
Storage is not only about saving files somewhere. They need to be usable as well, ideally right away.
This is the list of files from MusicLab vol. 4, and some of the tools we want to use to analyze them.

We have more challenges than solutions at the moment, but it is good to see that things are moving in the right direction. The dream scenario would combine the multimedia visualization tools of Repovizz with the interconnectivity of Trompa, the CC spirit of Audio Commons, the versioning of GitHub, the accessibility and community of Wikipedia, and the long-term archiving of Zenodo. While that may sound far-fetched right now, it could become a reality with some more interoperability.

I got lots of interesting feedback after my talk. It was particularly interesting to hear several people commenting on the importance of having more people from the arts and humanities involved in discussions about Open Research. I am happy to be one such voice, and hopefully MusicLab can inspire others to push the boundaries for what is currently possible.

If you want to watch the entire thing, it can be found towards the end of this recorded live stream:

Converting MXF files to MP4 with FFmpeg

We have a bunch of Canon XF105 cameras at RITMO, which record MXF files. This is not a particularly useful file format, except for further processing. Since many of our recordings are just for documentation purposes, we often see the need to convert to MP4. Here I present two solutions for converting MXF files to MP4: one that converts each MXF file to an individual MP4 file, and one that combines a folder of MXF files into a single MP4 file. Both are shell scripts based on the very useful FFmpeg.

Convert individual MXF files to individual MP4 files

The first solution converts a bunch of MXF files to individual MP4 files. This is practical when there are multiple individual shots.
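A minimal sketch of such a script, assuming H.264 video and AAC audio as output (the codec and quality settings are assumptions and can be adjusted), could look something like this:

#!/bin/bash
# mxf2mp4.sh: convert every MXF file in the current folder to its own MP4 file.
# Loops over all .MXF files and re-encodes each one with H.264 video and AAC audio.
for f in *.MXF; do
    ffmpeg -i "$f" -c:v libx264 -crf 20 -c:a aac "${f%.MXF}.mp4"
done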

Save the script above as mxf2mp4.sh, and make it executable with a command like:

chmod u+x mxf2mp4.sh

and run the file:

./mxf2mp4.sh

Convert a folder of MXF files to one MP4 file

The second solution is for when we have made one long recording, which the camera splits into individual MXF files of about 1.9 GB each (due to the file size limitations of the FAT32-formatted memory cards). The aim is then to merge all of these into one MP4 file. This script will do the trick:
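A minimal sketch of such a script, using FFmpeg’s concat demuxer (the list file name and codec settings are assumptions and can be adjusted), could look something like this:

#!/bin/bash
# Merge all MXF files in the current folder into a single MP4 file.
# Assumes the files sort in recording order by name.
rm -f filelist.txt
for f in *.MXF; do
    # Build the list file that FFmpeg's concat demuxer reads.
    echo "file '$PWD/$f'" >> filelist.txt
done
ffmpeg -f concat -safe 0 -i filelist.txt -c:v libx264 -crf 20 -c:a aac combined.mp4
rm filelist.txt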

Do the same as above to run the script.

Tips for doing your job interview over Skype

I have been interviewing a lot of people for various types of university positions over the years. Most often these interviews are conducted using a video-conferencing system. Here I provide some tips to help people prepare for a video-based job interview:

  • We (and many others) typically use Skype for interviews, not because it is the best system out there (of the commercial platforms I prefer Zoom), but because it is the most widespread solution. The most important thing to do when preparing for an interview is to check that you have the latest version of Skype (or whatever other program is required) installed. You don’t want to be met by an upgrade prompt just as your interview is about to start.
  • Ensure that you have a reliable Internet connection. If you can, use a cabled connection. It will most certainly be more stable than wireless.
  • Only use your mobile phone in an interview if you do not have any other options, or if your computer fails at the last minute. Even though you may be used to talking to people phone to phone, remember that your image will most likely be projected on a big TV/screen, and your sound will be played over a speaker system. Then the “phone quality” will certainly be visible/audible. Also: if you do use your phone, remember to put it in landscape mode. Otherwise, the image will look weird when it only covers a small part of the projection.
  • Sit in a suitable place where you will not be disturbed and where there is no noise. Avoid public spaces in which people may walk in on you.
  • To obtain the best possible video image, think about your placement with respect to lighting. Do not sit in front of a window, since a bright light in the background will make it difficult to see your face. It is better to sit in front of a plain wall with light in your face. If you don’t have a plain wall at hand, consider whether the background is suitable for an interview situation. I have seen all sorts of weird images, messy rooms, etc. This does not give a professional impression.
  • Do not sit with your computer in your lap. It will move all the time, making the committee seasick.
  • When positioning yourself in relation to the camera, remember that you will most likely be shown on a large TV or projected on a wall. It is better to sit so that your entire upper body can be seen. Otherwise, your face will look very big!
  • Use a headset with a microphone located close to your mouth. This will pick up the sound better than most built-in computer microphones. Using a headset will also prevent feedback during the conversation, and it will not pick up sound if you are typing on the keyboard.

If you experience any issues with your setup, stay calm. Remember that the committee will be positive towards you; otherwise you would not have made it to the interview. Committees are used to all sorts of issues in video-based interviews, and sometimes the error is on our side. Seeing how you tackle the stress of an unforeseen situation may convince the committee of your personal qualities.

Good luck!

Reflecting on some flipped classroom strategies

I was invited to talk about my experiences with flipped classroom methodologies at a seminar at the Faculty of Humanities last week. Preparing for the talk got me to revisit my own journey of working towards flipped teaching methodologies. This has also involved explorations of various types of audio/video recording. I will go through them in chronological order.

Podcasting

Back in 2009-2011, I created “podcasts” of my lectures for a couple of semesters, such as in the course MUS2006 Music and Body Movements (which was at the time taught in Norwegian). What I did was primarily to record the audio of the lectures and make the recordings available for the students to listen to or download. I experimented with different setups, microphones, etc., and eventually managed to find something that was quite time-efficient.

The problem, however, was that I did not find the cost-benefit ratio to be high enough. This is a course with fairly few students (20-40), and not many actually listened to the lectures. I don’t blame them, though, as listening to 2×45 minutes of lecturing is not the most efficient way of learning.

Lecture recording

I organized the huge NIME conference in 2011, and then decided to explore the new video production facilities available in the auditorium we were using. All of the lectures and performances of the conference were made available on Vimeo shortly after the conference. Some of the videos have actually been played quite a lot, and I have also used them as reference material in other courses.

Making these videos required an (at the time) quite expensive setup, one person in charge of the live mixing, and many hours of uploading everything afterwards. So I quickly realized that this is not something one can do for regular teaching.

Screencast tutorials

After my “long-lecture” recording trials, I found that what I myself found useful were fairly short video tutorials on particular topics. So when I was developing the course MUS2830 Interaktiv musikk (Interactive Music), I started exploring making short screencast videos with introductory material for the graphical programming environment PD. These videos go through the most basic things that the students really need to get going, hence it is important that they can access them even if they missed the opening classes.

Producing these was easy, using Camtasia for screencasting (I was still using OS X at the time), a headset to get better audio, and very basic editing before uploading to our learning platform and sharing openly on YouTube. The videos are short (5-10 minutes) and I still refer students to them.

Besides the video stuff, there are also several other interesting flipped classroom aspects of the course, which are described in the paper An Action-Sound Approach to Teaching Interactive Music.

MOOC

The experimentation with all of the above had whetted my appetite for new teaching and learning strategies. So when UiO called for projects to develop a MOOC (Massive Open Online Course), I eagerly jumped on board. The result became Music Moves, a free online course on the FutureLearn platform.

There are a number of things to say about developing a MOOC, but the short story is that it is much more work than we had anticipated. It would have never worked without a great team, including several of my colleagues, a professional video producer, an external project manager, and many more.

The end result is great, though, and we have literally had thousands of people following the course during the different runs we have had. The main problem is the lack of a business model around MOOCs here in Norway. Since education is free, we cannot earn any money on running a MOOC. Teaching allocations are based on the number of study points generated from courses, but a MOOC does not count as a normal course, hence the department does not get any money, and the teachers involved don’t get any hours allocated to re-run the MOOC.

We have therefore been experimenting with running the MOOC as part of the course MUS2006 Music and Body Movements. That has been both interesting and challenging, since you need to divide your attention between the on-campus students and the online learners’ experience. We are soon to run Music Moves for the fourth time, this time in connection with the NordicSMC Winter School. Our previous on/off-campus teaching has happened in parallel; now we are planning for all winter school attendees to complete the online course before the intensive week in Oslo. It will be interesting to see how this works out in practice.

Flipped, joint master’s

Our most extreme flipped classroom experiment to date is the design of a completely flipped master’s programme: Music, Communication and Technology. It is not only flipped in terms of the way it is taught, it is also shared between UiO and NTNU, which adds further complexity to the setup. I will write a lot more about this programme in later blog posts, but to summarize: it has been a hectic first semester, but also great fun. And we are looking forward to recruiting new students to start in 2019.

Musical Gestures Toolbox for Matlab

Yesterday I presented the Musical Gestures Toolbox for Matlab in the late-breaking demo session at the ISMIR conference in Paris.

The Musical Gestures Toolbox for Matlab (MGT) aims at assisting music researchers with importing, preprocessing, analyzing, and visualizing video, audio, and motion capture data in a coherent manner within Matlab.

Most of the concepts in the toolbox are based on the Musical Gestures Toolbox that I first developed for Max more than a decade ago. A lot of the Matlab coding for the new version was done by Bo Zhou as part of his master’s thesis.

The new MGT is available on GitHub, and there is a more or less complete introduction to the main features in the Software Carpentry workshop Quantitative Video Analysis for Qualitative Research.