Keynote: Experimenting with Open Research Experiments

Yesterday I gave a keynote lecture at the Munin Conference on Scholarly Publishing in Tromsø. This is an annual conference that gathers librarians, research administrators and publishers, but also some researchers and students. It was my first time at the conference, and I found it to be a very diverse, interesting and welcoming group of people.

A poster tweet from the Munin conference team.

Most of the other presenters talked about issues related to publishing academic texts, and with a particular focus on the transition to open access (OA). My presentation was focused on MusicLab, an open research pilot project we are running at the University of Oslo.

MusicLab is a collaboration between RITMO and the University Library, and it is a great example of how cool things can happen when progressive librarians work together with cutting-edge researchers. If you have never heard about it before, here is a 42-second introduction to what MusicLab is all about:

Since lots of people talked about Open Science at the conference, I started out by arguing for why I believe that Open Research is a more inclusive term than Open Science. I then went on to identify some of the parts that people think about when talking about Open Research:

Some of the building blocks of an Open Research ecosystem.

As can be seen from the slide above, Open Access (which should probably be called Open Publication instead, since many people mistake it for Open Research as a whole) is just one part of the picture. I also think of these building blocks as placed on a “timeline” running from left to right, although there may certainly be recursive parts of the model as well.

As a researcher, the publication part typically happens fairly late in the process, so I always try to remind people that the actual research happens before it is published. The writing process, for example, should also be thought of as an open process, I think, and I mentioned some of my explorations into using various tools for writing Open Manuscripts:

None of these are perfect, however, and for some upcoming projects I am thinking about exploring Authorea and Jupyter Notebook as writing tools. After my talk I also got a recommendation for Bookdown, which I would like to look more at as well (although I have for a long time avoided getting into R, since I am currently investing some time in moving my code from Matlab to Python).

MusicLab

After the fairly long introduction, I finally got to the main point of the talk: MusicLab. Here are some of the slides from that part:

A MusicLab event is built around a concert, but also typically contains a workshop, panel discussion, data collection, and data jockeying.
Some photos from MusicLab vol. 1, which was focused on muscles, and with a performance by Marco Donnarumma (Photos: Simen Kjellin, UiO).
The MusicLab events are part of a pilot project which is aimed at discovering new ways of doing research, education, and dissemination in open ways.

Challenges

One of the points of MusicLab is to jump in and do something that everyone says is “impossible”… We do, of course, have our own set of challenges, particularly related to:

  • Privacy (GDPR)
  • Copyright and licenses
  • Storage
  • Archive

I will write more about all of these later, but here are some slides summarizing the main points:

Dividing the people at a MusicLab event into three groups helps when it comes to identifying and solving issues of privacy.
We have not solved the problem of copyright in relation to Open Research yet, but we are starting to get an overview of all the problems…
Storage is not only about saving files somewhere. They need to be usable as well, ideally right away.
This is the list of files from MusicLab vol. 4, and some of the tools we want to use to analyze them.

We have more challenges than solutions at the moment, but it is good to see that things are moving in the right direction. The dream scenario would combine the multimedia visualization tools of Repovizz with the interconnectivity of Trompa, the CC spirit of Audio Commons, the versioning of GitHub, the accessibility and community of Wikipedia, and the long-term archiving of Zenodo. While that may sound far-fetched right now, it could become a reality with some more interoperability.

I got lots of interesting feedback after my talk. It was particularly interesting to hear several people commenting on the importance of having more people from the arts and humanities involved in discussions about Open Research. I am happy to be one such voice, and hopefully MusicLab can inspire others to push the boundaries for what is currently possible.

If you want to watch the entire thing, it can be found towards the end of this recorded live stream:

Tips for doing your job interview over Skype

I have been interviewing a lot of people for various types of university positions over the years. Most often these interviews are conducted using a video-conferencing system. Here I provide some tips to help people prepare for a video-based job interview:

  • We (and many others) typically use Skype for interviews, not because it is the best system out there (of the commercial platforms I prefer Zoom), but because it is the most widespread solution. The most important thing to do when preparing for an interview is to check that you have the latest version of Skype (or whatever other program is required) installed. You don’t want to be met with an upgrade prompt when you are starting up for your interview.
  • Ensure that you have a reliable Internet connection. If you can, use a cabled connection. It will most certainly be more stable than wireless.
  • Only use your mobile phone in an interview if you do not have any other options, or if your computer fails at the last minute. Even though you may be used to talking to people from phone to phone, remember that your image will most likely be projected on a big TV/screen, and your sound will be played over a speaker system. Then the “phone quality” will certainly be visible/audible. Also: if you do use your phone, remember to put it in landscape mode. Otherwise, the image will look weird when it only covers a small part of the projection.
  • Sit in a suitable place where you will not be disturbed and where there is no noise. Avoid public spaces in which people may walk in on you.
  • To obtain the best possible video image, think about your placement with respect to lighting. Do not sit in front of a window, since a bright light in the background will make it difficult to see your face. It is better to sit in front of a plain wall with light in your face. If you don’t have a plain wall at hand, consider whether the background is suitable for an interview situation. I have seen all sorts of weird images, messy rooms, etc. This does not give a professional impression.
  • Do not sit with your computer in your lap. Then it will move all the time, making the committee seasick.
  • When positioning yourself in relation to the camera, remember that most likely you will be shown on a large TV or projected on the wall. It is better to sit so that your entire upper body can be seen. Otherwise, your face will be big!
  • Use a headset with a microphone located close to your mouth. This will pick up the sound better than most built-in computer microphones. Using a headset will also prevent feedback during the conversation, and it will not pick up sound if you are typing on the keyboard.

If you experience any issues with your setup, stay calm. Remember that the committee will be positive towards you, otherwise you would not have made it to the interview. Committees are used to all sorts of issues in video-based interviews. Sometimes the error is also on our side. Seeing how you tackle the stress of an unforeseen situation may convince the committee about your personal qualities.

Good luck!

Reflecting on some flipped classroom strategies

I was invited to talk about my experiences with flipped classroom methodologies at a seminar at the Faculty of Humanities last week. Preparing for the talk got me to revisit my own journey of working towards flipped teaching methodologies. This has also involved explorations of various types of audio/video recording. I will go through them in chronological order.

Podcasting

Back in 2009-2011, I created “podcasts” of my lectures for a couple of semesters, such as in the course MUS2006 Music and Body Movements (which was at the time taught in Norwegian). What I did was primarily to record the audio of the lectures and make them available for the students to listen to or download. I experimented with different setups, microphones, etc., and eventually managed to find something that was quite time-efficient.

The problem, however, was that I did not find the cost-benefit ratio to be high enough. This is a course with fairly few students (20-40), and not many actually listened to the lectures. I don’t blame them, though, as listening to 2×45 minutes of lecturing is not the most efficient way of learning.

Lecture recording

I organized the huge NIME conference in 2011, and then decided to explore the new video production facilities available in the auditorium we were using. All of the lectures and performances of the conference were made available on Vimeo shortly after the conference. Some of the videos have actually been played quite a lot, and I have also used them as reference material in other courses.

Making these videos required an (at the time) quite expensive setup, a person in charge of the live mixing, and many hours of uploading everything afterwards. So I quickly realized that this is not something one can do for regular teaching.

Screencast tutorials

After my “long-lecture” recording trials, I found that what I myself found useful were fairly short video tutorials on particular topics. So when I was developing the course MUS2830 Interaktiv musikk (Interactive Music), I also started exploring making short screencast videos with introductory material to the graphical programming environment PD. These videos go through the most basic things, the things the students really need to get going, hence it is important that they can access them even if they missed the opening classes.

The production of these was easy, using Camtasia for screencasting (I was still using OSX at the time), a headset to get better audio, and very basic editing before uploading to our learning platform and also sharing openly on YouTube. The videos are short (5-10 minutes) and I still refer students to them.

Besides the video stuff, there are also several other interesting flipped classroom aspects of the course, which are described in the paper An Action-Sound Approach to Teaching Interactive Music.

MOOC

The experimentation with all of the above had whetted my appetite for new teaching and learning strategies. So when UiO called for projects to develop a MOOC – Massive Open Online Course – I jumped at the opportunity. The result became Music Moves, a free online course on the FutureLearn platform.

There are a number of things to say about developing a MOOC, but the short story is that it is much more work than we had anticipated. It would have never worked without a great team, including several of my colleagues, a professional video producer, an external project manager, and many more.

The end result is great, though, and we have literally had thousands of people following the course during the different runs we have had. The main problem is the lack of a business model around MOOCs here in Norway. Since education is free, we cannot earn any money from running a MOOC. Teaching allocations are based on the number of study points generated from courses, but a MOOC does not count as a normal course, hence the department does not get any money, and the teachers involved don’t get any hours allocated to re-run the MOOC.

We have therefore been experimenting with running the MOOC as part of the course MUS2006 Music and Body Movements. That has been both interesting and challenging, since you need to attend both to the on-campus students and to the online learners’ experience. We will soon run Music Moves for the fourth time, this time in connection with the NordicSMC Winter School. Our previous on/off-campus teaching has happened in parallel; now we are planning for all winter school attendees to complete the online course before the intensive week in Oslo. It will be interesting to see how this works out in practice.

Flipped, joint master’s

Our most extreme flipped classroom experiment to date is the design of a completely flipped master’s programme: Music, Communication and Technology. This is not only flipped in terms of the way it is taught, but it is also shared between UiO and NTNU, which adds further complexity to the setup. I will write a lot more about this programme in later blog posts, but to summarize: it has been a hectic first semester, but also great fun. And we are looking forward to recruiting new students to start in 2019.

Audio recordings as motion capture

I spend a lot of time walking around the city with my daughter these days, and have been wondering how much I move and how the movement is distributed over time. To answer these questions, and to try out a method for easy and cheap motion capture, I decided to record today’s walk to the playground.

I could probably have recorded the accelerometer data in my phone, but I wanted to try an even more low-tech solution: an audio recorder.

While cleaning up some old electronics boxes the other day I found an old Creative ZEN Nano MP3 player. I had totally forgotten about the thing, and I cannot even remember ever using it. But when I found it I remembered that it actually has a built-in microphone and audio recording functionality. The recording quality is horrible, but that doesn’t really matter for what I want to use it for. The good thing is that it can record for hours on the 1GB built-in memory, using some odd compressed audio format (DVI ADPCM).

Since I am mainly interested in recording motion, I decided to put it in my sock and see if that would be a good solution for recording the motion of my foot. I imagined that the sound of my footsteps would be sufficiently loud that they would be easily detected. This is a fairly reduced recording of all my motion, but I was interested in seeing if it was relevant at all.

The result: a 35 MB audio file with 2.5 hours of foot sounds! In case you are interested, here is a 2-minute sample of regular walking. While it is possible to hear a little bit of environmental sound, the footsteps are very loud and clear.

Now, what can you do with a file like this? To get the file usable for analysis, I started by converting it to a standard AIFF file using Perian in QuickTime 7. After that I loaded it into Matlab using the wonderful MIRToolbox, resampling it to 100 Hz (from 8 kHz). It can probably be resampled at an even lower sampling rate for this type of data, but I will look more into that later.

The waveform of the 2.5-hour recording looks like this, and reveals some of the structure:

But calculating the smoothed envelope of the curve gives a clearer representation of the motion:

Here we can clearly identify some of the structure of what I (or at least my right foot) was doing for those 2.5 hours. Not bad at all, and definitely relevant for macro-level motion capture.

Based on the finding of a 2 Hz motion peak in the data reported by MacDougall and Moore, I was curious to see if I could find the same in my data. Taking the FFT of the signal gives this overall spectrum:

Clearly, my foot motion shows the strongest peaks at 4 and 5 Hz. I will have to dive into the material a bit more to understand more about these numbers.

The conclusion so far, though, is that this approach may actually be quite a good, cheap and easy method for recording long-term movement data. And with an 8 kHz sampling rate, this method may also allow for studying micro-movement in more detail. More about that later.

AudioAnalysis v0.5

I am teaching a course in sound theory this semester, and therefore thought it was time to update a little program I developed several years ago, called SoundAnalysis. While there are many excellent sound analysis programs out there (Sonic Visualiser, Praat, etc.), they all work on pre-recorded sound material. That is certainly the best approach to sound analysis, but it is not ideal in a pedagogical setting where you want to explain things in realtime.

There are not so many realtime audio analysis programs around, at least none that look and behave similarly on both OSX and Windows. One exception worth mentioning is the excellent sound tools from Princeton, but they lack some of the analysis features I am interested in showing to the students.

So my update of the SoundAnalysis program should hopefully fill a blank spot in the area of realtime sound visualisation and analysis. The new version provides a larger spectrogram view and the option to change various spectrogram features on the fly. The quantitative features have been moved to a separate window, which now also includes simple beat tracking.

Below is a screenshot giving an overview of the new version:

Overview of AudioAnalysis

Other new selling points include a brand new name… I have decided to rename it AudioAnalysis, so that it harmonizes with my AudioVideoAnalysis and VideoAnalysis programs.

The program can be found over on the fourMs software page, and here is a short tutorial video:

Please let me know if you find bugs or other strange things in the program, and I will try to fix them as soon as possible (I expect there to be some Win 64-bit issues…).