Digital competency

What are the digital competencies needed in the future? Our head of department has challenged me to talk about this topic at an internal seminar today. Here is a summary of what I said.

Competencies vs skills

First, I think it is crucial to separate competencies from skills. Skills relate to how you do something. There has been much focus on teaching skills, mainly teaching people how to use various software or hardware. This is not necessarily bad, but it is not the most productive thing to focus on in higher education, in my opinion. Developing competency goes beyond learning new skills.

Some argue that skill is only one of three parts of competency, with knowledge and abilities being the others:

Skills + Knowledge + Abilities = Competencies

So a skill can be seen as part of competency, but it is not the same. This is particularly important in higher education, where the aim is to train students for life-long careers. As university teachers, we need to develop our students’ competencies, not only their skills.

Digital vs technological competency

Another misunderstanding is that “digital” and “technology” are synonyms; they are not. Technologies can be either digital or analogue (or a combination). Think of “computers”. The word originally referred to humans (often women) who carried out advanced calculations by hand. Human computers were eventually replaced by mechanical computing machines, while today we mainly find digital computers. Interestingly, there is again a growing amount of research on analogue computers.

I often argue that traditional music notation is a digital representation. Notes such as “C”, “D”, and “E” are symbolic representations of a discrete nature, and these digital notes may be transformed into analogue tones once performed.
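To make the discrete-to-continuous point concrete, here is a minimal Python sketch of my own (not from the seminar) that maps symbolic note names to the continuous frequencies they become when performed; equal-tempered tuning with A4 = 440 Hz is an assumption for the example:

```python
# Minimal sketch: mapping discrete (digital) note symbols to continuous (analogue) frequencies.
# Equal-tempered tuning with A4 = 440 Hz is assumed for the example.

NOTE_OFFSETS = {"C": -9, "D": -7, "E": -5, "F": -4, "G": -2, "A": 0, "B": 2}

def note_to_frequency(name: str, octave: int = 4, a4: float = 440.0) -> float:
    """Return the frequency (Hz) of a note name in a given octave."""
    semitones = NOTE_OFFSETS[name] + 12 * (octave - 4)
    return a4 * 2 ** (semitones / 12)

if __name__ == "__main__":
    for note in ["C", "D", "E"]:
        print(f"{note}4 -> {note_to_frequency(note):.1f} Hz")
```

The symbols “C”, “D”, and “E” are a handful of discrete categories, while the frequencies they turn into when performed can vary continuously with tuning, intonation, and vibrato.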

One often talks about the differences between acoustic and digital instruments. This is a division I criticise in my upcoming book, but I will leave that argument aside for now. Independent of the sound production, I have over the years grown increasingly fond of Tellef Kvifte’s approach of distinguishing between analogue and digital control mechanisms in musical instruments. From that perspective, one could argue that an acoustic piano is a digital instrument because it is based on discrete control (with separate keys for “C”, “D”, “E”…).

Four levels of technology research and usage

When it comes to music technologies, I often like to think of four different layers: basic research, applied research and development, usage, and various types of meta-perspectives. I have given some examples of what these may entail in the overview below.

  • Basic research: Music theory, Music cognition, Musical interaction, Digital representation, Signal processing, Machine learning
  • Applied research and development: Hardware, Software, Algorithms, Databases, Network, Interaction design, Instrument making
  • Usage: Composing, Producing, Performing, Analysing, Searching, Writing, Illustrating
  • Meta-perspectives: Pedagogy, Psychology, Sociology, History, Aesthetics

Four layers of (music) technology research and usage.

Most of our research activities can be categorised as being on the basic research side (plus various types of applied R&D, although mainly at a prototyping stage) or on the meta-perspectives side. To generalise, one could say that the former is more “technology-oriented” while the latter is more “humanities-oriented.” That is a simplification of a complex reality, but it may suffice for now.

The problem is that many educational activities (our own and others’) focus on the use of technologies. However, today’s kids don’t need to learn how to use technologies; most agree that they are eager technology users from the start. It is much more critical that they learn about the more fundamental issues related to digitalisation and why technologies work the way they do.

Digital representation

Given the level of digitisation that has happened around us over the last decades, I am often struck by the lack of understanding of digital representation. By that, I mean a fundamental understanding of what a digital file contains and how its content ended up in a digital form. This also influences what can be done to the content. Two general examples:

  • Text: even though the content may look more or less the same in a .TXT file and a .DOCX/ODT file, these are two completely different ways of representing textual information (see the sketch after this list).
  • Numbers: storing numbers in a .DOCX/ODT table is completely different from storing the same numbers in a .XLSX/ODS file (or a .CSV file for that matter).
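As a minimal sketch of the text example, assuming two hypothetical files notes.txt and notes.docx containing the “same” sentence, the following Python snippet peeks inside both. A .DOCX file is really a ZIP archive of XML parts, so the text only appears wrapped in layers of markup:

```python
# Minimal sketch: the "same" text as plain characters vs. packaged XML.
# The file names are hypothetical; any .txt/.docx pair will do.
import zipfile

with open("notes.txt", "rb") as f:
    print(f.read()[:80])            # just the characters themselves

with zipfile.ZipFile("notes.docx") as z:
    print(z.namelist()[:5])         # a .docx is a ZIP of XML parts, e.g. 'word/document.xml'
    print(z.read("word/document.xml")[:80])  # the text, buried in formatting markup
```

The same logic applies to the numbers example: a .CSV file stores only the values and separators, while a spreadsheet or word-processor file wraps them in markup describing cells, styles, and layout, which determines how easily the numbers can later be computed on.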

One can think of these as different file formats that one can convert between. But the underlying question is what type of digital representation one wants to capture and preserve, which in turn determines what can be done with the content.

From a musical perspective, there are many types of digital representations:

  • Scores: MIDI, notation formats, MusicXML
  • Audio: uncompressed vs. compressed formats, audio descriptor formats
  • Video: uncompressed vs. compressed formats, video descriptor formats
  • Sensor data: motion capture, physiological sensors, brain imagery

Students (and everyone else) need to understand what such digital representations mean and what they can be used for.
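To show what the difference between a score-based and an audio-based representation can look like in practice, here is a small Python sketch of my own; the pitches, durations, and sample rate are illustrative assumptions, and the rendering is a bare sine tone rather than any real synthesis:

```python
# Minimal sketch: the same three-note phrase as a symbolic (score-like) representation
# and as a sampled (audio-like) representation. All values are illustrative.
import numpy as np

score = [(60, 0.5), (62, 0.5), (64, 0.5)]   # (MIDI pitch, duration in seconds): C4, D4, E4
sr = 44100                                   # audio sample rate (Hz)

audio = np.concatenate([
    np.sin(2 * np.pi * (440 * 2 ** ((pitch - 69) / 12)) * np.arange(int(sr * dur)) / sr)
    for pitch, dur in score
])

print(len(score), "note events vs.", len(audio), "audio samples")
```

Three note events are enough to answer questions about which pitches occur, while tens of thousands of samples are needed to describe how one particular rendering sounds; which representation is useful depends entirely on what you want to do with it.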

Algorithmic thinking

Computers are based on algorithms: well-defined sets of instructions for doing something. Algorithms can be written in computer code, but they can also be written with a pen on paper or drawn as a flow diagram. The main point is that algorithmic thinking is a particular type of reasoning that people need to learn. It is essential to understand that any complex problem can be broken down into smaller pieces that can be solved independently.
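As an illustration of what such decomposition can look like, here is a small Python sketch of a made-up task, measuring the average loudness of a recording; the function names, window size, and RMS measure are my own assumptions, not anything from the text:

```python
# Minimal sketch of algorithmic decomposition: "how loud is this recording?"
# broken into small steps that can be understood (and tested) independently.
import numpy as np

def split_into_windows(samples, size=1024):
    """Step 1: cut the signal into short windows."""
    return [samples[i:i + size] for i in range(0, len(samples), size)]

def window_loudness(window):
    """Step 2: measure a single window (root-mean-square amplitude)."""
    return float(np.sqrt(np.mean(window ** 2)))

def average_loudness(samples):
    """Step 3: combine the partial results into one answer."""
    windows = split_into_windows(samples)
    return sum(window_loudness(w) for w in windows) / len(windows)

if __name__ == "__main__":
    noise = np.random.uniform(-1, 1, 44100)   # one second of stand-in "audio"
    print(f"Average loudness: {average_loudness(noise):.3f}")
```

Each step can be explained, tested, and improved on its own, which is exactly the kind of reasoning described above, regardless of whether it ends up as code, pseudocode, or a flow diagram.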

Not everyone will become programmers or software engineers, but there is a growing understanding that everyone should learn basic coding, and algorithmic thinking is at the core of that. At UiO, this has been implemented widely in the Faculty of Mathematics and Natural Sciences through the Computing in Science Education initiative. We don’t have a similar initiative in the Faculty of Humanities, but several departments have increased the number of courses that teach such perspectives.

Artificial Intelligence

There is a lot of buzz around AI, but most people don’t understand what it is all about. As I have written about several times on this blog (here and here), this makes people either overly enthusiastic or sceptical about the possibilities of AI. Not everyone can become an AI expert, but more people need to understand AI’s possibilities and limitations. We tried to explain that in the “AI vs Ary” project, as documented in this short documentary (Norwegian only):

The future is analogue

In all the discussions about digitisation and digital competency, I find it essential to remind people that the future is analogue. Humans are analogue; nature is analogue. We have a growing number of machines based on digital logic, but these machines contain many analogue components (such as the mechanical keys that I am typing this text on). Much of the current development in AI is bio-inspired, and there are even examples of new analogue computers. Understanding the limitations of digital technologies is also a competency that we need to teach our students.

All in all, I am optimistic about the future. There is a much broader understanding of the importance of digital competency these days. Still, we need to explain that this entails much more than learning how to use particular software or hardware devices. It is OK to learn such skills, but it is even more important to develop knowledge about how and why such technologies work in the first place.

Some thoughts on non-linear presentation tools

Many people rely on what I will call linear presentation tools when they lecture. This includes software such as LibreOffice Impress, Google Slides, MS PowerPoint, or Keynote. These tools are great for smooth, timed, linear lectures. I also use them from time to time, but mainly if I know exactly what to say. They are also good when I lecture with others and we need to develop a presentation together. However, linear presentation tools do not work as well for general teaching, where spontaneity is required. For example, I often like to take questions during lectures. Answering questions may quickly lead to a different presentation order than I had originally planned. For that reason, I have explored different non-linear presentation tools.

Document camera as presentation tool

Sometimes, although seldom, I only speak when I teach. I am a person who thinks very visually, so when I want to explain something, I usually prefer to show something as well. I used to be quite happy using a black- or whiteboard when teaching, but some years ago I invested in a document camera.

Teaching with my document camera in the MCT Portal.

The benefit of teaching with a document camera is that I can show small instruments or electronic parts while teaching. It also works well for writing and drawing with pen and paper, which I actually prefer to writing on a whiteboard.

When we started up the MCT master’s programme, I found that the document camera also worked well for online teaching, and during the pandemic, I have used it for several online presentations. Here is an example of what this looks like, from a RITMO presentation about microphones earlier this year.

Such a setup allows me to write with pen on paper, which leads to a very different delivery than if I were using pre-made slides. It also allows for showing things in front of the camera. The downside of using a document camera is that you need to create all the content on the fly. I usually have a draft of what I want to say, which helps in structuring my thoughts. Sometimes I even pre-make some “slides” that can be shown in front of the camera. But there are also times when I want to pre-make more material. For that, I have found that mind mapping works well.

Mind maps as a presentation tool

I have often found that my drafts for document camera-based lectures were developed as mind maps. That is, multi-dimensional drawings spreading out from a core title or concept. For that reason, I wanted to test whether I could use mind mapping software for presentations.

Over the last couple of years, I have tested various solutions. In the end, I have found Mindomo to fit my needs very well. It is online-based, but they also have a multi-platform app that works well on Ubuntu. It is not the most feature-rich mind mapping software out there, but it has a nice balance between features and usability. I also like that it has a presentation mode that removes all the editing tools. As such, it works very well for mind map-based presentations.

I have primarily used mind map-based presentations for teaching and internal seminars, but some weeks ago I decided to test it for a research presentation. I was asked to present at the EnTimeMent workshop run by Qualisys, but as I was preparing the presentation, I didn’t know exactly who the audience would be or what the format of the workshop would be. That makes it difficult to plan a linear presentation. Since I had lots of video material to show, this wasn’t an ideal time to use the document camera either. So I decided to test out a mind map-based presentation.

Below is an embed of the presentation I made:

And here are screenshots showing the fully collapsed and fully open versions of the mind map.

I had planned a structure for how I would run the presentation, moving clockwise through the material. I more or less stuck to that plan. What was nice was that I could adjust how many levels to dig into the material. After listening to some of the speakers before me, I decided to skip certain parts. This was easy because I could simply leave some of the sublevels of the presentation unopened.

Here is a recording of the presentation:

I had some issues with the network connection in the beginning (yes, presenting over wifi is not a good idea, but it is sometimes unavoidable), so apologies for the poor audio/video in some parts of the presentation.

I still have to get more familiar with moving around in such presentations, but all in all, I am happy with the flexibility of such a presentation tool. It allows for developing a fairly large pool of material that I can draw on when presenting. Rather than deleting or hiding slides in a linear presentation, a mind map-based presentation can easily be adjusted by not opening various parts.

What is a musical instrument?

A piano is an instrument. So is a violin. But what about the voice? Or a fork? Or a mobile phone? So what is (really) a musical instrument? That was the title of a short lecture I held at UiO’s Open Day today.

The 15-minute lecture is a very quick version of some of the concepts I have been working on for a new book project. Here I present a model for understanding what a musical instrument is and how new technology changes how we make and experience music.

The original lecture was in Norwegian, but I got inspired and recorded an English version right afterwards:

If you prefer the original Norwegian version, here it is:

And, if you do want to learn more about these things, you can apply for one of our study programmes before 15 April: bachelor or master of musicology, or master of music, communication and technology.

New run of Music Moves

I am happy to announce a new run (the 6th) of our free online course Music Moves: Why Does Music Make You Move?. Here is a 1-minute welcome that I recorded for Twitter:

The course starts on Monday (25 January 2021) and will run for six weeks. In the course, you will learn about the psychology of music and movement, and how researchers study music-related movements.

We developed the course 5 years ago, but the content is still valid. I also try to keep it up to date by recording new weekly wrap-ups featuring interviews with researchers here at UiO.

I highly recommend joining the course on FutureLearn, as that is the only way to get all the content, including videos, articles, quizzes, and, most importantly, the dialogue with other learners. But if you are only interested in watching videos, all of them are available on this UiO page and this YouTube playlist.

Teaching with a document camera

How does an “old-school” document camera work for modern-day teaching? Remarkably well, I think. Here are some thoughts on my experience over the last few years.

I got started with a document camera because I felt the need for a more flexible setup for my classroom teaching. Conference presentations with limited time are better done with linear presentation tools, I think, since the slides help with the flow. But for classroom teaching, in which dialogue with students is at the forefront, such linear presentation tools do not give me the flexibility I need.

Writing on a black- or whiteboard could have been an option, but in many modern classrooms these have been replaced by projector screens. I also find that writing on a board is much trickier than writing with pen on paper. So a document camera, which is essentially a modernized “overhead projector”, is a good solution.

After a little research some years back, I ended up buying a Lumens Ladibug DC193. The reason I went for this one was that it had the features I needed and was the only nice-looking document camera I could find (aesthetics are important!). A nice feature is the built-in light, which helps create a better image when the room lighting is not very bright.

My Lumens Ladibug DC193 document camera is red and has a built-in light.

One very useful feature of the document camera is the ability to connect my laptop to the HDMI input on the Ladibug and then connect the Ladibug’s HDMI output to the screen. The built-in “video mixer” makes it possible to switch between the document camera and the computer screen. This is a feature I have been using much more than I expected, and it allows me to change between slides shown on the PC, some handwriting on paper, and parts of web pages.

When I first got the document camera, I thought that I was going to use the built-in recording functionality a lot. It is possible to connect a USB drive directly to the camera, and make recordings. Unfortunately, the video quality is not very good, and the audio quality from the built-in mono microphone is horrible.

One of the best things about a document camera is that it can be used for other things than just showing text on paper. This is particularly useful when I teach with small devices (instruments and electronics) that are difficult to see at a distance. Placing them on the table below the camera makes them appear large and clear on the screen. One challenge, however, is that the document camera is optimized for text on white paper. So I find that it is best to place a white paper sheet under what I want to show.

Things became a little more complicated when I started to teach in the MCT programme, where all teaching happens in the Portal, which connects the two campuses in Oslo and Trondheim. There we use Zoom for the basic video communication, with a number of different computers connected to make it all work together. I was very happy to find that the Ladibug showed up as a regular “web camera” when I connected it to my PC with a USB cable. This makes it possible to send it as a video source to one of the Zoom screens in our setup.

When teaching in the MCT Portal, I connect the Ladibug with USB to my PC, and then send the video to Zoom from my laptop.

The solution presented above works well in the Portal, where we already have a bunch of other cameras and computers that handle the rest of the communication. For streaming setups outside of the Portal, I have previously shown how it is possible to connect the document camera to the Blackmagic Web Presenter, which also allows connecting a regular video camera to the SDI input.

More recently, I have also explored the use of a video mixer (Sony MCX-500), which allows for connecting more video cameras and microphones at once. Since the video mixer cannot be connected directly to a PC, it is necessary to add the Blackmagic Web Presenter to the mix as well. This makes for a quite large and complex setup. I used it for one remote lecture, and even though it worked, it was not as streamlined as I had hoped. So I will need to find an easier solution in the future.

Exploring a more complex remote teaching setup, including a video mixer in addition to document camera and web presenter.

What is clear, however, is that a document camera is very useful for my teaching style. The Ladibug has served me well for some time, but I will soon start to look for a replacement. I particularly miss having full HD, better image calibration, and better recording functionality. I hope manufacturers are still developing this type of niche product, ideally nice-looking ones too!