Digital competency

What are the digital competencies needed in the future? Our head of department has challenged me to talk about this topic at an internal seminar today. Here is a summary of what I said.

Competencies vs skills

First, I think it is crucial to separate competencies from skills. The latter relates to how you do something. There has been much focus on teaching skills, mainly teaching people how to use various software or hardware. This is not necessarily bad, but it is not the most productive thing in higher education, in my opinion. Developing competency goes beyond learning new skills.

Some argue that skill is only one of three parts of competency, with knowledge and abilities being the others:

Skills + Knowledge + Abilities = Competencies

So a skill can be seen as part of competency, but it is not the same. This is particularly important in higher education, where the aim is to train students for life-long careers. As university teachers, we need to develop our students’ competencies, not only their skills.

Digital vs technological competency

Another misunderstanding is that “digital” and “technology” are synonyms; they are not. Technologies can be digital, analogue, or a combination of the two. Think of “computers”. The word originally referred to humans (often women) who performed advanced calculations by hand. Human computers were eventually replaced by mechanical computing machines, while today we mainly find digital computers. Interestingly, there is again a growing body of research on analogue computers.

I often argue that traditional music notation is a digital representation. Notes such as “C”, “D”, and “E” are symbolic representations of a discrete nature, and these digital notes may be transformed into analogue tones once performed.
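
To make this concrete, here is a minimal Python sketch (my own illustration, not part of the original talk) that maps discrete note symbols to the continuous frequencies a performer or synthesizer would produce, assuming equal temperament with A4 = 440 Hz and MIDI note numbers as the encoding of the symbols:

```python
# Discrete (digital) note symbols mapped to continuous (analogue) frequencies.
# Equal-temperament tuning with A4 = 440 Hz; MIDI note numbers are one common
# way of encoding the discrete symbols (C4 = 60, D4 = 62, E4 = 64).

NOTE_TO_MIDI = {"C4": 60, "D4": 62, "E4": 64}

def midi_to_hz(midi_note: int) -> float:
    """Convert a discrete MIDI note number to a frequency in hertz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

for note, number in NOTE_TO_MIDI.items():
    print(f"{note} (digital symbol {number}) -> {midi_to_hz(number):.2f} Hz (analogue tone)")
```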

One often talks about the differences between acoustic and digital instruments. This is a division I criticise in my upcoming book, but I will leave that argument aside for now. Independent of the sound production, I have over the years grown increasingly fond of Tellef Kvifte’s approach of distinguishing between analogue and digital control mechanisms in musical instruments. By that logic, one could argue that an acoustic piano is a digital instrument because it is based on discrete control (with separate keys for “C”, “D”, “E”, and so on).

Four levels of technology research and usage

When it comes to music technologies, I often like to think of four different layers: basic research, applied research and development, usage, and various types of meta-perspectives. I have given some examples of what these may entail in the table below.

  • Basic research: Music theory, Music cognition, Musical interaction, Digital representation, Signal processing, Machine learning
  • Applied research and development: Hardware, Software, Algorithms, Databases, Network, Interaction design
  • Usage: Instrument making, Composing, Producing, Performing, Analysing, Searching, Writing, Illustrating
  • Meta-perspectives: Pedagogy, Psychology, Sociology, History, Aesthetics

Four layers of (music) technology research and usage.

Most of our research activities can be categorised as being on the basic research side (plus various types of applied R&D, although mainly at a prototyping stage) or on the meta-perspectives side. To generalise, one could say that the former is more “technology-oriented” while the latter is more “humanities-oriented.” That is a simplification of a complex reality, but it may suffice for now.

The problem is that many educational activities (ours and others’) focus on the use of technologies. However, today’s kids don’t need to be taught how to use technologies; most agree that they are eager technology users from the start. It is much more important that they learn about the fundamental issues related to digitalisation and why technologies work the way they do.

Digital representation

Given the level of digitisation that has happened around us over the last decades, I am often struck by the lack of understanding of digital representation. By that, I mean a fundamental understanding of what a digital file contains and how its content ended up in a digital form. This also influences what can be done to the content. Two general examples:

  • Text: even though the content may look identical when comparing a .TXT file with a .DOCX/ODT file, these are two completely different ways of representing textual information.
  • Numbers: storing numbers in a .DOCX/ODT table is completely different from storing the same numbers in a .XLSX/ODS file (or a .CSV file for that matter).

One can think of these as different file formats that one can convert between. But the underlying question is what type of digital representation one wants to capture and preserve, which also influences what one can do with the content.
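
As a small illustration of the difference (a sketch of my own, with made-up numbers), consider the same three values stored as free text and as CSV; only the structured representation can be parsed and computed on directly:

```python
import csv
import io

# The same three numbers in two different digital representations.
as_prose = "The values were 12, 7, and 42."  # free text: symbols for humans
as_csv = "value\n12\n7\n42\n"                # structured: rows a program can parse

# The structured representation can be parsed and computed on directly...
rows = list(csv.DictReader(io.StringIO(as_csv)))
values = [int(row["value"]) for row in rows]
print(sum(values) / len(values))  # prints the mean: 20.333...

# ...whereas the prose version would first need error-prone text mining.
print(as_prose)
```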

From a musical perspective, there are many types of digital representations:

  • Scores: MIDI, notation formats, MusicXML
  • Audio: uncompressed vs. compressed formats, audio descriptor formats
  • Video: uncompressed vs. compressed formats, video descriptor formats
  • Sensor data: motion capture, physiological sensors, brain imagery

Students (and everyone else) need to understand what such digital representations mean and what they can be used for.
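
For example, the digital representation of an uncompressed audio file boils down to a few parameters: sample rate, bit depth, and channel count. A minimal sketch using Python’s standard library can expose them (the filename is a placeholder; substitute any PCM WAV file):

```python
import wave

# Inspect how an uncompressed audio file represents sound: a sample rate, a
# bit depth, and a channel count together define the digital representation.
with wave.open("example.wav", "rb") as wav:
    print("Channels:   ", wav.getnchannels())
    print("Sample rate:", wav.getframerate(), "Hz")
    print("Bit depth:  ", wav.getsampwidth() * 8, "bits")
    print("Duration:   ", wav.getnframes() / wav.getframerate(), "seconds")
```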

Algorithmic thinking

Computers are based on algorithms: well-defined sets of instructions for doing something. Algorithms can be written in computer code, but they can also be written with a pen on paper or drawn in a flow diagram. The main point is that algorithmic thinking is a particular type of reasoning that people need to learn. It is essential to understand that a complex problem can be broken down into smaller pieces that can be solved independently.
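
Here is a small, self-contained Python sketch of that kind of decomposition (a toy example of my own, not from any curriculum): describing the dynamics of a signal is broken into three small functions that can be written, tested, and understood independently.

```python
# Algorithmic thinking in miniature: break "describe the dynamics of a signal"
# into three small, independently solvable steps. Made-up signal, no libraries.

def split_into_windows(signal, window_size):
    """Step 1: divide the problem into smaller pieces."""
    return [signal[i:i + window_size] for i in range(0, len(signal), window_size)]

def rms(window):
    """Step 2: solve one small piece (root-mean-square level of a window)."""
    return (sum(x * x for x in window) / len(window)) ** 0.5

def dynamics_profile(signal, window_size=4):
    """Step 3: combine the partial solutions into an answer."""
    return [round(rms(w), 2) for w in split_into_windows(signal, window_size)]

print(dynamics_profile([0.0, 0.1, -0.1, 0.2, 0.9, -0.8, 0.7, -0.9]))
```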

Not everyone will become a programmer or software engineer, but there is a growing understanding that everyone should learn basic coding, and algorithmic thinking is at its core. At UiO, this has been implemented widely in the Faculty of Mathematics and Natural Sciences through the Computing in Science Education initiative. We don’t have a similar initiative in the Faculty of Humanities, but several departments have increased the number of courses that teach such perspectives.

Artificial Intelligence

There is a lot of buzz around AI, but most people don’t understand what it is all about. As I have written about several times on this blog, this makes people either overly enthusiastic or overly sceptical about the possibilities of AI. Not everyone can become an AI expert, but more people need to understand AI’s possibilities and limitations. We tried to explain that in the “AI vs Ary” project, as documented in this short documentary (Norwegian only):

The future is analogue

In all the discussions about digitisation and digital competency, I find it essential to remind people that the future is analogue. Humans are analogue; nature is analogue. We have a growing number of machines based on digital logic, but these machines contain many analogue components (such as the mechanical keys that I am typing this text on). Much of the current development in AI is bio-inspired, and there are even examples of new analogue computers. Understanding the limitations of digital technologies is also a competency that we need to teach our students.

All in all, I am optimistic about the future. There is a much broader understanding of the importance of digital competency these days. Still, we need to explain that this entails much more than learning how to use particular software or hardware devices. It is OK to learn such skills, but it is even more important to develop knowledge about how and why such technologies work in the first place.

Completing the MICRO project

I wrote up the final report on the project MICRO – Human Bodily Micromotion in Music Perception and Interaction – before Christmas. Now I have finally gotten around to wrapping up the project pages. At the touch of a button, the project’s web page now says “completed”. But even though the project is formally over, its results will live on.

Aims and objectives

The MICRO project sought to investigate the close relationships between musical sound and human bodily micromotion. Micromotion is here used to describe the smallest motion that we can produce and experience, typically at a rate lower than 10 mm/s.

Example plots of the micromotion observed in the motion capture data of a person standing still for 10 minutes.
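
As an illustration of the kind of quantity involved (a simplified sketch with fabricated numbers, not the project’s actual analysis code), one can estimate a marker’s average speed from motion capture data and compare it to that 10 mm/s threshold:

```python
# A minimal sketch of how micromotion can be quantified: estimate the average
# speed of a motion capture marker and compare it to the ~10 mm/s threshold.

def mean_speed_mm_per_s(positions_mm, sample_rate_hz):
    """Average speed from successive 3-D marker positions (in millimetres)."""
    speeds = []
    for (x0, y0, z0), (x1, y1, z1) in zip(positions_mm, positions_mm[1:]):
        distance = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
        speeds.append(distance * sample_rate_hz)  # mm per sample -> mm/s
    return sum(speeds) / len(speeds)

# Fabricated example data: a head marker drifting a fraction of a millimetre
# per sample at 100 Hz, i.e. well within the micromotion range.
positions = [(0.0, 0.0, 1700.0), (0.02, 0.01, 1700.0), (0.05, 0.0, 1700.01)]
print(f"{mean_speed_mm_per_s(positions, 100):.1f} mm/s")  # well below 10 mm/s
```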

The last decades have seen an increased focus on the role of the human body in both the performance and the perception of music. Up to now, however, the micro-level of these experiences has received little attention.

The main objective of MICRO was broken down into three secondary objectives:

  1. Define a set of sub-categories of music-related micromotion.
  2. Understand more about how musical sound influences the micromotion of perceivers and which musical features (such as melody, harmony, rhythm, timbre, loudness, spatialization) come into play.
  3. Develop conceptual models for controlling sound through micromotion, and develop prototypes of interactive music systems based on these models.

Results

The project completed most of its planned activities and several more:

  1. The scientific results include many insights about human music-related micromotion. Results have been presented in one doctoral dissertation, two master’s theses, several journal papers, and at numerous conferences. As hypothesized, music influences human micromotion; this has been verified with different types of music in all the collected datasets. We have also found that music with a regular and strong beat, particularly electronic dance music, leads to more motion. Our data also support the idea that music with a pulse of around 120 beats per minute is more motion-inducing than music with slower or faster tempi. In addition, we found that people generally moved more when listening with headphones. Towards the end of the project, we began studying individual differences. One study found that people who score high on empathic concern move more to music than others, which aligns with findings from recent studies of larger-scale music-related body motion.
  2. Data collected in the project has been released openly in the Oslo Standstill Database. The database contains data from all the Championships of Standstill, the Headphones-Speakers study, and the Sverm project that preceded MICRO.
  3. Software developed during the project has been made openly available. This includes various analysis scripts implemented in Jupyter Notebooks (see the sketch after this list). Several of the developed software modules have been wrapped up in the Musical Gestures Toolbox for Python.
  4. The scientific results have inspired a series of artistic explorations, including several installations and performances with the Self-playing Guitars, Oslo Muscle Band, and the Micromotion Apps.
  5. The project and its results have been featured in many media appearances, including a number of newspaper stories and several times on national TV and radio.
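
The core idea behind much of this video analysis is frame differencing. The following is a simplified OpenCV sketch of that idea (not the Musical Gestures Toolbox’s own API; the filename is a placeholder):

```python
import cv2

# A simplified sketch of the frame-differencing idea behind motion-video
# analysis: subtract consecutive greyscale frames and report a crude
# "quantity of motion" per frame. "performance.mp4" is a placeholder.
capture = cv2.VideoCapture("performance.mp4")
ok, frame = capture.read()
if not ok:
    raise SystemExit("Could not read the video file.")
previous = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    current = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    difference = cv2.absdiff(current, previous)    # pixels that changed
    quantity_of_motion = float(difference.mean())  # per-frame summary value
    print(frame_index, round(quantity_of_motion, 2))
    previous = current
    frame_index += 1

capture.release()
```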

Open Research

MICRO has been an Open Research flagship project. This means keeping the entire project as open as possible and as closed as necessary. The project shares publications, data, source code, applications, and other parts of the research process openly.

Summing up

I am very happy about the outcomes of the MICRO project. This is largely thanks to the fantastic team, particularly postdoctoral fellow Victor Gonzalez Sanchez and doctoral fellow Agata Zelechowska.

Results from the Sverm project inspired the MICRO project, and many lines of thought will continue in my new AMBIENT project. I am looking forward to researching unconscious and involuntary micromotion in the years to come.

Recruiting for the AMBIENT project

I am happy to announce that I am recruiting for my new research project AMBIENT: Bodily Entrainment to Audiovisual Rhythms. The project will continue my line of research into the effects of sound and visuals on our bodies and minds and the creative use of such effects. Here is a short video in which I explain the motivation for the project:

Now hiring

The idea is to put together a multidisciplinary team of three early career researchers experienced with one or more of the following methods: sound analysis, video analysis, interviews, questionnaires, motion capture, physiological sensing, statistics, signal processing, machine learning, interactive (sound/music) systems. The announcement texts are available here:

Application deadline: 15 March 2022. Do not hesitate to get in touch if you have any questions about the positions.

About the project

Much focus has been devoted to understanding the “foreground” of human activities: things we say, actions we do, sounds we hear. AMBIENT will study the sonic and visual “background” of indoor environments: the sound of a ventilation system in an office, the footsteps of people in a corridor, or people’s fidgeting in a classroom.

Examples of periodic auditory and visual stimuli in the environments to be studied in AMBIENT: individuals in offices (WP2), physical-virtual coworking (WP3), telematic classroom (WP4).

The project aims to study how such elements influence people’s bodily behaviour and how people feel about the rhythms in an environment. This will be done by studying how different auditory and visual stimuli combine to create rhythms in various settings.

Rhythms can be constructed from different elements: (a) visual, (b) auditory, (c) audiovisual, (d) spatiotemporal, or (e) a combination of audiovisual and spatiotemporal. The numbers in (e) indicate the cyclic, temporal order of the events.

The hypothesis is that various types of rhythms influence people’s bodily behaviour through principles of entrainment, that is, the process by which independent rhythmical systems interact with each other.
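
One textbook way to illustrate entrainment is the Kuramoto model of coupled oscillators (shown here as a generic sketch, not AMBIENT’s own model): oscillators with slightly different natural frequencies fall into step once the coupling between them is strong enough.

```python
import cmath
import math
import random

# Entrainment illustrated with the textbook Kuramoto model: coupled
# oscillators with different natural frequencies gradually synchronise
# as the coupling strength increases.

def kuramoto_synchrony(n=10, coupling=1.5, dt=0.01, steps=5000, seed=1):
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [rng.gauss(1.0, 0.1) for _ in range(n)]  # natural frequencies
    for _ in range(steps):
        velocities = [
            freqs[i] + (coupling / n) * sum(math.sin(p - phases[i]) for p in phases)
            for i in range(n)
        ]
        phases = [(p + v * dt) % (2 * math.pi) for p, v in zip(phases, velocities)]
    # Order parameter r: 0 means no synchrony, 1 means perfect synchrony.
    return abs(sum(cmath.exp(1j * p) for p in phases)) / n

print(f"synchrony without coupling: {kuramoto_synchrony(coupling=0.0):.2f}")
print(f"synchrony with coupling:    {kuramoto_synchrony(coupling=1.5):.2f}")
```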

Objectives

The primary objective of AMBIENT is to understand more about bodily entrainment to audiovisual rhythms in both local and telematic environments. This will be studied within everyday workspaces like offices and classrooms.

The primary objective can be broken down into three secondary objectives:

  1. Understand more about the rhythms of indoor environments, and make a theoretical model of such rhythms that can be implemented in software.
  2. Understand more about how people interact with the rhythms of indoor environments, both when working alone and together.
  3. Explore how such rhythms can be captured and (re)created in a different environment using state-of-the-art audiovisual technologies.

Work packages

The work in AMBIENT is divided into five work packages:

  • WP1: Theoretical Development
  • WP2: Observation study of individuals in their offices
  • WP3: Observation study of physical-virtual workspaces
  • WP4: Exploration of (re)creation of ambience in a telematic classroom
  • WP5: Software development

The work packages overlap and feed into each other in various ways.

The relationships between work packages. The small boxes within WP2–4 indicate the different studies (a/b/c) and their phases (1/2/3). See WP sections for explanations.

Open Research

The AMBIENT project is an open research lighthouse project. The aim is to keep the entire research process as open as possible, including sharing methods, data, publications, and so on.

Funding

The Research Council of Norway, project number 324003, 2021-2025

MusicLab receives Danish P2 Prisen

Yesterday, I was in Copenhagen to receive the Danish Broadcasting Corporation’s P2 Prisen for “event of the year”. The prize was awarded to MusicLab Copenhagen, a unique “research concert” held last October after two years of planning.

The main person behind MusicLab Copenhagen is Simon Høffding, a former postdoc at RITMO, now an associate professor at the University of Southern Denmark. He has collaborated with the world-leading Danish String Quartet for a decade, focusing on understanding more about musical absorption.

Simon and I met up to have a quick discussion about the prize before the ceremony.

The organizers asked if we could do some live data capturing during the prize ceremony. However, we could not repeat what we did during MusicLab Copenhagen, where a team of 20 researchers spent a day setting up before the concert. Instead, I created some real-time video analysis of the Danish Baroque orchestra using MGT for Max. That at least gave some idea of what it is possible to extract from a video recording.

Testing the video visualization during the dress rehearsal.

The prize is a fantastic recognition of a unique event. MusicLab is an innovation project between RITMO and the University Library in Oslo. The aim is to explore how it is possible to carry out Open Research in real-world settings. MusicLab Copenhagen is the largest and most complex MusicLab we have organized to date. In fact, we did both a complete test run of the setup and the concert itself to be sure that everything would work well.

While Simon, Fredrik (from the DSQ), and I were on stage to receive the prize, it should be said that we received it on behalf of many others. Around 20 people from RITMO and many others contributed to the event. Thanks to everyone for making MusicLab Copenhagen a reality!

MusicLab Copenhagen was a huge team effort. Here, many of us gathered in front of Musikhuset in Copenhagen before setting up equipment for the concert in October 2021.

The status of FAIR in higher education

I participated in the closing event of the FAIRsFAIR project last week. For that, I was asked to share thoughts on the status of FAIR in higher education. This is a summary of the notes that I wrote for the event.

What is FAIR?

First of all, The FAIR principles state that data should be:

  • Findable: The first step in (re)using data is to find them. Metadata and data should be easy to find for both humans and computers. Machine-readable metadata are essential for the automatic discovery of datasets and services and are therefore a key component of the FAIRification process (see the example record after this list).
  • Accessible: Once users find the required data, they need to know how the data can be accessed, possibly including authentication and authorisation.
  • Interoperable: The data usually need to be integrated with other data. In addition, the data need to interoperate with applications or workflows for analysis, storage, and processing.
  • Reusable: The ultimate goal of FAIR is to optimise the reuse of data. To achieve this, metadata and data should be well-described so that they can be replicated and/or combined in different settings.
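
To illustrate what “machine-readable metadata” can look like, here is a minimal record loosely modelled on DataCite-style fields (the values are placeholders, not a real dataset):

```python
import json

# A minimal, illustrative machine-readable metadata record. The fields are
# loosely modelled on DataCite-style metadata; all values are placeholders.
metadata = {
    "identifier": {"identifierType": "DOI", "identifier": "10.1234/example"},
    "title": "Example motion capture dataset",
    "creators": [{"name": "Doe, Jane", "affiliation": "University of Oslo"}],
    "publicationYear": 2022,
    "resourceType": "Dataset",
    "rights": "CC BY 4.0",
    "formats": ["text/csv"],
    "relatedIdentifiers": [],  # links to papers and code aid interoperability
}

print(json.dumps(metadata, indent=2))
```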

This all sounds good, but the reality is that FAIRifying data is a non-trivial task.

FAIR is not (necessarily) Open

Readers of this blog know that I am an open research advocate, which also means that I embrace the FAIR principles; it is impossible to make data genuinely open if they are not FAIR. Still, it is essential to say that FAIR data are not the same as Open Data. It is perfectly possible to “FAIRify” closed data. That would entail making metadata about the data openly available according to the FAIR principles while keeping the actual data closed.

There are many cases where it is impossible to make data openly available. In the fourMs Lab, we often have to keep the data closed. There are usually two reasons for this:

  • Privacy: we perform research on and with humans and, therefore, almost always record audio and video files from which it is possible to identify the people involved. Some participants in our studies consent to us sharing their data, but others do not. So we try to anonymize our material by, for example, blurring faces, creating motion videos, or keeping only motion capture data, but sometimes that is not possible. Anonymized data make sense in some cases, such as when we capture hundreds of people for the Championships of Standstill. But if we study expert musicians, they are not so easy to anonymize: the sound of an expert Hardanger fiddler is enough for anyone in the community to recognize the person in question, and their facial expressions may be an important part of understanding the effects of a performance. So if we cannot share their faces and their sound, the data are of little use.
  • Copyright: even more challenging than privacy are the questions related to copyright. When working with real music (as opposed to the “synthetic” music used in many music psychology studies), we need to consider the rights of composers, musicians, producers, and so on. This is a tricky matter. On the one hand, we would like to work with new music by living musicians. On the other hand, the legal territory is challenging, with many actors, national legislations, copyright unions, and so on involved. Unfortunately, we do not have the legal competency and administrative capacity to tackle all the practicalities of ensuring that we are allowed to share much of the music we use openly.

These challenges make sharing all our data openly difficult, but we can still make them FAIR.

How far have we come?

Three things need to be in place to FAIRify data properly:

  1. Good and structured data. This is mainly the researcher’s responsibility. However, it is much easier said than done. Take the example of MusicLab Copenhagen, an event we ran in October. It was a huge undertaking, with lots of data and media being collected and recorded by around 20 people. We are still working on organizing the data in meaningful ways. The plan is to release as much as possible as fast as possible, but it takes an astonishing amount of time to pre-process the data and structure it meaningfully. After all, if the data do not make sense to the person who collected them, nobody else will be able to use them either.
  2. Data repository to store the data. Once the data is structured, it needs to be stored somewhere that provides the necessary tools and, in particular, persistent identifiers (such as DOIs). We don’t have our own data repository at UiO, so here we need to rely on other solutions. There are two main types of repositories: (a) “bucket-based” repositories that researchers can use themselves, and (b) data archives run by institutions with data curators. That brings me to the third point:
  3. Data wranglers and curators. With some training and the right infrastructure, researchers may take on this role themselves. Tools such as Zenodo, Figshare, and OSF allow researchers to drop in their files and get DOIs, version control, and timestamping (a sketch of scripted deposit follows below). However, in my experience, even when these technical preservation parts are in place, the data may not be (a) good and structured enough in themselves, and/or (b) accompanied by sufficient metadata and “padding” to be meaningful to others. That is why institutional archives employ professional data wranglers and curators to help with precisely these parts.
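
As an example of the “bucket-based” approach, a deposit to Zenodo can be scripted against its REST API. The following is a rough sketch based on my reading of Zenodo’s documentation (check the current docs before relying on it; the token and filename are placeholders):

```python
import requests

# Rough sketch of a scripted Zenodo deposit (verify against Zenodo's current
# REST API documentation). ZENODO_TOKEN and "dataset.csv" are placeholders.
ZENODO_TOKEN = "your-personal-access-token"
BASE = "https://zenodo.org/api"

# 1. Create an empty draft deposition (where a DOI will eventually be minted).
deposition = requests.post(
    f"{BASE}/deposit/depositions",
    params={"access_token": ZENODO_TOKEN},
    json={},
).json()

# 2. Upload a data file to the deposition's file bucket.
with open("dataset.csv", "rb") as handle:
    requests.put(
        f"{deposition['links']['bucket']}/dataset.csv",
        params={"access_token": ZENODO_TOKEN},
        data=handle,
    )

print("Draft created:", deposition["links"]["html"])
```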

More people and institutions have started to realize that data handling is a skill and a profession of its own. Many universities have begun to employ data curators in their libraries to help with the FAIRification process. However, the challenge is that there are too few of them, and they are too far removed from where the research is happening. In my experience, much trouble at the later stages of the data archiving “ladder” can be avoided if the data are handled well from the start.

At RITMO, we have two lab engineers that double as data managers. When we hired them back in 2018, they were among the first local data managers at UiO, and that has proven to be essential for moving things forward. As lab engineers, they are involved in data collection from the start, and they can therefore help with data curation long before we get to the archival stage. They also help train new researchers in thinking about data management and follow data from beginning to end.

Incentives and rewards

There are many challenges to solve before we have universal FAIRification in place. Fortunately, many things are moving in the right direction. Policies and recommendations are being made at international, national, and institutional levels; infrastructures are being established, and personnel trained.

The biggest challenge now, I think, is to get incentives and rewards in place. Sharing data openly, or at least making the data FAIR, is still seen as costly, cumbersome, and/or unnecessary. That is because of the lack of incentives and rewards for doing so. We are still at a stage where publications “count” the most in the system. Committees primarily look at publication lists and h-indices when making decisions about hiring, promotion, or funding allocations.

Fortunately, there is a lot of focus on research assessment these days. I have been involved in developing the Norwegian Career Assessment Matrix (NOR-CAM) as a model for broadening the focus. I am also pleased to see that research evaluation and assessment is at the forefront at the Paris Open Science European Conference (OSEC) starting today. When researchers get proper recognition for FAIRifying their data, we will see a radical change.