Tag: defense
September 21, 2023
Some tips for a public PhD defense
Yesterday, I gave some PhD dissertation advice. Today, I will present some tips for PhD candidates ready for public defense.
In Norway, the public defense is a formal event with colleagues, friends, and family present—we typically also stream these events on YouTube. The good thing is that when you are ready for the defense, the dissertation has already been accepted. Now it is time to show lecturing skills in the trial lecture and the ability to engage with peers in the disputation.
Tag: disputation
September 21, 2023
Some tips for a public PhD defense
Yesterday, I gave some PhD dissertation advice. Today, I will present some tips for PhD candidates ready for public defense.
In Norway, the public defense is a formal event with colleagues, friends, and family present—we typically also stream these events on YouTube. The good thing is that when you are ready for the defense, the dissertation has already been accepted. Now it is time to show lecturing skills in the trial lecture and the ability to engage with peers in the disputation.
May 7, 2022
Running a disputation on YouTube
Last week, Ulf Holbrook defended his dissertation at RITMO. I was in charge of streaming the disputation, and here are some reflections on the technical setup and streaming.
Zoom Webinars vs YouTube Streaming
I have previously written about running a hybrid disputation using a Zoom webinar. We have used variations of that setup also for other events. For example, last year, we ran RPPW as a hybrid conference. There are some benefits of using Zoom, particularly when having many presenters.
September 17, 2021
Running a hybrid disputation in a Zoom Webinar
I have been running the disputation of Guilherme Schmidt Câmara today. At RITMO, we have accepted that “hybrid mode” will be the new normal. So also for disputations. Fortunately, we already had many years of experience with video conferencing before the corona crisis hit. We have also gained lots of experience by running the Music, Communication and Technology master’s programme for some years.
In another blog post, I summarized some experiences of running our first hybrid disputation.
December 12, 2020
Running a hybrid disputation on Zoom
Yesterday, I wrote about Agata Zelechowska’s disputation. We decided to run it as a hybrid production, even though there was no audience present. It would, of course, have been easier to run it as an online-only event. However, we expect that hybrid is the new “normal” for such events, and therefore thought that it would be good to get started exploring the hybrid format right away. In this blog post, I will write up some of our experiences.
December 11, 2020
PhD disputation of Agata Zelechowska
I am happy to announce that Agata Zelechowska yesterday successfully defended her PhD dissertation during a public disputation. The dissertation is titled Irresistible Movement: The Role of Musical Sound, Individual Differences and Listening Context in Movement Responses to Music and has been carried out as part of my MICRO project at RITMO.
The dissertation is composed of five papers and an extended introduction. The abstract reads:
This dissertation examines the phenomenon of spontaneous movement responses to music.
February 20, 2013
New PhD Thesis: Kristian Nymoen
I am happy to announce that fourMs researcher Kristian Nymoen has successfully defended his PhD dissertation, and that the dissertation is now available in the DUO archive. I have had the pleasure of co-supervising Kristian’s project, and also to work closely with him on several of the papers included in the dissertation (and a few others).
Reference
K. Nymoen. Methods and Technologies for Analysing Links Between Musical Sound and Body Motion.
Tag: dissertation
September 21, 2023
What should a PhD dissertation look like?
I am supervising several PhD fellows at the moment and have found that I repeat myself in the one-to-one meetings. So I will write blog posts summarizing general advice I give everyone. This post deals with what a PhD dissertation should look like.
The classic Ph.D. dissertation
Dear PhD fellow (in Norway, PhD fellows are employees, not students): All dissertations are different, yours included. You can write it however you want as long as it is good!
Tag: phd
September 21, 2023
Some tips for a public PhD defense
Yesterday, I gave some PhD dissertation advice. Today, I will present some tips for PhD candidates ready for public defense.
In Norway, the public defense is a formal event with colleagues, friends, and family present—we typically also stream these events on YouTube. The good thing is that when you are ready for the defense, the dissertation has already been accepted. Now it is time to show lecturing skills in the trial lecture and the ability to engage with peers in the disputation.
September 21, 2023
What should a PhD dissertation look like?
I am supervising several PhD fellows at the moment and have found that I repeat myself in the one-to-one meetings. So I will write blog posts summarizing general advice I give everyone. This post deals with what a PhD dissertation should look like.
The classic Ph.D. dissertation
Dear PhD fellow (in Norway, PhD fellows are employees, not students): All dissertations are different, yours included. You can write it however you want as long as it is good!
January 14, 2013
New publication: Some video abstraction techniques for displaying body movement in analysis and performance
Today the MIT Press journal Leonardo has published my paper entitled “Some video abstraction techniques for displaying body movement in analysis and performance”. The paper is a summary of my work on different types of visualisation techniques of music-related body motion. Most of these techniques were developed during my PhD, but have been refined over the course of my post-doc fellowship.
The paper is available from the Leonardo web page (or MUSE), and will also be posted in the digital archive at UiO after the 6 month embargo period.
July 30, 2012
Open PhD position on music and motion in Oslo
Over the years we have built up an exciting research group (fourMs) here in Oslo, and we are happy to announce an open PhD position on music and body motion. The chosen candidate will be employed in the Department of Musicology, will work with the fourMs group, and will have full access to the fantastic lab facilities we have built up here over the last years (motion capture, multichannel sound, electronics, 3D printing, robotics, etc.).
May 23, 2008
Janer's dissertation
I had a quick read of Jordi Janer’s dissertation today: Singing-Driven Interfaces for Sound Synthesizers. The dissertation presents a good overview of various types of voice analysis techniques, and suggestions for various ways of using the voice as a controller for synthesis. I am particularly interested in his suggestion of a GDIF namespace for structuring parameters for voice control:
/gdif/instrumental/excitation/loudness x
/gdif/instrumental/modulation/pitch x
/gdif/instrumental/modulation/formants x1 x2
/gdif/instrumental/modulation/breathiness x
/gdif/instrumental/selection/phoneticclass x
January 5, 2008
Dissertation is printed!
My dissertation came from the printing company yesterday. Here’s a picture of some of them:
It feels a bit weird to see the final book lying there, being the result of a year of planning and three years of hard work. I wrote most of it last spring, submitting the manuscript in July. Now, about half a year later, I have a much more distant relationship to the whole thing. Seeing the final result is comforting, but it is also sad to let go.
February 8, 2007
Two-dimensional Interdisciplinarity Sketch
I am working on the introduction to my dissertation, and am trying to place my work in a context. Officially, I’m in a musicology program (Norwegian musicology ≈ science of music) in the Faculty of Humanities, but most of my interests are probably closer to psychology and computer science. Quite a lot of what I have been doing has also been used creatively (concerts and installations) although that is not really the focus of my current research.
Tag: supervision
September 21, 2023
What should a PhD dissertation look like?
I am supervising several PhD fellows at the moment and have found that I repeat myself in the one-to-one meetings. So I will write blog posts summarizing general advice I give everyone. This post deals with what a PhD dissertation should look like.
The classic Ph.D. dissertation
Dear PhD fellow (in Norway, PhD fellows are employees, not students): All dissertations are different, yours included. You can write it however you want as long as it is good!
Tag: ChatGPT
August 7, 2023
Making image parts transparent in Python
As part of my year-long #StillStanding project, I post an average image of the spherical video recordings on Mastodon daily. These videos have black padding outside the fisheye-like images, and this padding also appears in the average image.
It is possible to manually remove the black parts in some image editing software (of which open-source GIMP is my current favorite). However, as I recently started exploring ChatGPT for research, I decided to ask for help.
August 2, 2023
Finding duration and pixel dimensions for a bunch of video files
As part of my #StillStanding project I need to handle a lot of video files on a daily basis. Today, I wanted to check the duration and pixel dimensions of a bunch of files in different folders. As always, I turned to FFmpeg, or more specifically FFprobe, for help. However, figuring out all the details of how to get out the right information is tricky. So I decided to ask ChatGPT for help.
June 23, 2023
The ventilation system in my office
I’m sitting in my office, listening to the noisy ventilation system that inspired my AMBIENT project. Here is a short sample:
At the moment, I am primarily focusing on completing my book Still Standing. However, as part of my year-long #StillStanding project, I have also started thinking about the sounds found in indoor environments.
Asking ChatGPT for help
I have yet to begin a proper literature review on ventilation noise, but as a start, I asked ChatGPT for help.
December 16, 2022
Exploring Essay Writing with You.com
There has been much discussion about ChatGPT recently, a chat robot that can write meaningful answers to questions. I haven’t had time to test it out properly, and it was unavailable when I wanted to check it today. Instead, I have played around with YouWrite, a service that can write text based on limited input.
I thought it would be interesting to ask it to write about something I know well, so I asked it to write a text based on an abbreviated version of the abstract of my new book:
Tag: image
August 7, 2023
Making image parts transparent in Python
As part of my year-long #StillStanding project, I post an average image of the spherical video recordings on Mastodon daily. These videos have black padding outside the fisheye-like images, and this padding also appears in the average image.
It is possible to manually remove the black parts in some image editing software (of which open-source GIMP is my current favorite). However, as I recently started exploring ChatGPT for research, I decided to ask for help.
December 9, 2022
Optimizing JPEG files
I have previously written about how to resize all the images in a folder. That script was based on lossy compression of the files. However, there are also tools for optimizing image files losslessly. One approach is to use the [jpegoptim](https://github.com/tjko/jpegoptim) tool available on Ubuntu. Here is an excellent explanation of how it works.
Lossless optimization
As part of moving my blog to Hugo, I took the opportunity to optimize all the images in all my image folders.
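The excerpt does not show the actual invocation; as a hedged sketch (the `optimize_jpegs` helper and folder layout are hypothetical, not from the post), batch optimization could be scripted like this:

```python
import shutil
import subprocess
from pathlib import Path

def optimize_jpegs(folder):
    """Losslessly optimize every JPEG under `folder` with jpegoptim.

    Returns the list of commands; each is only executed if jpegoptim
    is actually installed."""
    cmds = []
    for f in sorted(Path(folder).rglob("*.jpg")):
        # jpegoptim without a quality flag is lossless: it only
        # rebuilds Huffman tables; --strip-none keeps all metadata
        cmd = ["jpegoptim", "--strip-none", str(f)]
        cmds.append(cmd)
        if shutil.which("jpegoptim"):
            subprocess.run(cmd, check=True)
    return cmds
```

Dropping `--strip-none` in favor of `--strip-all` would also remove EXIF metadata, which saves more space but is no longer strictly lossless for the file as a whole.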
April 13, 2022
Programmatically resizing a folder of images
This is a note to self about how to programmatically resize and crop many images using ImageMagick.
It all started with a folder full of photos with different pixel sizes and ratios. That is because they had been captured with various cameras and had also been manually cropped. This could be verified by running this command to print their pixel sizes:
identify -format "%wx%h\n" *.JPG
Fortunately, all the images had a reasonably large pixel count, so I decided to go for a 5MP pixel count (2560x1920 in 4:3 ratio).
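The resize step itself is not shown in the excerpt. One common ImageMagick pattern for a uniform 2560x1920 output is fill-then-crop with `-resize ...^` and `-extent`; the `resize_cmd` helper below is a hypothetical sketch that only assembles the command:

```python
def resize_cmd(infile, outfile, width=2560, height=1920):
    """Build an ImageMagick command that resizes to fill width x height
    and then center-crops the overflow, giving a uniform 4:3 output."""
    geom = f"{width}x{height}"
    return ["convert", infile,
            "-resize", geom + "^",   # ^ = fill the box; one axis may overflow
            "-gravity", "center",
            "-extent", geom,         # crop the overflow symmetrically
            outfile]
```

The list form is meant for `subprocess.run`, which avoids shell quoting issues with file names containing spaces.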
September 13, 2019
Creating circular thumbnails in the terminal
Circular pictures have become increasingly popular on the web. We have, for example, included circular pictures in RITMO’s annual report, and we therefore also wanted to use circular pictures in a presentation at our upcoming LARGO conference. The question, then, is how to create such circular pictures?
The circular pictures in the annual report are made through a CSS overlay. So if you try to right-click and save one of those, you will get the original rectangular version.
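One terminal-based alternative to the CSS overlay is ImageMagick’s `CopyOpacity` compose operator, which copies a white circle on a transparent canvas into the alpha channel. This is a sketch of that general recipe (the `circle_cmd` helper and file names are hypothetical, not necessarily the method from the post):

```python
def circle_cmd(infile, outfile, size=400):
    """ImageMagick recipe: square center-crop, then a white-circle mask
    copied into the alpha channel. Output should be PNG to keep alpha."""
    r = size // 2
    geom = f"{size}x{size}"
    return ["convert", infile,
            "-resize", geom + "^", "-gravity", "center", "-extent", geom,
            "(", "-size", geom, "xc:none",              # transparent canvas
            "-fill", "white", "-draw", f"circle {r},{r} {r},0", ")",
            "-compose", "CopyOpacity", "-composite", outfile]
```

Passing the parentheses as separate arguments (no shell) means they need no backslash escaping.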
July 16, 2011
Image size
While generating the videograms of Bergensbanen, I discovered that Max/Jitter cannot export images from matrices that are larger than 32767 pixels wide/tall (32767 being the maximum value of a signed 16-bit integer). This is still fairly large, but if I were going to generate a videogram with one pixel stripe per frame of the video, I would need to create an image file that is 1 302 668 pixels wide.
This made me curious as to what type of limitations exist around images.
Tag: jupyter
August 7, 2023
Making image parts transparent in Python
As part of my year-long #StillStanding project, I post an average image of the spherical video recordings on Mastodon daily. These videos have black padding outside the fisheye-like images, and this padding also appears in the average image.
It is possible to manually remove the black parts in some image editing software (of which open-source GIMP is my current favorite). However, as I recently started exploring ChatGPT for research, I decided to ask for help.
June 12, 2023
Running a Jupyter Notebook in Conda Environment
I have been running Python-based Jupyter Notebooks for some time but never thought about using environments until quite recently. I have heard people talk about environments, but I didn’t understand why I would need one.
Two days ago, I tried to upgrade to the latest version of the Musical Gestures Toolbox for Python and got stuck in a dependency nightmare. I tried to upgrade one of the packages that choked, but that only led to other packages breaking.
May 20, 2023
The effect of skipping frames for video visualization
I have been exploring different video visualizations as part of my annual stillstanding project. Some of these I post as part of my daily Mastodon updates, while others I only test for future publications.
Most of the video visualizations and analyses are made with the Musical Gestures Toolbox for Python and structured as Jupyter Notebooks. I have been pondering whether skipping frames is a good idea. The 360-degree videos that I create visualizations from are shot at 25 fps.
April 10, 2023
100 Days and Still Standing
Today marks the 100th day of my annual #StillStanding project. In this blog post, I summarize some of my experiences so far.
Endurance
Some people questioned whether I would be able to stand still every single day for an entire year. But, hey, it is only ten minutes (out of 1440) per day, and even though my life as a centre director is busy, it is always possible to find time for a standstill sometime during the day.
January 12, 2023
Running a workshop with a Jupyter Notebook presentation
Today, I ran a workshop called Video Visualization together with RITMO research assistant Joachim Poutaraud. The workshop was part of the Digital Scholarship Days 2023 organized by the University of Oslo Library, four days packed with hands-on tutorials of various useful things.
Presentation slides made by Jupyter Notebook
Joachim has done a fantastic job updating the Wiki with all the new things he has implemented in the toolbox. However, the Wiki is not the best thing to use in a workshop; it has too much information and would create an information overload for the participants.
January 3, 2023
Testing Mobile Phone Motion Sensors
For my annual Still Standing project, I am recording sensor data from my mobile phone while standing still for 10 minutes at a time. This is a highly curiosity-driven and data-based project, and part of the exploration is to figure out what I can get out of the sensors. I have started sharing graphs of the linear acceleration of my sessions with the tag #StillStanding on Mastodon. However, I wondered if this is the sensor data that best represents the motion.
December 30, 2022
Adding Title and Author to PDFs exported from Jupyter Notebook
I am doing some end of the year cleaning on my hard drive and just uploaded the Jupyter Notebook I used in the analysis of a mobile phone lying still earlier this year.
For some future studies, I thought it would be interesting to explore the PDF export functionality from Jupyter. That worked very well, except that I didn’t get any title or author name at the top:
Then I found a solution on Stack Overflow.
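The excerpt does not reproduce the Stack Overflow fix. One commonly suggested approach (hedged, since the exact solution is not shown here) is to write `title` and `authors` into the notebook’s top-level metadata, which nbconvert’s LaTeX template can pick up when running `jupyter nbconvert --to pdf notebook.ipynb`. The `set_pdf_metadata` helper is hypothetical:

```python
import json

def set_pdf_metadata(nb_path, title, author):
    """Write title/author into a notebook's metadata so that a subsequent
    `jupyter nbconvert --to pdf` export can show them on the title page."""
    with open(nb_path) as f:
        nb = json.load(f)
    nb.setdefault("metadata", {})
    nb["metadata"]["title"] = title
    nb["metadata"]["authors"] = [{"name": author}]  # list-of-dicts form
    with open(nb_path, "w") as f:
        json.dump(nb, f, indent=1)
    return nb["metadata"]
```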
August 7, 2022
Analyzing Recordings of a Mobile Phone Lying Still
What is the background “noise” in the sensors of a mobile phone? In the fourMs Lab, we have a tradition of testing the noise levels of various devices. Over the last few years, we have been using mobile phones in multiple experiments, including the MusicLab app that has been used in public research concerts, such as MusicLab Copenhagen.
I have yet to conduct a systematic study of many mobile phones lying still, but today I tried recording my phone—a Samsung Galaxy S21 Ultra—lying still on the table for ten minutes.
February 16, 2022
Completing the MICRO project
I wrote up the final report on the project MICRO - Human Bodily Micromotion in Music Perception and Interaction before Christmas. Now I finally got around to wrapping up the project pages. With the touch of a button, the project’s web page now says “completed”. But even though the project is formally over, its results will live on.
Aims and objectives
The MICRO project sought to investigate the close relationships between musical sound and human bodily micromotion.
November 13, 2021
Releasing the Musical Gestures Toolbox for Python
After several years in the making, we finally “released” the Musical Gestures Toolbox for Python at the NordicSMC Conference this week. The toolbox is a collection of modules targeted at researchers working with video recordings.
Below is a short video in which Bálint Laczkó and I briefly describe the toolbox:
https://youtu.be/tZVX_lDFrwc
About MGT for Python
The Musical Gestures Toolbox for Python includes video visualization techniques such as creating motion videos, motion history images, and motiongrams.
August 27, 2020
Why is open research better research?
I am presenting at the Norwegian Forskerutdanningskonferansen on Monday, which is a venue for people involved in research education. I have been challenged to talk about why open research is better research. In the spirit of openness, this blog post is an attempt to shape my argument. It can be read as an open notebook for what I am going to say.
Open Research vs Open Science
My first point in any talk about open research is to explain why I think “open research” is better than “open science”.
November 29, 2019
Keynote: Experimenting with Open Research Experiments
Yesterday I gave a keynote lecture at the Munin Conference on Scholarly Publishing in Tromsø. This is an annual conference that gathers librarians, research administrators, and publishers, but also some researchers and students. It was my first time at the conference, and I found it to be a very diverse, interesting, and welcoming group of people.
Abstract
Is it possible to do experimental music research completely openly? And what can we gain by opening up the research process from beginning to end?
May 30, 2019
RaveForce: A Deep Reinforcement Learning Environment for Music Generation
My PhD student Qichao Lan is at SMC in Malaga this week, presenting the paper:
Lan, Qichao, Jim Tørresen, and Alexander Refsum Jensenius. “RaveForce: A Deep Reinforcement Learning Environment for Music Generation.” Proceedings of the Sound and Music Computing Conference. Malaga, 2019.
The framework that Qichao has developed runs nicely with a bridge between Jupyter Notebook and SuperCollider. This opens up lots of interesting experiments in the years to come.
January 25, 2019
Testing reveal.js for teaching
I was at NTNU in Trondheim today, teaching a workshop on motion capture methodologies for the students in the Choreomundus master’s programme. This is an Erasmus Mundus Joint Master Degree (EMJMD) investigating dance and other movement systems (ritual practices, martial arts, games and physical theatre) as intangible cultural heritage. I am really impressed by this programme! It was a very nice and friendly group of students from all over the world, and they are experiencing a truly unique education run by the four partner universities.
Tag: python
August 7, 2023
Making image parts transparent in Python
As part of my year-long #StillStanding project, I post an average image of the spherical video recordings on Mastodon daily. These videos have black padding outside the fisheye-like images, and this padding also appears in the average image.
It is possible to manually remove the black parts in some image editing software (of which open-source GIMP is my current favorite). However, as I recently started exploring ChatGPT for research, I decided to ask for help.
June 12, 2023
Running a Jupyter Notebook in Conda Environment
I have been running Python-based Jupyter Notebooks for some time but never thought about using environments until quite recently. I have heard people talk about environments, but I didn’t understand why I would need one.
Two days ago, I tried to upgrade to the latest version of the Musical Gestures Toolbox for Python and got stuck in a dependency nightmare. I tried to upgrade one of the packages that choked, but that only led to other packages breaking.
January 12, 2023
Running a workshop with a Jupyter Notebook presentation
Today, I ran a workshop called Video Visualization together with RITMO research assistant Joachim Poutaraud. The workshop was part of the Digital Scholarship Days 2023 organized by the University of Oslo Library, four days packed with hands-on tutorials of various useful things.
Presentation slides made by Jupyter Notebook
Joachim has done a fantastic job updating the Wiki with all the new things he has implemented in the toolbox. However, the Wiki is not the best thing to use in a workshop; it has too much information and would create an information overload for the participants.
December 30, 2022
Adding Title and Author to PDFs exported from Jupyter Notebook
I am doing some end of the year cleaning on my hard drive and just uploaded the Jupyter Notebook I used in the analysis of a mobile phone lying still earlier this year.
For some future studies, I thought it would be interesting to explore the PDF export functionality from Jupyter. That worked very well, except that I didn’t get any title or author name at the top:
Then I found a solution on Stack Overflow.
November 13, 2021
Releasing the Musical Gestures Toolbox for Python
After several years in the making, we finally “released” the Musical Gestures Toolbox for Python at the NordicSMC Conference this week. The toolbox is a collection of modules targeted at researchers working with video recordings.
Below is a short video in which Bálint Laczkó and I briefly describe the toolbox:
https://youtu.be/tZVX_lDFrwc
About MGT for Python
The Musical Gestures Toolbox for Python includes video visualization techniques such as creating motion videos, motion history images, and motiongrams.
May 30, 2019
RaveForce: A Deep Reinforcement Learning Environment for Music Generation
My PhD student Qichao Lan is at SMC in Malaga this week, presenting the paper:
Lan, Qichao, Jim Tørresen, and Alexander Refsum Jensenius. “RaveForce: A Deep Reinforcement Learning Environment for Music Generation.” Proceedings of the Sound and Music Computing Conference. Malaga, 2019.
The framework that Qichao has developed runs nicely with a bridge between Jupyter Notebook and SuperCollider. This opens up lots of interesting experiments in the years to come.
May 15, 2008
Mobile Python on S60 to Max/MSP
Richard Widerberg held a workshop today on using mobile Python on Nokia phones running Symbian OS S60. He has gathered links to everything needed to get a connection up and running with PD. I got a simple script up and running and communicating with Max/MSP through the serial object. It works, but it feels a bit limiting to only have one-dimensional control (joystick up/down plus the number keys) for interaction.
Tag: stillstanding
August 7, 2023
Making image parts transparent in Python
As part of my year-long #StillStanding project, I post an average image of the spherical video recordings on Mastodon daily. These videos have black padding outside the fisheye-like images, and this padding also appears in the average image.
It is possible to manually remove the black parts in some image editing software (of which open-source GIMP is my current favorite). However, as I recently started exploring ChatGPT for research, I decided to ask for help.
July 1, 2023
Half a year of standing still
Today, I am halfway through my year-long #StillStanding project. Not so much has changed since I summed up the first 100 days. I still enjoy the experience, and there are new things to learn every day.
Here is a 10-minute video I have recorded that presents the project, explains its rationale, and reflects upon some experiences so far:
The biggest challenge moving forward is finding new spaces every day. I have already stood in the most accessible spaces, so I need to spend more time looking for unexplored rooms both at the university and close to my home.
June 12, 2023
Running a Jupyter Notebook in Conda Environment
I have been running Python-based Jupyter Notebooks for some time but never thought about using environments until quite recently. I have heard people talk about environments, but I didn’t understand why I would need one.
Two days ago, I tried to upgrade to the latest version of the Musical Gestures Toolbox for Python and got stuck in a dependency nightmare. I tried to upgrade one of the packages that choked, but that only led to other packages breaking.
June 8, 2023
Oddly ticking clock
Today, I stood still in a meeting room with an oddly ticking clock. This was part of my annual #StillStanding project which is documented on my Mastodon channel.
There was nothing special about today’s session but the clock. The meeting room was furnished with a large table in the middle, a screen on the wall, and glass walls on both sides. The large ventilation system led to a noticeable low-frequency “hum” dominating the soundscape.
April 10, 2023
100 Days and Still Standing
Today marks the 100th day of my annual #StillStanding project. In this blog post, I summarize some of my experiences so far.
Endurance
Some people questioned whether I would be able to stand still every single day for an entire year. But, hey, it is only ten minutes (out of 1440) per day, and even though my life as a centre director is busy, it is always possible to find time for a standstill sometime during the day.
April 1, 2023
Making 2D Images from 360-degree Videos
For my annual Still Standing project, I am recording 360 videos with audio and sensor data while standing still for 10 minutes.
I have started exploring how to visualize the sensor data best. Today, I am looking into visualization strategies for 360-degree images. I have written about how to pre-process 360-degree videos from Garmin VIRB and Ricoh Theta cameras previously.
The Theta records in a dual fisheye format like this:
January 3, 2023
Testing Mobile Phone Motion Sensors
For my annual Still Standing project, I am recording sensor data from my mobile phone while standing still for 10 minutes at a time. This is a highly curiosity-driven and data-based project, and part of the exploration is to figure out what I can get out of the sensors. I have started sharing graphs of the linear acceleration of my sessions with the tag #StillStanding on Mastodon. However, I wondered if this is the sensor data that best represents the motion.
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT
Although I am happy about completing my sound actions project, I have enjoyed the ritual of doing something every day.
December 30, 2022
Adding Title and Author to PDFs exported from Jupyter Notebook
I am doing some end of the year cleaning on my hard drive and just uploaded the Jupyter Notebook I used in the analysis of a mobile phone lying still earlier this year.
For some future studies, I thought it would be interesting to explore the PDF export functionality from Jupyter. That worked very well, except that I didn’t get any title or author name at the top:
Then I found a solution on Stack Overflow.
August 24, 2022
Still Standing Manuscript in Preparation
I sent off the final proofs for my Sound Actions book before the summer. I don’t know when it will actually be published, but since it is off my table, I have had time to work on new projects.
My new project AMBIENT will start soon, but I still haven’t been able to write up all the results from my two projects on music-related micro-motion: Sverm and MICRO. This will be the topic of the book I have started writing this summer, with the working title Still Standing: Exploring Human Micromotion.
Tag: ffmpeg
August 2, 2023
Finding duration and pixel dimensions for a bunch of video files
As part of my #StillStanding project I need to handle a lot of video files on a daily basis. Today, I wanted to check the duration and pixel dimensions of a bunch of files in different folders. As always, I turned to FFmpeg, or more specifically FFprobe, for help. However, figuring out all the details of how to get out the right information is tricky. So I decided to ask ChatGPT for help.
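The exact FFprobe call is not shown in the excerpt. A plausible sketch (the `probe_cmd`/`probe` helpers and file names are hypothetical) that reports width, height, and duration for one file:

```python
import shutil
import subprocess

def probe_cmd(path):
    """ffprobe invocation printing width/height (first video stream)
    and duration (container) for a single file."""
    return ["ffprobe", "-v", "error", "-select_streams", "v:0",
            "-show_entries", "stream=width,height:format=duration",
            "-of", "default=noprint_wrappers=1", path]

def probe(path):
    # Only run if ffprobe is actually installed on this machine
    if shutil.which("ffprobe"):
        return subprocess.run(probe_cmd(path), capture_output=True,
                              text=True).stdout
    return None
```

Looping `probe()` over `Path(".").rglob("*.mp4")` would cover files spread across different folders.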
August 9, 2022
Add fade-in and fade-out programmatically with FFmpeg
There is always a need to add fade-in and fade-out to audio tracks. Here is a way of doing it for a bunch of video files. It may come in handy with the audio normalization script I have shown previously. That script is based on continuously normalizing the audio, which may result in some noise at the beginning and end (because there is little or no sound in those parts, so they are normalized more).
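The script itself is not reproduced in this excerpt; as an illustrative sketch (the `fade_cmd` helper, file names, and two-second default are assumptions), FFmpeg’s `afade` filter can add both fades while stream-copying the video:

```python
def fade_cmd(infile, outfile, duration, fade=2.0):
    """FFmpeg afade filter: fade in from 0 and fade out so the fade ends
    exactly at `duration` seconds. Only the audio is re-encoded."""
    af = (f"afade=t=in:st=0:d={fade},"
          f"afade=t=out:st={duration - fade}:d={fade}")
    return ["ffmpeg", "-y", "-i", infile, "-af", af, "-c:v", "copy", outfile]
```

The clip duration would first have to be read (e.g. with ffprobe) so the fade-out start time can be computed per file.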
July 17, 2022
Video visualizations of mountain walking
After exploring some visualizations of kayaking, I was eager to see how a similar approach could work for walking. On a trip to the Norwegian mountains, specifically at Haugastøl, situated halfway between Oslo and Bergen, I strapped a GoPro Hero Black 10 on my chest and walked up and down a nearby hill called Storevarden. The walk was approximately 25 minutes up and down, and a fast-forward version of the video can be seen here:
July 13, 2022
Kayak motion analysis with video-based horizon leveling
Last year, I wrote about video-based motion analysis of kayaking. Those videos were recorded with a GoPro Hero 8 and I tested some of the video visualization methods of the Musical Gestures Toolbox for Python. This summer I am testing out some 360 cameras for my upcoming AMBIENT project. I thought I should take one of these, a GoPro Max, out for some kayaking in the Oslo fjord. Here are some impressions of the trip (and recording).
July 13, 2022
Removing audio hum using a highpass filter in FFmpeg
Today, I recorded Sound Action 194 - Rolling Dice as part of my year-long sound action project.
The idea has been to do as little processing as possible to the recordings. That is because I want to capture sounds and actions as naturally as possible. The recorded files will also serve as source material for both scientific and artistic explorations later. For that reason, I only trim the recordings non-destructively using FFmpeg.
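For the hum removal mentioned in the title, a highpass filter can be applied to the audio while leaving the video untouched; the 100 Hz cutoff below is an illustrative assumption, not the post's exact value:

```shell
# Attenuate low-frequency hum below ~100 Hz; the video stream is copied as-is
ffmpeg -i input.mp4 -c:v copy -af "highpass=f=100" output.mp4
```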
June 11, 2022
Adding subtitles to videos
In my ever-growing collection of FFmpeg-related blog posts, I will today show how to add subtitles to videos. These tricks are based on the need to create a captioned version of a video I made to introduce the Workshop on NIME Archiving for the 2022 edition of the International Conference on New Interfaces for Musical Expression (NIME). This is the video I discuss in this blog post:
Note that YouTube supports turning on and off the subtitles (CC button).
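Two common FFmpeg approaches, sketched here with placeholder file names: burning the subtitles into the image, or muxing them as a separate track that viewers can toggle:

```shell
# Burn subtitles into the frames (recompresses the video)
ffmpeg -i input.mp4 -vf "subtitles=subs.srt" burned.mp4

# Mux as a soft subtitle track instead (no recompression)
ffmpeg -i input.mp4 -i subs.srt -c copy -c:s mov_text soft.mp4
```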
March 31, 2022
Merge multiple MP4 files
I have been doing several long recordings with GoPro cameras recently. The cameras automatically split the recordings into 4GB files, which leaves me with a myriad of files to work with. I have therefore made a script to help with the pre-processing of the files.
This is somewhat similar to the script I made to convert MXF files to MP4, but with better handling of the temp file for storing information about the files to merge:
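The core of such a merge script is FFmpeg's concat demuxer; this sketch uses a placeholder file pattern rather than the post's actual script:

```shell
# List the parts in playback order, then concatenate without recompressing
printf "file '%s'\n" GX01*.MP4 > mylist.txt
ffmpeg -f concat -safe 0 -i mylist.txt -c copy merged.mp4
```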
February 12, 2022
Edit video rotation metadata in FFmpeg
I am recording a lot of short videos these days for my sound actions project. Sometimes the recordings end up being rotated, which is based on the orientation sensor (probably the gyroscope) of my mobile phone. This rotation is not part of the recorded video data, it is just information written into the header of the MPEG file. That also means that it is possible to change the rotation without recoding the file.
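A sketch of such a metadata-only rotation (the 90-degree value is an example; the exact option depends on the FFmpeg version):

```shell
# Rewrite only the rotation flag; audio and video are stream-copied
ffmpeg -i input.mp4 -c copy -metadata:s:v:0 rotate=90 output.mp4
# Newer FFmpeg versions use the -display_rotation input option instead
```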
January 28, 2022
Preparing videos for FutureLearn courses
This week we started up our new online course, Motion Capture: The Art of Studying Human Activity, and we are also rerunning Music Moves: Why Does Music Make You Move? for the seventh time. Most of the material for these courses is premade, but we record a new wrap-up video at the end of each week. This makes it possible to answer questions that have been posed during the week and add some new and relevant material.
January 9, 2022
Frame differencing with FFmpeg
I often want to create motion videos, that is, videos that only show what changed between frames. Such videos are nice to look at, and so-called “frame differencing” is also the starting point for many computer vision algorithms.
We have made several tools for creating motion videos (and more) at the University of Oslo: the standalone VideoAnalysis app (Win/Mac) and the different versions of the Musical Gestures Toolbox. These are all great tools, but sometimes it would be nice also to create motion videos in the terminal using FFmpeg.
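A minimal frame-differencing one-liner with FFmpeg's tblend filter (file names are placeholders):

```shell
# Each output frame is the difference between successive input frames
ffmpeg -i input.mp4 -vf "tblend=all_mode=difference" motion.mp4
```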
December 21, 2021
Pre-processing Garmin VIRB 360 recordings with FFmpeg
I have previously written about how it is possible to “flatten” a Ricoh Theta+ recording using FFmpeg. Now, I have spent some time exploring how to process some recordings from a Garmin VIRB camera.
Some hours of recordings The starting point was a bunch of recordings from our recent MusicLab Copenhagen featuring the amazing Danish String Quartet. A team of RITMO researchers went to Copenhagen and captured the quartet in both rehearsal and performance.
November 17, 2021
Preparing video for Matlab analysis
Typical video files, such as MP4 files with H.264 compression, are usually small in size and with high visual quality. Such files are suitable for visual inspection but do not work well for video analysis. In most cases, computer vision software prefers to work with raw data or other compression formats.
The Musical Gestures Toolbox for Matlab works best with these file types:
Video: use MJPEG (Motion JPEG) as the compression format.
October 27, 2021
Rotate video using FFmpeg
Here is another FFmpeg-related blog post, this time to explain how to rotate a video using the command-line tool FFmpeg. There are two ways of doing this, and I will explain both in the following.
Rotation in metadata The best first try could be to make the rotation by only modifying the metadata in the file. This does not work for all file types, but should work for some (including .
October 26, 2021
Crop video files with FFmpeg
I have previously written about how to trim video files with FFmpeg. It is also easy to crop a video file. Here is a short how-to guide for myself and others.
Cropping is not the same as trimming This may be basic, but I often see the concepts of cropping and trimming used interchangeably. So, to clarify, trimming a video file means making it shorter by removing frames in the beginning and/or end.
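A hedged crop example (the region size and offsets are illustrative, not from the post):

```shell
# Keep a 640x480 region whose top-left corner is at (100, 50)
ffmpeg -i input.mp4 -vf "crop=640:480:100:50" cropped.mp4
```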
October 13, 2021
Converting a .WAV file to .AVI
Sometimes, there is a need to convert an audio file into a blank video file with an audio track. This can be useful if you are on a system that does not have a dedicated audio player but a video player (yes, rare, but I work with odd technologies…). Here is a quick recipe.
FFmpeg to the rescue When it comes to converting from one media format to another, I always turn to FFmpeg.
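One way to sketch this, assuming a black 640x360 video track and PCM audio in the AVI (all parameters are illustrative):

```shell
# Generate black video, pair it with the WAV, stop at the shorter stream
ffmpeg -f lavfi -i color=c=black:s=640x360:r=25 \
  -i input.wav -c:a pcm_s16le -shortest output.avi
```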
June 17, 2021
Normalize audio in video files
We are organizing the Rhythm Production and Perception Workshop at RITMO next week. As mentioned in another blog post, we have asked presenters to send us pre-recorded videos. They are all available on the workshop page.
During the workshop, we will play sets of videos in sequence. When doing a test run today, we discovered that the sound levels differed wildly between files. There is clearly the need for normalizing the sound levels to create a good listener experience.
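The excerpt does not show the final command, but a single-pass loudness normalization with FFmpeg's loudnorm filter could be sketched like this (the target values are assumptions):

```shell
# Normalize to -16 LUFS integrated loudness; the video is copied untouched
ffmpeg -i input.mp4 -c:v copy \
  -af "loudnorm=I=-16:TP=-1.5:LRA=11" normalized.mp4
```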
June 15, 2021
Making 100 video poster images programmatically
We are organizing the Rhythm Production and Perception Workshop 2021 at RITMO a week from now. Like many other conferences these days, this one will also be run online. Presentations have been pre-recorded (10 minutes each) and we also have short poster blitz videos (1 minute each).
Pre-recorded videos People have sent us their videos in advance, but they all have different first “slides”. So, to create some consistency among the videos, we decided to make an introduction slide for each of them.
May 11, 2021
Combining audio and video files with FFmpeg
When working with various types of video analysis, I often end up with video files without audio. So I need to add the audio track by copying either from the source video file or from a separate audio file. There are many ways of doing this. Many people would probably reach for a video editor, but the problem is that you would most likely end up recompressing both the audio and video file.
January 24, 2021
Convert between video containers with FFmpeg
In my ever-growing collection of smart FFmpeg tricks, here is a way of converting from one container format to another. Here I will convert from a QuickTime (.mov) file to a standard MPEG-4 (.mp4), but the recipe should work between other formats too.
If you came here to just see the solution, here you go:
ffmpeg -i infile.mov -acodec copy -vcodec copy outfile.mp4
In the following, I will explain everything in a little more detail.
January 2, 2021
Create timelapse video from images with FFmpeg
I take a lot of timelapse shots with a GoPro camera. Usually, I do this with the camera’s photo setting instead of the video setting. That is because I find it easier to delete unwanted pictures from the series that way. It also simplifies selecting individual photos when I want that. But then I need a way to create a timelapse video from the photos easily.
Here is an FFmpeg one-liner that does the job:
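A sketch of such a one-liner (the frame rate and file pattern are assumptions, and glob patterns require an FFmpeg build with glob support):

```shell
# Turn a folder of JPEGs into a 30 fps H.264 timelapse
ffmpeg -framerate 30 -pattern_type glob -i '*.JPG' \
  -c:v libx264 -pix_fmt yuv420p timelapse.mp4
```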
November 6, 2020
Visual effect of the different tblend functions in FFmpeg
FFmpeg is a fantastic resource for doing all sorts of video manipulations from the terminal. However, it has a lot of features, and it is not always easy to understand what they all mean.
I was interested in understanding more about how the tblend function works. This is a function that blends successive frames in 30 different ways. To get a visual understanding of how the different operations work, I decided to try them all out on the same video file.
March 20, 2020
Pixel array images of long videos in FFmpeg
Continuing my explorations of FFmpeg for video visualization, today I came across this very nice blog post on creating “pixel array” images of videos. The idea is to reduce each frame to a single pixel and plot these pixels next to each other along a line. Of course, I wanted to try this out myself.
I find that creating motiongrams or videograms is a good way to visualize the content of videos.
March 19, 2020
Convert MPEG-2 files to MPEG-4
This is a note to self, and could potentially also be useful to others in need of converting “old-school” MPEG-2 files into more modern MPEG-4 files using FFmpeg.
In the fourMs lab we have a bunch of Canon XF105 video cameras that record .MXF files with MPEG-2 compression. This is not a very useful format for other things we are doing, so I often have to recompress them to something else.
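A hedged recompression sketch (the codec and quality settings are illustrative, not the post's exact script):

```shell
# MPEG-2 in MXF -> H.264/AAC in MP4
ffmpeg -i input.mxf -c:v libx264 -crf 18 -c:a aac output.mp4
```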
March 15, 2020
Flattening Ricoh Theta 360-degree videos using FFmpeg
I am continuing my explorations of the great terminal-based video tool FFmpeg. Now I wanted to see if I could “flatten” a 360-degree video recorded with a Ricoh Theta camera. These cameras contain two fisheye lenses, capturing two 180-degree videos next to each other. This results in video files like the one I show a screenshot of below.
These files are not very useful to watch or work with, so we need to somehow “flatten” them into a more meaningful video file.
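FFmpeg's v360 filter can do this remapping; the following is a minimal sketch, and the exact fisheye parameters may need tuning for each camera model:

```shell
# Dual-fisheye input remapped to a standard equirectangular projection
ffmpeg -i theta.mp4 -vf "v360=input=dfisheye:output=equirect" flat.mp4
```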
March 1, 2020
Creating different types of keyframe displays with FFmpeg
In some recent posts I have explored the creation of motiongrams and average images, multi-exposure displays, and image masks. In this blog post I will explore different ways of generating keyframe displays using the very handy command line tool FFmpeg.
As in the previous posts, I will use a contemporary dance video from the AIST Dance Video Database as an example:
The first attempt is to create a 3x3 grid image by just sampling frames from the original video.
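A sketch of that first attempt using the select and tile filters (the sampling interval of 100 frames is an assumption that depends on the length of the video):

```shell
# Take every 100th frame, shrink it, and tile nine of them in a 3x3 grid
ffmpeg -i input.mp4 \
  -vf "select='not(mod(n,100))',scale=320:-1,tile=3x3" \
  -frames:v 1 grid.png
```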
February 21, 2020
Creating image masks from video file
As part of my exploration in creating multi-exposure keyframe image displays with FFmpeg and ImageMagick, I tried out a number of things that did not help solve the initial problem but still could be interesting for other things. Most interesting was the automagic creation of image masks from a video file.
I will use a contemporary dance video from the AIST Dance Video Database as an example:
The first step is to extract keyframes from the video file using this one-liner ffmpeg command:
February 21, 2020
Creating multi-exposure keyframe image displays with FFmpeg and ImageMagick
While I was testing visualization of some videos from the AIST database earlier today, I wanted to also create some “keyframe image displays”. This can be seen as a way of doing multi-exposure photography and should be quite straightforward to do. Still, it took me quite some time to figure out exactly how to implement it. It may be that I was searching for the wrong things, but in case anyone else is looking for the same, here is a quick write-up.
November 3, 2019
Converting MXF files to MP4 with FFmpeg
We have a bunch of Canon XF105 cameras at RITMO, which record MXF files. This is not a particularly useful file format (unless for further processing). Since many of our recordings are just for documentation purposes, we often see the need to convert to MP4. Here I present two solutions for converting MXF files to MP4, both as individual files and as a combined file from a folder. These are shell scripts based on the handy FFmpeg.
May 18, 2018
Trim video files using FFmpeg
This is a note to self, and hopefully others, about how to easily and quickly trim videos without recompressing the file.
I often have long video recordings that I want to split or trim. Splitting and trimming are temporal transformations and should not be confused with the spatial transformation cropping. Cropping a video means cutting out parts of the image, and I have another blog post on cropping video files using FFmpeg.
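A minimal trim-without-recompression sketch (the timestamps are placeholders):

```shell
# Keep the segment from 1:00 to 2:30; stream copy cuts at keyframes
ffmpeg -i input.mp4 -ss 00:01:00 -to 00:02:30 -c copy trimmed.mp4
```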
Tag: FFprobe
August 2, 2023
Finding duration and pixel dimensions for a bunch of video files
As part of my #StillStanding project I need to handle a lot of video files on a daily basis. Today, I wanted to check the duration and pixel dimensions of a bunch of files in different folders. As always, I turned to FFmpeg, or more specifically FFprobe, for help. However, figuring out how to extract exactly the right information is tricky. So I decided to ask ChatGPT for help.
Tag: video
August 2, 2023
Finding duration and pixel dimensions for a bunch of video files
As part of my #StillStanding project I need to handle a lot of video files on a daily basis. Today, I wanted to check the duration and pixel dimensions of a bunch of files in different folders. As always, I turned to FFmpeg, or more specifically FFprobe, for help. However, figuring out how to extract exactly the right information is tricky. So I decided to ask ChatGPT for help.
July 4, 2023
Sound and Light vs Audio and Video
People often refer to “sound and video” as a concept pair. That is confusing because, in my thinking, “sound” and “video” refer to very different things. In this post, I will explain the difference.
Sound and Audio In a previous blog post, I have written about the difference between sound and audio. The short story is that “sound” refers to the physical phenomenon of vibrating molecules, such as sound waves moving through air.
May 26, 2023
The Art of Flying
I participated in the conference The Aesthetics of Absence in Music of the Twenty-First Century at the Department of Musicology the last couple of days. Judith Lochhead started her keynote lecture with a clip from the movie The art of flying by Jan van Ijken. This is a beautiful short film based on clips of flocking birds:
The art of flying from Jan van IJken on Vimeo.
Of course, I wanted to see how some video visualizations would work, so I reached for the Musical Gestures Toolbox for Python.
May 25, 2023
Understanding the GoPro Max' File Formats
I use a GoPro Max 360-degree camera in my annual #StillStanding project. That means that I also have had an excellent chance to work with GoPro files and try to understand their inner logic. In this blog post, I will summarize some of my findings.
What is recorded? Recording “a video” with a GoPro Max results in recording multiple files. For example, each of my daily 10-minute recordings ends up with something like this:
May 20, 2023
The effect of skipping frames for video visualization
I have been exploring different video visualizations as part of my annual stillstanding project. Some of these I post as part of my daily Mastodon updates, while others I only test for future publications.
Most of the video visualizations and analyses are made with the Musical Gestures Toolbox for Python and structured as Jupyter Notebooks. I have been pondering whether skipping frames is a good idea. The 360-degree videos that I create visualizations from are shot at 25 fps.
May 10, 2023
Visualization of Musique de Table
Musique de Table is a wonderful piece written by Thierry de Mey. I have seen it performed live several times, and here I came across a one-shot video recording that I thought would be interesting to analyse:
I tested it with some video visualization tools in the Musical Gestures Toolbox for Python.
For running the commands below, you first need to import the toolbox in Python:
import musicalgestures as mg
I started the process by importing the source video:
April 1, 2023
Making 2D Images from 360-degree Videos
For my annual Still Standing project, I am recording 360 videos with audio and sensor data while standing still for 10 minutes.
I have started exploring how to visualize the sensor data best. Today, I am looking into visualization strategies for 360-degree images. I have written about how to pre-process 360-degree videos from Garmin VIRB and Ricoh Theta cameras previously.
The Theta records in a dual fisheye format like this:
December 31, 2022
365 Sound Actions
On 1 January this year, I set out to record one sound action per day. The idea was to test out the action–sound theory from my book Sound Actions. One thing is writing about action–sound couplings and mappings; another is seeing how the theory works with real-world examples. As I commented on after one month, the project has been both challenging and inspiring. Below I write about some of my experiences, but first, here is the complete list:
July 17, 2022
Video visualizations of mountain walking
After exploring some visualizations of kayaking, I was eager to see how a similar approach could work for walking. On a trip to the Norwegian mountains, specifically at Haugastøl, situated halfway between Oslo and Bergen, I strapped a GoPro Hero10 Black on my chest and walked up and down a nearby hill called Storevarden. The walk was approximately 25 minutes up and down, and a fast-forward version of the video can be seen here:
July 13, 2022
Kayak motion analysis with video-based horizon leveling
Last year, I wrote about video-based motion analysis of kayaking. Those videos were recorded with a GoPro Hero 8 and I tested some of the video visualization methods of the Musical Gestures Toolbox for Python. This summer I am testing out some 360 cameras for my upcoming AMBIENT project. I thought I should take one of these, a GoPro Max, out for some kayaking in the Oslo fjord. Here are some impressions of the trip (and recording).
June 11, 2022
Adding subtitles to videos
In my ever-growing collection of FFmpeg-related blog posts, I will today show how to add subtitles to videos. These tricks are based on the need to create a captioned version of a video I made to introduce the Workshop on NIME Archiving for the 2022 edition of the International Conference on New Interfaces for Musical Expression (NIME). This is the video I discuss in this blog post:
Note that YouTube supports turning on and off the subtitles (CC button).
May 7, 2022
Running a disputation on YouTube
Last week, Ulf Holbrook defended his dissertation at RITMO. I was in charge of streaming the disputation, and here are some reflections on the technical setup and streaming.
Zoom Webinars vs YouTube Streaming I have previously written about running a hybrid disputation using a Zoom webinar. We have used variations of that setup also for other events. For example, last year, we ran RPPW as a hybrid conference. There are some benefits of using Zoom, particularly when having many presenters.
March 31, 2022
Merge multiple MP4 files
I have been doing several long recordings with GoPro cameras recently. The cameras automatically split the recordings into 4GB files, which leaves me with a myriad of files to work with. I have therefore made a script to help with the pre-processing of the files.
This is somewhat similar to the script I made to convert MXF files to MP4, but with better handling of the temp file for storing information about the files to merge:
February 12, 2022
Edit video rotation metadata in FFmpeg
I am recording a lot of short videos these days for my sound actions project. Sometimes the recordings end up being rotated, which is based on the orientation sensor (probably the gyroscope) of my mobile phone. This rotation is not part of the recorded video data, it is just information written into the header of the MPEG file. That also means that it is possible to change the rotation without recoding the file.
February 3, 2022
Different 16:9 format resolutions
I often have to convert between different resolutions of videos and images and always forget the pixel dimensions that correspond to a 16:9 format. So here is a cheat-sheet:
2160p: 3840×2160
1440p: 2560×1440
1080p: 1920×1080
720p: 1280×720
540p: 960×540
480p: 854×480
360p: 640×360
240p: 426×240
120p: 213×120
I also came across this complete list of true 16:9 resolution combinations, but the ones above suffice for my usage. Happy converting!
January 31, 2022
One month of sound actions
One month has passed of the year and my sound action project. I didn’t know how it would develop when I started and have found it both challenging and inspiring. It has also engaged people around me more than I had expected.
Each day I upload one new video recording to YouTube and post a link on Twitter. If you want to look at the whole collection, it is probably better to check out this playlist:
January 28, 2022
Preparing videos for FutureLearn courses
This week we started up our new online course, Motion Capture: The Art of Studying Human Activity, and we are also rerunning Music Moves: Why Does Music Make You Move? for the seventh time. Most of the material for these courses is premade, but we record a new wrap-up video at the end of each week. This makes it possible to answer questions that have been posed during the week and add some new and relevant material.
January 9, 2022
Frame differencing with FFmpeg
I often want to create motion videos, that is, videos that only show what changed between frames. Such videos are nice to look at, and so-called “frame differencing” is also the starting point for many computer vision algorithms.
We have made several tools for creating motion videos (and more) at the University of Oslo: the standalone VideoAnalysis app (Win/Mac) and the different versions of the Musical Gestures Toolbox. These are all great tools, but sometimes it would be nice also to create motion videos in the terminal using FFmpeg.
January 7, 2022
Try not to headbang challenge
I recently came across a video of the so-called Try not to headbang challenge, where the idea is to, well, not to headbang while listening to music. This immediately caught my attention. After all, I have been researching music-related micromotion over the last years and have run the Norwegian Championship of Standstill since 2012.
Here is an example of Nath & Johnny trying the challenge:
https://www.youtube.com/watch?v=-I4CBsDT37I
As seen in the video, they are doing ok, although they are far from sitting still.
December 21, 2021
Pre-processing Garmin VIRB 360 recordings with FFmpeg
I have previously written about how it is possible to “flatten” a Ricoh Theta+ recording using FFmpeg. Now, I have spent some time exploring how to process some recordings from a Garmin VIRB camera.
Some hours of recordings The starting point was a bunch of recordings from our recent MusicLab Copenhagen featuring the amazing Danish String Quartet. A team of RITMO researchers went to Copenhagen and captured the quartet in both rehearsal and performance.
December 17, 2021
Flamenco video analysis
I continue my testing of the new Musical Gestures Toolbox for Python. One thing is to use the toolbox on controlled recordings with stationary cameras and non-moving backgrounds (see examples of visualizations of AIST videos). But it is also interesting to explore “real world” videos (such as the Bergensbanen train journey).
I came across a great video of flamenco dancer Selene Muñoz, and wondered how I could visualize what is going on there:
December 15, 2021
Kayaking motion analysis
Like many others, I bought a kayak during the pandemic, and I have had many nice trips in the Oslo fjord over the last year. Working at RITMO, I think a lot about rhythm these days, and the rhythmic nature of kayaking made me curious to investigate the pattern a little more.
Capturing kayaking motion My spontaneous investigations into kayak motion began with simply recording a short video of myself kayaking.
November 17, 2021
Preparing video for Matlab analysis
Typical video files, such as MP4 files with H.264 compression, are usually small in size and with high visual quality. Such files are suitable for visual inspection but do not work well for video analysis. In most cases, computer vision software prefers to work with raw data or other compression formats.
The Musical Gestures Toolbox for Matlab works best with these file types:
Video: use MJPEG (Motion JPEG) as the compression format.
November 13, 2021
Releasing the Musical Gestures Toolbox for Python
After several years in the making, we finally “released” the Musical Gestures Toolbox for Python at the NordicSMC Conference this week. The toolbox is a collection of modules targeted at researchers working with video recordings.
Below is a short video in which Bálint Laczkó and I briefly describe the toolbox:
https://youtu.be/tZVX_lDFrwc
About MGT for Python The Musical Gestures Toolbox for Python includes video visualization techniques such as creating motion videos, motion history images, and motiongrams.
October 27, 2021
Rotate video using FFmpeg
Here is another FFmpeg-related blog post, this time to explain how to rotate a video using the command-line tool FFmpeg. There are two ways of doing this, and I will explain both in the following.
Rotation in metadata The best first try could be to make the rotation by only modifying the metadata in the file. This does not work for all file types, but should work for some (including .
October 26, 2021
Crop video files with FFmpeg
I have previously written about how to trim video files with FFmpeg. It is also easy to crop a video file. Here is a short how-to guide for myself and others.
Cropping is not the same as trimming This may be basic, but I often see the concepts of cropping and trimming used interchangeably. So, to clarify, trimming a video file means making it shorter by removing frames in the beginning and/or end.
October 13, 2021
Converting a .WAV file to .AVI
Sometimes, there is a need to convert an audio file into a blank video file with an audio track. This can be useful if you are on a system that does not have a dedicated audio player but a video player (yes, rare, but I work with odd technologies…). Here is a quick recipe.
FFmpeg to the rescue When it comes to converting from one media format to another, I always turn to FFmpeg.
June 27, 2021
Running a successful Zoom Webinar
I have been involved in running some Zoom Webinars over the last year, culminating with the Rhythm Production and Perception Workshop 2021 this week. I have written a general blog post about the production. Here I will write a little more about some lessons learned on running large Zoom Webinars.
In previous Webinars, such as the RITMO Seminars by Rebecca Fiebrink and Sean Gallagher, I ran everything from my office. These were completely online events, based on each person sitting with their own laptop.
June 17, 2021
Normalize audio in video files
We are organizing the Rhythm Production and Perception Workshop at RITMO next week. As mentioned in another blog post, we have asked presenters to send us pre-recorded videos. They are all available on the workshop page.
During the workshop, we will play sets of videos in sequence. When doing a test run today, we discovered that the sound levels differed wildly between files. There is clearly the need for normalizing the sound levels to create a good listener experience.
June 15, 2021
Making 100 video poster images programmatically
We are organizing the Rhythm Production and Perception Workshop 2021 at RITMO a week from now. Like many other conferences these days, this one will also be run online. Presentations have been pre-recorded (10 minutes each) and we also have short poster blitz videos (1 minute each).
Pre-recorded videos People have sent us their videos in advance, but they all have different first “slides”. So, to create some consistency among the videos, we decided to make an introduction slide for each of them.
May 11, 2021
Combining audio and video files with FFmpeg
When working with various types of video analysis, I often end up with video files without audio. So I need to add the audio track by copying either from the source video file or from a separate audio file. There are many ways of doing this. Many people would probably reach for a video editor, but the problem is that you would most likely end up recompressing both the audio and video file.
February 10, 2021
Some thoughts on microphones for streaming and recording
Many people have asked me about what types of microphones to use for streaming and recording. This is really a jungle, with lots of devices and things to think about. I have written some blog posts about such things previously, such as tips for doing Skype job interviews, testing simple camera/mic solutions, running a Hybrid Disputation, and how to work with plug-in-power microphones.
Earlier today I held a short presentation about microphones at RITMO.
January 28, 2021
Analyzing a double stroke drum roll
Yesterday, PhD fellow Mojtaba Karbassi presented his research on impedance control in robotic drumming at RITMO. I will surely get back to discussing more of his research later. Today, I wanted to share the analysis of one of the videos he showed. Mojtaba is working on developing a robot that can play a double stroke drum roll. To explain what this is, he showed this video he had found online, made by John Wooton:
January 24, 2021
Convert between video containers with FFmpeg
In my ever-growing collection of smart FFmpeg tricks, here is a way of converting from one container format to another. Here I will convert from a QuickTime (.mov) file to a standard MPEG-4 (.mp4), but the recipe should work between other formats too.
If you came here to just see the solution, here you go:
ffmpeg -i infile.mov -acodec copy -vcodec copy outfile.mp4
In the following, I will explain everything in a little more detail.
January 2, 2021
Create timelapse video from images with FFmpeg
I take a lot of timelapse shots with a GoPro camera. Usually, I do this with the camera’s photo setting instead of the video setting. That is because I find it easier to delete unwanted pictures from the series that way. It also simplifies selecting individual photos when I want that. But then I need a way to create a timelapse video from the photos easily.
Here is an FFmpeg one-liner that does the job:
December 11, 2020
PhD disputation of Agata Zelechowska
I am happy to announce that Agata Zelechowska yesterday successfully defended her PhD dissertation during a public disputation. The dissertation is titled Irresistible Movement: The Role of Musical Sound, Individual Differences and Listening Context in Movement Responses to Music and has been carried out as part of my MICRO project at RITMO.
The dissertation is composed of five papers and an extended introduction. The abstract reads:
This dissertation examines the phenomenon of spontaneous movement responses to music.
November 6, 2020
Visual effect of the different tblend functions in FFmpeg
FFmpeg is a fantastic resource for doing all sorts of video manipulations from the terminal. However, it has a lot of features, and it is not always easy to understand what they all mean.
I was interested in understanding more about how the tblend function works. This is a function that blends successive frames in 30 different ways. To get a visual understanding of how the different operations work, I decided to try them all out on the same video file.
September 3, 2020
Embed YouTube video with subtitles in different languages
This is primarily a note-to-self post, but it could hopefully also be useful for others. At least, I spent a little too long figuring out how to embed a YouTube video with subtitles in a specific language.
The starting point is that I had this project video that I wanted to embed on a project website:
Then I found that you can specify which subtitle language to use by adding this snippet after the URL:
March 20, 2020
Pixel array images of long videos in FFmpeg
Continuing my explorations of FFmpeg for video visualization, today I came across this very nice blog post on creating “pixel array” images of videos. The idea is to reduce every single frame to just one pixel and plot these pixels next to each other along a line. Of course, I wanted to try this out myself.
I find that creating motiongrams or videograms is a good way to visualize the content of videos.
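The one-pixel-per-frame idea can be sketched in FFmpeg with the scale and tile filters. This is an assumed implementation, not the one from the blog post, and a generated clip with exactly 100 frames stands in for a real video:

```shell
# Stand-in clip: exactly 100 frames (4 s at 25 fps).
ffmpeg -y -v error -f lavfi -i testsrc=duration=4:size=320x240:rate=25 -c:v mpeg4 input.mp4
# Reduce every frame to a single pixel, then tile the 100 pixels on one row.
ffmpeg -y -v error -i input.mp4 -vf "scale=1:1,tile=100x1" -frames:v 1 pixelarray.png
```

For a long video, the tile layout (here 100x1) has to be at least as wide as the number of frames, or the video can be decimated first with the fps filter.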
March 19, 2020
Convert MPEG-2 files to MPEG-4
This is a note to self, and could potentially also be useful to others in need of converting “old-school” MPEG-2 files into more modern MPEG-4 files using FFmpeg.
In the fourMs lab we have a bunch of Canon XF105 video cameras that record .MXF files with MPEG-2 compression. This is not a very useful format for other things we are doing, so I often have to recompress them to something else.
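A minimal recompression sketch, assuming a generated stand-in file since a real camera recording is not available here:

```shell
# Stand-in for a Canon XF105 recording: MPEG-2 video in an MXF container.
ffmpeg -y -v error -f lavfi -i testsrc=duration=1:size=320x240:rate=25 -c:v mpeg2video input.mxf
# Recompress to MPEG-4 (swap in -c:v libx264 if your build has it).
ffmpeg -y -v error -i input.mxf -c:v mpeg4 -q:v 3 output.mp4
```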
March 18, 2020
Simple tips for better video conferencing
Many people are currently moving to video-based meetings. For that reason, I have written up some quick advice on how to improve your setup. This is based on my interview advice, but grouped differently.
Network
The first important thing is to have as good a network as you can. Video conferencing requires a lot of bandwidth, so even though your e-mail and regular browsing works fine, it may still not be sufficient for good video transmission.
March 15, 2020
Flattening Ricoh Theta 360-degree videos using FFmpeg
I am continuing my explorations of the great terminal-based video tool FFmpeg. Now I wanted to see if I could “flatten” a 360-degree video recorded with a Ricoh Theta camera. These cameras contain two fisheye lenses, capturing two 180-degree videos next to each other. This results in video files like the one I show a screenshot of below.
These files are not very useful to watch or work with, so we need to somehow “flatten” them into a more meaningful video file.
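FFmpeg's v360 filter can do this kind of remapping. The sketch below assumes the dual-fisheye layout described above (two 180-degree images side by side); actual Theta files may need extra options for lens geometry, and a test pattern stands in for a real recording:

```shell
# Stand-in for a dual-fisheye Theta file (two circular images side by side).
ffmpeg -y -v error -f lavfi -i testsrc=duration=1:size=640x320:rate=25 -c:v mpeg4 theta.mp4
# Remap dual fisheye to an equirectangular ("flattened") projection.
ffmpeg -y -v error -i theta.mp4 -vf "v360=input=dfisheye:output=equirect" -c:v mpeg4 flat.mp4
```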
February 21, 2020
Creating image masks from video file
As part of my exploration in creating multi-exposure keyframe image displays with FFmpeg and ImageMagick, I tried out a number of things that did not help solve the initial problem but still could be interesting for other things. Most interesting was the automagic creation of image masks from a video file.
I will use a contemporary dance video from the AIST Dance Video Database as an example:
The first step is to extract keyframes from the video file using this one-liner ffmpeg command:
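The one-liner is missing from this excerpt; a common way to do it is with the select filter, picking out only the intra (key) frames. A sketch with a generated clip standing in for the dance video:

```shell
# Stand-in clip with a keyframe every 10 frames.
ffmpeg -y -v error -f lavfi -i testsrc=duration=2:size=320x240:rate=25 -g 10 -c:v mpeg4 dance.mp4
# Keep only the keyframes; -vsync vfr drops the timestamps of the skipped frames.
ffmpeg -y -v error -i dance.mp4 -vf "select='eq(pict_type,I)'" -vsync vfr keyframe_%03d.png
```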
February 21, 2020
Creating multi-exposure keyframe image displays with FFmpeg and ImageMagick
While I was testing visualization of some videos from the AIST database earlier today, I also wanted to create some “keyframe image displays”. This can be seen as a way of doing multi-exposure photography and should be quite straightforward to do. Still, it took me quite some time to figure out exactly how to implement it. It may be that I was searching for the wrong things, but in case anyone else is looking for the same, here is a quick write-up.
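The post combined FFmpeg and ImageMagick; as an FFmpeg-only sketch of the same idea, the extracted keyframes can be averaged with the tmix filter. The file names and keyframe interval below are made up:

```shell
# Stand-in clip with a keyframe every 10 frames (5 keyframes in total).
ffmpeg -y -v error -f lavfi -i testsrc=duration=2:size=320x240:rate=25 -g 10 -c:v mpeg4 clip.mp4
# Select the keyframes and blend them with tmix; -update 1 keeps overwriting
# the PNG, so the final write holds the average of all five keyframes.
ffmpeg -y -v error -i clip.mp4 -vf "select='eq(pict_type,I)',tmix=frames=5" -update 1 multiexposure.png
```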
February 21, 2020
Visualizing some videos from the AIST Dance Video Database
Researchers from AIST have released an open database of dance videos, and I got very excited to try out some visualization methods on some of the files. This was also a good chance to test out some new functionality in the Musical Gestures Toolbox for Matlab that we are developing at RITMO. The AIST collection contains a number of videos. I selected one hip-hop dance video based on a very steady rhythmic pattern, and a contemporary dance video that is more fluid in both motion and music.
February 14, 2020
Testing simple camera and microphone setups for quick interviews
We just started a new run of our free online course Music Moves. Here we have a tradition of recording wrap-up videos every Friday, in which some of the course educators answer questions from the learners. We have recorded these in many different ways, from using high-end cameras and microphones to just using a handheld phone. We have found that using multiple cameras and microphones is too time-consuming, both in setup and editing.
December 27, 2019
Teaching with a document camera
How does an “old-school” document camera work for modern-day teaching? Remarkably well, I think. Here are some thoughts on my experience over the last few years.
The reason I got started with a document camera was because I felt the need for a more flexible setup for my classroom teaching. Conference presentations with limited time are better done with linear presentation tools, I think, since the slides help with the flow.
November 3, 2019
Converting MXF files to MP4 with FFmpeg
We have a bunch of Canon XF105 at RITMO, a camera that records MXF files. This is not a particularly useful file format (unless for further processing). Since many of our recordings are just for documentation purposes, we often see the need to convert to MP4. Here I present two solutions for converting MXF files to MP4, both as individual files and a combined file from a folder. These are shell scripts based on the handy FFmpeg.
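The folder-based variant can be sketched as a small loop over all MXF files. This is an illustration under assumed filenames, with generated stand-in files instead of real camera recordings:

```shell
# Two stand-in MXF files (MPEG-2 in MXF, as the XF105 produces).
for n in 1 2; do
  ffmpeg -y -v error -f lavfi -i testsrc=duration=1:size=320x240:rate=25 -c:v mpeg2video "clip$n.mxf"
done
# Convert every MXF file in the folder to an MP4 next to it.
for f in *.mxf; do
  ffmpeg -y -v error -i "$f" -c:v mpeg4 -q:v 3 "${f%.mxf}.mp4"
done
```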
October 23, 2019
Tips for doing your job interview over Skype
I have been interviewing a lot of people for various types of university positions over the years. Most often these interviews are conducted using a video-conferencing system. Here I provide some tips to help people prepare for a video-based job interview:
We (and many others) typically use Skype for interviews, not because it is the best system out there (of commercial platforms I prefer Zoom), but because it is the most widespread solution.
November 25, 2018
Reflecting on some flipped classroom strategies
I was invited to talk about my experiences with flipped classroom methodologies at a seminar at the Faculty of Humanities last week. Preparing for the talk got me to revisit my own journey of working towards flipped teaching methodologies. This has also involved explorations of various types of audio/video recording. I will go through them in chronological order.
Podcasting
Back in 2009-2011, I created “podcasts” of my lectures for a couple of semesters, such as in the course MUS2006 Music and Body Movements (which was at the time taught in Norwegian).
September 28, 2018
Musical Gestures Toolbox for Matlab
Yesterday I presented the Musical Gestures Toolbox for Matlab in the late-breaking demo session at the ISMIR conference in Paris.
The Musical Gestures Toolbox for Matlab (MGT) aims at assisting music researchers with importing, preprocessing, analyzing, and visualizing video, audio, and motion capture data in a coherent manner within Matlab.
Most of the concepts in the toolbox are based on the Musical Gestures Toolbox that I first developed for Max more than a decade ago.
June 18, 2018
Testing Blackmagic Web Presenter
We are rapidly moving towards the start of our new Master’s programme Music, Communication & Technology. This is a unique programme in that it is split between two universities (in Oslo and Trondheim), 500 kilometres apart. We are working on setting up a permanent high-quality, low-latency connection that will be used as the basis for our communication. But in addition to this permanent setup we need solutions for quick and easy communication.
May 18, 2018
Trim video files using FFmpeg
This is a note to self, and hopefully others, about how to easily and quickly trim videos without recompressing the file.
I often have long video recordings that I want to split or trim. Splitting and trimming are temporal transformations and should not be confused with the spatial transformation cropping. Cropping a video means cutting out parts of the image, and I have another blog post on cropping video files using FFmpeg.
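A minimal trimming sketch (timestamps and filenames are placeholders, with a generated clip standing in for a real recording):

```shell
# Stand-in 5-second clip with a keyframe every second.
ffmpeg -y -v error -f lavfi -i testsrc=duration=5:size=320x240:rate=25 -g 25 -c:v mpeg4 long.mp4
# Trim without recompressing: -c copy copies the streams untouched, so the
# cut snaps to the nearest keyframe rather than the exact timestamp.
ffmpeg -y -v error -ss 1 -i long.mp4 -t 2 -c copy trimmed.mp4
```

Because nothing is re-encoded, this is nearly instantaneous even for long files.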
November 22, 2016
From Basic Music Research to Medical Tool
The Research Council of Norway is evaluating the research being done in the humanities these days, and all institutions were given the task of submitting cases of societal impact. Obviously, basic research is by definition not aiming at societal impact in the short run, and my research definitely falls into that category. Still, it is interesting to see that some of my basic research is, indeed, on the verge of making a societal impact in the sense that policy makers like to think about.
April 12, 2015
Simple video editing in Ubuntu
I have been using Ubuntu as my main OS for the past year, but have often relied on my old MacBook for doing various things that I haven’t easily figured out how to do in Linux. One of those things is to trim video files non-destructively. This is quite simple to do in QuickTime, although Apple now forces you to save the file with a QuickTime container (.mov) even though there is still only MPEG-4 compression in the file (h.
February 25, 2014
New department video
As I have mentioned previously, life has been quite hectic over the last year, becoming Head of Department at the same time as getting my second daughter. So my research activities have slowed down considerably, and also the activity on this blog.
When it comes to blogging, I have focused on building up my Head of Department blog (in Norwegian), which I use to comment on things happening in the Department as well as relevant (university) political issues.
February 25, 2014
New fourMs video
Not only do we have a new Department video, but we have also made a short video documentary about our fourMs group. It is in Norwegian (subtitles coming soon), but even if you do not understand the language, the video has lots of nice shots from the labs, and the background music is made by Professor Rolf Inge Godøy.
August 1, 2013
New publication: Non-Realtime Sonification of Motiongrams
Today I will present the paper Non-Realtime Sonification of Motiongrams at the Sound and Music Computing Conference (SMC) in Stockholm. The paper is based on a new implementation of my sonomotiongram technique, optimised for non-realtime use. I presented a realtime version of the sonomotiongram technique at ACHI 2012 and a Kinect version, the Kinectofon, at NIME earlier this year. The new paper presents the ImageSonifyer application and a collection of videos showing how it works.
July 19, 2013
Calculating duration of QuickTime movie files
I have been doing video analysis on QuickTime (.mov) files for several years, but have never really had the need to use the time information of the movie files. For a project, I now had the need for getting the timecode in seconds out of the files, and this turned out to be a little more tricky than first expected. Hence this little summary for other people that may be in the same situation.
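One way to get the duration in seconds is ffprobe, which reads it from the container metadata. A sketch with a generated stand-in .mov file:

```shell
# Stand-in 3-second QuickTime file.
ffmpeg -y -v error -f lavfi -i testsrc=duration=3:size=320x240:rate=25 -c:v mpeg4 movie.mov
# Print the container-level duration in seconds.
DUR=$(ffprobe -v error -show_entries format=duration -of csv=p=0 movie.mov)
echo "$DUR"
```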
July 15, 2013
Documentation of the NIME project at Norwegian Academy of Music
From 2007 to 2011 I had a part-time research position at the Norwegian Academy of Music in a project called New Instruments for Musical Exploration, and with the acronym NIME. This project was also the reason why I ended up organising the NIME conference in Oslo in 2011.
The NIME project focused on creating an environment for musical innovation at the Norwegian Academy of Music, through exploring the design of new physical and electronic instruments.
June 26, 2013
Visualisations of a timelapse video
Yesterday, I posted a blog entry on my TimeLapser application, and how it was used to document the working process of the making of the sculpture Hommage til kaffeselskapene by my mother. The final timelapse video looks like this:
Now I have run this timelapse video through my VideoAnalysis application, to see what types of analysis material can come out of such a video.
The average image displays a “summary” of the entire video recording, somehow similar to an “open shutter” in traditional photography.
June 25, 2013
Timelapser
I have recently started moving my development efforts over to GitHub, to keep everything in one place. Now I have also uploaded a small application I developed for a project by my mother, Norwegian sculptor Grete Refsum. She wanted to create a timelapse video of her making a new sculpture, “Hommage til kaffeselskapene”, for her installation piece Tante Vivi, fange nr. 24 127 Ravensbrück.
There is lots of timelapse software available, but none of it fitted my needs.
May 28, 2013
Kinectofon: Performing with shapes in planes
Yesterday, Ståle presented a paper on mocap filtering at the NIME conference in Daejeon. Today I presented a demo on using Kinect images as input to my sonomotiongram technique.
Title
Kinectofon: Performing with shapes in planes
Links
- Paper (PDF)
- Poster (PDF)
- Software
- Videos (coming soon)
Abstract
The paper presents the Kinectofon, an instrument for creating sounds through free-hand interaction in a 3D space. The instrument is based on the RGB and depth image streams retrieved from a Microsoft Kinect sensor device.
April 6, 2013
ImageSonifyer
Earlier this year, before I started as head of department, I was working on a non-realtime implementation of my sonomotiongram technique (a sonomotiongram is a sonic display of motion from a video recording, created by sonifying a motiongram). Now I finally found some time to wrap it up and make it available as an OSX application called ImageSonifyer. The Max patch is also available, for those that want to look at what is going on.
January 22, 2013
KinectRecorder
I am currently working on a paper describing some further exploration of the sonifyer technique and module that I have previously published on. The new thing is that I am now using the inputs from a Kinect device as the source material for the sonification, which also makes it possible to use the depth image as an element in the process.
To be able to create figures for the paper, I needed to record the input from a Kinect to a regular video file.
January 14, 2013
New publication: Some video abstraction techniques for displaying body movement in analysis and performance
Today the MIT Press journal Leonardo has published my paper entitled “Some video abstraction techniques for displaying body movement in analysis and performance”. The paper is a summary of my work on different types of visualisation techniques of music-related body motion. Most of these techniques were developed during my PhD, but have been refined over the course of my post-doc fellowship.
The paper is available from the Leonardo web page (or MUSE), and will also be posted in the digital archive at UiO after the 6 month embargo period.
January 8, 2013
New publication: Performing the Electric Violin in a Sonic Space
I am happy to announce that a paper I wrote together with Victoria Johnson has just been published in Computer Music Journal. The paper is based on the experiences that Victoria and I gained while working on the piece Transformation for electric violin and live electronics (see video of the piece below).
Citation
A. R. Jensenius and V. Johnson. Performing the electric violin in a sonic space. Computer Music Journal, 36(4):28–39, 2012.
January 2, 2013
Sverm video #4
The last of the four Sverm videos by Lavasir Nordrum has just been posted on Vimeo. The first short movie was titled Micromovements, then followed Microsounds and Excitation, and the last one is called Resonance. It has been exciting to work with the video medium in addition to the performances, and it has given a very different perspective on the project.
December 5, 2012
Sverm video #3
Video artist Lavasir Nordrum has just posted the third of four short movies created together with the Sverm group. The first short movie was titled Micromovements, and the second was titled Microsounds. This month’s short movie is called Excitation and is focused on the first half of an event or action. It will be followed by a short movie called Resonance, to be released on 1 January.
November 2, 2012
Sverm video #2
As I wrote about last month, the Sverm group has teamed up with video artist Lavasir Nordrum. The plan is that he will create four short and poetic videos documenting four of the main topics we have been working on in the Sverm project. The production plan for the videos is quite tight: we shoot content for the videos during a few hours in the middle of each month, and then Lavasir publishes the final video two weeks later.
October 10, 2012
Sverm video #1
For the last couple of years I have been involved in an artistic research project called Sverm, in which we investigate the artistic potential of bodily micromovements and microsound. We are currently working towards a series of intimate lab performances at the end of November.
As a side-project to the performances, we are also working with video artist Lavasir Nordrum, on the making of four short videos documenting the four main parts of the project: micromovement, microsound, excitation, resonance.
September 11, 2012
McLaren's Dots
I am currently working on some extensions to my motiongram-sonifyer, and came across this beautiful little film by Norman McLaren from 1940:
The sounds heard in the film are entirely synthetic, created by drawing in the sound-track part of the film. McLaren explained this in a 1951 BBC interview:
I draw a lot of little lines on the sound-track area of the 35-mm. film. Maybe 50 or 60 lines for every musical note.
September 5, 2012
Teaching in Aldeburgh
I am currently in beautiful Aldeburgh, a small town on the east coast of England, teaching at the Britten-Pears Young Artist Programme together with Rolf Wallin and Tansy Davies. This post is mainly to summarise the things I have been going through, and provide links for various things.
Theoretical stuff
My introductory lectures went through some of the theory of an embodied understanding of the experience of music. One aspect of this theory that I find very relevant for the development of interactive works is what I call action-sound relationships.
August 16, 2012
fourMs videos
Over the years, we have uploaded various videos of our fourMs lab activities to YouTube. Some of these videos have been uploaded with a shared YouTube account, others by myself or other group members. I just realised that a good solution for gathering all the different videos is to create a playlist and add all relevant videos there. Then it should also be possible to embed this playlist in web pages, like below:
July 12, 2012
Paper #1 at SMC 2012: Evaluation of motiongrams
Today I presented the paper Evaluating how different video features influence the visual quality of resultant motiongrams at the Sound and Music Computing conference in Copenhagen.
Abstract
Motiongrams are visual representations of human motion, generated from regular video recordings. This paper evaluates how different video features may influence the generated motiongram: inversion, colour, filtering, background, lighting, clothing, video size and compression. It is argued that the proposed motiongram implementation is capable of visualising the main motion features even with quite drastic changes in all of the above mentioned variables.
June 25, 2012
Record videos of sonification
I got a question the other day about how to record a sonified video file based on my sonification module for Jamoma for Max. I wrote about my first experiments with the sonifyer module here, and also published a paper at this year’s ACHI conference about the technique.
It is quite straightforward to record a video file with the original video + audio using the jit.vcr object in Max.
August 5, 2011
Flickr introduces long photos
Flickr has opened for uploading videos, or, rather, what they call “long photos”. As such, they are not trying to compete with YouTube or Vimeo, but rather making it possible to upload videos that are closer to a photograph than a movie (i.e. with a narrative). I like this approach, and it resonates with how I often record a video as if it were a photograph.
The difference between what I could call a photo video and a movie video can be seen as analogous to the difference between music composition/production and soundscaping.
June 17, 2011
Hurtigruten
One of the more bizarre TV programs ever may be the current screening of Hurtigruten by Norwegian public broadcaster NRK. Following the success of the screening of the train ride from Bergen to Oslo, they are now filming the entire (5+ days) journey of the boat trip from Bergen to Kirkenes.
Here is some info on how and why they are doing this, or you can just follow the journey live here.
August 27, 2010
Screen recording in QuickTime X
I just discovered that QuickTime X has built in support for screen recording. I have been using iShowU for screen recordings for a while, and while it has the advantage of recording only a portion of the screen, the QT approach seems to be easier and quicker to work with. Short tutorial below:
August 9, 2010
Evaluating a semester of podcasting
Earlier this year I wrote a post about how I was going to try out podcasting during the course MUS2006 Musikk og bevegelse this spring semester. As I am preparing for new courses this fall, now is the time to evaluate my podcasting experience, and decide on whether I am going to continue doing this.
Why podcasting?
The first question I should ask myself is why I would be interested in setting up a podcast of my lectures.
July 2, 2010
New motiongram features
Inspired by the work Static no. 12 by Daniel Crooks that I watched at the Sydney Biennale a couple of weeks ago, I have added the option of scanning a single column in the jmod.motiongram% module in Jamoma. Here is a video that shows how this works in practice:
About motiongrams A motiongram is a way of displaying motion (e.g. human motion) in the time-domain, somehow similar to how we are used to working with time-representations of audio (e.
May 12, 2010
NTNU PhD defense
Two weeks ago, Lars Adde defended his PhD entitled Prediction of cerebral palsy in young infants. Computer based assessment of general movements at NTNU in Trondheim. I have contributed to this research through development of the General Movement Toolbox, a variant of my Musical Gestures Toolbox. He has used this toolbox to analyse video material of children with fidgety movements, with the aim of being able to predict cerebral palsy at an early stage.
July 17, 2008
Black box in the lab
Last week we started setting up a “black box” in the new lab space. It is great to finally have a more permanent motion lab set up that we can use for various types of observation studies and recording sessions.
June 17, 2008
AudioVideoAnalysis
To allow everyone to watch their own synchronised spectrograms and motiongrams, I have made a small application called AudioVideoAnalysis.
Download AudioVideoAnalysis for OS X (8MB)
It currently has the following features:
- Draws a spectrogram from any connected microphone
- Draws a motiongram/videogram from any connected camera
- Press the escape button to toggle fullscreen mode
Built with Max/MSP by Cycling ’74 on OS X 10.5. I will probably make a Windows version at some point, but haven’t gotten that far yet.
May 15, 2008
Sonification of Traveling Landscapes
I just heard a talk called “Real-Time Synaesthetic Sonification of Traveling Landscapes” (PDF) by Tim Pohle and Peter Knees from the Department of Computational Perception (great name!) in Linz. They have made an application creating music from a moving video camera. The implementation is based on grabbing a one pixel wide column from the video, plotting these columns and sonifying the image. Interestingly enough, the images they get out (see below) of this are very close to the motiongrams and videograms I have been working on.
November 14, 2007
GeoVision MPEG4 Codec
I recently received a video file with some material I am supposed to analyse. The problem was that I couldn’t figure out what type of codec was used. VLC told me that it uses a codec called GMP4. After some research I have found that this means an MPEG-4 codec developed by GeoVision. I have found a Windows version of this codec, but nothing for OS X. If anyone has any ideas, please shout out.
October 15, 2007
Flash Movie Conversion on OS X
Looking for a solution to make flash movies on OS X, I came across this nice tutorial based on FFMPEGX. In terms of video quality I prefer to create videos with H.264 compression using MPEG Streamclip, but since flash seems to be the de facto standard on the web these days I will try to use this for an upcoming project.
September 14, 2007
Video broadcasting
Vegard mentioned QuickTime Broadcaster in a blog entry yesterday. While QT broadcaster is certainly easy to set up and use, I have found it even easier to use some of the video broadcasting solutions in Max/MSP/Jitter. The jit.qt.broadcast object allows for QT streaming, but I have found the jit.broadcast object using RTSP to be somewhat more stable. Using Jitter also opens for all sorts of image manipulation, text overlays etc. as we are used to in the Max/MSP world.
April 5, 2007
Choosing the Right Video Format
The discussion about video standards for live processing has been summarised as:
- Codec: Motion.jpg (for interlaced footage) or Photo.jpg.
- Compression ratio/quality: Quality 80 is a decent baseline for .jpg, though you can crank as high as 97 to improve quality.
- Keyframes: Encode a keyframe on every frame so it’s ‘scratch-ready’.
- Alpha channels: For video containing alpha channels, PNG is the format of choice.
Sounds like more or less the same conclusion that has been reached in the Jitter forum, when this question comes up there once in a while.
January 29, 2007
Optical Illusions and Visual Phenomena
Talking about optical illusions, here is a bunch of them. It is great that many of them are made with JavaScript so that the effects can be changed.
http://youtube.com/w/?v=_dIya1aJJKA
November 1, 2006
Motiongrams
Challenge
Traditional keyframe displays of videos are not particularly useful when studying single-shot studio recordings of music-related movements, since they mainly show static postural information and no motion.
Using motion images of various kinds helps in visualizing what is going on in the image. Below can be seen (from left): motion image, with noise reduction, with edge detection, with “trails” and added to the original image.
Making Motiongrams
We are used to visualizing audio with spectrograms, and have been exploring different techniques for visualizing music-related movements in a similar manner.
November 1, 2006
Sony HDR-SR1
An interesting review of the new Sony HDR-SR1 HDD-based HD video camera. Except for the fact that there is no decent video software to edit this type of video format, and the lack of support for OS X, this looks like a great camera.
August 18, 2006
Lasse - Hyperactive
Lasse - Hyperactive is a very simple and low-cost videomusic production, but also very powerful and funny.
August 18, 2006
Moving towards HDD video cameras
I have been using the JVC Everio GZ-MC500, one of the first hard-drive-based video cameras with a decent price tag and OK features, for more than half a year, and my general impressions are very positive.
Positive things:
- No tapes!!!
- 3CCD, excellent for recording in dark concert/lecture halls
- Very small and handy
Negative things:
- No microphone/line input (this was a major drawback with this model, but luckily the built-in stereo microphone is not too bad…)
- Stores files in an MPEG-2 format, which is probably good for writing directly to DVD, but a hassle to work with on a computer (at least Macs) since they have to be re-encoded to something that is more easily playable in QuickTime.
June 21, 2006
ICMC papers
My paper entitled “Using motiongrams in the study of musical gestures” was accepted to ICMC 06 in New Orleans. The abstract is:
Navigating through hours of video material is often time-consuming, and it is similarly difficult to create good visualization of musical gestures in such a material. Traditional displays of time-sampled video frames are not particularly useful when studying single-shot studio recordings, since they present a series of still images and very little movement related information.
April 24, 2006
Visual Scratch
Jesse Kriss has developed Visual Scratch, a realtime visualization of scratch DJ performance, built using Processing, Max/MSP, Ms. Pinky, and MaxLink.
April 22, 2006
Palindrome
Found some interesting dance/performance examples at the web site of German/American performance company Palindrome. They are also developing the EyeCon video software for interactive performance.
March 29, 2006
Daniel Rozin Wooden Mirrors
Daniel Rozin has made some Wooden Mirrors from various materials. Any person standing in front of one of these pieces is instantly reflected on its surface. The mechanical mirrors all have video cameras, motors, and computers on board, and produce a soothing sound as the viewer interacts with them.
March 27, 2006
MøB
I’m participating in a workshop in Bergen, and got to meet Gisle Frøysland, who is developing MøB, software for installations and realtime manipulation of digital media in GNU/Linux-based networks. I am looking forward to seeing it in action during the course of the workshop.
March 24, 2006
Fogscreen
The Fogscreen is a new invention that makes objects seem to appear and move in thin air! It is a screen you can walk through! The FogScreen is created using a suspended fog-generating device; there is no frame around the screen. Installation is easy: just replace the conventional screen with FogScreen. You don’t need to change anything else - it works with standard video projectors. The fog is dry, so it doesn’t make you wet even if you stay under the FogScreen device for a long time.
March 17, 2006
sCrAmBlEd?HaCkZ!
sCrAmBlEd?HaCkZ! is a Realtime-Mind-Music-Video-Re-De-Construction-Machine. It is a conceptual software which makes it possible to work with samples in a completely new way by making them available in a manner that does justice to their nature as concrete musical memories.
February 20, 2006
dbv
dbv is a customizable VJ tool built with Max/MSP/Jitter. Simple, but with some nice implementation details. I particularly like the way it displays video thumbnails and adds extra pages if you have more videos than there is space for in the preview pane.
February 20, 2006
traer.physics
traer.physics is a particle system physics engine for the Processing programming environment. The user community of Processing seems to be growing rapidly these days, and from my few tests of the language it seems to be stable and efficient.
It would be interesting to see if it is possible to combine Processing with Max/MSP/Jitter. OSC is one option, but it would be nice if someone made a wrapper so that it would be possible to run Processing from a Max object.
February 5, 2006
Video Annotation Software
A short overview of various video annotation software:
- Anvil by Michael Kipp is a java-based program for storing several layers of annotations, like a text sequencer. Can only use avi files. Intended for gesture research (understood as gestures used when talking).
- Transana from University of Wisconsin, Madison, is developed mainly as a tool for transcribing and describing video and audio content. Seems like it is mainly intended for behavioural studies.
January 15, 2006
Converting MPEG-2 .MOD files
I have been struggling with figuring out the easiest way of converting MPEG-2 .MOD files coming out of a JVC Everio HD camera to something else, and finally found a good solution in Squared 5 - MPEG Streamclip which allows for converting these files to more or less all codecs that are available on the system. It is also a good idea to rename the .MOD files to .M2V or .
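The renaming step is easy to batch in the shell. The filenames below are made up stand-ins for the camera's files:

```shell
# Stand-ins for the camera's .MOD files.
touch MOV001.MOD MOV002.MOD
# Rename every .MOD file to .M2V so MPEG-2-aware tools recognise it.
for f in *.MOD; do
  mv -- "$f" "${f%.MOD}.M2V"
done
```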
December 14, 2005
MP4 to WMV
I have been struggling with creating video files that are easily playable on both OS X and Windows. Of course it is possible to make an avi with some “ancient” video codec, but that is not very tempting when the new H.264 codec is so nice. Of course, it would be nice if Windows users could use QuickTime, but for those who decline to do so, I found an easy way for converting MPEG-4 files to WMV.
November 29, 2005
JVC GZ-MC500
I have been thinking about buying a new video camera. As I am getting very tired of working with DV tapes, I was curious to check out some of the new HD cameras. It seems to still be a bit early, as I guess the market will change next year, although this JVC GZ-MC500 camcorder looks very sweet.
A 4GB microdrive seems too small, though, and it is unfortunately disqualified since it doesn’t sport a microphone input.
November 28, 2001
Master exam concert
Last week I performed my master exam concert at the Department of Music and Theatre, University of Oslo. The program consisted of improvisations for piano and live electronics. Different MIDI, audio, and video processing techniques were used. Here I describe the different pieces.
Performa It is incredible how many exciting sounds one can get from a piano, and mallets are a nice change from playing on the keys. The computer helps with temporal adjustments and background sounds.
Tag: note
July 5, 2023
Horizontal and Vertical Averaging is not the same
For my year-long StillStanding project I am generating videograms for all the scenes. Since there is not much motion in these 10-minute recordings, they typically look like stripes.
Looking at today’s recording of an unspectacular hotel room in Kongsberg, I noticed how different the horizontal and vertical videograms look:
It is fascinating how two averages of the same video recording can be so different. The explanation is simple: they are based on averaging in two different dimensions (horizontal and vertical).
Tag: terminology
July 5, 2023
Horizontal and Vertical Averaging is not the same
For my year-long StillStanding project I am generating videograms for all the scenes. Since there is not much motion in these 10-minute recordings, they typically look like stripes.
Looking at today’s recording of an unspectacular hotel room in Kongsberg, I noticed how different the horizontal and vertical videograms look:
It is fascinating how two averages of the same video recording can be so different. The explanation is simple: they are based on averaging in two different dimensions (horizontal and vertical).
July 4, 2023
Sound and Light vs Audio and Video
People often refer to “sound and video” as a concept pair. That is confusing because, in my thinking, “sound” and “video” refer to very different things. In this post, I will explain the difference.
Sound and Audio In a previous blog post, I have written about the difference between sound and audio. The short story is that “sound” refers to the physical phenomenon of vibrating molecules, such as sound waves moving through air.
March 21, 2023
Sound vs Audio
What is the difference between sound and audio? I often hear people confuse the terms. Here are a couple of ways of thinking about the difference.
A good summary can be found in this blog post:
Sound is vibrations through materials. Audio is the technology to hear sounds coming from natural or human-made sources.

Another good definition is that audio is electrical energy (active or potential) that represents sound. From this, a sound recording is stored as an audio file.
October 2, 2011
Difference between the terms movement and motion
Terminology is always challenging. I have previously written about definitions of action and gesture several times (e.g. here, here, and here) and in chapter 2 of the book Musical Gestures: Sound, Movement, and Meaning (Routledge, 2010).
Movement vs motion There are, however, two words/terms that I still find very challenging to define properly and to differentiate: movement and motion. In Norwegian, we only have one word (bevegelse) for describing movement/motion, which makes everything much simpler.
February 3, 2011
Analysis terminology
I was involved in a discussion about the difference between some terms that are frequently used: analysis, data processing, feature extraction, etc. To summarize my thoughts on how these terms are related, I made the little sketch below:
Rather than just storing it in my digital archive, I thought it might be useful for others, and could also hopefully lead to some interesting comments.
November 13, 2010
Music content processing
A dear child has many names, as the saying goes. In this call for a special journal issue on music and robotics, I see the use of the term music content processing (MCP). I have been around the larger music technology community for a while now, but I haven’t really thought of this as a concept in itself before.
Using Google as a reference, I see that “music content processing” returns 34,100 hits, so it is obviously being used quite extensively.
Tag: tone
July 5, 2023
Horizontal and Vertical Averaging is not the same
For my year-long StillStanding project I am generating videograms for all the scenes. Since there is not much motion in these 10-minute recordings, they typically look like stripes.
Looking at today’s recording of an unspectacular hotel room in Kongsberg, I noticed how different the horizontal and vertical videograms look:
It is fascinating how two averages of the same video recording can be so different. The explanation is simple: they are based on averaging in two different dimensions (horizontal and vertical).
Tag: audio
July 4, 2023
Sound and Light vs Audio and Video
People often refer to “sound and video” as a concept pair. That is confusing because, in my thinking, “sound” and “video” refer to very different things. In this post, I will explain the difference.
Sound and Audio In a previous blog post, I have written about the difference between sound and audio. The short story is that “sound” refers to the physical phenomenon of vibrating molecules, such as sound waves moving through air.
March 21, 2023
Sound vs Audio
What is the difference between sound and audio? I often hear people confuse the terms. Here are a couple of ways of thinking about the difference.
A good summary can be found in this blog post:
Sound is vibrations through materials. Audio is the technology to hear sounds coming from natural or human-made sources.

Another good definition is that audio is electrical energy (active or potential) that represents sound. From this, a sound recording is stored as an audio file.
December 31, 2022
365 Sound Actions
On 1 January this year, I set out to record one sound action per day. The idea was to test the action–sound theory from my book Sound Actions. One thing is writing about action–sound couplings and mappings; another is to see how the theory works with real-world examples. As I commented after one month, the project has been both challenging and inspiring. Below I write about some of my experiences, but first, here is the complete list:
August 9, 2022
Add fade-in and fade-out programmatically with FFmpeg
There is always a need to add fade-in and fade-out to audio tracks. Here is a way of doing it for a bunch of video files. It may come in handy with the audio normalization script I have shown previously. That script is based on continuously normalizing the audio, which may result in some noise at the beginning and end (because there is little or no sound in those parts, hence they are normalized more).
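A minimal sketch of the kind of FFmpeg command involved, assuming a 60-second file and 2-second fades; filenames and durations are placeholders, and the script only prints the command rather than running it:

```shell
# Build an FFmpeg command that adds audio fade-in/fade-out while
# stream-copying the video. All names and numbers are placeholders.
in=clip.mp4
out=clip_faded.mp4
dur=60    # total duration in seconds (assumed known, e.g. via ffprobe)
fade=2    # fade length in seconds
st=$((dur - fade))   # the fade-out starts $fade seconds before the end
cmd="ffmpeg -i $in -af afade=t=in:st=0:d=$fade,afade=t=out:st=$st:d=$fade -c:v copy $out"
echo "$cmd"   # inspect, then run with: eval "$cmd"
```

With `-c:v copy`, only the audio is re-encoded; the video stream passes through untouched.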
July 17, 2022
Video visualizations of mountain walking
After exploring some visualizations of kayaking, I was eager to see how a similar approach could work for walking. On a trip to the Norwegian mountains, specifically at Haugastøl, situated halfway between Oslo and Bergen, I strapped a GoPro Hero 10 Black on my chest and walked up and down a nearby hill called Storevarden. The walk was approximately 25 minutes up and down, and a fast-forward version of the video can be seen here:
July 13, 2022
Kayak motion analysis with video-based horizon leveling
Last year, I wrote about video-based motion analysis of kayaking. Those videos were recorded with a GoPro Hero 8 and I tested some of the video visualization methods of the Musical Gestures Toolbox for Python. This summer I am testing out some 360 cameras for my upcoming AMBIENT project. I thought I should take one of these, a GoPro Max, out for some kayaking in the Oslo fjord. Here are some impressions of the trip (and recording).
July 13, 2022
Removing audio hum using a highpass filter in FFmpeg
Today, I recorded Sound Action 194 - Rolling Dice as part of my year-long sound action project.
The idea has been to do as little processing as possible to the recordings. That is because I want to capture sounds and actions as naturally as possible. The recorded files will also serve as source material for both scientific and artistic explorations later. For that reason, I only trim the recordings non-destructively using FFmpeg.
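A sketch of such a highpass command; the 100 Hz cutoff and the filenames are my assumptions, not necessarily the values used for the recording, and the script only prints the command:

```shell
# Build an FFmpeg command that removes low-frequency hum with a highpass
# filter while copying the video stream. The 100 Hz cutoff is an assumption;
# mains hum sits at 50/60 Hz, so a cutoff around 80-120 Hz usually works.
in=sound-action.mp4
out=sound-action_hp.mp4
cutoff=100
cmd="ffmpeg -i $in -af highpass=f=$cutoff -c:v copy $out"
echo "$cmd"
```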
May 7, 2022
Running a disputation on YouTube
Last week, Ulf Holbrook defended his dissertation at RITMO. I was in charge of streaming the disputation, and here are some reflections on the technical setup and streaming.
Zoom Webinars vs YouTube Streaming I have previously written about running a hybrid disputation using a Zoom webinar. We have used variations of that setup also for other events. For example, last year, we ran RPPW as a hybrid conference. There are some benefits of using Zoom, particularly when having many presenters.
January 31, 2022
One month of sound actions
One month has passed of the year and my sound action project. I didn’t know how it would develop when I started and have found it both challenging and inspiring. It has also engaged people around me more than I had expected.
Each day I upload one new video recording to YouTube and post a link on Twitter. If you want to look at the whole collection, it is probably better to check out this playlist:
December 15, 2021
Kayaking motion analysis
Like many others, I bought a kayak during the pandemic, and I have had many nice trips in the Oslo fjord over the last year. Working at RITMO, I think a lot about rhythm these days, and the rhythmic nature of kayaking made me curious to investigate the pattern a little more.
Capturing kayaking motion My spontaneous investigations into kayak motion began with simply recording a short video of myself kayaking.
November 13, 2021
Releasing the Musical Gestures Toolbox for Python
After several years in the making, we finally “released” the Musical Gestures Toolbox for Python at the NordicSMC Conference this week. The toolbox is a collection of modules targeted at researchers working with video recordings.
Below is a short video in which Bálint Laczkó and I briefly describe the toolbox:
https://youtu.be/tZVX_lDFrwc
About MGT for Python The Musical Gestures Toolbox for Python includes video visualization techniques such as creating motion videos, motion history images, and motiongrams.
October 13, 2021
Converting a .WAV file to .AVI
Sometimes, there is a need to convert an audio file into a blank video file with an audio track. This can be useful if you are on a system that does not have a dedicated audio player but does have a video player (yes, rare, but I work with odd technologies…). Here is a quick recipe.
FFmpeg to the rescue When it comes to converting from one media format to another, I always turn to FFmpeg.
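One way of sketching this with FFmpeg is to generate a blank picture from the lavfi color source and mux it with the audio; the size, frame rate, and codec below are my assumptions, and the script only prints the command:

```shell
# Build an FFmpeg command that wraps a WAV file in a video container with a
# blank black picture. -shortest stops the video when the audio ends.
in=recording.wav
out=recording.avi
cmd="ffmpeg -f lavfi -i color=c=black:s=320x240:r=25 -i $in -shortest -c:a pcm_s16le $out"
echo "$cmd"
```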
June 17, 2021
Normalize audio in video files
We are organizing the Rhythm Production and Perception Workshop at RITMO next week. As mentioned in another blog post, we have asked presenters to send us pre-recorded videos. They are all available on the workshop page.
During the workshop, we will play sets of videos in sequence. When doing a test run today, we discovered that the sound levels differed wildly between files. There is clearly the need for normalizing the sound levels to create a good listener experience.
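A sketch of per-file loudness normalization with FFmpeg's loudnorm (EBU R128) filter; the -16 LUFS target and the filenames are assumptions, not necessarily what was used for the workshop files, and the function only prints the command:

```shell
# Print the FFmpeg command for loudness-normalizing one file, re-encoding
# the audio with the loudnorm filter while copying the video stream.
norm_cmd() {
  echo "ffmpeg -i $1 -af loudnorm=I=-16:TP=-1.5:LRA=11 -c:v copy norm_$1"
}
norm_cmd talk.mp4
```

Wrapping this in a `for f in *.mp4` loop would process a whole folder of presentation videos.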
May 11, 2021
Combining audio and video files with FFmpeg
When working with various types of video analysis, I often end up with video files without audio. So I need to add the audio track by copying either from the source video file or from a separate audio file. There are many ways of doing this. Many people would probably reach for a video editor, but the problem is that you would most likely end up recompressing both the audio and video file.
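A sketch of the stream-copy approach with FFmpeg: take the video from one file and the audio from another, copying both so nothing is recompressed. Filenames are placeholders, and the script only prints the command:

```shell
# Build an FFmpeg command that muxes the video of one file with the audio
# of another. -c copy means neither stream is re-encoded.
video=analysis.mp4   # silent video produced by the analysis
audio=source.mp4     # file holding the original audio track
out=combined.mp4
cmd="ffmpeg -i $video -i $audio -map 0:v -map 1:a -c copy $out"
echo "$cmd"
```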
March 18, 2021
Splitting audio files in the terminal
I have recently played with AudioStellar, a great tool for “sound object”-based exploration and musicking. It reminds me of CataRT, a great tool for concatenative synthesis. I used CataRT quite a lot previously, for example, in the piece Transformation. However, after I switched to Ubuntu and PD instead of OSX and Max, CataRT was no longer an option. So I got very excited when I discovered AudioStellar some weeks ago. It is lightweight and cross-platform and has some novel features that I would like to explore more in the coming weeks.
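One common way of splitting a long file into equal chunks in the terminal is FFmpeg's segment muxer; the chunk length and names below are placeholders, the original post may well use a different tool (sox can do this too), and the script only prints the command:

```shell
# Build an FFmpeg command that splits a long audio file into 10-second
# chunks without re-encoding, numbered chunk_000.wav, chunk_001.wav, ...
in=long-recording.wav
cmd="ffmpeg -i $in -f segment -segment_time 10 -c copy chunk_%03d.wav"
echo "$cmd"
```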
February 10, 2021
Some thoughts on microphones for streaming and recording
Many people have asked me about what types of microphones to use for streaming and recording. This is really a jungle, with lots of devices and things to think about. I have written some blog posts about such things previously, such as tips for doing Skype job interviews, testing simple camera/mic solutions, running a Hybrid Disputation, and how to work with plug-in-power microphones.
Earlier today I held a short presentation about microphones at RITMO.
January 24, 2021
Convert between video containers with FFmpeg
In my ever-growing collection of smart FFmpeg tricks, here is a way of converting from one container format to another. Here I will convert from a QuickTime (.mov) file to a standard MPEG-4 (.mp4), but the recipe should work between other formats too.
If you came here to just see the solution, here you go:
ffmpeg -i infile.mov -acodec copy -vcodec copy outfile.mp4
In the following, I will explain everything in a little more detail.
March 19, 2020
Convert MPEG-2 files to MPEG-4
This is a note to self, and could potentially also be useful to others in need of converting “old-school” MPEG-2 files into more modern MPEG-4 files using FFmpeg.
In the fourMs lab we have a bunch of Canon XF105 video cameras that record .MXF files with MPEG-2 compression. This is not a very useful format for other things we are doing, so I often have to recompress them to something else.
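A sketch of the recompression command, assuming H.264 video and AAC audio as the target; the CRF value and filenames are placeholders, and the script only prints the command:

```shell
# Build an FFmpeg command that recompresses an MPEG-2 .MXF file from the
# Canon XF105 to H.264/AAC in an MP4 container. CRF 23 is FFmpeg's default
# x264 quality; lower values mean better quality and larger files.
in=recording.MXF
out=recording.mp4
cmd="ffmpeg -i $in -c:v libx264 -crf 23 -c:a aac $out"
echo "$cmd"
```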
March 18, 2020
Simple tips for better video conferencing
Very many people are currently moving to video-based meetings. For that reason, I have written up some quick advice on how to improve your setup. This is based on my interview advice, but grouped differently.
Network
The first important thing is to have as good a network as you can. Video conferencing requires a lot of bandwidth, so even though your e-mail and regular browsing work fine, your connection may still not be sufficient for good video transmission.
February 21, 2020
Visualizing some videos from the AIST Dance Video Database
Researchers from AIST have released an open database of dance videos, and I got very excited to try out some visualization methods on some of the files. This was also a good chance to test out some new functionality in the Musical Gestures Toolbox for Matlab that we are developing at RITMO. The AIST collection contains a number of videos. I selected one hip-hop dance video based on a very steady rhythmic pattern, and a contemporary dance video that is more fluid in both motion and music.
February 14, 2020
Testing simple camera and microphone setups for quick interviews
We just started a new run of our free online course Music Moves. Here we have a tradition of recording wrap-up videos every Friday, in which some of the course educators answer questions from the learners. We have recorded these in many different ways, from using high-end cameras and microphones to just using a handheld phone. We have found that using multiple cameras and microphones is too time-consuming, both in setup and editing.
October 23, 2019
Tips for doing your job interview over Skype
I have been interviewing a lot of people for various types of university positions over the years. Most often these interviews are conducted using a video-conferencing system. Here I provide some tips to help people prepare for a video-based job interview:
We (and many others) typically use Skype for interviews, not because it is the best system out there (of commercial platforms I prefer Zoom), but because it is the most widespread solution.
November 25, 2018
Reflecting on some flipped classroom strategies
I was invited to talk about my experiences with flipped classroom methodologies at a seminar at the Faculty of Humanities last week. Preparing for the talk got me to revisit my own journey of working towards flipped teaching methodologies. This has also involved explorations of various types of audio/video recording. I will go through them in chronological order.
Podcasting Back in 2009–2011, I created “podcasts” of my lectures for a couple of semesters, such as in the course MUS2006 Music and Body Movements (which was at the time taught in Norwegian).
October 5, 2011
Audio recordings as motion capture
I spend a lot of time walking around the city with my daughter these days, and have been wondering how much I move and how the movement is distributed over time. To answer these questions, and to try out a method for easy and cheap motion capture, I decided to record today’s walk to the playground.
I could probably have recorded the accelerometer data in my phone, but I wanted to try an even more low-tech solution: an audio recorder.
October 11, 2010
AudioAnalysis v0.5
I am teaching a course in sound theory this semester, and therefore thought it was time to update a little program I developed several years ago, called SoundAnalysis. While there are many excellent sound analysis programs out there (SonicVisualiser, Praat, etc.), they all work on pre-recorded sound material. That is certainly the best approach to sound analysis, but it is not ideal in a pedagogical setting where you want to explain things in realtime.
August 9, 2010
Evaluating a semester of podcasting
Earlier this year I wrote a post about how I was going to try out podcasting during the course MUS2006 Musikk og bevegelse this spring semester. As I am preparing for new courses this fall, now is the time to evaluate my podcasting experience, and decide on whether I am going to continue doing this.
Why podcasting? The first question I should ask myself is why I would be interested in setting up a podcast from my lectures?
January 12, 2009
Triple boot on MacBook
I am back at work after a long vacation, and one of the first things I started doing this year was to reinstall several of my computers. There is nothing like a fresh start once in a while, with the added benefits of some extra hard disk space (not reinstalling all those programs I never use anyway) and performance benefits (incredible how fast a newly installed computer boots up!).
I have been testing Ubuntu on an Asus eee for a while, and have been impressed by how easy it was to install and use.
June 17, 2008
AudioVideoAnalysis
To allow everyone to watch their own synchronised spectrograms and motiongrams, I have made a small application called AudioVideoAnalysis.
Download AudioVideoAnalysis for OS X (8 MB)
It currently has the following features:
- Draws a spectrogram from any connected microphone
- Draws a motiongram/videogram from any connected camera
- Press the escape key to toggle fullscreen mode

Built with Max/MSP by Cycling ‘74 on OS X 10.5. I will probably make a Windows version at some point, but haven’t gotten that far yet.
November 28, 2001
Master exam concert
Last week I performed my master exam concert at the Department of Music and Theatre, University of Oslo. The program consisted of improvisations for piano and live electronics. Different MIDI, audio, and video processing techniques were used. Here I describe the different pieces.
Performa It is incredible how many exciting sounds one can get from a piano, and mallets are a nice change from playing on the keys. The computer helps with temporal adjustments and background sounds.
Tag: audition
July 4, 2023
Sound and Light vs Audio and Video
People often refer to “sound and video” as a concept pair. That is confusing because, in my thinking, “sound” and “video” refer to very different things. In this post, I will explain the difference.
Sound and Audio In a previous blog post, I have written about the difference between sound and audio. The short story is that “sound” refers to the physical phenomenon of vibrating molecules, such as sound waves moving through air.
Tag: light
July 4, 2023
Sound and Light vs Audio and Video
People often refer to “sound and video” as a concept pair. That is confusing because, in my thinking, “sound” and “video” refer to very different things. In this post, I will explain the difference.
Sound and Audio In a previous blog post, I have written about the difference between sound and audio. The short story is that “sound” refers to the physical phenomenon of vibrating molecules, such as sound waves moving through air.
Tag: sound
July 4, 2023
Sound and Light vs Audio and Video
People often refer to “sound and video” as a concept pair. That is confusing because, in my thinking, “sound” and “video” refer to very different things. In this post, I will explain the difference.
Sound and Audio In a previous blog post, I have written about the difference between sound and audio. The short story is that “sound” refers to the physical phenomenon of vibrating molecules, such as sound waves moving through air.
March 21, 2023
Sound vs Audio
What is the difference between sound and audio? I often hear people confuse the terms. Here are a couple of ways of thinking about the difference.
A good summary can be found in this blog post:
Sound is vibrations through materials. Audio is the technology to hear sounds coming from natural or human-made sources.

Another good definition is that audio is electrical energy (active or potential) that represents sound. From this, a sound recording is stored as an audio file.
January 31, 2022
One month of sound actions
One month has passed of the year and my sound action project. I didn’t know how it would develop when I started and have found it both challenging and inspiring. It has also engaged people around me more than I had expected.
Each day I upload one new video recording to YouTube and post a link on Twitter. If you want to look at the whole collection, it is probably better to check out this playlist:
November 19, 2021
Rigorous Empirical Evaluation of Sound and Music Computing Research
At the NordicSMC conference last week, I was part of a panel discussing the topic Rigorous Empirical Evaluation of SMC Research. This was the original description of the session:
The goal of this session is to share, discuss, and appraise the topic of evaluation in the context of SMC research and development. Evaluation is a cornerstone of every scientific research domain, but is a complex subject in our context due to the interdisciplinary nature of SMC coupled with the subjectivity involved in assessing creative endeavours.
July 1, 2021
Sound and Music Computing at the University of Oslo
This year’s Sound and Music Computing (SMC) Conference has opened for virtual lab tours. When we cannot travel to visit each other, this is a great way to showcase how things look and what we are working on.
Stefano Fasciani and I teamed up a couple of weeks ago to walk around some of the labs and studios at the Department of Musicology and RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion.
March 18, 2021
Splitting audio files in the terminal
I have recently played with AudioStellar, a great tool for “sound object”-based exploration and musicking. It reminds me of CataRT, a great tool for concatenative synthesis. I used CataRT quite a lot previously, for example, in the piece Transformation. However, after I switched to Ubuntu and PD instead of OSX and Max, CataRT was no longer an option. So I got very excited when I discovered AudioStellar some weeks ago. It is lightweight and cross-platform and has some novel features that I would like to explore more in the coming weeks.
January 28, 2021
Analyzing a double stroke drum roll
Yesterday, PhD fellow Mojtaba Karbassi presented his research on impedance control in robotic drumming at RITMO. I will surely get back to discussing more of his research later. Today, I wanted to share the analysis of one of the videos he showed. Mojtaba is working on developing a robot that can play a double stroke drum roll. To explain what this is, he showed this video he had found online, made by John Wooton:
January 7, 2021
How to work with plug-in-power microphones
I have never thought about how so-called plug-in-power microphones actually work. Over the years, I have used several of them for various applications, including small lavalier microphones for cameras and mobile phones. The nice thing about plug-and-play devices is that they are, well, plug and play. The challenge, however, is when they don’t work. Then it is time to figure out what is going on. This is the story of how I managed to use a Røde SmartLav+ lavalier microphone with a Zoom Q8 recorder.
September 28, 2018
Musical Gestures Toolbox for Matlab
Yesterday I presented the Musical Gestures Toolbox for Matlab in the late-breaking demo session at the ISMIR conference in Paris.
The Musical Gestures Toolbox for Matlab (MGT) aims at assisting music researchers with importing, preprocessing, analyzing, and visualizing video, audio, and motion capture data in a coherent manner within Matlab.
Most of the concepts in the toolbox are based on the Musical Gestures Toolbox that I first developed for Max more than a decade ago.
March 12, 2018
Nordic Sound and Music Computing Network up and running
I am super excited about our new Nordic Sound and Music Computing Network, which has just started up with funding from the Nordic Research Council.
This network brings together a group of internationally leading sound and music computing researchers from institutions in five Nordic countries: Aalborg University, Aalto University, KTH Royal Institute of Technology, University of Iceland, and University of Oslo. The network covers the field of sound and music from the “soft” to the “hard,” including the arts and humanities, and the social and natural sciences, as well as engineering, and involves a high level of technological competency.
October 16, 2017
Working with an Arduino Mega 2560 in Max
I am involved in a student project which uses some Arduino Mega 2560 sensor interfaces in an interactive device. It has been a while since I worked with Arduinos myself, as I am mainly working with Belas these days. Also, I have never worked with the Mega before, so I had to look around a little to figure out how to set it up with Cycling ‘74’s Max.
I have previously used Maxuino for interfacing Arduinos with Max.
September 11, 2017
Sverm-Resonans - Installation at Ultima Contemporary Music Festival
I am happy to announce the opening of our new interactive art installation at the Ultima Contemporary Music Festival 2017: Sverm-resonans.
Time and place: Sep. 12, 2017 12:30 PM - Sep. 14, 2017 3:30 PM, Sentralen
Conceptual information The installation is as much haptic as audible.
An installation that gives you access to heightened sensations of stillness, sound and vibration.
Stand still. Listen. Locate the sound. Move. Stand still. Listen. Hear the tension.
July 20, 2017
SMC paper based on data from the first Norwegian Championship of Standstill
We have carried out three editions of the Norwegian Championship of Standstill over the years, but it is only with the new resources in the MICRO project that we have finally been able to properly analyze all the data. The first publication coming out of the (growing) data set was published at SMC this year:
Reference: Jensenius, Alexander Refsum; Zelechowska, Agata & Gonzalez Sanchez, Victor Evaristo (2017). The Musical Influence on People’s Micromotion when Standing Still in Groups, In Tapio Lokki; Jukka Pa?
May 3, 2017
New publication: Sonic Microinteraction in the Air
I am happy to announce a new book chapter based on the artistic-scientific research in the Sverm and MICRO projects.
Citation: Jensenius, A. R. (2017). Sonic Microinteraction in “the Air.” In M. Lesaffre, P.-J. Maes, & M. Leman (Eds.), The Routledge Companion to Embodied Music Interaction (pp. 431–439). New York: Routledge.

Abstract: This chapter looks at some of the principles involved in developing conceptual methods and technological systems concerning sonic microinteraction, a type of interaction with sounds that is generated by bodily motion at a very small scale.
February 5, 2017
Music Moves on YouTube
We have been running our free online course Music Moves a couple of times on the FutureLearn platform. The course consists of a number of videos, as well as articles, quizzes, etc., all of which help create a great learning experience for the people that take part.
One great thing about the FutureLearn model (similar to Coursera, etc.) is that they focus on creating a complete course. There are many benefits to such a model, not least to create a virtual student group that interact in a somewhat similar way to campus students.
January 24, 2016
New MOOC: Music Moves
Together with several colleagues, and with great practical and economic support from the University of Oslo, I am happy to announce that we will soon kick off our first free online course (a so-called MOOC) called Music Moves.
Music Moves: Why Does Music Make You Move? Learn about the psychology of music and movement, and how researchers study music-related movements, with this free online course.
[Go to course – starts 1 Feb](https://www.
July 15, 2013
New publication: An Action-Sound Approach to Teaching Interactive Music
My paper titled An action–sound approach to teaching interactive music has recently been published by Organised Sound. The paper is based on some of the theoretical ideas on action-sound couplings developed in my PhD, combined with how I designed the course Interactive Music based on such an approach to music technology.
Abstract: The conceptual starting point for an ‘action–sound approach’ to teaching music technology is the acknowledgment of the couplings that exist in acoustic instruments between sounding objects, sound-producing actions, and the resultant sounds themselves.
June 3, 2013
Analyzing correspondence between sound objects and body motion
New publication:
Title: Analyzing correspondence between sound objects and body motion

Authors: Kristian Nymoen, Rolf Inge Godøy, Alexander Refsum Jensenius, and Jim Tørresen

The article has now been published in ACM Transactions on Applied Perception.

Abstract:
Links between music and body motion can be studied through experiments called sound-tracing. One of the main challenges in such research is to develop robust analysis techniques that are able to deal with the multidimensional data that musical sound and body motion present.
December 13, 2012
Performing with the Norwegian Noise Orchestra
Yesterday, I performed with the Norwegian Noise Orchestra at Betong in Oslo, at a concert organised by Dans for Voksne. The orchestra is an ad-hoc group of noisy improvisers, and I immediately felt at home. The performance lasted for 12 hours, from noon to midnight, and I performed for two hours in the afternoon.
For the performance I used my Soniperforma patch based on the sonifyer technique and the Jamoma module I developed a couple of years ago (jmod.
September 11, 2012
McLaren's Dots
I am currently working on some extensions to my motiongram-sonifyer, and came across this beautiful little film by Norman McLaren from 1940:
The sounds heard in the film are entirely synthetic, created by drawing in the sound-track part of the film. McLaren explained this in a 1951 BBC interview:
I draw a lot of little lines on the sound-track area of the 35-mm. film. Maybe 50 or 60 lines for every musical note.
June 6, 2012
Sound files from MA thesis
Edit: These files are now more easily accessible from my UiO page.
While preparing a lecture for the PhD students at the Norwegian Academy of Music, I came across some of the sound files I created for my MA thesis on salience in (musical) sound perception. While the content of that thesis is now mainly of historical interest, I had a good time listening to the sound examples again.
October 31, 2010
New screencast on the basics of creating reverb in PD
I have written about my making of a series of screencasts of basic sound synthesis in Pure Data in an earlier blog post. The latest addition to the series shows how to build a patch in which a simple impulse response, combined with a delay, a feedback loop and a low-pass filter, can be used to simulate reverberation. In fact, depending on the settings, this patch can also be used for making phaser, flanger, chorus and echo effects.
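The patch itself is not reproduced here, but the core idea, a delay line with a low-passed feedback loop, can be sketched in a few lines of Python. This is a feedback comb filter of my own, not a reconstruction of the actual Pd patch; all names and parameter values are assumptions:

```python
def comb_reverb(x, delay, feedback=0.6, damp=0.4):
    """Feedback comb filter with a one-pole low-pass in the loop.

    Short delays with high feedback give flanger/chorus-like colouring;
    longer delays give discrete, darkening echoes; several such combs
    in parallel approximate reverberation.
    """
    y = [0.0] * len(x)
    lp = 0.0  # state of the one-pole low-pass in the feedback path
    for n in range(len(x)):
        delayed = y[n - delay] if n >= delay else 0.0
        lp = (1 - damp) * delayed + damp * lp  # low-pass the feedback
        y[n] = x[n] + feedback * lp
    return y

# Feeding an impulse through the filter shows decaying echoes
# spaced `delay` samples apart:
impulse = [1.0] + [0.0] * 99
out = comb_reverb(impulse, delay=25)
```

Changing `delay` and `feedback` moves the same structure between echo, flanger-like and reverb-like behaviour, which is the point the screencast makes about the Pd patch.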
October 25, 2010
Music is not only sound
After working with music-related movements for some years, and thereby arguing that movement is an integral part of music, I tend to react when people use “music” as a synonym for either “score” or “sound”.
I certainly agree that sound is an important part of music, and that scores (if they exist) are related to both musical sound and music in general. But I do not agree that music is sound.
October 11, 2010
AudioAnalysis v0.5
I am teaching a course in sound theory this semester, and therefore thought it was time to update a little program I developed several years ago, called SoundAnalysis. While there are many excellent sound analysis programs out there (SonicVisualiser, Praat, etc.), they all work on pre-recorded sound material. That is certainly the best approach to sound analysis, but it is not ideal in a pedagogical setting where you want to explain things in realtime.
September 3, 2010
PD introductions in Norwegian on YouTube
I am teaching two courses this semester:
- Sound theory 1 (in English)
- Sound analysis (in Norwegian, together with Rolf Inge Godøy)

In both courses I use Pure Data (PD) for demonstrating various interesting phenomena (additive synthesis, beating, critical bands, etc.), and the students also get various assignments to explore such things themselves. There are several PD introduction videos on YouTube in English, but I found that it could be useful to also have something in Norwegian.
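One of the phenomena mentioned, beating, is easy to demonstrate numerically: two sines at nearby frequencies sum to a tone whose amplitude swells and fades at the difference frequency. A minimal Python sketch (the sample rate and frequencies are my own choices, not from the course):

```python
import math

def beating(f1, f2, sr=8000, dur=1.0):
    """Sum two equal-amplitude sines; the envelope beats at |f2 - f1| Hz."""
    n = int(sr * dur)
    return [math.sin(2 * math.pi * f1 * t / sr) +
            math.sin(2 * math.pi * f2 * t / sr) for t in range(n)]

# 440 Hz + 444 Hz gives 4 beats per second: the sum approaches
# amplitude 2 when the sines align and cancels when they oppose.
sig = beating(440, 444)
peak = max(abs(s) for s in sig)
```

At a quarter of a beat period (0.125 s here) the two components are in antiphase and the signal momentarily vanishes, which is exactly what students hear as the "wah-wah" of beating.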
Tag: vision
July 4, 2023
Sound and Light vs Audio and Video
People often refer to “sound and video” as a concept pair. That is confusing because, in my thinking, “sound” and “video” refer to very different things. In this post, I will explain the difference.
Sound and Audio In a previous blog post, I have written about the difference between sound and audio. The short story is that “sound” refers to the physical phenomenon of vibrating molecules, such as sound waves moving through air.
Tag: 360
July 1, 2023
Half a year of standing still
Today, I am halfway through my year-long #StillStanding project. Not so much has changed since I summed up the first 100 days. I still enjoy the experience, and there are new things to learn every day.
Here is a 10-minute video I have recorded that presents the project, explains its rationale, and reflects upon some experiences so far:
The biggest challenge moving forward is finding new spaces every day. I have already stood in the most accessible spaces, so I need to spend more time looking for unexplored rooms both at the university and close to my home.
May 25, 2023
Understanding the GoPro Max's File Formats
I use a GoPro Max 360-degree camera in my annual #StillStanding project. That means that I also have had an excellent chance to work with GoPro files and try to understand their inner logic. In this blog post, I will summarize some of my findings.
What is recorded? Recording “a video” with a GoPro Max results in recording multiple files. For example, each of my daily 10-minute recordings ends up with something like this:
April 10, 2023
100 Days and Still Standing
Today marks the 100th day of my annual #StillStanding project. In this blog post, I summarize some of my experiences so far.
Endurance Some people questioned whether I would be able to stand still every single day for an entire year. But, hey, it is only ten minutes (out of 1440) per day, and even though my life as a centre director is busy, it is always possible to find time for a standstill sometime during the day.
April 1, 2023
Making 2D Images from 360-degree Videos
For my annual Still Standing project, I am recording 360 videos with audio and sensor data while standing still for 10 minutes.
I have started exploring how to visualize the sensor data best. Today, I am looking into visualization strategies for 360-degree images. I have written about how to pre-process 360-degree videos from Garmin VIRB and Ricoh Theta cameras previously.
The Theta records in a dual fisheye format like this:
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT Although I am happy about completing my sound actions project, I have enjoyed the ritual of doing something every day.
July 13, 2022
Kayak motion analysis with video-based horizon leveling
Last year, I wrote about video-based motion analysis of kayaking. Those videos were recorded with a GoPro Hero 8 and I tested some of the video visualization methods of the Musical Gestures Toolbox for Python. This summer I am testing out some 360 cameras for my upcoming AMBIENT project. I thought I should take one of these, a GoPro Max, out for some kayaking in the Oslo fjord. Here are some impressions of the trip (and recording).
December 21, 2021
Pre-processing Garmin VIRB 360 recordings with FFmpeg
I have previously written about how it is possible to “flatten” a Ricoh Theta+ recording using FFmpeg. Now, I have spent some time exploring how to process some recordings from a Garmin VIRB camera.
Some hours of recordings The starting point was a bunch of recordings from our recent MusicLab Copenhagen featuring the amazing Danish String Quartet. A team of RITMO researchers went to Copenhagen and captured the quartet in both rehearsal and performance.
March 15, 2020
Flattening Ricoh Theta 360-degree videos using FFmpeg
I am continuing my explorations of the great terminal-based video tool FFmpeg. Now I wanted to see if I could “flatten” a 360-degree video recorded with a Ricoh Theta camera. These cameras contain two fisheye lenses, capturing two 180-degree videos next to each other. This results in video files like the one I show a screenshot of below.
These files are not very useful to watch or work with, so we need to somehow “flatten” them into a more meaningful video file.
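The post's actual command is not included in this excerpt. FFmpeg's `v360` filter can do this kind of remapping from dual fisheye to an equirectangular ("flattened") projection; here is a sketch of such a command assembled in Python (the filenames and the 190-degree fisheye field of view are assumptions, not values from the post):

```python
import shlex

def flatten_dual_fisheye(infile, outfile):
    """Build an FFmpeg command that remaps a dual-fisheye recording to
    an equirectangular projection using the v360 filter."""
    vf = "v360=input=dfisheye:output=equirect:ih_fov=190:iv_fov=190"
    return ["ffmpeg", "-i", infile, "-vf", vf, "-c:a", "copy", outfile]

cmd = flatten_dual_fisheye("theta.mp4", "theta_equirect.mp4")
print(shlex.join(cmd))
```

Run the returned list with `subprocess.run(cmd, check=True)`; the audio is stream-copied since only the video needs remapping.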
Tag: ambisonics
July 1, 2023
Half a year of standing still
Today, I am halfway through my year-long #StillStanding project. Not so much has changed since I summed up the first 100 days. I still enjoy the experience, and there are new things to learn every day.
Here is a 10-minute video I have recorded that presents the project, explains its rationale, and reflects upon some experiences so far:
The biggest challenge moving forward is finding new spaces every day. I have already stood in the most accessible spaces, so I need to spend more time looking for unexplored rooms both at the university and close to my home.
April 10, 2023
100 Days and Still Standing
Today marks the 100th day of my annual #StillStanding project. In this blog post, I summarize some of my experiences so far.
Endurance Some people questioned whether I would be able to stand still every single day for an entire year. But, hey, it is only ten minutes (out of 1440) per day, and even though my life as a centre director is busy, it is always possible to find time for a standstill sometime during the day.
Tag: gopro
July 1, 2023
Half a year of standing still
Today, I am halfway through my year-long #StillStanding project. Not so much has changed since I summed up the first 100 days. I still enjoy the experience, and there are new things to learn every day.
Here is a 10-minute video I have recorded that presents the project, explains its rationale, and reflects upon some experiences so far:
The biggest challenge moving forward is finding new spaces every day. I have already stood in the most accessible spaces, so I need to spend more time looking for unexplored rooms both at the university and close to my home.
May 25, 2023
Understanding the GoPro Max's File Formats
I use a GoPro Max 360-degree camera in my annual #StillStanding project. That means that I also have had an excellent chance to work with GoPro files and try to understand their inner logic. In this blog post, I will summarize some of my findings.
What is recorded? Recording “a video” with a GoPro Max results in recording multiple files. For example, each of my daily 10-minute recordings ends up with something like this:
April 10, 2023
100 Days and Still Standing
Today marks the 100th day of my annual #StillStanding project. In this blog post, I summarize some of my experiences so far.
Endurance Some people questioned whether I would be able to stand still every single day for an entire year. But, hey, it is only ten minutes (out of 1440) per day, and even though my life as a centre director is busy, it is always possible to find time for a standstill sometime during the day.
April 1, 2023
Making 2D Images from 360-degree Videos
For my annual Still Standing project, I am recording 360 videos with audio and sensor data while standing still for 10 minutes.
I have started exploring how to visualize the sensor data best. Today, I am looking into visualization strategies for 360-degree images. I have written about how to pre-process 360-degree videos from Garmin VIRB and Ricoh Theta cameras previously.
The Theta records in a dual fisheye format like this:
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT Although I am happy about completing my sound actions project, I have enjoyed the ritual of doing something every day.
July 17, 2022
Video visualizations of mountain walking
After exploring some visualizations of kayaking, I was eager to see how a similar approach could work for walking. On a trip to the Norwegian mountains, at Haugastøl, halfway between Oslo and Bergen, I strapped a GoPro Hero 10 Black on my chest and walked up and down a nearby hill called Storevarden. The walk took approximately 25 minutes up and down, and a fast-forward version of the video can be seen here:
July 13, 2022
Kayak motion analysis with video-based horizon leveling
Last year, I wrote about video-based motion analysis of kayaking. Those videos were recorded with a GoPro Hero 8 and I tested some of the video visualization methods of the Musical Gestures Toolbox for Python. This summer I am testing out some 360 cameras for my upcoming AMBIENT project. I thought I should take one of these, a GoPro Max, out for some kayaking in the Oslo fjord. Here are some impressions of the trip (and recording).
March 31, 2022
Merge multiple MP4 files
I have been doing several long recordings with GoPro cameras recently. The cameras automatically split the recordings into 4GB files, which leaves me with a myriad of files to work with. I have therefore made a script to help with the pre-processing of the files.
This is somewhat similar to the script I made to convert MXF files to MP4, but with better handling of the temp file for storing information about the files to merge:
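The script itself is cut off in this excerpt, but the approach it describes, writing the list of parts to a temp file and handing it to FFmpeg's concat demuxer, can be sketched as follows (a minimal Python version of my own, not the original script):

```python
import glob
import tempfile

def merge_mp4s(pattern, outfile):
    """Build an FFmpeg concat-demuxer command that merges split
    recordings losslessly (-c copy avoids re-encoding, which works
    because all parts share the same codec settings)."""
    files = sorted(glob.glob(pattern))  # GoPro numbers parts in order
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for name in files:
            f.write(f"file '{name}'\n")  # concat demuxer list format
        listfile = f.name
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", listfile, "-c", "copy", outfile]
```

Run the returned command with `subprocess.run(cmd, check=True)`; `-safe 0` is needed because the temp file contains absolute paths.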
December 21, 2021
Pre-processing Garmin VIRB 360 recordings with FFmpeg
I have previously written about how it is possible to “flatten” a Ricoh Theta+ recording using FFmpeg. Now, I have spent some time exploring how to process some recordings from a Garmin VIRB camera.
Some hours of recordings The starting point was a bunch of recordings from our recent MusicLab Copenhagen featuring the amazing Danish String Quartet. A team of RITMO researchers went to Copenhagen and captured the quartet in both rehearsal and performance.
December 15, 2021
Kayaking motion analysis
Like many others, I bought a kayak during the pandemic, and I have had many nice trips in the Oslo fiord over the last year. Working at RITMO, I think a lot about rhythm these days, and the rhythmic nature of kayaking made me curious to investigate the pattern a little more.
Capturing kayaking motion My spontaneous investigations into kayak motion began with simply recording a short video of myself kayaking.
January 2, 2021
Create timelapse video from images with FFmpeg
I take a lot of timelapse shots with a GoPro camera. Usually, I do this with the camera’s photo setting instead of the video setting. That is because I find it easier to delete unwanted pictures from the series that way. It also simplifies selecting individual photos when I want that. But then I need a way to create a timelapse video from the photos easily.
Here is an FFmpeg one-liner that does the job:
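The one-liner itself is missing from this excerpt; a command of roughly this shape would do the job (assembled in Python here; the framerate, glob pattern, and codec settings are my assumptions, not the post's values):

```python
import shlex

def timelapse_cmd(pattern="*.JPG", fps=30, outfile="timelapse.mp4"):
    """Build an FFmpeg command that encodes a series of photos into a
    timelapse video at a fixed framerate."""
    return ["ffmpeg", "-framerate", str(fps),       # input rate: photos/sec
            "-pattern_type", "glob", "-i", pattern,  # read images by glob
            "-c:v", "libx264", "-pix_fmt", "yuv420p", outfile]

print(shlex.join(timelapse_cmd()))
```

Deleting unwanted photos before running this is exactly the workflow described above: the encoder simply uses whatever images remain in the series.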
February 14, 2020
Testing simple camera and microphone setups for quick interviews
We just started a new run of our free online course Music Moves. Here we have a tradition of recording wrap-up videos every Friday, in which some of the course educators answer questions from the learners. We have recorded these in many different ways, from using high-end cameras and microphones to just using a handheld phone. We have found that using multiple cameras and microphones is too time-consuming, both in setup and editing.
Tag: micromotion
July 1, 2023
Half a year of standing still
Today, I am halfway through my year-long #StillStanding project. Not so much has changed since I summed up the first 100 days. I still enjoy the experience, and there are new things to learn every day.
Here is a 10-minute video I have recorded that presents the project, explains its rationale, and reflects upon some experiences so far:
The biggest challenge moving forward is finding new spaces every day. I have already stood in the most accessible spaces, so I need to spend more time looking for unexplored rooms both at the university and close to my home.
April 10, 2023
100 Days and Still Standing
Today marks the 100th day of my annual #StillStanding project. In this blog post, I summarize some of my experiences so far.
Endurance Some people questioned whether I would be able to stand still every single day for an entire year. But, hey, it is only ten minutes (out of 1440) per day, and even though my life as a centre director is busy, it is always possible to find time for a standstill sometime during the day.
February 16, 2022
Completing the MICRO project
I wrote up the final report on the project MICRO - Human Bodily Micromotion in Music Perception and Interaction before Christmas. Now I finally got around to wrapping up the project pages. With the touch of a button, the project’s web page now says “completed”. But even though the project is formally over, its results will live on.
Aims and objectives The MICRO project sought to investigate the close relationships between musical sound and human bodily micromotion.
April 22, 2020
New publication: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music
After several years of hard work, we are very happy to announce a new publication coming out of the MICRO project that I am leading: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music (Frontiers in Psychology).
This is the first journal article by my PhD student Agata Zelechowska, and it reports on a standstill study conducted a couple of years ago. It is slightly different from the paradigm we have used for the Championships of Standstill.
August 7, 2018
New article: Correspondences Between Music and Involuntary Human Micromotion During Standstill
I am happy to announce a new journal article coming out of the MICRO project:
Victor E. Gonzalez-Sanchez, Agata Zelechowska and Alexander Refsum Jensenius
Correspondences Between Music and Involuntary Human Micromotion During Standstill
Front. Psychol., 07 August 2018 | https://doi.org/10.3389/fpsyg.2018.01382
Abstract: The relationships between human body motion and music have been the focus of several studies characterizing the correspondence between voluntary motion and various sound features. The study of involuntary movement to music, however, is still scarce.
May 3, 2017
New publication: Sonic Microinteraction in the Air
I am happy to announce a new book chapter based on the artistic-scientific research in the Sverm and MICRO projects.
Citation: Jensenius, A. R. (2017). Sonic Microinteraction in “the Air.” In M. Lesaffre, P.-J. Maes, & M. Leman (Eds.), The Routledge Companion to Embodied Music Interaction (pp. 431–439). New York: Routledge.
Abstract: This chapter looks at some of the principles involved in developing conceptual methods and technological systems concerning sonic microinteraction, a type of interaction with sounds that is generated by bodily motion at a very small scale.
April 13, 2017
New publication: Exploring music-related micromotion
I am happy to announce the publication of a new anthology that I have contributed a chapter to:
Jensenius, A. R. (2017). Exploring music-related micromotion. In C. Wöllner (Ed.), Body, Sound and Space in Music and Beyond: Multimodal Explorations (pp. 29–48). Routledge.
The chapter does not have an abstract, but the opening paragraph summarizes the content quite well:
As living human beings we are constantly in motion.
March 13, 2016
New project Funding: MICRO!
I am happy to announce that I have received funding from the Norwegian Research Council’s program Young Research Talents for the project MICRO - Human Bodily Micromotion in Music Perception and Interaction. This is a 4-year project, and I will be looking for both a PhD and a postdoctoral fellow to join the team. The call will be out later this year, but please do not hesitate to contact me right away if you are interested.
June 2, 2015
New publication: Microinteraction in Music/Dance Performance
This week I am participating at the NIME conference (New Interfaces for Musical Expression), organised at Louisiana State University, Baton Rouge, LA. I am doing some administrative work as chair of the NIME steering committee, and I was happy to present a paper yesterday:
Title
Microinteraction in Music/Dance Performance
Abstract
This paper presents the scientific-artistic project Sverm, which has focused on the use of micromotion and microsound in artistic practice. Starting from standing still in silence, the artists involved have developed conceptual and experiential knowledge of microactions, microsounds and the possibilities of microinteracting with light and sound.
Tag: mobile phone
July 1, 2023
Half a year of standing still
Today, I am halfway through my year-long #StillStanding project. Not so much has changed since I summed up the first 100 days. I still enjoy the experience, and there are new things to learn every day.
Here is a 10-minute video I have recorded that presents the project, explains its rationale, and reflects upon some experiences so far:
The biggest challenge moving forward is finding new spaces every day. I have already stood in the most accessible spaces, so I need to spend more time looking for unexplored rooms both at the university and close to my home.
April 10, 2023
100 Days and Still Standing
Today marks the 100th day of my annual #StillStanding project. In this blog post, I summarize some of my experiences so far.
Endurance Some people questioned whether I would be able to stand still every single day for an entire year. But, hey, it is only ten minutes (out of 1440) per day, and even though my life as a centre director is busy, it is always possible to find time for a standstill sometime during the day.
January 3, 2023
Testing Mobile Phone Motion Sensors
For my annual Still Standing project, I am recording sensor data from my mobile phone while standing still for 10 minutes at a time. This is a highly curiosity-driven and data-based project, and part of the exploration is to figure out what I can get out of the sensors. I have started sharing graphs of the linear acceleration of my sessions with the tag #StillStanding on Mastodon. However, I wondered if this is the sensor data that best represents the motion.
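For readers wanting to try something similar: a common first step when comparing such sensor streams is to collapse each three-axis signal into a magnitude and summarize it with a single number. A minimal sketch (the helper names are my own, not from any app or toolbox mentioned here):

```python
import math

def magnitude(samples):
    """Collapse 3-axis accelerometer samples (x, y, z) into a single
    magnitude series, so streams can be compared on one scale."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

def quantity_of_motion(samples):
    """A crude scalar summary: mean absolute deviation of the magnitude,
    so a constant offset such as gravity does not dominate the result."""
    mags = magnitude(samples)
    mean = sum(mags) / len(mags)
    return sum(abs(m - mean) for m in mags) / len(mags)

# A phone lying still yields a value near zero; any swaying while
# standing shows up as a larger value.
```

Running this on both the raw accelerometer stream and the derived linear-acceleration stream is one way to check which of them best reflects the actual motion.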
August 7, 2022
Analyzing Recordings of a Mobile Phone Lying Still
What is the background “noise” in the sensors of a mobile phone? In the fourMs Lab, we have a tradition of testing the noise levels of various devices. Over the last few years, we have been using mobile phones in multiple experiments, including the MusicLab app that has been used in public research concerts, such as MusicLab Copenhagen.
I have yet to conduct a systematic study of many mobile phones lying still, but today I tried recording my phone—a Samsung Galaxy Ultra S21—lying still on the table for ten minutes.
Tag: standstill
July 1, 2023
Half a year of standing still
Today, I am halfway through my year-long #StillStanding project. Not so much has changed since I summed up the first 100 days. I still enjoy the experience, and there are new things to learn every day.
Here is a 10-minute video I have recorded that presents the project, explains its rationale, and reflects upon some experiences so far:
The biggest challenge moving forward is finding new spaces every day. I have already stood in the most accessible spaces, so I need to spend more time looking for unexplored rooms both at the university and close to my home.
April 10, 2023
100 Days and Still Standing
Today marks the 100th day of my annual #StillStanding project. In this blog post, I summarize some of my experiences so far.
Endurance Some people questioned whether I would be able to stand still every single day for an entire year. But, hey, it is only ten minutes (out of 1440) per day, and even though my life as a centre director is busy, it is always possible to find time for a standstill sometime during the day.
December 30, 2022
Adding Title and Author to PDFs exported from Jupyter Notebook
I am doing some end-of-year cleaning on my hard drive and just uploaded the Jupyter Notebook I used in the analysis of a mobile phone lying still earlier this year.
For some future studies, I thought it would be interesting to explore the PDF export functionality from Jupyter. That worked very well, except that I didn’t get any title or author name at the top:
Then I found a solution on Stack Overflow.
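The gist of that kind of solution is to add title and author fields to the notebook's top-level metadata, which nbconvert's LaTeX template can pick up when exporting to PDF. A small helper of my own sketching the idea (the exact metadata keys are an assumption; verify against your nbconvert version):

```python
import json

def set_pdf_metadata(path, title, author):
    """Write title/author into a notebook's top-level metadata so that
    nbconvert's LaTeX/PDF export can show them on the first page."""
    with open(path) as f:
        nb = json.load(f)
    nb["metadata"]["title"] = title
    nb["metadata"]["authors"] = [{"name": author}]  # assumed key layout
    with open(path, "w") as f:
        json.dump(nb, f, indent=1)
```

After patching the file, export as usual with `jupyter nbconvert --to pdf notebook.ipynb`.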
August 24, 2022
Still Standing Manuscript in Preparation
I sent off the final proofs for my Sound Actions book before the summer. I don’t know when it will actually be published, but since it is off my table, I have had time to work on new projects.
My new project AMBIENT will start soon, but I still haven’t been able to write up all the results from my two projects on music-related micro-motion: Sverm and MICRO. This will be the topic of the book I have started writing this summer, with the working title Still Standing: Exploring Human Micromotion.
August 7, 2022
Analyzing Recordings of a Mobile Phone Lying Still
What is the background “noise” in the sensors of a mobile phone? In the fourMs Lab, we have a tradition of testing the noise levels of various devices. Over the last few years, we have been using mobile phones in multiple experiments, including the MusicLab app that has been used in public research concerts, such as MusicLab Copenhagen.
I have yet to conduct a systematic study of many mobile phones lying still, but today I tried recording my phone—a Samsung Galaxy Ultra S21—lying still on the table for ten minutes.
January 7, 2022
Try not to headbang challenge
I recently came across a video of the so-called Try not to headbang challenge, where the idea is to, well, not to headbang while listening to music. This immediately caught my attention. After all, I have been researching music-related micromotion over the last years and have run the Norwegian Championship of Standstill since 2012.
Here is an example of Nath & Johnny trying the challenge:
https://www.youtube.com/watch?v=-I4CBsDT37I

As seen in the video, they are doing OK, although they are far from sitting still.
August 7, 2018
New article: Correspondences Between Music and Involuntary Human Micromotion During Standstill
I am happy to announce a new journal article coming out of the MICRO project:
Victor E. Gonzalez-Sanchez, Agata Zelechowska and Alexander Refsum Jensenius
Correspondences Between Music and Involuntary Human Micromotion During Standstill
Front. Psychol., 07 August 2018 | https://doi.org/10.3389/fpsyg.2018.01382
Abstract: The relationships between human body motion and music have been the focus of several studies characterizing the correspondence between voluntary motion and various sound features. The study of involuntary movement to music, however, is still scarce.
July 20, 2017
SMC paper based on data from the first Norwegian Championship of Standstill
We have carried out three editions of the Norwegian Championship of Standstill over the years, but only with the new resources in the MICRO project have we finally been able to properly analyze all the data. The first publication coming out of the (growing) data set was published at SMC this year:
Reference: Jensenius, Alexander Refsum; Zelechowska, Agata & Gonzalez Sanchez, Victor Evaristo (2017). The Musical Influence on People’s Micromotion when Standing Still in Groups, In Tapio Lokki; Jukka Pa?
March 13, 2016
New project Funding: MICRO!
I am happy to announce that I have received funding from the Norwegian Research Council’s program Young Research Talents for the project MICRO - Human Bodily Micromotion in Music Perception and Interaction. This is a 4-year project, and I will be looking for both a PhD and a postdoctoral fellow to join the team. The call will be out later this year, but please do not hesitate to contact me right away if you are interested.
May 1, 2014
New publication: How still is still? Exploring human standstill for artistic applications
I am happy to announce a new publication titled How still is still? Exploring human standstill for artistic applications (PDF of preprint), published in the International Journal of Arts and Technology. The paper is based on the Sverm project and was written and accepted two years ago. Sometimes academic publishing takes absurdly long, and this is an example of that, but I am happy that the publication is finally out in the wild.
July 13, 2012
Paper #2 at SMC 2012: Noise level in IR mocap systems
Yesterday I presented a paper on motiongrams at the Sound and Music Computing conference in Copenhagen. Today I will present the paper A study of the noise-level in two infrared marker-based motion capture systems. This is a quite nerdy, in-depth study of the noise-level of two of our motion capture systems.
Abstract
With musical applications in mind, this paper reports on the level of noise observed in two commercial infrared marker-based motion capture systems: one high-end (Qualisys) and one affordable (OptiTrack).
March 6, 2012
Norwegian Championship in standstill
On Thursday we are organising the first Norwegian Championship of Standstill at the University of Oslo. This is part of the University’s Open Day, a day when potential new students can come and see what happens on campus.
Besides the competitive part, the championship is (of course) a great way to gather more data about how people stand still. The art of standing still is something that has been a great interest of mine for the last year or so, and I have been carrying out different types of smaller experiments to understand more about the micromovements observed when standing still.
November 10, 2011
Motionlessness
Yesterday Miles Phillips suggested that the word “motionlessness” may be what I am after when it comes to describing the act of standing still. He further pointed me to a web site with a list of the world records for motionlessness. The rules for competing in motionlessness are as follows:
The record is for continuously standing motionless. You must stand: sitting is not allowed. No facial movements are allowed other than the involuntary blinking of the eye.
October 26, 2011
The act of standing still: stillness or standstill?
*Plots of a neck marker from a 10-minute recording of standing still.*
As mentioned previously (here and here), I have been doing some experiments on standing still in silence. One thing is to do it, another is to talk (or write) about it. Then I need to have words describing what I have been doing.
To start with the simple: the word silence seems to be quite clearly defined as the “lack of sound” and is similar to the Norwegian word stillhet.
Tag: tripod
July 1, 2023
Half a year of standing still
Today, I am halfway through my year-long #StillStanding project. Not so much has changed since I summed up the first 100 days. I still enjoy the experience, and there are new things to learn every day.
Here is a 10-minute video I have recorded that presents the project, explains its rationale, and reflects upon some experiences so far:
The biggest challenge moving forward is finding new spaces every day. I have already stood in the most accessible spaces, so I need to spend more time looking for unexplored rooms both at the university and close to my home.
April 10, 2023
100 Days and Still Standing
Today marks the 100th day of my annual #StillStanding project. In this blog post, I summarize some of my experiences so far.
Endurance Some people questioned whether I would be able to stand still every single day for an entire year. But, hey, it is only ten minutes (out of 1440) per day, and even though my life as a centre director is busy, it is always possible to find time for a standstill sometime during the day.
Tag: e-mail
June 30, 2023
Writing Explanatory Tripnote
I read somewhere (but never stored the link) that people should add a lengthier description to their trip notes (or vacation messages, or whatever people call them) and decided to try it. Usually, I have only added a very brief message about when I return, but the point of adding a longer one is to explain why one cannot be as accessible as one usually is.
February 2, 2012
Recovery time after e-mail and phone calls
I have for some time tried to put my phone in silent mode and turn off my e-mail program before lunch. I am most productive in the mornings, and being able to work 3-4 hours without any interruptions is very valuable.
My solution to the problem of minor (and larger) interruptions has come out of a need for more concentrated time to focus on working in the lab, programming, writing papers, etc.
Tag: ritmo
June 30, 2023
Writing Explanatory Tripnote
I read somewhere (but never stored the link) that people should add a lengthier description to their trip notes (or vacation messages, or whatever people call them) and decided to try it. Usually, I have only added a very brief message about when I return, but the point of adding a longer one is to explain why one cannot be as accessible as one usually is.
June 19, 2023
Wearing Barefoot Shoes
I have used “barefoot shoes” for more than a decade. Only occasionally do I wear something else. It started with a pair of Vibram FiveFingers, but after family complaints about the weird-looking toes, I moved on to various types of “normal” minimalistic shoes, such as the ones from Vivo Barefoot. Yesterday, I wore a pair of Birkenstock sandals and immediately noticed how strange it felt when I started my daily standstill session.
June 7, 2023
Confession Case Study
I have previously written about the coauthorship exercise that we use at RITMO workshops when we have new groups of doctoral and postdoctoral fellows. Another concept we use from time to time is what we call a “confession workshop.” This builds on the fact that a researcher’s life is often filled with rejections and discouraging feedback. Too often, we only talk about success stories, giving the skewed impression that there are no challenges in academia.
June 2, 2023
Coauthorship Exercise
I have previously written about the different publication cultures at RITMO. This includes different coauthorship traditions between our disciplines: musicology, psychology, and informatics. Our approach to avoiding conflicts over (co)authorship is to discuss it often. We also have an exercise that we run occasionally at retreats. Since this may be a topic of interest to others, here I share the case we have developed. We typically allocate an hour for the exercise and split people into small groups (4–6 people) from different disciplines.
January 13, 2023
New MOOC: Pupillometry – The Eye as a Window Into the Mind
I am happy to announce a new online course from RITMO: Pupillometry – The Eye as a Window Into the Mind. This is the third so-called Massive Open Online Course (MOOC) I have been part of making, following Motion Capture and Music Moves. I am excited to get it started on Monday, 16 January.
Discover the applications of pupillometry research
Pupillometry is a relatively new research method within the sciences, and it has wide-ranging applications within psychology, neuroscience, and beyond.
May 7, 2022
Running a disputation on YouTube
Last week, Ulf Holbrook defended his dissertation at RITMO. I was in charge of streaming the disputation, and here are some reflections on the technical setup and streaming.
Zoom Webinars vs YouTube Streaming
I have previously written about running a hybrid disputation using a Zoom webinar. We have also used variations of that setup for other events. For example, last year, we ran RPPW as a hybrid conference. There are some benefits of using Zoom, particularly when having many presenters.
February 16, 2022
Completing the MICRO project
I wrote up the final report on the project MICRO - Human Bodily Micromotion in Music Perception and Interaction before Christmas. Now I finally got around to wrapping up the project pages. With the touch of a button, the project’s web page now says “completed”. But even though the project is formally over, its results will live on.
Aims and objectives
The MICRO project sought to investigate the close relationships between musical sound and human bodily micromotion.
September 17, 2021
Running a hybrid disputation in a Zoom Webinar
I have been running the disputation of Guilherme Schmidt Câmara today. At RITMO, we have accepted that “hybrid mode” will be the new normal, also for disputations. Fortunately, we already had many years of experience with video conferencing before the corona crisis hit. We have also gained lots of experience by running the Music, Communication and Technology master’s programme for some years.
In another blog post, I summarized some experiences of running our first hybrid disputation.
June 27, 2021
Running a hybrid conference
There are many ways to run conferences. Here is a summary of how we ran the Rhythm Production and Perception Workshop 2021 at RITMO this week. RPPW is called a workshop, but it is really a full-blown conference: almost 200 participants enjoyed 100 talks and posters, 2 keynote speeches, and 3 music performances spread across 4 days.
A hybrid format
We started planning RPPW as an on-site event back in 2019.
April 26, 2021
Strings On-Line installation
We presented the installation Strings On-Line at NIME 2020. It was supposed to be a physical installation at the conference to be held in Birmingham, UK.
Due to the corona crisis, the conference went online, and we decided to redesign the proposed physical installation into an online one instead. The installation ran continuously from 21 to 25 July last year, and hundreds of people “came by” to interact with it.
I finally got around to editing a short (1-minute) video promo of the installation:
March 17, 2021
23 tips to improve your web presence
I was challenged to say a few words about how employees at the University of Oslo can improve their personal web pages. This led to a short talk titled 23 tips to improve your web presence. The presentation was based on my experience keeping my own personal page up to date, but hopefully, the tips can be useful for others.
Why should you care about your employee page? Some of my reasons include:
January 26, 2021
Some Thoughts on the Archival of Research Activities
Recently, I have been engaged in an internal discussion at the University of Oslo about our institutional web pages. This has led me to realize that a university’s web pages are yet another part of what I like to think of as an Open Research “puzzle”:
Cutting down on web pages
The discussion started when our university’s communication department announced that they wanted to reduce the number of web pages. One way of doing that is by unpublishing a lot of pages.
December 12, 2020
Running a hybrid disputation on Zoom
Yesterday, I wrote about Agata Zelechowska’s disputation. We decided to run it as a hybrid production, even though there was no audience present. It would, of course, have been easier to run it as an online-only event. However, we expect that hybrid is the new “normal” for such events, and therefore thought that it would be good to get started exploring the hybrid format right away. In this blog post, I will write up some of our experiences.
December 11, 2020
PhD disputation of Agata Zelechowska
I am happy to announce that Agata Zelechowska yesterday successfully defended her PhD dissertation during a public disputation. The dissertation is titled Irresistible Movement: The Role of Musical Sound, Individual Differences and Listening Context in Movement Responses to Music and has been carried out as part of my MICRO project at RITMO.
The dissertation is composed of five papers and an extended introduction. The abstract reads:
This dissertation examines the phenomenon of spontaneous movement responses to music.
October 30, 2020
MusicTestLab as a Testbed of Open Research
Many people talk about “opening” the research process these days. Due to initiatives like Plan S, much has happened when it comes to Open Access to research publications. Things are also happening when it comes to sharing data openly (or at least FAIR). Unfortunately, there is currently more talk about Open Research than action. At RITMO, we are actively exploring different strategies for opening our research. The most extreme case is that of MusicLab.
August 17, 2018
Moving to a new building
I have not been very good at blogging recently. This is not because nothing is happening, but rather because so much is happening that I don’t have time to write about it.
One of these things is the startup of RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, which I am co-directing with Anne Danielsen. We got the funding last year and have spent the year planning, preparing, and now executing the startup.
December 13, 2017
Come work with me! Lots of new positions at University of Oslo
I recently mentioned that I have been busy setting up the new MCT master’s programme. But I have been even busier preparing the startup of our new Centre of Excellence, RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This is a large undertaking and a collaboration between researchers from musicology, psychology, and informatics. A visual “abstract” of the centre can be seen in the figure to the right.
October 9, 2017
And we're off: RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion
I am happy to announce that RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion officially started last week. This is a new centre of excellence funded by the Research Council of Norway.
Even though we have formally taken off, this mainly means that the management group has started to work. Establishing a centre with 50–60 researchers is not done in a few days, so we will more or less spend the coming year getting up to speed.
March 16, 2017
New Centre of Excellence: RITMO
I am happy to announce that the Research Council of Norway has awarded funding to establish RITMO Centre of Excellence for Interdisciplinary Studies in Rhythm, Time and Motion. The centre is a collaboration between the Departments of Musicology, Psychology, and Informatics at the University of Oslo.
Project summary
Rhythm is omnipresent in human life, as we walk, talk, dance and play; as we tell stories about our past; and as we predict the future.
Tag: vacation
June 30, 2023
Writing Explanatory Tripnote
I read somewhere (but never stored the link) that people should add a lengthier description in their trip notes (or vacation messages, or whatever people call them) and decided to try it. Usually, I have only added a very brief message about when I return, but I think the point of adding a longer one is to explain why one cannot be as accessible as one usually is.
Tag: ai
June 23, 2023
The ventilation system in my office
I’m sitting in my office, listening to the noisy ventilation system that inspired my AMBIENT project. Here is a short sample:
At the moment, I am primarily focusing on completing my book Still Standing. However, as part of my year-long #StillStanding project, I have also started thinking about the sounds found in indoor environments.
Asking ChatGPT for help
I have yet to begin a proper literature review on ventilation noise, but as a start, I asked ChatGPT for help.
December 16, 2022
Exploring Essay Writing with You.com
There has been much discussion about ChatGPT recently, a chat robot that can write meaningful answers to questions. I haven’t had time to test it out properly, and it was unavailable when I wanted to check it today. Instead, I have played around with YouWrite, a service that can write text based on limited input.
I thought it would be interesting to ask it to write about something I know well, so I asked it to write a text based on an abbreviated version of the abstract of my new book:
March 7, 2022
Digital competency
What are the digital competencies needed in the future? Our head of department has challenged me to talk about this topic at an internal seminar today. Here is a summary of what I said.
Competencies vs skills
First, I think it is crucial to separate competencies from skills. The latter relates to how you do something. There has been much focus on teaching skills, mainly teaching people how to use various software or hardware.
September 22, 2021
Can AI replace humans?
Or, more specifically: can AI replace an artist? That is the question posed in a short documentary that I have contributed to for this year’s Research Days.
We were contacted before summer about trying to create a new song based on the catalogue of the Norwegian artist Ary. The idea was to use machine learning to generate the song. This has turned out to be an exciting project.
I was busy finishing the manuscript for my new book, so I wasn’t much involved in the development part myself.
November 22, 2020
Music and AI
Last week, I was interviewed about music and artificial intelligence (AI). This led to several stories on radio, on TV, and in text. The reason for the sudden media interest in this topic was a story by The Guardian on the use of deep learning for creating music. They featured an example of Sinatra-inspired music made using a deep learning algorithm:
After these stories were published, I was asked about participating in a talk-show on Friday evening.
December 13, 2017
Come work with me! Lots of new positions at University of Oslo
I recently mentioned that I have been busy setting up the new MCT master’s programme. But I have been even busier preparing the startup of our new Centre of Excellence, RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This is a large undertaking and a collaboration between researchers from musicology, psychology, and informatics. A visual “abstract” of the centre can be seen in the figure to the right.
April 19, 2006
Sounds Like Bach
Douglas Hofstadter is discussing music and artificial intelligence:
Back when I was young – when I wrote “Gödel, Escher, Bach” – I asked myself the question “Will a computer program ever write beautiful music?”, and then proceeded to speculate as follows: “There will be no new kinds of beauty turned up for a long time by computer music-composing programs… To think – and I have heard this suggested – that we might soon be able to command a preprogrammed mass-produced mail-order twenty-dollar desk-model ‘music box’ to bring forth from its sterile circuitry pieces which Chopin or Bach might have written had they lived longer is a grotesque and shameful misestimation of the depth of the human spirit.
Tag: HVAC
June 23, 2023
The ventilation system in my office
I’m sitting in my office, listening to the noisy ventilation system that inspired my AMBIENT project. Here is a short sample:
At the moment, I am primarily focusing on completing my book Still Standing. However, as part of my year-long #StillStanding project, I have also started thinking about the sounds found in indoor environments.
Asking ChatGPT for help
I have yet to begin a proper literature review on ventilation noise, but as a start, I asked ChatGPT for help.
Tag: noise
June 23, 2023
The ventilation system in my office
I’m sitting in my office, listening to the noisy ventilation system that inspired my AMBIENT project. Here is a short sample:
At the moment, I am primarily focusing on completing my book Still Standing. However, as part of my year-long #StillStanding project, I have also started thinking about the sounds found in indoor environments.
Asking ChatGPT for help
I have yet to begin a proper literature review on ventilation noise, but as a start, I asked ChatGPT for help.
December 13, 2012
Performing with the Norwegian Noise Orchestra
Yesterday, I performed with the Norwegian Noise Orchestra at Betong in Oslo, at a concert organised by Dans for Voksne. The orchestra is an ad-hoc group of noisy improvisers, and I immediately felt at home. The performance lasted for 12 hours, from noon to midnight, and I performed for two hours in the afternoon.
For the performance I used my Soniperforma patch based on the sonifyer technique and the Jamoma module I developed a couple of years ago (jmod.
July 13, 2012
Paper #2 at SMC 2012: Noise level in IR mocap systems
Yesterday, I presented a paper on motiongrams at the Sound and Music Computing conference in Copenhagen. Today, I will present the paper A study of the noise-level in two infrared marker-based motion capture systems. This is quite a nerdy, in-depth study of the noise level of two of our motion capture systems.
Abstract
With musical applications in mind, this paper reports on the level of noise observed in two commercial infrared marker-based motion capture systems: one high-end (Qualisys) and one affordable (OptiTrack).
Tag: ventilation
June 23, 2023
The ventilation system in my office
I’m sitting in my office, listening to the noisy ventilation system that inspired my AMBIENT project. Here is a short sample:
At the moment, I am primarily focusing on completing my book Still Standing. However, as part of my year-long #StillStanding project, I have also started thinking about the sounds found in indoor environments.
Asking ChatGPT for help
I have yet to begin a proper literature review on ventilation noise, but as a start, I asked ChatGPT for help.
Tag: authorship
June 19, 2023
Wearing Barefoot Shoes
I have used “barefoot shoes” for more than a decade. Only occasionally do I wear something else. It started with a pair of Vibram FiveFingers, but after family complaints about the weird-looking toes, I moved on to various types of “normal” minimalistic shoes, such as the ones from Vivo Barefoot. Yesterday, I wore a pair of Birkenstock sandals and immediately noticed how strange it felt when I started my daily standstill session.
June 2, 2023
Coauthorship Exercise
I have previously written about the different publication cultures at RITMO. This includes different coauthorship traditions between our disciplines: musicology, psychology, and informatics. Our approach to avoiding conflicts over (co)authorship is to discuss it often. We also have an exercise that we run occasionally at retreats. Since this may be a topic of interest to others, here I share the case we have developed. We typically allocate an hour for the exercise and split people into small groups (4–6 people) from different disciplines.
Tag: publications
June 19, 2023
Wearing Barefoot Shoes
I have used “barefoot shoes” for more than a decade. Only occasionally do I wear something else. It started with a pair of Vibram FiveFingers, but after family complaints about the weird-looking toes, I moved on to various types of “normal” minimalistic shoes, such as the ones from Vivo Barefoot. Yesterday, I wore a pair of Birkenstock sandals and immediately noticed how strange it felt when I started my daily standstill session.
June 2, 2023
Coauthorship Exercise
I have previously written about the different publication cultures at RITMO. This includes different coauthorship traditions between our disciplines: musicology, psychology, and informatics. Our approach to avoiding conflicts over (co)authorship is to discuss it often. We also have an exercise that we run occasionally at retreats. Since this may be a topic of interest to others, here I share the case we have developed. We typically allocate an hour for the exercise and split people into small groups (4–6 people) from different disciplines.
May 7, 2023
Different Publication Cultures
At RITMO, we have several different disciplines working together. The three core disciplines at RITMO are musicology, psychology, and informatics. In addition, we have people working in philosophy, physics, computer science, biology, dance studies, and so on. This also means that we have several different publication cultures. In this blog post, I will reflect on the differences between them.
The Paper Proceedings Culture
My professorship is in music technology. I don’t know if music technology should be considered a discipline; it might be better described as a community of communities.
May 6, 2012
Visual overviews in MS Academic Search
I have been using Google Scholar as one of my main sources for finding academic papers and books, and find that it has improved considerably over the last few years.
A while ago, they also made it possible to create your own academic profile. It is fairly basic, but they have done a great job in managing to find most of my papers, citations, etc.
Now Microsoft has also jumped on academic search and launched its own service.
June 7, 2010
Eigenvalues for journals
A while back Ola Nordal wrote about journal ranking in his blog, referring to a website called eigenfactors.org. The point is to rank journals based on two factors:
Eigenfactor Score (EF): “A measure of the overall value provided by all of the articles published in a given journal in a year.”
Article Influence Score (AI): “A measure of a journal’s prestige based on per-article citations and comparable to Impact Factor.”
Tag: environments
June 12, 2023
Running a Jupyter Notebook in Conda Environment
I have been running Python-based Jupyter Notebooks for some time but never thought about using environments until quite recently. I have heard people talking about environments, but I didn’t understand why I would need them.
Two days ago, I tried to upgrade to the latest version of the Musical Gestures Toolbox for Python and got stuck in a dependency nightmare. I tried to upgrade one of the packages that choked, but that only led to other packages breaking.
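One way out of such dependency nightmares is to give each project its own Conda environment, defined in an environment.yml file. A minimal sketch might look like this (the environment name and package list are illustrative, not the actual requirements of the Musical Gestures Toolbox):

```yaml
# environment.yml -- illustrative per-project environment
name: mgt-test
channels:
  - conda-forge
dependencies:
  - python=3.10
  - jupyter
  - pip
  - pip:
      - musicalgestures  # installed from PyPI inside the environment
```

The environment is then created with `conda env create -f environment.yml` and activated with `conda activate mgt-test`, so a broken upgrade stays isolated from other projects.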
Tag: mgt
June 12, 2023
Running a Jupyter Notebook in Conda Environment
I have been running Python-based Jupyter Notebooks for some time but never thought about using environments until quite recently. I have heard people talking about environments, but I didn’t understand why I would need them.
Two days ago, I tried to upgrade to the latest version of the Musical Gestures Toolbox for Python and got stuck in a dependency nightmare. I tried to upgrade one of the packages that choked, but that only led to other packages breaking.
May 26, 2023
The Art of Flying
I participated in the conference The Aesthetics of Absence in Music of the Twenty-First Century at the Department of Musicology over the last couple of days. Judith Lochhead started her keynote lecture with a clip from the movie The art of flying by Jan van IJken. This is a beautiful short film based on clips of flocking birds:
The art of flying from Jan van IJken on Vimeo.
Of course, I wanted to see how some video visualizations would work, so I reached for the Musical Gestures Toolbox for Python.
January 12, 2023
Running a workshop with a Jupyter Notebook presentation
Today, I ran a workshop called Video Visualization together with RITMO research assistant Joachim Poutaraud. The workshop was part of the Digital Scholarship Days 2023 organized by the University of Oslo Library, four days packed with hands-on tutorials of various useful things.
Presentation slides made with Jupyter Notebook
Joachim has done a fantastic job updating the Wiki with all the new things he has implemented in the toolbox. However, the Wiki is not the best thing to use in a workshop; it has too much information and would create an information overload for the participants.
July 13, 2022
Kayak motion analysis with video-based horizon leveling
Last year, I wrote about video-based motion analysis of kayaking. Those videos were recorded with a GoPro Hero 8 and I tested some of the video visualization methods of the Musical Gestures Toolbox for Python. This summer I am testing out some 360 cameras for my upcoming AMBIENT project. I thought I should take one of these, a GoPro Max, out for some kayaking in the Oslo fjord. Here are some impressions of the trip (and recording).
February 4, 2021
Visualising a Bach prelude played on Boomwhackers
I came across a fantastic performance of a Bach prelude played on Boomwhackers by Les Objets Volants.
https://www.youtube.com/watch?v=Y5seI0eJZCg
It is really incredible how they manage to coordinate the sticks and make it into a beautiful performance. Given my interest in the visual aspects of music performance, I reached for the Musical Gestures Toolbox to create some video visualisations.
I started by creating an average image of the video:
This image is not particularly interesting.
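For readers curious about what an average image actually computes: it is simply the per-pixel mean over all frames of the video. A minimal NumPy sketch, with random frames standing in for a decoded video (sizes and names are made up):

```python
import numpy as np

# Synthetic stand-in for a decoded video: 100 RGB frames of 120x160 pixels.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(100, 120, 160, 3), dtype=np.uint8)

# The average image is the mean over the time axis, one value per pixel.
average_image = frames.mean(axis=0).astype(np.uint8)

print(average_image.shape)  # (120, 160, 3)
```

With real footage, stationary background pixels stay sharp in the result, while anything that moves is smeared out.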
March 1, 2020
Creating different types of keyframe displays with FFmpeg
In some recent posts I have explored the creation of motiongrams and average images, multi-exposure displays, and image masks. In this blog post I will explore different ways of generating keyframe displays using the very handy command line tool FFmpeg.
As in the previous posts, I will use a contemporary dance video from the AIST Dance Video Database as an example:
The first attempt is to create a 3x3 grid image by sampling frames from the original video.
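FFmpeg's select and tile filters do this sampling and tiling in one pass; the underlying idea can be sketched in NumPy with synthetic frames (the frame sizes and sampling interval here are arbitrary):

```python
import numpy as np

# 300 dummy RGB frames standing in for a decoded video.
frames = np.zeros((300, 90, 160, 3), dtype=np.uint8)

# Sample nine evenly spaced keyframes (like FFmpeg's select filter).
k = frames.shape[0] // 9
keyframes = frames[::k][:9]

# Lay them out in a 3x3 grid (like FFmpeg's tile=3x3 filter).
rows = [np.hstack(keyframes[i * 3:(i + 1) * 3]) for i in range(3)]
grid = np.vstack(rows)

print(grid.shape)  # (270, 480, 3)
```

The resulting grid is one image that summarizes the whole video at a glance.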
September 28, 2018
Musical Gestures Toolbox for Matlab
Yesterday I presented the Musical Gestures Toolbox for Matlab in the late-breaking demo session at the ISMIR conference in Paris.
The Musical Gestures Toolbox for Matlab (MGT) aims at assisting music researchers with importing, preprocessing, analyzing, and visualizing video, audio, and motion capture data in a coherent manner within Matlab.
Most of the concepts in the toolbox are based on the Musical Gestures Toolbox that I first developed for Max more than a decade ago.
May 12, 2010
NTNU PhD defense
Two weeks ago, Lars Adde defended his PhD entitled Prediction of cerebral palsy in young infants. Computer based assessment of general movements, at NTNU in Trondheim. I have contributed to this research through the development of the General Movement Toolbox, a variant of my Musical Gestures Toolbox. He has used this toolbox to analyse video material of children with fidgety movements, with the aim of being able to predict cerebral palsy at an early stage.
Tag: rhythm
June 12, 2023
Running a Jupyter Notebook in Conda Environment
I have been running Python-based Jupyter Notebooks for some time but never thought about using environments until quite recently. I have heard people talking about environments, but I didn’t understand why I would need them.
Two days ago, I tried to upgrade to the latest version of the Musical Gestures Toolbox for Python and got stuck in a dependency nightmare. I tried to upgrade one of the packages that choked, but that only led to other packages breaking.
June 8, 2023
Oddly ticking clock
Today, I stood still in a meeting room with an oddly ticking clock. This was part of my annual #StillStanding project, which is documented on my Mastodon channel.
There was nothing special about today’s session except the clock. The meeting room was furnished with a large table in the middle, a screen on the wall, and glass walls on both sides. The large ventilation system produced a noticeable low-frequency “hum” that dominated the soundscape.
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still for 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT
Although I am happy about completing my sound actions project, I have enjoyed the ritual of doing something every day.
December 31, 2022
365 Sound Actions
On 1 January this year, I set out to record one sound action per day. The idea was to test the action–sound theory from my book Sound Actions. One thing is writing about action–sound couplings and mappings; another is seeing how the theory works with real-world examples. As I commented after one month, the project has been both challenging and inspiring. Below, I write about some of my experiences, but first, here is the complete list:
February 16, 2022
Completing the MICRO project
I wrote up the final report on the project MICRO - Human Bodily Micromotion in Music Perception and Interaction before Christmas. Now I finally got around to wrapping up the project pages. With the touch of a button, the project’s web page now says “completed”. But even though the project is formally over, its results will live on.
Aims and objectives
The MICRO project sought to investigate the close relationships between musical sound and human bodily micromotion.
December 15, 2021
Kayaking motion analysis
Like many others, I bought a kayak during the pandemic, and I have had many nice trips in the Oslo fjord over the last year. Working at RITMO, I think a lot about rhythm these days, and the rhythmic nature of kayaking made me curious to investigate the pattern a little more.
Capturing kayaking motion
My spontaneous investigations into kayak motion began with simply recording a short video of myself kayaking.
September 8, 2020
Motiongrams of rhythmic chimpanzee swaying
I came across a very interesting study on the Rhythmic swaying induced by sound in chimpanzees. The authors have shared the videos recorded in the study (Open Research is great!), so I was eager to try out some analyses with the Musical Gestures Toolbox for Matlab.
Here is an example of one of the videos from the collection:
The video quality is not very good, so I had my doubts about what I could find.
February 21, 2020
Visualizing some videos from the AIST Dance Video Database
Researchers from AIST have released an open database of dance videos, and I got very excited to try out some visualization methods on some of the files. This was also a good chance to test out some new functionality in the Musical Gestures Toolbox for Matlab that we are developing at RITMO. The AIST collection contains a number of videos. I selected one hip-hop dance video based on a very steady rhythmic pattern, and a contemporary dance video that is more fluid in both motion and music.
October 9, 2017
And we're off: RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion
I am happy to announce that RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion officially started last week. This is a new centre of excellence funded by the Research Council of Norway.
Even though we have formally taken off, this mainly means that the management group has started to work. Establishing a centre with 50–60 researchers is not done in a few days, so we will more or less spend the coming year getting up to speed.
March 16, 2017
New Centre of Excellence: RITMO
I am happy to announce that the Research Council of Norway has awarded funding to establish RITMO Centre of Excellence for Interdisciplinary Studies in Rhythm, Time and Motion. The centre is a collaboration between the Departments of Musicology, Psychology, and Informatics at the University of Oslo.
Project summary
Rhythm is omnipresent in human life, as we walk, talk, dance and play; as we tell stories about our past; and as we predict the future.
Tag: flocking
May 26, 2023
The Art of Flying
I participated in the conference The Aesthetics of Absence in Music of the Twenty-First Century at the Department of Musicology over the last couple of days. Judith Lochhead started her keynote lecture with a clip from the movie The art of flying by Jan van IJken. This is a beautiful short film based on clips of flocking birds:
The art of flying from Jan van IJken on Vimeo.
Of course, I wanted to see how some video visualizations would work, so I reached for the Musical Gestures Toolbox for Python.
Tag: nature
May 26, 2023
The Art of Flying
I participated in the conference The Aesthetics of Absence in Music of the Twenty-First Century at the Department of Musicology over the last couple of days. Judith Lochhead started her keynote lecture with a clip from the movie The art of flying by Jan van IJken. This is a beautiful short film based on clips of flocking birds:
The art of flying from Jan van IJken on Vimeo.
Of course, I wanted to see how some video visualizations would work, so I reached for the Musical Gestures Toolbox for Python.
Tag: videogram
May 26, 2023
The Art of Flying
I participated in the conference The Aesthetics of Absence in Music of the Twenty-First Century at the Department of Musicology over the last couple of days. Judith Lochhead started her keynote lecture with a clip from the movie The art of flying by Jan van IJken. This is a beautiful short film based on clips of flocking birds:
The art of flying from Jan van IJken on Vimeo.
Of course, I wanted to see how some video visualizations would work, so I reached for the Musical Gestures Toolbox for Python.
May 20, 2023
The effect of skipping frames for video visualization
I have been exploring different video visualizations as part of my annual #StillStanding project. Some of these I post as part of my daily Mastodon updates, while others I only test for future publications.
Most of the video visualizations and analyses are made with the Musical Gestures Toolbox for Python and structured as Jupyter Notebooks. I have been pondering whether skipping frames is a good idea. The 360-degree videos that I create visualizations from are shot at 25 fps.
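The trade-off is easy to quantify: keeping every k-th frame of a 25 fps video divides both the effective frame rate and the number of frames to process by k. A quick back-of-the-envelope calculation (the 10-minute duration is just an example):

```python
# Effect of skipping frames on a 25 fps recording.
fps = 25
duration_s = 600  # e.g., a 10-minute recording
total_frames = fps * duration_s

for skip in (1, 5, 25):
    kept = total_frames // skip
    print(f"skip={skip}: {kept} frames at {fps / skip:g} fps")
```

So skipping frames speeds up processing considerably, at the cost of temporal resolution in the visualizations.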
May 10, 2023
Visualization of Musique de Table
Musique de Table is a wonderful piece written by Thierry de Mey. I have seen it performed live several times, and here I came across a one-shot video recording that I thought would be interesting to analyse:
I tested it with some video visualization tools in the Musical Gestures Toolbox for Python.
To run the commands below, you first need to import the toolbox in Python:
import musicalgestures as mg
I started the process by importing the source video:
January 7, 2022
Try not to headbang challenge
I recently came across a video of the so-called Try not to headbang challenge, where the idea is, well, not to headbang while listening to music. This immediately caught my attention. After all, I have been researching music-related micromotion for the last several years and have run the Norwegian Championship of Standstill since 2012.
Here is an example of Nath & Johnny trying the challenge:
https://www.youtube.com/watch?v=-I4CBsDT37I
As seen in the video, they are doing ok, although they are far from sitting still.
December 17, 2021
Flamenco video analysis
I continue my testing of the new Musical Gestures Toolbox for Python. One thing is to use the toolbox on controlled recordings with stationary cameras and non-moving backgrounds (see examples of visualizations of AIST videos). But it is also interesting to explore “real world” videos (such as the Bergensbanen train journey).
I came across a great video of flamenco dancer Selene Muñoz, and wondered how I could visualize what is going on there:
December 15, 2021
Kayaking motion analysis
Like many others, I bought a kayak during the pandemic, and I have had many nice trips in the Oslo fjord over the last year. Working at RITMO, I think a lot about rhythm these days, and the rhythmic nature of kayaking made me curious to investigate the pattern a little more.
Capturing kayaking motion
My spontaneous investigations into kayak motion began with simply recording a short video of myself kayaking.
February 4, 2021
Visualising a Bach prelude played on Boomwhackers
I came across a fantastic performance of a Bach prelude played on Boomwhackers by Les Objets Volants.
https://www.youtube.com/watch?v=Y5seI0eJZCg
It is really incredible how they manage to coordinate the sticks and make it into a beautiful performance. Given my interest in the visual aspects of music performance, I reached for the Musical Gestures Toolbox to create some video visualisations.
I started with creating an average image of the video:
This image is not particularly interesting.
January 28, 2021
Analyzing a double stroke drum roll
Yesterday, PhD fellow Mojtaba Karbassi presented his research on impedance control in robotic drumming at RITMO. I will surely get back to discussing more of his research later. Today, I wanted to share the analysis of one of the videos he showed. Mojtaba is working on developing a robot that can play a double stroke drum roll. To explain what this is, he showed this video he had found online, made by John Wooton:
September 8, 2020
Motiongrams of rhythmic chimpanzee swaying
I came across a very interesting study on the Rhythmic swaying induced by sound in chimpanzees. The authors have shared the videos recorded in the study (Open Research is great!), so I was eager to try out some analyses with the Musical Gestures Toolbox for Matlab.
Here is an example of one of the videos from the collection:
The video quality is not very good, so I had my doubts about what I could find.
July 16, 2011
Image size
While generating the videograms of Bergensbanen, I discovered that Max/Jitter cannot export images from matrices that are larger than 32767 pixels wide/tall. This is still fairly large, but if I was going to generate a videogram with one pixel stripe per frame in the video, I would need to create an image file that is 1 302 668 pixels wide.
This made me curious as to what type of limitations exist around images.
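As a quick sketch of the arithmetic (using the numbers from the post), one can work out how far beyond the Max/Jitter limit such an image would be, and how much frame skipping would be needed to fit:

```python
import math

MAX_JITTER_DIM = 32767   # Max/Jitter image export limit (pixels)
frames = 1_302_668       # one 1-pixel stripe per frame of the video

# How many times wider than the Jitter limit would the full image be?
print(frames / MAX_JITTER_DIM)   # about 39.8 times

# Smallest frame-skipping factor that fits within the limit:
step = math.ceil(frames / MAX_JITTER_DIM)
print(step)   # keeping every 40th frame would just fit
```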
July 13, 2011
Difference between videogram and motiongram
For some upcoming blog posts on videograms, I will start by explaining the difference between a motiongram and a videogram. Both are temporal (image) representations of video content (as explained here), and are produced almost in the same way. The difference is that videograms start with the regular video image, and motiongrams start with a motion image.
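The difference can be sketched in a few lines of NumPy (an illustrative reimplementation, not the actual toolbox code): both reduce each frame to a one-pixel-wide column, but the motiongram does so on absolute frame differences rather than on the raw frames.

```python
import numpy as np

def videogram(frames: np.ndarray) -> np.ndarray:
    """Horizontal videogram: average each frame over its width, so
    every frame becomes a 1-pixel-wide column (height x n_frames)."""
    return frames.mean(axis=2).T

def motiongram(frames: np.ndarray) -> np.ndarray:
    """Same reduction, but applied to motion images
    (absolute differences between consecutive frames)."""
    motion = np.abs(np.diff(frames.astype(float), axis=0))
    return motion.mean(axis=2).T

# Synthetic grayscale video: 10 frames of 8x6 pixels (frames, height, width)
rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(10, 8, 6)).astype(float)

print(videogram(clip).shape)    # (8, 10): one column per frame
print(motiongram(clip).shape)   # (8, 9): one column per frame pair
```

A static video therefore produces a flat videogram but an all-black motiongram, since there are no frame-to-frame differences.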
So for a video of my hand like this:
we will get this horizontal videogram:
July 13, 2011
Videogram of Bergensbanen
While on paternity leave, I (finally) have time to do small projects that require little brain activity and lots of computation time. One of the things I have wanted to do for a long time is to create a videogram of Bergensbanen (which I briefly mentioned last year). This was a project undertaken by the Norwegian broadcasting corporation (NRK), where they filmed (and broadcast live) the entire train trip from Bergen to Oslo.
June 17, 2008
AudioVideoAnalysis
To allow everyone to watch their own synchronised spectrograms and motiongrams, I have made a small application called AudioVideoAnalysis.
Download AudioVideoAnalysis for OS X (8MB)
It currently has the following features:
- Draws a spectrogram from any connected microphone
- Draws a motiongram/videogram from any connected camera
- Press the escape button to toggle fullscreen mode
Built with Max/MSP by Cycling ‘74 on OS X 10.5. I will probably make a Windows version at some point, but haven’t gotten that far yet.
May 15, 2008
Sonification of Traveling Landscapes
I just heard a talk called “Real-Time Synaesthetic Sonification of Traveling Landscapes” (PDF) by Tim Pohle and Peter Knees from the Department of Computational Perception (great name!) in Linz. They have made an application that creates music from a moving video camera. The implementation is based on grabbing a one-pixel-wide column from the video, plotting these columns, and sonifying the image. Interestingly enough, the images they get out of this (see below) are very close to the motiongrams and videograms I have been working on.
Tag: visualization
May 26, 2023
The Art of Flying
I participated in the conference The Aesthetics of Absence in Music of the Twenty-First Century at the Department of Musicology over the last couple of days. Judith Lochhead started her keynote lecture with a clip from the movie The art of flying by Jan van IJken. This is a beautiful short film based on clips of flocking birds:
The art of flying from Jan van IJken on Vimeo.
Of course, I wanted to see how some video visualizations would work, so I reached for the Musical Gestures Toolbox for Python.
May 20, 2023
The effect of skipping frames for video visualization
I have been exploring different video visualizations as part of my annual stillstanding project. Some of these I post as part of my daily Mastodon updates, while others I only test for future publications.
Most of the video visualizations and analyses are made with the Musical Gestures Toolbox for Python and structured as Jupyter Notebooks. I have been pondering whether skipping frames is a good idea. The 360-degree videos that I create visualizations from are shot at 25 fps.
May 10, 2023
Visualization of Musique de Table
Musique de Table is a wonderful piece written by Thierry de Mey. I have seen it performed live several times, and here I came across a one-shot video recording that I thought would be interesting to analyse:
I tested it with some video visualization tools in the Musical Gestures Toolbox for Python.
To run the commands below, you first need to import the toolbox in Python:
import musicalgestures as mg
I started the process by importing the source video:
July 17, 2022
Video visualizations of mountain walking
After exploring some visualizations of kayaking, I was eager to see how a similar approach could work for walking. On a trip to the Norwegian mountains, specifically at Haugastøl, situated halfway between Oslo and Bergen, I strapped a GoPro Hero Black 10 on my chest and walked up and down a nearby hill called Storevarden. The walk was approximately 25 minutes up and down, and a fast-forward version of the video can be seen here:
December 17, 2021
Flamenco video analysis
I continue my testing of the new Musical Gestures Toolbox for Python. One thing is to use the toolbox on controlled recordings with stationary cameras and non-moving backgrounds (see examples of visualizations of AIST videos). But it is also interesting to explore “real world” videos (such as the Bergensbanen train journey).
I came across a great video of flamenco dancer Selene Muñoz, and wondered how I could visualize what is going on there:
February 4, 2021
Visualising a Bach prelude played on Boomwhackers
I came across a fantastic performance of a Bach prelude played on Boomwhackers by Les Objets Volants.
https://www.youtube.com/watch?v=Y5seI0eJZCg
It is really incredible how they manage to coordinate the sticks and make it into a beautiful performance. Given my interest in the visual aspects of music performance, I reached for the Musical Gestures Toolbox to create some video visualisations.
I started with creating an average image of the video:
This image is not particularly interesting.
January 28, 2021
Analyzing a double stroke drum roll
Yesterday, PhD fellow Mojtaba Karbassi presented his research on impedance control in robotic drumming at RITMO. I will surely get back to discussing more of his research later. Today, I wanted to share the analysis of one of the videos he showed. Mojtaba is working on developing a robot that can play a double stroke drum roll. To explain what this is, he showed this video he had found online, made by John Wooton:
February 21, 2020
Visualizing some videos from the AIST Dance Video Database
Researchers from AIST have released an open database of dance videos, and I got very excited to try out some visualization methods on some of the files. This was also a good chance to test out some new functionality in the Musical Gestures Toolbox for Matlab that we are developing at RITMO. The AIST collection contains a number of videos. I selected one hip-hop dance video based on a very steady rhythmic pattern, and a contemporary dance video that is more fluid in both motion and music.
January 24, 2020
Motiongram of high-speed violin bowing
I came across a high-speed recording of bowing on a violin string today, and thought it would be interesting to try to analyze it with the new version of the Musical Gestures Toolbox for Python. This is inspired by results from the creation of motiongrams of a high-speed guitar recording that I did some years ago.
Here is the original video:
From this I generated the following motion video:
And from this we get the following motiongram showing the vertical motion of the string (time running from left to right):
January 14, 2013
New publication: Some video abstraction techniques for displaying body movement in analysis and performance
Today the MIT Press journal Leonardo has published my paper entitled “Some video abstraction techniques for displaying body movement in analysis and performance”. The paper is a summary of my work on different types of visualisation techniques of music-related body motion. Most of these techniques were developed during my PhD, but have been refined over the course of my post-doc fellowship.
The paper is available from the Leonardo web page (or MUSE), and will also be posted in the digital archive at UiO after the 6 month embargo period.
June 17, 2008
AudioVideoAnalysis
To allow everyone to watch their own synchronised spectrograms and motiongrams, I have made a small application called AudioVideoAnalysis.
Download AudioVideoAnalysis for OS X (8MB)
It currently has the following features:
- Draws a spectrogram from any connected microphone
- Draws a motiongram/videogram from any connected camera
- Press the escape button to toggle fullscreen mode
Built with Max/MSP by Cycling ‘74 on OS X 10.5. I will probably make a Windows version at some point, but haven’t gotten that far yet.
Tag: gopro max
May 25, 2023
Understanding the GoPro Max' File Formats
I use a GoPro Max 360-degree camera in my annual #StillStanding project. That means that I also have had an excellent chance to work with GoPro files and try to understand their inner logic. In this blog post, I will summarize some of my findings.
What is recorded?
Recording “a video” with a GoPro Max results in multiple files. For example, each of my daily 10-minute recordings ends up with something like this:
April 1, 2023
Making 2D Images from 360-degree Videos
For my annual Still Standing project, I am recording 360 videos with audio and sensor data while standing still for 10 minutes.
I have started exploring how to visualize the sensor data best. Today, I am looking into visualization strategies for 360-degree images. I have written about how to pre-process 360-degree videos from Garmin VIRB and Ricoh Theta cameras previously.
The Theta records in a dual fisheye format like this:
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still for 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT
Although I am happy about completing my sound actions project, I have enjoyed the ritual of doing something every day.
July 13, 2022
Kayak motion analysis with video-based horizon leveling
Last year, I wrote about video-based motion analysis of kayaking. Those videos were recorded with a GoPro Hero 8 and I tested some of the video visualization methods of the Musical Gestures Toolbox for Python. This summer I am testing out some 360 cameras for my upcoming AMBIENT project. I thought I should take one of these, a GoPro Max, out for some kayaking in the Oslo fjord. Here are some impressions of the trip (and recording).
Tag: photo
May 25, 2023
Understanding the GoPro Max' File Formats
I use a GoPro Max 360-degree camera in my annual #StillStanding project. That means that I also have had an excellent chance to work with GoPro files and try to understand their inner logic. In this blog post, I will summarize some of my findings.
What is recorded?
Recording “a video” with a GoPro Max results in multiple files. For example, each of my daily 10-minute recordings ends up with something like this:
April 1, 2023
Making 2D Images from 360-degree Videos
For my annual Still Standing project, I am recording 360 videos with audio and sensor data while standing still for 10 minutes.
I have started exploring how to visualize the sensor data best. Today, I am looking into visualization strategies for 360-degree images. I have written about how to pre-process 360-degree videos from Garmin VIRB and Ricoh Theta cameras previously.
The Theta records in a dual fisheye format like this:
December 9, 2022
Optimizing JPEG files
I have previously written about how to resize all the images in a folder. That script was based on lossy compression of the files. However, there are also tools for optimizing image files losslessly. One approach is to use the [jpegoptim](https://github.com/tjko/jpegoptim) tool available on Ubuntu. Here is an excellent explanation of how it works.
Lossless optimization
As part of moving my blog to Hugo, I took the opportunity to optimize all the images in all my image folders.
September 18, 2022
Convert HEIC photos to .jpg
A quick note-to-self about how I managed to download a bunch of photos from an iPhone and convert them to .jpg on my laptop running Ubuntu 22.04.
As opposed to Android phones, iPhones do not show up as a regular disk with easy access to the DCIM folder storing photos. Fortunately, Rapid Photo Downloader managed to access the iPhone and find all the images. Unfortunately, all the files were stored as HEIC files, using the High Efficiency Image File Format.
April 13, 2022
Programmatically resizing a folder of images
This is a note to self about how to programmatically resize and crop many images using ImageMagick.
It all started with a folder full of photos with different pixel sizes and ratios. That is because they had been captured with various cameras and had also been manually cropped. This could be verified by running this command to print their pixel sizes:
identify -format "%wx%h\n" *.JPG
Fortunately, all the images had a reasonably large pixel count, so I decided to go for a 5MP pixel count (2560x1920 in 4:3 ratio).
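As a side note, the arithmetic behind such a megapixel target is simple; the helper below is my own illustration (2560x1920 is the resolution mentioned in the post):

```python
import math

def target_dims(megapixels: float, ratio_w: int, ratio_h: int):
    """Width and height (in pixels) closest to the requested
    megapixel count for a given aspect ratio."""
    unit = math.sqrt(megapixels * 1_000_000 / (ratio_w * ratio_h))
    return round(ratio_w * unit), round(ratio_h * unit)

# An exact 5 MP 4:3 frame would be about 2582x1936; the common
# 2560x1920 rounds that to standard dimensions (about 4.9 MP).
print(target_dims(5, 4, 3))
print(2560 * 1920)   # 4915200
```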
August 5, 2011
Flickr introduces long photos
Flickr has opened up for uploading videos, or, rather, what they call “long photos”. As such, they are not trying to compete with YouTube or Vimeo, but rather making it possible to upload videos that are closer to a photograph than a movie (i.e. with a narrative). I like this approach, and it resonates with how I often record a video as if it were a photograph.
The difference between what I could call a photo video and a movie video can be seen as analogous to the difference between music composition/production and soundscaping.
November 28, 2004
Working with time and space in images: Chessboard Studies
I have been taking photographs for many years, and I wanted to see how I could develop this a little further. On a round-trip from Japan to Tanzania, as a participant in the 16th Ship for World Youth Programme organised by the Japanese government, I decided to work with my camera in the same way as I would think about improvising with a musical instrument. This was also inspired by David Crawford’s Stop Motion Studies, where he captures the moment by projecting a series of still photos, shot right after each other, in a random sequence.
Tag: motiongram
May 20, 2023
The effect of skipping frames for video visualization
I have been exploring different video visualizations as part of my annual stillstanding project. Some of these I post as part of my daily Mastodon updates, while others I only test for future publications.
Most of the video visualizations and analyses are made with the Musical Gestures Toolbox for Python and structured as Jupyter Notebooks. I have been pondering whether skipping frames is a good idea. The 360-degree videos that I create visualizations from are shot at 25 fps.
May 10, 2023
Visualization of Musique de Table
Musique de Table is a wonderful piece written by Thierry de Mey. I have seen it performed live several times, and here I came across a one-shot video recording that I thought would be interesting to analyse:
I tested it with some video visualization tools in the Musical Gestures Toolbox for Python.
To run the commands below, you first need to import the toolbox in Python:
import musicalgestures as mg
I started the process by importing the source video:
December 17, 2021
Flamenco video analysis
I continue my testing of the new Musical Gestures Toolbox for Python. One thing is to use the toolbox on controlled recordings with stationary cameras and non-moving backgrounds (see examples of visualizations of AIST videos). But it is also interesting to explore “real world” videos (such as the Bergensbanen train journey).
I came across a great video of flamenco dancer Selene Muñoz, and wondered how I could visualize what is going on there:
December 15, 2021
Kayaking motion analysis
Like many others, I bought a kayak during the pandemic, and I have had many nice trips in the Oslo fjord over the last year. Working at RITMO, I think a lot about rhythm these days, and the rhythmic nature of kayaking made me curious to investigate the pattern a little more.
Capturing kayaking motion
My spontaneous investigations into kayak motion began with simply recording a short video of myself kayaking.
February 4, 2021
Visualising a Bach prelude played on Boomwhackers
I came across a fantastic performance of a Bach prelude played on Boomwhackers by Les Objets Volants.
https://www.youtube.com/watch?v=Y5seI0eJZCg
It is really incredible how they manage to coordinate the sticks and make it into a beautiful performance. Given my interest in the visual aspects of music performance, I reached for the Musical Gestures Toolbox to create some video visualisations.
I started with creating an average image of the video:
This image is not particularly interesting.
January 28, 2021
Analyzing a double stroke drum roll
Yesterday, PhD fellow Mojtaba Karbassi presented his research on impedance control in robotic drumming at RITMO. I will surely get back to discussing more of his research later. Today, I wanted to share the analysis of one of the videos he showed. Mojtaba is working on developing a robot that can play a double stroke drum roll. To explain what this is, he showed this video he had found online, made by John Wooton:
September 8, 2020
Motiongrams of rhythmic chimpanzee swaying
I came across a very interesting study on the Rhythmic swaying induced by sound in chimpanzees. The authors have shared the videos recorded in the study (Open Research is great!), so I was eager to try out some analyses with the Musical Gestures Toolbox for Matlab.
Here is an example of one of the videos from the collection:
The video quality is not very good, so I had my doubts about what I could find.
February 21, 2020
Visualizing some videos from the AIST Dance Video Database
Researchers from AIST have released an open database of dance videos, and I got very excited to try out some visualization methods on some of the files. This was also a good chance to test out some new functionality in the Musical Gestures Toolbox for Matlab that we are developing at RITMO. The AIST collection contains a number of videos. I selected one hip-hop dance video based on a very steady rhythmic pattern, and a contemporary dance video that is more fluid in both motion and music.
January 24, 2020
Motiongram of high-speed violin bowing
I came across a high-speed recording of bowing on a violin string today, and thought it would be interesting to try to analyze it with the new version of the Musical Gestures Toolbox for Python. This is inspired by results from the creation of motiongrams of a high-speed guitar recording that I did some years ago.
Here is the original video:
From this I generated the following motion video:
And from this we get the following motiongram showing the vertical motion of the string (time running from left to right):
August 1, 2013
New publication: Non-Realtime Sonification of Motiongrams
Today I will present the paper Non-Realtime Sonification of Motiongrams at the Sound and Music Computing Conference (SMC) in Stockholm. The paper is based on a new implementation of my sonomotiongram technique, optimised for non-realtime use. I presented a realtime version of the sonomotiongram technique at ACHI 2012 and a Kinect version, the Kinectofon, at NIME earlier this year. The new paper presents the ImageSonifyer application and a collection of videos showing how it works.
May 28, 2013
Kinectofon: Performing with shapes in planes
Yesterday, Ståle presented a paper on mocap filtering at the NIME conference in Daejeon. Today I presented a demo on using Kinect images as input to my sonomotiongram technique.
Title
Kinectofon: Performing with shapes in planes
Links
- Paper (PDF)
- Poster (PDF)
- Software
- Videos (coming soon)
Abstract
The paper presents the Kinectofon, an instrument for creating sounds through free-hand interaction in a 3D space. The instrument is based on the RGB and depth image streams retrieved from a Microsoft Kinect sensor device.
April 6, 2013
ImageSonifyer
Earlier this year, before I started as head of department, I was working on a non-realtime implementation of my sonomotiongram technique (a sonomotiongram is a sonic display of motion from a video recording, created by sonifying a motiongram). Now I finally found some time to wrap it up and make it available as an OSX application called ImageSonifyer. The Max patch is also available, for those that want to look at what is going on.
February 21, 2013
Are you jumping or bouncing?
One of the most satisfying things about being a researcher is to see that ideas, theories, methods, software, and other things you come up with are useful to others. Today I received the master’s thesis of Per Erik Walslag, titled Are you jumping or bouncing? A case-study of jumping and bouncing in classical ballet using the motiongram computer program, in which he has made excellent use of my motiongram technique and my VideoAnalysis software.
January 14, 2013
New publication: Some video abstraction techniques for displaying body movement in analysis and performance
Today the MIT Press journal Leonardo has published my paper entitled “Some video abstraction techniques for displaying body movement in analysis and performance”. The paper is a summary of my work on different types of visualisation techniques of music-related body motion. Most of these techniques were developed during my PhD, but have been refined over the course of my post-doc fellowship.
The paper is available from the Leonardo web page (or MUSE), and will also be posted in the digital archive at UiO after the 6 month embargo period.
December 13, 2012
Performing with the Norwegian Noise Orchestra
Yesterday, I performed with the Norwegian Noise Orchestra at Betong in Oslo, at a concert organised by Dans for Voksne. The orchestra is an ad-hoc group of noisy improvisers, and I immediately felt at home. The performance lasted for 12 hours, from noon to midnight, and I performed for two hours in the afternoon.
For the performance I used my Soniperforma patch based on the sonifyer technique and the Jamoma module I developed a couple of years ago (jmod.
August 13, 2012
Hi-speed guitar recording
I was in Hamburg last week, teaching at the International Summer School in Systematic Musicology (ISSSM). While there, I was able to test a newly acquired high-speed video camera (Phantom V711) at the Department of Musicology.
The beautiful building of the Department of Musicology in Hamburg.
They have some really cool drawings in the ceiling at the entrance of the Department of Musicology in Hamburg.
July 12, 2012
Paper #1 at SMC 2012: Evaluation of motiongrams
Today I presented the paper Evaluating how different video features influence the visual quality of resultant motiongrams at the Sound and Music Computing conference in Copenhagen.
Abstract
Motiongrams are visual representations of human motion, generated from regular video recordings. This paper evaluates how different video features may influence the generated motiongram: inversion, colour, filtering, background, lighting, clothing, video size and compression. It is argued that the proposed motiongram implementation is capable of visualising the main motion features even with quite drastic changes in all of the above mentioned variables.
June 25, 2012
Record videos of sonification
I got a question the other day about how it is possible to record a sonified video file based on my sonification module for Jamoma for Max. I wrote about my first experiments with the sonifyer module here, and also published a paper at this year’s ACHI conference about the technique.
It is quite straightforward to record a video file with the original video + audio using the jit.vcr object in Max.
February 3, 2012
Sonification of motiongrams
A couple of days ago I presented the paper “Motion-sound Interaction Using Sonification based on Motiongrams” at the ACHI 2012 conference in Valencia, Spain. The paper is actually based on a Jamoma module that I developed more than a year ago, but due to other activities it took a while before I managed to write it up as a paper.
See below for the full paper and video examples.
The Paper
Download paper (PDF 2MB)
Abstract: The paper presents a method for sonification of human body motion based on motiongrams.
July 13, 2011
Difference between videogram and motiongram
For some upcoming blog posts on videograms, I will start by explaining the difference between a motiongram and a videogram. Both are temporal (image) representations of video content (as explained here), and are produced almost in the same way. The difference is that videograms start with the regular video image, and motiongrams start with a motion image.
So for a video of my hand like this:
we will get this horizontal videogram:
November 9, 2010
Sonification of motiongrams
I have made a new Jamoma module for sonification of motiongrams called jmod.sonifyer~. From a live video input, the program generates a motion image which is again transformed into a motiongram. This is then used as the source of the sound synthesis, and “read” as a spectrogram. The result is a sonification of the original motion, plus the visualisation in the motiongram.
See the demonstration video below:
The module is available from the Jamoma source repository, and will probably make it into an official release at some point.
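The core idea of “reading” the motiongram as a spectrogram can be sketched offline in a few lines of NumPy (my own illustrative reconstruction, not the Jamoma implementation): each motiongram column is treated as a magnitude spectrum and turned into a windowed block of audio with an inverse FFT.

```python
import numpy as np

def sonify_motiongram(motiongram: np.ndarray) -> np.ndarray:
    """Read a motiongram (height x time) as if it were a magnitude
    spectrogram: every column becomes one inverse-FFT audio frame,
    overlap-added with a 50% hop. Phase is simply set to zero; a
    real sonifyer would do something smarter here."""
    height, n_cols = motiongram.shape
    frame_len = 2 * (height - 1)      # irfft output length per column
    hop = frame_len // 2
    out = np.zeros(n_cols * hop + frame_len)
    window = np.hanning(frame_len)
    for i in range(n_cols):
        frame = np.fft.irfft(motiongram[:, i])   # zero-phase synthesis
        out[i * hop : i * hop + frame_len] += window * frame
    return out

# A toy motiongram: 129 pixel rows (-> 256-sample frames), 20 columns
toy = np.random.default_rng(1).random((129, 20))
audio = sonify_motiongram(toy)
print(audio.shape)   # (20 * 128 + 256,) = (2816,)
```

Here the row index maps directly to frequency bin; in practice one would flip the image vertically so that the top of the frame maps to high frequencies, as in a spectrogram.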
July 2, 2010
New motiongram features
Inspired by the work Static no. 12 by Daniel Crooks that I watched at the Sydney Biennale a couple of weeks ago, I have added the option of scanning a single column in the jmod.motiongram% module in Jamoma. Here is a video that shows how this works in practice:
About motiongrams
A motiongram is a way of displaying motion (e.g. human motion) in the time-domain, somewhat similar to how we are used to working with time-representations of audio (e.
August 14, 2009
Presenting mocapgrams
Earlier today I held the presentation “Reduced Displays of Multidimensional Motion Capture Data Sets of Musical Performance” at the ESCOM conference in Jyväskylä, Finland. The presentation included an overview of different approaches to visualization of music-related movement, and also our most recent method: mocapgrams.
While motiongrams are reduced displays created from video files, mocapgrams are intended to work in a similar way, but created from motion capture data. They are conceptually similar, but otherwise quite different in the way they are generated.
August 26, 2008
Open lab
We have slowly been moving into our new lab spaces over the last weeks. The official opening of the labs is scheduled for Friday 26 September, but we had a pre-opening “Open lab” for the new music students last week, and here are some of the pictures shot by Anne Cathrine Wesnes during the presentation.
Here I am telling the students a little about our new research group, and showing the main room:
June 17, 2008
AudioVideoAnalysis
To allow everyone to watch their own synchronised spectrograms and motiongrams, I have made a small application called AudioVideoAnalysis.
Download AudioVideoAnalysis for OS X (8MB)
It currently has the following features:
- Draws a spectrogram from any connected microphone
- Draws a motiongram/videogram from any connected camera
- Press the escape button to toggle fullscreen mode
Built with Max/MSP by Cycling ‘74 on OS X 10.5. I will probably make a Windows version at some point, but haven’t gotten that far yet.
May 15, 2008
Sonification of Traveling Landscapes
I just heard a talk called “Real-Time Synaesthetic Sonification of Traveling Landscapes” (PDF) by Tim Pohle and Peter Knees from the Department of Computational Perception (great name!) in Linz. They have made an application that creates music from a moving video camera. The implementation is based on grabbing a one-pixel-wide column from the video, plotting these columns, and sonifying the image. Interestingly enough, the images they get out of this (see below) are very close to the motiongrams and videograms I have been working on.
November 1, 2006
Motiongrams
Challenge
Traditional keyframe displays of videos are not particularly useful when studying single-shot studio recordings of music-related movements, since they mainly show static postural information and no motion.
Using motion images of various kinds helps in visualizing what is going on in the image. Below can be seen (from left): motion image, with noise reduction, with edge detection, with “trails” and added to the original image.
Making Motiongrams We are used to visualizing audio with spectrograms, and have been exploring different techniques for visualizing music-related movements in a similar manner.
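A motion image is essentially the absolute difference between successive frames, and a motiongram collapses such images over time. Here is a minimal sketch of both steps with NumPy, using synthetic greyscale frames and a simple threshold for noise reduction (function names are illustrative, not the actual Musical Gestures Toolbox API):

```python
import numpy as np

def motion_image(prev, curr, threshold=10):
    """Absolute difference between two greyscale frames, with simple
    threshold-based noise reduction: small differences are set to zero."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)
    diff[diff < threshold] = 0
    return diff

def motiongram_column(motion):
    """Collapse a motion image horizontally: one value per row, summarising
    how much motion happened at that vertical position."""
    return motion.mean(axis=1)

# Synthetic example: a bright 2x2 block moves one pixel to the right.
prev = np.zeros((6, 6), dtype=np.uint8); prev[2:4, 1:3] = 200
curr = np.zeros((6, 6), dtype=np.uint8); curr[2:4, 2:4] = 200
m = motion_image(prev, curr)
col = motiongram_column(m)
```

Stacking one such collapsed column per frame pair, side by side, yields a motiongram: a time-vs-position image of motion, analogous to a spectrogram of sound.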
Tag: music
May 10, 2023
Visualization of Musique de Table
Musique de Table is a wonderful piece written by Thierry de Mey. I have seen it performed live several times, and here I came across a one-shot video recording that I thought would be interesting to analyse:
I tested it with some video visualization tools in the Musical Gestures Toolbox for Python.
For running the commands below, you first need to import the toolbox in Python:
import musicalgestures as mg
I started the process by importing the source video:
December 13, 2022
New Book: Sound Actions - Conceptualizing Musical Instruments
I am happy to announce that my book Sound Actions - Conceptualizing Musical Instruments is now published! I am also thrilled that this is an open access book, meaning that it is free to download and read. You are, of course, also welcome to pick up a paper copy!
Here is a quick video summary of the book’s content:
In the book, I combine perspectives from embodied music cognition and interactive music technology.
January 7, 2022
New online course: Motion Capture
After two years in the making, I am happy to finally introduce our new online course: Motion Capture: The art of studying human activity.
The course will run on the FutureLearn platform and is for everyone interested in the art of studying human movement. It has been developed by a team of RITMO researchers in close collaboration with the pedagogical team and production staff at LINK – Centre for Learning, Innovation & Academic Development.
January 7, 2022
Try not to headbang challenge
I recently came across a video of the so-called Try not to headbang challenge, where the idea is to, well, not headbang while listening to music. This immediately caught my attention. After all, I have been researching music-related micromotion for several years and have run the Norwegian Championship of Standstill since 2012.
Here is an example of Nath & Johnny trying the challenge:
https://www.youtube.com/watch?v=-I4CBsDT37I
As seen in the video, they are doing OK, although they are far from sitting still.
November 27, 2021
New Book Chapter: Gestures in ensemble performance
I am happy to announce that Cagri Erdem and I have written a chapter titled “Gestures in ensemble performance” in the new book Together in Music: Coordination, Expression, Participation edited by Renee Timmers, Freya Bailes, and Helena Daffern.
Video Teaser: For the book launch, Cagri and I recorded a short video teaser:
https://youtu.be/Fd2kIAeorRk
Abstract: The more formal abstract is:
The topic of gesture has received growing attention among music researchers over recent decades.
November 19, 2021
Rigorous Empirical Evaluation of Sound and Music Computing Research
At the NordicSMC conference last week, I was part of a panel discussing the topic Rigorous Empirical Evaluation of SMC Research. This was the original description of the session:
The goal of this session is to share, discuss, and appraise the topic of evaluation in the context of SMC research and development. Evaluation is a cornerstone of every scientific research domain, but is a complex subject in our context due to the interdisciplinary nature of SMC coupled with the subjectivity involved in assessing creative endeavours.
October 26, 2021
MusicLab Copenhagen
After nearly three years of planning, we can finally welcome people to MusicLab Copenhagen. This is a unique “science concert” involving the Danish String Quartet, one of the world’s leading classical ensembles. Tonight, they will perform pieces by Bach, Beethoven, Schnittke and folk music in a normal concert setting at Musikhuset in Copenhagen. However, the concert is anything but normal.
Live music research: During the concert, about twenty researchers from RITMO and partner institutions will conduct investigations and experiments informed by phenomenology, music psychology, complex systems analysis, and music technology.
September 22, 2021
Can AI replace humans?
Or, more specifically: can AI replace an artist? That is the question posed in a short documentary that I have contributed to for this year’s Research Days.
We were contacted before summer about trying to create a new song based on the catalogue of the Norwegian artist Ary. The idea was to use machine learning to generate the song. This has turned out to be an exciting project.
I was busy finishing the manuscript for my new book, so I wasn’t much involved in the development part myself.
July 1, 2021
Sound and Music Computing at the University of Oslo
This year’s Sound and Music Computing (SMC) Conference has opened for virtual lab tours. When we cannot travel to visit each other, this is a great way to showcase how things look and what we are working on.
Stefano Fasciani and I teamed up a couple of weeks ago to walk around some of the labs and studios at the Department of Musicology and RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion.
June 27, 2021
Running a hybrid conference
There are many ways to run conferences. Here is a summary of how we ran the Rhythm Production and Perception Workshop 2021 at RITMO this week. RPPW is called a workshop, but it is really a full-blown conference. Almost 200 participants enjoyed 100 talks and posters, 2 keynote speeches, and 3 music performances spread across 4 days.
A hybrid format: We started planning RPPW as an on-site event back in 2019.
June 17, 2021
New publication: NIME and the Environment
This week I presented the paper NIME and the Environment: Toward a More Sustainable NIME Practice at the International Conference on New Interfaces for Musical Expression (NIME) in Shanghai/online with Raul Masu, Adam Pultz Melbye, and John Sullivan. Below is our 3-minute video summary of the paper.
And here is the abstract:
This paper addresses environmental issues around NIME research and practice. We discuss the formulation of an environmental statement for the conference as well as the initiation of a NIME Eco Wiki containing information on environmental concerns related to the creation of new musical instruments.
April 26, 2021
Strings On-Line installation
We presented the installation Strings On-Line at NIME 2020. It was supposed to be a physical installation at the conference to be held in Birmingham, UK.
Due to the corona crisis, the conference went online, and we decided to redesign the proposed physical installation into an online installation instead. The installation ran continuously from 21-25 July last year, and hundreds of people “came by” to interact with it.
I finally got around to editing a short (1-minute) video promo of the installation:
March 11, 2021
What is a musical instrument?
A piano is an instrument. So is a violin. But what about the voice? Or a fork? Or a mobile phone? So what is (really) a musical instrument? That was the title of a short lecture I held at UiO’s Open Day today.
The 15-minute lecture is a very quick version of some of the concepts I have been working on for a new book project. Here I present a model for understanding what a musical instrument is and how new technology changes how we make and experience music.
February 4, 2021
Visualising a Bach prelude played on Boomwhackers
I came across a fantastic performance of a Bach prelude played on Boomwhackers by Les Objets Volants.
https://www.youtube.com/watch?v=Y5seI0eJZCg
It is really incredible how they manage to coordinate the sticks and make it into a beautiful performance. Given my interest in the visual aspects of music performance, I reached for the Musical Gestures Toolbox to create some video visualisations.
I started by creating an average image of the video:
This image is not particularly interesting.
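For context, an average image is simply the pixel-wise mean over all frames, which is why it often looks uninteresting for a single-shot video: moving elements blur out and only the static scene remains. A minimal NumPy sketch of the idea (my own illustration, not the actual toolbox implementation):

```python
import numpy as np

def average_image(frames):
    """Pixel-wise mean over all frames: moving elements blur out,
    static elements remain."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f in frames:
        acc += f
    return (acc / len(frames)).astype(np.uint8)

# Synthetic example: a static grey background with one flickering pixel.
frames = [np.full((4, 4), 100, dtype=np.uint8) for _ in range(8)]
for i, f in enumerate(frames):
    f[0, 0] = 255 if i % 2 == 0 else 0
avg = average_image(frames)
```

In the result, the static background keeps its value while the flickering pixel averages out to a mid-grey, which is exactly the blurring effect seen in average images of performances.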
January 22, 2021
New run of Music Moves
I am happy to announce a new run (the 6th) of our free online course Music Moves: Why Does Music Make You Move?. Here is a 1-minute welcome video:
The course starts on Monday (25 January 2021) and will run for six weeks. You will learn about the psychology of music and movement, and how researchers study music-related movements.
We developed the course 5 years ago, but the content is still valid.
November 22, 2020
Music and AI
Last week I was interviewed about music and artificial intelligence (AI). This led to several different stories on radio, TV, and as text. The reason for the sudden media interest in this topic was a story by The Guardian on the use of deep learning for creating music. They featured an example of the creation of Sinatra-inspired music made using a deep learning algorithm:
After these stories were published, I was asked about participating in a talk-show on Friday evening.
April 22, 2020
New publication: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music
After several years of hard work, we are very happy to announce a new publication coming out of the MICRO project that I am leading: Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music (Frontiers Psychology).
This is the first journal article by my PhD student Agata Zelechowska, and it reports on a standstill study conducted a couple of years ago. The study is slightly different from the paradigm we have used for the Championships of Standstill.
March 22, 2020
Method chapter freely available
I am a big supporter of Open Access publishing, but for various reasons some of my publications are not openly available by default. This is the case for the chapter Methods for Studying Music-Related Body Motion that I have contributed to the Springer Handbook of Systematic Musicology.
I am very happy to announce that the embargo on the book ran out today, which means that a pre-print version of my chapter is finally freely available in UiO’s digital repository.
June 6, 2019
NIME publication and performance: Vrengt
My PhD student Cagri Erdem developed a performance together with dancer Katja Henriksen Schia. The piece was first performed together with Qichao Lan and myself during the RITMO opening and also during MusicLab vol. 3. See here for a teaser of the performance:
This week, Cagri, Katja, and I performed a version of the piece Vrengt at NIME in Porto Alegre.
We also presented a paper describing the development of the instrument/piece:
August 7, 2018
New article: Correspondences Between Music and Involuntary Human Micromotion During Standstill
I am happy to announce a new journal article coming out of the MICRO project:
Victor E. Gonzalez-Sanchez, Agata Zelechowska and Alexander Refsum Jensenius
Correspondences Between Music and Involuntary Human Micromotion During Standstill
Front. Psychol., 07 August 2018 | https://doi.org/10.3389/fpsyg.2018.01382
Abstract: The relationships between human body motion and music have been the focus of several studies characterizing the correspondence between voluntary motion and various sound features. The study of involuntary movement to music, however, is still scarce.
March 12, 2018
Nordic Sound and Music Computing Network up and running
I am super excited about our new Nordic Sound and Music Computing Network, which has just started up with funding from the Nordic Research Council.
This network brings together a group of internationally leading sound and music computing researchers from institutions in five Nordic countries: Aalborg University, Aalto University, KTH Royal Institute of Technology, University of Iceland, and University of Oslo. The network covers the field of sound and music from the “soft” to the “hard,” including the arts and humanities, and the social and natural sciences, as well as engineering, and involves a high level of technological competency.
December 13, 2017
Come study with me! New master's programme: Music, Communication and Technology
It has been fairly quiet here on the blog recently. One reason is that I am spending quite some time setting up the new Music, Communication and Technology master’s programme. This is an exciting collaborative project with our colleagues at NTNU. The whole thing is focused on network-based communication, and the students will use, learn about, develop, and evaluate technologies for musical communication between the two campuses in Oslo and Trondheim.
December 13, 2017
Come work with me! Lots of new positions at University of Oslo
I recently mentioned that I have been busy setting up the new MCT master’s programme. But I have been even busier preparing the startup of our new Centre of Excellence, RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This is a large undertaking and a collaboration between researchers from musicology, psychology and informatics. A visual “abstract” of the centre can be seen in the figure to the right.
October 9, 2017
And we're off: RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion
I am happy to announce that RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion officially started last week. This is a new centre of excellence funded by the Research Council of Norway.
Even though we have formally taken off, this mainly means that the management group has started to work. Establishing a centre with 50-60 researchers is not done in a few days, so we will more or less spend the coming year getting up to speed.
September 11, 2017
Sverm-Resonans - Installation at Ultima Contemporary Music Festival
I am happy to announce the opening of our new interactive art installation at the Ultima Contemporary Music Festival 2017: Sverm-resonans.
Time and place: Sep. 12, 2017 12:30 PM - Sep. 14, 2017 3:30 PM, Sentralen
Conceptual information: The installation is as much haptic as audible.
An installation that gives you access to heightened sensations of stillness, sound and vibration.
Stand still. Listen. Locate the sound. Move. Stand still. Listen. Hear the tension.
July 20, 2017
SMC paper based on data from the first Norwegian Championship of Standstill
We have carried out three editions of the Norwegian Championship of Standstill over the years, but only with the new resources in the MICRO project have we finally been able to properly analyze all the data. The first publication coming out of the (growing) data set was published at SMC this year:
Reference: Jensenius, Alexander Refsum; Zelechowska, Agata & Gonzalez Sanchez, Victor Evaristo (2017). The Musical Influence on People’s Micromotion when Standing Still in Groups, In Tapio Lokki; Jukka Pa?
June 22, 2017
New Master's Programme: Music, Communication & Technology
We are happy to announce that “Music, Communication & Technology” will be the very first joint degree between NTNU and UiO, the two biggest universities in Norway. The programme is now approved by the UiO board and will soon be approved by the NTNU board.
www.uio.no/mct-master
www.ntnu.edu/studies/mct
This is a different Master’s programme. Music is at the core, but the scope is larger. The students will be educated as technological humanists, with technical, reflective and aesthetic skills.
May 3, 2017
New publication: Sonic Microinteraction in the Air
I am happy to announce a new book chapter based on the artistic-scientific research in the Sverm and MICRO projects.
Citation: Jensenius, A. R. (2017). Sonic Microinteraction in “the Air.” In M. Lesaffre, P.-J. Maes, & M. Leman (Eds.), The Routledge Companion to Embodied Music Interaction (pp. 431–439). New York: Routledge.
Abstract: This chapter looks at some of the principles involved in developing conceptual methods and technological systems concerning sonic microinteraction, a type of interaction with sounds that is generated by bodily motion at a very small scale.
March 16, 2017
New Centre of Excellence: RITMO
I am happy to announce that the Research Council of Norway has awarded funding to establish RITMO Centre of Excellence for Interdisciplinary Studies in Rhythm, Time and Motion. The centre is a collaboration between the Departments of Musicology, Psychology and Informatics at the University of Oslo.
Project summary Rhythm is omnipresent in human life, as we walk, talk, dance and play; as we tell stories about our past; and as we predict the future.
March 10, 2017
New Book: A NIME Reader
I am happy to announce that Springer has now released a book that I have been co-editing with Michael J. Lyons: “A NIME Reader: Fifteen Years of New Interfaces for Musical Expression”. From the book cover:
What is a musical instrument? What are the musical instruments of the future? This anthology presents thirty papers selected from the fifteen-year history of the International Conference on New Interfaces for Musical Expression (NIME).
February 5, 2017
Music Moves on YouTube
We have been running our free online course Music Moves a couple of times on the FutureLearn platform. The course consists of a number of videos, as well as articles, quizzes, etc., all of which help create a great learning experience for the people who take part.
One great thing about the FutureLearn model (similar to Coursera, etc.) is the focus on creating a complete course. There are many benefits to such a model, not least that it creates a virtual student group that interacts in a way somewhat similar to campus students.
February 3, 2017
Starting up the MICRO project
I am super excited about starting up my new project - MICRO - Human Bodily Micromotion in Music Perception and Interaction - these days. Here is a short trailer explaining the main points of the project:
Now I have also been able to recruit two great researchers to join me, postdoctoral researcher Victor Evaristo Gonzalez Sanchez and PhD fellow Agata Zelechowska. Together we will work on human micromotion, how music influences such micromotion, and how we can get towards microinteraction in digital musical instruments.
September 7, 2016
New SMC paper: Optical or Inertial? Evaluation of Two Motion Capture Systems for Studies of Dancing to Electronic Dance Music
My colleague Ragnhild Torvanger Solberg and I presented a paper at the Sound and Music Computing conference in Hamburg last week called: “Optical or Inertial? Evaluation of Two Motion Capture Systems for Studies of Dancing to Electronic Dance Music”.
This is a methodological paper, trying to summarize our experiences with using our Qualisys motion capture system for group dance studies. We have two other papers in the pipeline that describe the actual data from the experiments in question.
July 15, 2016
New paper: NIMEhub: Toward a Repository for Sharing and Archiving Instrument Designs
At NIME we have a large archive of the conference proceedings, but we do not (yet) have a proper repository for instrument designs. For that reason, I took part in a workshop on Monday aimed at laying the groundwork for a new repository:
NIMEhub: Toward a Repository for Sharing and Archiving Instrument Designs [PDF]
This workshop will explore the potential creation of a community database of digital musical instrument (DMI) designs.
July 2, 2016
New paper: Exploring Sound-Motion Similarity in Musical Experience
New paper in Journal of New Music Research:
Exploring Sound-Motion Similarity in Musical Experience (fulltext)
Godøy, Rolf Inge; Song, Min-Ho; Nymoen, Kristian; Haugen, Mari Romarheim & Jensenius, Alexander Refsum
Abstract: People tend to perceive many and also salient similarities between musical sound and body motion in musical experience, as can be seen in countless situations of music performance or listening to music, and as has been documented by a number of studies in the past couple of decades.
March 13, 2016
New project Funding: MICRO!
I am happy to announce that I have received funding from the Norwegian Research Council’s program Young Research Talents for the project MICRO - Human Bodily Micromotion in Music Perception and Interaction. This is a 4-year project, and I will be looking for both a PhD and a postdoctoral fellow to join the team. The call will be out later this year, but please do not hesitate to contact me right away if you are interested.
January 24, 2016
New MOOC: Music Moves
Together with several colleagues, and with great practical and economic support from the University of Oslo, I am happy to announce that we will soon kick off our first free online course (a so-called MOOC) called Music Moves.
Music Moves: Why Does Music Make You Move? Learn about the psychology of music and movement, and how researchers study music-related movements, with this free online course.
[Go to course – starts 1 Feb](https://www.
June 2, 2015
New publication: Microinteraction in Music/Dance Performance
This week I am participating at the NIME conference (New Interfaces for Musical Expression), organised at Louisiana State University, Baton Rouge, LA. I am doing some administrative work as chair of the NIME steering committee, and I was happy to present a paper yesterday:
Title: Microinteraction in Music/Dance Performance
Abstract: This paper presents the scientific-artistic project Sverm, which has focused on the use of micromotion and microsound in artistic practice. Starting from standing still in silence, the artists involved have developed conceptual and experiential knowledge of microactions, microsounds and the possibilities of microinteracting with light and sound.
December 13, 2014
New publication: From experimental music technology to clinical tool
I have written a chapter called From experimental music technology to clinical tool in the newly published anthology Music, Health, Technology and Design, edited by Karette A. Stensæth from the Norwegian Academy of Music. Here is the summary of the book:
This anthology presents a compilation of articles that explore the many intersections of music, health, technology and design. The first and largest part of the book includes articles deriving from the multidisciplinary research project called RHYME (www.
November 5, 2014
My research on national TV
A couple of weeks ago, NRK, the Norwegian broadcasting company, screened a documentary about my research together with the physiotherapists at NTNU in the CIMA project. The short story is that we have developed the tools I first made for the Musical Gestures Toolbox during my PhD into a system with the ambition of detecting signs of cerebral palsy in infants.
The documentary was made for the science program Schrödingers Katt, and I am very happy that they spent so much time on developing the story, filming and editing.
May 1, 2014
New publication: How still is still? exploring human standstill for artistic applications
I am happy to announce a new publication titled How still is still? exploring human standstill for artistic applications (PDF of preprint), published in the International Journal of Arts and Technology. The paper is based on the Sverm project and was written and accepted two years ago. Sometimes academic publishing takes absurdly long, as this case shows, but I am happy that the publication is finally out in the wild.
July 15, 2013
Documentation of the NIME project at Norwegian Academy of Music
From 2007 to 2011 I had a part-time research position at the Norwegian Academy of Music in a project called New Instruments for Musical Exploration, and with the acronym NIME. This project was also the reason why I ended up organising the NIME conference in Oslo in 2011.
The NIME project focused on creating an environment for musical innovation at the Norwegian Academy of Music, through exploring the design of new physical and electronic instruments.
February 20, 2013
New PhD Thesis: Kristian Nymoen
I am happy to announce that fourMs researcher Kristian Nymoen has successfully defended his PhD dissertation, and that the dissertation is now available in the DUO archive. I have had the pleasure of co-supervising Kristian’s project, and also to work closely with him on several of the papers included in the dissertation (and a few others).
Reference: K. Nymoen. Methods and Technologies for Analysing Links Between Musical Sound and Body Motion.
February 14, 2013
New Master Thesis: Freestyle Dressage: an equipage riding to music
I am happy to announce that the thesis of one of my master’s students has just been made available in the DUO archive:
Catherine Støver: Freestyle Dressage: an equipage riding to music
Catherine wrote about the importance and influence of music in freestyle dressage. Most of my students work on more music-technological topics, and I can clearly say that supervising Catherine was both fun and a great learning experience for me.
January 17, 2013
NIME 2013 deadline approaching
Here is a little plug for the submission deadline for this year’s NIME conference. I usually don’t write much about deadlines here, but as the current chair of the international steering committee for the conference series, I feel I should do my share in helping to spread the word. The NIME conference is a great place to meet academics, designers, technologists, and artists, all working on creating weird instruments and music.
December 13, 2012
Performing with the Norwegian Noise Orchestra
Yesterday, I performed with the Norwegian Noise Orchestra at Betong in Oslo, at a concert organised by Dans for Voksne. The orchestra is an ad-hoc group of noisy improvisers, and I immediately felt at home. The performance lasted for 12 hours, from noon to midnight, and I performed for two hours in the afternoon.
For the performance I used my Soniperforma patch based on the sonifyer technique and the Jamoma module I developed a couple of years ago (jmod.
September 5, 2012
Teaching in Aldeburgh
I am currently in beautiful Aldeburgh, a small town on the east coast of England, teaching at the Britten-Pears Young Artist Programme together with Rolf Wallin and Tansy Davies. This post is mainly to summarise the things I have been going through, and provide links for various things.
Theoretical stuff: My introductory lectures went through some of the theory of an embodied understanding of the experience of music. One aspect of this theory that I find very relevant for the development of interactive works is what I call action-sound relationships.
August 16, 2012
Reflections on the roles of instrument builder, composer, performer
One thing that has occurred to me in recent years is how the new international trend of developing music controllers and instruments, most notably seen at the annual NIME conferences, challenges many traditional roles in music. A traditional Western view has been that of a clear separation between instrument constructor, musician and composer. The idea has been that the constructor makes the instrument, the composer makes the score, the performer plays the score with the instrument, and the perceiver experiences the performance, as illustrated in the figure below.
February 3, 2012
Sonification of motiongrams
A couple of days ago I presented the paper “Motion-sound Interaction Using Sonification based on Motiongrams” at the ACHI 2012 conference in Valencia, Spain. The paper is actually based on a Jamoma module that I developed more than a year ago, but due to other activities it took a while before I managed to write it up as a paper.
See below for the full paper and video examples.
The Paper: Download paper (PDF 2MB)
Abstract: The paper presents a method for sonification of human body motion based on motiongrams.
March 28, 2011
Concert: Victoria Johnson
Together with Victoria Johnson, I have developed Transformation, a piece in which we use video analysis to control sound selection and spatialisation. We have been developing the setup and piece over the last couple of years, and have performed variations of the piece at MIC, the Opera house and at the music academy last year.
The piece will be performed again today, Monday 28 March 2011 at 19:00 at the Norwegian Academy of Music.
October 25, 2010
Music is not only sound
After working with music-related movements for some years, and thereby arguing that movement is an integral part of music, I tend to react when people use “music” as a synonym for either “score” or “sound”.
I certainly agree that sound is an important part of music, and that scores (if they exist) are related to both musical sound and music in general. But I do not agree that music is sound.
August 6, 2009
Book manuscript ready
Over the last year I have been working on a textbook based on my dissertation. It started out as a translation of my dissertation into Norwegian, but I quickly realized that an educational text would be much more useful. So in practice I have written a totally new book, although it draws on research from my dissertation. The title of the book is Musikk og bevegelse (Music and Movement), and that is exactly what it is about.
June 6, 2008
Virtual slide guitar
Jyri Pakarinen just presented a paper on the Virtual Slide Guitar (VSG) here at NIME in Genova.
They used a commercial 6DOF head-tracking solution from NaturalPoint called TrackIR 4 Pro. The manufacturer promises:
Experience real time 3D view control in video games and simulations just by moving your head! The only true 6DOF head tracking system of its kind. TrackIR takes your PC gaming to astonishing new levels of realism and immersion!
May 28, 2008
MT9 format
It seems like the new MT9 format, or Music 2.0 as the company Audizen calls it, is all over the news these days. The idea is simple and has been explored for years in the research community: distribute multichannel audio so that the end user has control over the individual tracks. The problem, of course, is making this into a standard, and I see many challenges in how it could be implemented:
May 8, 2008
OLPC Sound Samples
I am doing some “house-cleaning” on my computer, and came across the link to the OLPC Sound Samples which were announced last month. This collection covers a lot of different sounds, ranging from the Berklee samples to sets created by people in the CSound community. Obviously, not all the 10GB is equally interesting, but the initiative is excellent, and along with the Freesound project, it makes a great resource for various projects.
April 1, 2008
David Huron: Listening Styles and Listening Strategies
In a presentation at the Society for Music Theory Conference in 2002, David Huron proposed 21 listening modes:
- Distracted listening
- Tangential listening
- Metaphysical listening
- Signal listening
- Sing-along listening
- Lyric listening
- Programmatic listening
- Allusive listening
- Reminiscent listening
- Identity listening
- Retentive listening
- Fault listening
- Feature listening
- Innovation listening
- Memory scan listening
- Directed listening
- Distance listening
- Ecstatic listening
- Emotional listening
- Kinesthetic listening
- Performance listening
And he concludes: “This list is not intended to be exhaustive.
February 15, 2008
Recordings in Casa Paganini
The EyesWeb Week takes place in the facilities of the DIST group in the beautiful Casa Paganini, including a large auditorium next to the laboratories. This allows for an ecological setting for experiments, since performers can actually perform on a real stage with a real audience. I wish we could have something like this in Oslo!
Here is a picture from an experimental setup in which we are looking at the synchronisation between the musicians in a string trio.
February 14, 2008
Emotional music examples
The Peretz group has made available a set of musical excerpts with emotion ratings. Perhaps not the most exciting musical collection, but I think it is very important that the community starts building data sets that can be used as references for various types of analyses.
We really need to create a set of music recordings including motion capture and video, but this first requires that we develop some common format that can be used for synchronisation and sharing.
February 14, 2008
Syncing Movement and Audio using a VST-plugin
I just heard Esteban Maestre from UPF present his project on creating a database of instrumental actions of bowed instruments, for use in the synthesis of score-based material. They have come up with a very interesting solution to the recording and synchronisation of audio with movement data: Building a VST plugin which implements recording of motion capture data from a Polhemus Liberty, together with bow sensing through an Arduino. This makes it possible to load the VST-plugin inside regular audio sequencing software and do the recording from there.
November 8, 2007
Musical vs. Music-related
Working on a book chapter, I am trying to clarify some terminology. Right now I am thinking about the differences between “musical” and “music-related” movements/actions/gestures. What is the difference? I find that it makes sense to think about whether the action is direct or indirect. In other words:
- Musical actions: actions involved in music making, e.g. performing an instrument (i.e. sound-producing actions).
- Music-related actions: actions that are the result of, or influenced by, music, e.
October 23, 2007
Music Performance Research
I heard about the initiative last year at Music & Gesture 2 in Manchester, and now I see that the new online journal Music Performance Research is actually up and running.
Music Performance Research is an international peer-reviewed journal that disseminates theoretical and empirical research on the performance of music. Its purpose is to disseminate research on the nature of music performance from both theoretical and empirical perspectives. The journal publishes contributions from all disciplines that are relevant to music performance, including archaeology, cultural studies, composition, computer science, education, ethnomusicology, history, medicine, music theory and analysis, musicology, philosophy, physics, psychology, neuroscience and sociology.
October 10, 2007
Debut of the NMH Laptop Orchestra
As part of the Ultima festival and the opening of this year’s Musikkteknologidagene, Kjell Tore Innervik and I organised the debut of the NMH Laptop Orchestra. Inspired by PLORK, we lined up with laptops and performed two pieces by Alan Tormey and Ge Wang. This was an immediate success, and we hope to establish this as a permanent ensemble from now.
September 28, 2007
Towards Active Music... (or not)
I am doing some background research for a paper on “active music” and have been testing various audio software over the last few days. I was very excited about testing GarageBand ’08, since Apple has been shouting loudly about its new “magic” features. I have to say that I had some expectations that we would actually see some novel features here, especially since they promise a “hand-picked” band on a virtual stage.
June 12, 2007
Keyframe
Henrik Marstrander will present his master’s thesis project tomorrow. This is an interesting visual table for controlling musical sound.
Details: Wednesday 13 June at 12:30, in the room on the left side of the hallway on the way to Salen.
May 15, 2007
Journal of interdisciplinary music studies
There is a new music journal out titled Journal of interdisciplinary music studies, which seems to be freely available online. I was particularly pleased to read Richard Parncutt’s opening paper on the history and future of systematic musicology. While it has been overshadowed (and to some extent suppressed) by historical musicology for the last decade, there seems to be a growing interest in systematic musicology today.
However, as Parncutt argues, much of this research is carried out under other names and in other departments, e.
March 19, 2007
Active Music
Tod Machover’s article Shaping Minds Musically is an interesting read, summarising much of the work on hyperinstruments at the MIT Media Lab over the last ten years. The main point he is trying to make is that music should be active rather than passive. This comes from the observation that most people engage with music as listeners rather than as makers.
There is more music than ever in the air, but fewer of us actually play music, sing music, or create our own music.
March 15, 2007
ISSSM 2007
Students in musicology, music cognition and technology should consider ISSSM 2007:
Following on the success of the first international summer school in systematic musicology (ISSSM 2006), the summer school will be held for the second time at IPEM, the research centre of the Department of Musicology of Ghent University (Belgium). This year courses will focus on current topics in the research field such as embodied music cognition, music information retrieval and music and interactive media.
March 8, 2007
Open Form Workshop
In between everything else, I will be participating in the Open Form Workshop at the Music Academy this weekend. Christian Wolff, the last living member of the “New York composers”, is visiting Oslo, and we will be working with him during the workshop.
I have only had time to participate in some of the rehearsals so far, and it is very interesting. The pieces range from being very strict to very open, leaving most things up to the performers.
February 20, 2007
Recording Hoax
Craig Sapp (formerly at CCARH now at CHARM) writes:
I have been analyzing the performances of Chopin Mazurkas and have been noticing an unusual occurrence: the performances of the same two pianists always matched whenever I do an analysis for a particular mazurka. In fact, they matched as well as two different re-releases of the same original recording.
The full story about how the tracks have been slightly time-stretched, panned and EQed before being rereleased is covered in a recent story in Gramophone.
February 17, 2007
Bob Ludwig on Surround Mixing
I went to a talk on surround mixing (5.1) last night by Bob Ludwig of Gateway Mastering. He spent a lot of time talking about gear and the technicalities of mastering, and also discussed the different stages in mastering for various formats (SACD, DVD-Audio, etc.). An interesting thing he commented on is that when Dolby Digital is downmixed to stereo in consumer gear, the LFE channel is left out. So he advised to use the LFE (.
February 17, 2007
Movement, action, gesture
Ever since I started my PhD project I have been struggling with the word gesture. Now as I am working on a theory chapter for my dissertation, I have had to really try and decide on some terminology, and this is my current approach:
I use movement as the general term to describe the act of changing physical position of body parts related to music performance or perception. Action is used to denote goal-directed movements that form a separate unit.
February 17, 2007
Trond Lossius' fellowship report
I spent my flight to Montreal (which became much longer than I expected when I was rescheduled through Chicago) reading Trond Lossius’ report for the Fellowship in the arts program. He addresses a number of interesting topics:
Commenting on the necessity for carrying out research for instead of on art, he discusses the concept of “art as code”:
It is not only a question of developing tools. [..] Programming code becomes a meta-medium, and creating the program is creating the art work.
February 12, 2007
Brad Garton
I came across Brad Garton’s blog via Tim. It starts:
Last week I was diagnosed with multiple myeloma, a fairly bad cancer of the bone marrow. The good news is that I am relatively young to be diagnosed with this disease and it seems that it was detected early. The bad news is that, well, it’s a ‘bad’ cancer to have. I think I’m about to embark on yet another life adventure.
February 8, 2007
MSc in Music Tech at Georgia Tech
Georgia Tech has been hiring a young and interesting music tech faculty over the last few years, and now they are starting a Master of Science program in music tech with a focus on the design and development of novel enabling music technologies. This is yet another truly interdisciplinary music tech program to appear over the last couple of years, accepting students from a number of different backgrounds, including music, computing, and engineering.
February 8, 2007
Windows Vista soundscape
I wrote this blog entry several months ago but never posted it because I thought I would have time to go back and evaluate the sounds more. Since I don’t see that happening any time before I finish my dissertation, I will just go ahead and post it now:
Microsoft has posted some info and examples of the Vista soundscape. The sounds are designed by Robert Fripp and will soon be among the most well-known sounds on the planet.
January 12, 2007
Vibrating Plates
Derek Kverno and Jim Nolen have studied the vibration of circular, square, and rectangular plates with free edges, and have posted some very nice images of radiation patterns of vibrating plates.
January 11, 2007
Music for One Apartment and Six Drummers
A charming little Swedish Stomp-inspired video:
January 5, 2007
Visual Acoustics
Christian Frisson pointed me to Visual Acoustics, a wonderful little web-based music improvisation tool. Very simple and elegant.
December 31, 2006
Noise
If you ever wanted some nice, pink noise in the background while working on your computer, Noise is the tool! Apparently, lots of people use this to try to shut out more distracting sounds. While I would prefer a program doing noise cancelling (which would probably be tricky using the built-in microphone, since it would also detect your own sounds while typing on the keyboard), this actually works ok.
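For the curious, pink noise (roughly equal energy per octave) is easy to approximate in software with the Voss-McCartney trick: sum several white-noise rows, where each row is refreshed half as often as the previous one. This is just a minimal sketch of the idea, not how the Noise app actually works:

```python
import random

def pink_noise(n, rows=8):
    """Voss-McCartney pink noise approximation: sum `rows` random
    values, where row k is refreshed every 2**k samples."""
    values = [random.uniform(-1.0, 1.0) for _ in range(rows)]
    out = []
    for i in range(n):
        for k in range(rows):
            if i % (2 ** k) == 0:  # slower rows refresh less often
                values[k] = random.uniform(-1.0, 1.0)
        out.append(sum(values) / rows)  # average keeps output in [-1, 1]
    return out

samples = pink_noise(1024)
```

The slowly-refreshing rows contribute the low-frequency energy that distinguishes pink noise from plain white noise.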
December 20, 2006
Linear presentations
I have been thinking about what I wrote about improvisation a couple of weeks ago. While preparing for a presentation last week, I was thinking about how linear my presentation software (Apple’s Keynote) is. It is as bad as PowerPoint when it comes to locking you into a linear presentation style. This is fine if you have a clear idea of what you would like to say and in which order, but I often find that I have several sections that could be organized differently depending on the audience, the time constraints, and so on.
December 20, 2006
Movement-Sound Couplings
I am working on the theory chapter of my dissertation, and am trying to pin down some terminology. For a long time I have been using the concept of gesture-sound relationships to denote the intimate links between a physical movement and the resultant sound. However, since I am throwing away gesture for now, I also need to reconsider the rest of my vocabulary.
Hodgins (2004) uses the term music-movement structural correspondences, which I find problematic since it places music first.
December 8, 2006
Music troll performance
I performed with the music troll yesterday. It has been resting in the lab for a couple of months and was a bit “rusty” to start up. What caused the biggest headache was getting my performance patches to work on my new MacBook. Last month I found that PeRColate was released as UB, but I hadn’t tested it. First I had problems getting Max to find the externals, which seemed to be because the source folder was still in the path (thanks to mzed for the tip).
December 6, 2006
On Improvisation
Yesterday, someone commented that improvisation is all about being able to play some random stuff, in realtime. My experience is really the opposite. Learning to improvise on a musical instrument is really all about learning scales, phrases, motifs, and getting experienced in putting them together in a structured way. In realtime.
The same is true for improvised presentations and speeches. After holding a number of presentations on my research lately, I have been thinking about how similar the preparation process for a presentation is to a music performance.
November 16, 2006
M-AUDIO - MidAir
M-Audio has released MidAir, a wireless MIDI transmitter and receiver system.
The system is also able to synchronize between several performers.
I just wish that some of these large companies would start to use OSC one day…
November 3, 2006
Tapestrea
TAPESTREA (or taps) is a unified framework for interactively analyzing, transforming and synthesizing complex sounds. Given one or more recordings, it provides well-defined means to:
- identify points of interest in the sound and extract them into reusable templates
- transform sound components independently of the background and/or other events
- continually resynthesize the background texture in a perceptually convincing manner
- controllably place event templates over backgrounds, using a novel graphical user interface and/or scripts written in the ChucK audio programming language
- leverage similarity-based retrieval to locate other interesting sound components

Taps provides a new way to completely transform a sound scene, dynamically generate soundscapes of unlimited length, and compose and design sound by combining elements from different recordings.
November 2, 2006
Audacity 1.3
There’s a new beta of Audacity 1.3 out. Previous versions have been somewhat unstable and lacking features, but now it starts to improve:
- New selection bar and improved selection tools
- Dockable toolbars
- New “Repair” effect and other improved effects
- Auto-save and automatic crash recovery
October 16, 2006
NoMuTe 2006
Just back from the 1st Nordic Music Technology Conference, organized by NTNU in connection with Trondheim MatchMaking, organized by TEKS. This is the follow-up to Musikkteknologidagene, which I organized in Oslo last year as an attempt to gather people working within the field.
Ola Nordahl has posted some nice pictures from the Opening day, where Paul Lansky held a great keynote about his compositions (check out his music page for examples of his work).
September 29, 2006
Norwegian Science Fair
Last weekend we participated (again) with a stand at a big science fair down in the city centre of Oslo during the Norwegian Research Days.
The most interesting thing, and also what I have spent the most time on lately, was a “music troll” I have been making together with Einar Sneve Martinussen and Arve Voldsund. The troll is basically a box with four speakers on the sides and four arms sticking out, each ending in a head with built-in sensors.
August 22, 2006
Soundflower
Soundflower from Cycling ’74, a small freeware utility allowing internal audio routing under OS X, is available in Universal Binary for MacTel computers. Soundflower is similar to Jack, and while the latter has some more advanced features, I find Soundflower easier to use. They are both perfect for recording, for example, streaming audio.
August 18, 2006
Lasse - Hyperactive
Lasse - Hyperactive is a very simple and low-cost videomusic production, but also very powerful and funny.
August 2, 2006
Unhappy Hour
I found (via Trond’s blog) the funny story Unhappy Hour, about a group of people getting stuck with a jukebox playing Brian Eno’s Thursday Afternoon. I bought the DVD not too long ago, and it has become one of my favourites.
Eno writes in the liner notes: These pieces represent a response to what is presently the most interesting challenge of video: how does one make something that can be seen again and again in the way that a record can be listened to repeatedly?
July 17, 2006
New book: New Digital Musical Instruments: Control and Interaction Beyond the Keyboard
Eduardo Miranda and Marcelo M. Wanderley have just released a new book called New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. The chapters are:
- Musical Gestures: Acquisition and Mapping
- Gestural Controllers
- Sensors and Sensor-to-Computer Interfaces
- Biosignal Interfaces
- Toward Intelligent Musical Instruments

So far most publications in this field have been in conference proceedings, so it is great to have a book that can be used in teaching.
July 15, 2006
Electromyography
For some experiments we are conducting on piano playing, I have been looking for a way of measuring muscle activity, or electromyography as it is more properly called:
Electromyography (EMG) is a medical technique for evaluating and recording physiologic properties of muscles at rest and while contracting. EMG is performed using an instrument called an electromyograph, to produce a record called an electromyogram. An electromyograph detects the electrical potential generated by muscle cells when these cells contract, and also when the cells are at rest.
July 11, 2006
BEAM Foundation
In a discussion of “the most complex Max patch”, Barry Threw pointed to the patch used by TrioMetrik, the ensemble of the BEAM Foundation. There’s also a video with shots of musicians and patches.
July 11, 2006
Reactive Sound System
The Reactive Sound System adds sounds to the current soundscape, either to mask sounds such as speech or to make unpleasant sounds more pleasant. They have also developed an acoustic curtain with a microphone and flat speakers that can work with the system.
July 9, 2006
Reverse Engineering Autechre with Max/MSP and Reaktor
Came across a web page with reverse engineered Autechre Max/MSP and Reaktor patches. Interesting.
June 22, 2006
NIME 06 Installations
Still trying to get through all my notes from Resonances… Of the many installations at NIME 06, I found three of them particularly interesting:
Musical Loom by Kingsley Ng was based around an old loom standing in a dark room (or rather a “tent” built between the entrances to the toilets…). It was possible to “play” the loom and sounds and images would appear. The technical setup was built with a combination of infrared cameras and ultrasound sensors, and using EyesWeb for control.
June 21, 2006
Interaction Design
We have started a collaboration between UiO and AHO, and some of the music technology students followed courses with the interaction designers at AHO this spring semester. This was a great success, and I was impressed with what came out of it.
Henrik Marstrander has worked on a table interface where he can control various musical parameters, and Jon Olav Eikenes and Marie Wennesland have made a multi-touch interface modelled after Jeff Han.
June 8, 2006
NIME 06 Concerts
There were lots of concerts at NIME 06, and many interesting things to comment about:
- Ben Neill played Mutantrumpet, a hybrid acoustic and electronic instrument, which was very interesting.
- Circumference Cycles by Chris Strollo and Tina Blaine was very captivating: glass plates suspended by metal wires, with amplification and some effects, sounded great!
- Mari Kimura’s two pieces (Polytopia and Tricot) were great and showed how well electronics can be used together with an acoustic instrument (violin).
May 29, 2006
United States Patent Application: 0060107822
Apple has recently filed an interesting US Patent Application:
The invention generally pertains to a hand-held computing device. More particularly, the invention pertains to a computing device that is capable of controlling the speed of the music so as to affect the mood and behavior of the user during an activity such as exercise. By way of example, the speed of the music can be controlled to match the pace of the activity (synching the speed of the music to the activity of the user) or alternatively it can be controlled to drive the pace of the activity (increasing or decreasing the speed of the music to encourage a greater or lower pace).
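Stripped of the patent language, the synching idea reduces to a tempo ratio: scale the playback rate so the song’s beat lines up with the user’s step rate. A toy illustration of the principle (my own sketch, nothing from the actual filing):

```python
def playback_rate(song_bpm, target_steps_per_min, lo=0.5, hi=2.0):
    """Rate multiplier that matches the song's beat to the user's pace,
    clamped to a plausible time-stretching range."""
    return min(hi, max(lo, target_steps_per_min / song_bpm))

# a 120 BPM track for a runner taking 150 steps per minute
print(playback_rate(120.0, 150.0))  # → 1.25
```

Driving the pace rather than following it would simply mean nudging the target step rate above the user’s current cadence.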
May 27, 2006
Deep Listening Institute, Ltd.
Doug pointed me to Deep Listening:
Deep Listening is a philosophy and practice developed by Pauline Oliveros that distinguishes the difference between the involuntary nature of hearing and the voluntary selective nature of listening. The result of the practice cultivates appreciation of sounds on a heightened level, expanding the potential for connection and interaction with one’s environment, technology and performance with others in music and related arts.
May 22, 2006
Political Eurovision Song Contest
The Eurovision Song Contest (or Melodi Grand Prix, as it is often called here) is a bizarre annual music competition broadcast across the whole of Europe. The music is rarely in focus, and most people tend to “love to hate” the concept. However, for many of the new countries in Europe, the contest is important for showing their own existence and bonding with their allies, as this Norwegian commentator writes.
May 21, 2006
KORE Universal Sound Platform
Native Instruments states that KORE should be the new universal sound platform solving “all problems” in large music software setups. Basically, it works as a generic host for plugins (VST and AU) that can be used in sequencers, and it comes with a hardware controller to facilitate the control.
The argumentation is convincing and the pictures nice, but it seems like this “new” product only scratches the surface of the real problem.
May 21, 2006
USB Guitar
Seems like everything is getting USB connectivity these days. The Samson condenser microphone has been out for a while, and now Behringer is releasing the iAXE393 USB guitar, the “Ultimate Electric Guitar with Built-In USB Port to Connect Straight to Your Computer. Jam and Record with Killer Modeling Amps and Stomp Boxes.” It seems like it only outputs digital audio, though. It would have been interesting if it had had a built-in audio-to-MIDI (or even better, audio-to-OSC) converter.
May 20, 2006
Sonic Visualiser
Sonic Visualiser from Queen Mary’s is yet another software tool for visualizing audio content. However, there are some features that stand out:
- Cross-platform: available for OS X, Linux and Windows
- GPL’ed
- Native support for aiff, wav, mp3 and ogg (but what about AAC?)
- Annotations: support for adding labelled time points and defining segments, point values and curves. The annotations can be overlaid on top of waveforms and spectrograms
- Time-stretching

Vamp Plugins are at the core of the Sonic Visualiser, and it seems like they want this to become a standard for non-realtime audio plugins.
May 19, 2006
int.lib by Oli Larkin
int.lib is a set of abstractions/javascripts for Cycling 74’s Max MSP software that facilitates the control of multiple parameters by navigating a two-dimensional visual environment. It implements a gravitational system, allowing the user to represent presets with variable-sized balls. As the user moves around the space, the size of the balls and their proximity to the mouse cursor affect the weight of each preset in the interpolated output. int.
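As a rough illustration of how such a gravitational scheme can work (this is my own sketch, not int.lib’s actual code), each preset can be weighted by its ball size and its inverse squared distance to the cursor, and the output is the weighted blend of all preset values:

```python
def interpolate_presets(cursor, presets):
    """Blend preset parameter values, weighting each preset by its
    ball size divided by the squared distance to the cursor."""
    weights = []
    for (x, y), size, params in presets:
        d2 = (cursor[0] - x) ** 2 + (cursor[1] - y) ** 2
        weights.append(size / (d2 + 1e-9))  # epsilon avoids division by zero
    total = sum(weights)
    n_params = len(presets[0][2])
    return [sum(w * p[2][i] for w, p in zip(weights, presets)) / total
            for i in range(n_params)]

presets = [
    ((0.0, 0.0), 1.0, [100.0, 0.2]),  # (position, ball size, parameter values)
    ((1.0, 0.0), 1.0, [400.0, 0.8]),
]
```

Moving the cursor onto a ball makes that preset dominate; halfway between two equal-sized balls gives an even blend.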
May 15, 2006
Laser Sound Performance
A memorable show during the Elektrafestival was the Laser Sound Performance by Edwin van der Heide. He used two lasers and (I think) motorized mirrors and filters to create laser patterns on the wall and in the smoke filling the space. The sound was mostly sine tones, sawtooth waves, and various types of noise at an extremely loud level (even with ear plugs). I am not really sure how he did it, but there was a really tight synch between the movement of the lasers and the sounds.
May 13, 2006
Marnix de Nijs, media artist
The installation Spatial Sounds (100dB at 100km/h) by Marnix de Nijs and Edwin van der Heide was set up at Usine-C during the Elektrafestival.
A speaker is mounted on a metallic arm, rotating around at different speeds depending on the people in the room. Ultrasonic sensors detect the distance to people in the space and change the sound being played as well as the speed of rotation (more technical info here).
May 11, 2006
Why do they play so loud?
I often go to concerts, and too often I find the need to use ear plugs because of loud sound levels. I really don’t get it: why is it necessary to play so loud all the time? Usually, lots of people around me agree that the music is unpleasantly loud, and I often see other people using ear plugs.
I have bought expensive ear plugs a couple of times, but I always tended to forget them (eventually losing them…), so now I have just bought lots of really cheap ones so that I can have a pair in every pocket.
May 9, 2006
Frank A. Russo
Came across the web page of Frank A. Russo, and found a very interesting paper on Hearing Aids and Music discussing the auditory design of hearing aids:
Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true.
April 28, 2006
Live images on björk's MEDÚLLA web page
There are some simple gif animations that start playing when you hover over some of the images on björk’s MEDÚLLA web page. Nowadays, with lots of flash graphics everywhere, you rarely see such low-quality gifs anymore. However, for some reason, I really found these small gifs appealing. They remind me of David Crawford’s Stop Motion Studies.
April 27, 2006
Sidney Fels lecture
Just went to a lecture by Sidney Fels from the Human Communication Technologies lab and MAGIC at the University of British Columbia (interestingly enough, located in the Forest Sciences Centre…). He was talking on the topic of intimate control of musical instruments and presented some different projects:
GloveTalkII: “a system that translates hand gestures to speech through an adaptive interface.” Iamascope: a kaleidoscope-like installation where users see themselves on a big screen while controlling a simple sound synthesis.
April 26, 2006
MIDI network on OS X
In a discussion on using OSC to communicate over networks, Darryl just mentioned that OS X (apparently starting from Tiger) can send MIDI messages over a network. I wonder how I have managed to overlook this feature, since it is sitting there as an option right in the Audio MIDI Setup. The help file reads:
You can use the MIDI network driver to send and receive MIDI information between computers over a network.
April 25, 2006
OSC - MIDI address space
My post over at the Open Sound Control forum:
I guess we are all trying to get rid of MIDI, but as long as we have tons of gear around, it would be good to have a generic way of describing MIDI information in OSC. Perhaps I am missing something obvious, but I have looked around and haven’t found any suggestions for a full implementation of MIDI messages as an OSC address space.
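To make the idea concrete, here is a sketch of the kind of mapping I have in mind: unpack the MIDI status byte and route each message type to its own address. The /midi/... address scheme is just my own suggestion, not any existing standard:

```python
def midi_to_osc(status, data1, data2):
    """Map a raw 3-byte MIDI channel message to a hypothetical
    OSC address and argument list."""
    channel = status & 0x0F       # low nibble: channel 0-15
    kind = status & 0xF0          # high nibble: message type
    if kind == 0x90 and data2 > 0:
        return (f"/midi/{channel}/noteon", [data1, data2])
    if kind == 0x80 or (kind == 0x90 and data2 == 0):
        return (f"/midi/{channel}/noteoff", [data1, data2])
    if kind == 0xB0:
        return (f"/midi/{channel}/cc/{data1}", [data2])
    return (f"/midi/{channel}/raw", [status, data1, data2])

print(midi_to_osc(0x90, 60, 100))  # note-on, channel 0, middle C
```

Note that a note-on with velocity 0 is treated as a note-off, as MIDI convention dictates; a full address space would of course also cover pitch bend, program change, aftertouch, and system messages.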
April 24, 2006
Turntable-Controlled Vibrating Chaise Longue
Daito Manabe has developed a Turntable-Controlled Vibrating Chaise Longue, where it is possible to feel 34 sounds played back through a vibrating chaise longue. Lots of pictures of the making process are available on Daito’s web page under works/chair the difference.
April 24, 2006
Visual Scratch
Jesse Kriss has developed Visual Scratch, a realtime visualization of scratch DJ performance, built using Processing, Max/MSP, Ms. Pinky, and MaxLink.
April 23, 2006
Art of Cobra
I went to see the McGill improv ensemble perform Art of Cobra by John Zorn. Usually, I find it more interesting to play free improvisation than listening to it, but this time it was quite entertaining.
Cobra is a rule game, explained by Zorn as “I’m going to hold up some cards and they’re going to play something.” The prompter holds up a cue card, points at the performers, and then they play something.
April 23, 2006
WFS in electronic music
Today I went to a guest lecture by Marije Baalman on Wave Field Synthesis (a spatial sound reproduction principle based on the Huygens principle) over at Concordia. I heard a demonstration of WFS at IRCAM a couple of years back, and it was good to (finally) get a good theoretical introduction to the field.
They are usually testing it with 24 speakers, but they are now going to make a permanent 900 speaker setup at the Technical University in Berlin for creating a surround WFS setup.
April 23, 2006
Yves Guiard and bimanual action
Yves Guiard was supposed to give a lecture at McGill last week but unfortunately could not make it. Reading his web page and looking up some of the references, I found some interesting comments about bimanual control. He writes:
During the nineteen eighties, I spent a lot of time trying to understand the logic of division of labour between the left and the right hands in human movements. I came to believe there is something deeply misleading to the concept of hand dominance, central to established thinking in the field of human laterality.
April 22, 2006
Palindrome
Found some interesting dance/performance examples at the web site of German/American performance company Palindrome. They are also developing the EyeCon video software for interactive performance.
April 21, 2006
LibriVox
LibriVox is a voluntary project set up to record all books in the public domain and make them available, for free, in audio format on the internet. Besides the joy of having audio books, this is also very interesting from a speech/voice research perspective.
Another source for open-source text files is the French Incipit blog. Interestingly enough, I found a French version of Nicholas Cook’s introduction to music!
April 20, 2006
Ball State University Interactive Wireless Sculpture
Ball State University Interactive Wireless Sculpture is an outdoor interactive digital installation interpreting the wireless data infrastructure at Ball State University. Beginning the evening of April 18 and running through April 19, this digital media sculpture, consisting of 4 projection screens, computers, speakers and lights, will broadcast interactive media that reacts to the amount of traffic on the campus’ 15 wireless zones. The sculpture will contain its own wireless access points, sensing local interactions of viewers using wireless devices.
April 19, 2006
monome
The monome 40h is a reconfigurable grid of sixty-four backlit buttons, connecting via USB and communicating in both MIDI and OSC (Create Digital Music Review).
April 19, 2006
Sounds Like Bach
Douglas Hofstadter is discussing music and artificial intelligence:
Back when I was young – when I wrote “Gödel, Escher, Bach” – I asked myself the question “Will a computer program ever write beautiful music?”, and then proceeded to speculate as follows: “There will be no new kinds of beauty turned up for a long time by computer music-composing programs… To think – and I have heard this suggested – that we might soon be able to command a preprogrammed mass-produced mail-order twenty-dollar desk-model ‘music box’ to bring forth from its sterile circuitry pieces which Chopin or Bach might have written had they lived longer is a grotesque and shameful misestimation of the depth of the human spirit.
April 19, 2006
Why Is That Thing Beeping? A Sound Design Primer
Came across a nice introduction to sound design by Max Lord: Why Is That Thing Beeping? A Sound Design Primer - Boxes and Arrows
April 5, 2006
Theater Max
There seems to be a lot of initiatives for making “higher-level” abstractions for working in Max/MSP these days. Now, I just came across a project at UCLA intended mainly for theater productions:
Theater Max is the result of several years of work, lots of trial and error, and far too many hours of programming for us to count. What we now call Theater Max got its start in 2001 with a production of Eugene Ionesco’s Macbett.
April 2, 2006
SPEAR
SPEAR is an application for audio analysis, editing and synthesis. The analysis procedure (which is based on the traditional McAulay-Quatieri technique) attempts to represent a sound with many individual sinusoidal tracks (partials), each corresponding to a single sinusoidal wave with time-varying frequency and amplitude.
It offers some great features, and I particularly like the possibility to easily select single partials and edit them directly. Most controls also work in realtime.
April 2, 2006
VLDCMCaR
Bob L. Sturm at UC Santa Barbara:
VLDCMCaR (pronounced vldcmcar) is a MATLAB application for exploring concatenative audio synthesis using six independent matching criteria. The entire application is encompassed in a graphical user interface (GUI). Using this program a sound or composition can be concatenatively synthesized using audio segments from a corpus database of any size. Mahler can be synthesized using hours of Lawrence Welk; howling monkeys can approximate President Bush’s speech; and a Schoenberg string quartet can be remixed using Anthony Braxton playing alto saxophone.
March 30, 2006
Apple - Sound and Hearing
John Lazarro writes on the Auditory list:
Apple released a software update today for iPods that lets users set a maximum dB level for the device, and lets parents lock down the maximum dB level of their children’s iPod with a combination lock. Apple also put up a website on how to use the feature to limit long-term hearing damage.
March 28, 2006
PLOrk: Princeton Laptop Orchestra
Dan Trueman and Perry Cook at Princeton have set up an undergrad course called PLOrk: Princeton Laptop Orchestra, where they have 15 workstations consisting of Powerbooks, sound cards, sensor interfaces and spherical speakers. The idea is to give students the chance to improvise and experiment with electronic music in a really hands-on way (more info). Great idea! We should try and set up something like that in Oslo.
March 28, 2006
The 5 Rhythms
I recently got to know about the concept of 5 rhythms, and the Norwegian group doing this.
Gabrielle Roth’s The 5 Rhythms are an exhilarating and liberating approach to the exploration of improvised movement and dance that is authentic, inspired and catalytic. The 5 Rhythms (Flowing, Staccato, Chaos, Lyrical, Stillness) are a map which can take you on an ecstatic journey, opening you to the inherent wisdom, creativity and energy of your body.
March 17, 2006
sCrAmBlEd?HaCkZ!
sCrAmBlEd?HaCkZ! is a Realtime-Mind-Music-Video-Re-De-Construction-Machine. It is a conceptual software which makes it possible to work with samples in a completely new way by making them available in a manner that does justice to their nature as concrete musical memories.
February 22, 2006
UBC Max/MSP/Jitter Toolbox
Just came across the UBC Max/MSP/Jitter Toolbox which seems to be quite similar to Jamoma. The UBC Max/MSP/Jitter Toolbox is a collection of modules for creating and processing audio in Max/MSP and manipulating video and 3D graphics using Jitter. I have just briefly tested it, and here are some screenshots from one of the example patches.
February 21, 2006
Olympic Figure Skating
Watching the ladies’ figure skating competition from the Olympics, I am amazed by the total lack of connection between gestures and music. To start with, I am not very impressed by the music accompanying the programmes, most of it being massively layered, romantic orchestral music. The fact that it is also picked up by a microphone in front of a moderate PA system in the skating hall does not make for a good listening experience either.
February 17, 2006
Nord Modular
Clavia has recently released a new version of their software for Nord Modular which now includes the possibility to create new settings based on evolution algorithms. These algorithms were part of the PhD work of my colleague Palle Dahlstedt from Göteborg, and make it possible to create new settings from a set of “parents”. Very interesting stuff! The software is available as a free download for both Windows and OSX, but of course you need to have a Clavia synth to really appreciate this…
February 4, 2006
Access Hidden Files on iPod
I found a way of getting access to the music files on my windows-formatted iPod on a mac over at Ecoustics:
Launch the Terminal.application and type:
find /Volumes/[iPod’sNAME]/iPod_Control/Music -print | awk '{ gsub(/ /, "\\ "); print }'
Substitute the name of your iPod for [iPod’sNAME]. Any spaces should be replaced with underscores (_). This will print a list of all the songs inside the Music folder with \ in place of spaces.
January 23, 2006
Into Great Silence
Film director Philip Groening has made Into Great Silence, a film about a Carthusian monastery where the monks live in complete silence:
Only in complete silence, one starts to hear. Only when language resigns, one starts to see.
About 160 minutes of next to total silence. How can that work in a cinema? How silent can it be? Will sound suddenly burst out, without warning? How dark can it be amid the unlit masses of monks in their sanctuary?
January 16, 2006
Intelligent MIDI Sequencing with Hamster Control
I first came across the Intelligent MIDI Sequencing with Hamster Control project a couple of years ago, and still find it very funny!
January 15, 2006
New Cycling '74 forum
Just found out that Cycling ‘74 has released a brand new forum. Looks very promising, and it is nice that everything is available as RSS feeds.
January 14, 2006
Digital thoughts by Paul Lansky
I came across the piece Notjustmoreidlechatter by composer Paul Lansky, showcasing a fascinating use of voice for creating musical rhythm and texture. And then I found the article Digital thoughts where he explains some of his compositional ideas throughout the years.
January 12, 2006
Demonstrations of Auditory Illusions
I came across a nice site with demonstrations of auditory illusions. There is also Diana Deutsch’s page.
December 30, 2005
Quintet.net
Georg Hajdu has just released a new version of his Quintet.net performance system.
“Quintet.net is an interactive network performance environment invented and developed by composer and computer musician Georg Hajdu. It enables up to five performers to play music over the Internet under the control of a “conductor.” The environment, which was programmed with the graphical programming language Max/MSP consists of four components: the Server, the Client, the Conductor and the Listener; the latter component enables the Internet/network audience to follow the performance […].
December 30, 2005
Web Phases
I have been reading up on hypertext and hypermedia theory and looked around for papers on hypermusic. One of the few papers I found on the topic was by John Maxwell Hobbs describing his 1998 piece Web Phases.
November 29, 2005
Practising electronics
I think Kurt Ralske puts it very well in “The Pianist: A Note on Digital Technique”
“For the classical pianist, the tedium of endless hours of practicing scales takes on an aura of nobility; it’s a virtuous, character-building activity. Instead of practicing scales, the digital artist learns software and hardware, learns programming languages, learns the techniques of creating digital models of sound, image, information, and intelligence.”
I wonder when music technologists will be employed in orchestras as musicians.
November 28, 2005
ChucK : Concurrent, On-the-fly Audio Programming Language
I finally got around to downloading and trying ChucK: Concurrent, On-the-fly Audio Programming Language by Ge Wang. It feels a bit strange, but I guess I need to work a little more with it. The readme says something about graphical tools, and I’m looking forward to that.
December 13, 2001
Laser dance
Working with choreographer Mia Habib, I created the piece Laser Dance, which was shown on 30 November and 1 December 2001 at the Norwegian Academy of Ballet and Dance in Oslo.
The theme of the piece was “Light”, and the choreographer wanted to use direct light sources as the point of departure for the interaction. Mia had decided to work with laser beams, one along the backside of the stage and one on the diagonal, facing towards the audience.
Tag: academic life
May 7, 2023
Different Publication Cultures
At RITMO, we have several different disciplines working together. The three core disciplines at RITMO are musicology, psychology, and informatics. In addition, we have people working in philosophy, physics, computer science, biology, dance studies, and so on. This also means that we have several different publication cultures. In this blog post, I will reflect on the differences between them.
The Paper Proceedings Culture
My professorship is in music technology. I don’t know if music technology should be considered a discipline; it might be better described as a community of communities.
Tag: culture
May 7, 2023
Different Publication Cultures
At RITMO, we have several different disciplines working together. The three core disciplines at RITMO are musicology, psychology, and informatics. In addition, we have people working in philosophy, physics, computer science, biology, dance studies, and so on. This also means that we have several different publication cultures. In this blog post, I will reflect on the differences between them.
The Paper Proceedings Culture
My professorship is in music technology. I don’t know if music technology should be considered a discipline; it might be better described as a community of communities.
Tag: informatics
May 7, 2023
Different Publication Cultures
At RITMO, we have several different disciplines working together. The three core disciplines at RITMO are musicology, psychology, and informatics. In addition, we have people working in philosophy, physics, computer science, biology, dance studies, and so on. This also means that we have several different publication cultures. In this blog post, I will reflect on the differences between them.
The Paper Proceedings Culture
My professorship is in music technology. I don’t know if music technology should be considered a discipline; it might be better described as a community of communities.
March 16, 2017
New Centre of Excellence: RITMO
I am happy to announce that the Research Council of Norway has awarded funding to establish RITMO Centre of Excellence for Interdisciplinary Studies in Rhythm, Time and Motion. The centre is a collaboration between Departments of Musicology, Psychology and Informatics at University of Oslo.
Project summary
Rhythm is omnipresent in human life, as we walk, talk, dance and play; as we tell stories about our past; and as we predict the future.
Tag: musicology
May 7, 2023
Different Publication Cultures
At RITMO, we have several different disciplines working together. The three core disciplines at RITMO are musicology, psychology, and informatics. In addition, we have people working in philosophy, physics, computer science, biology, dance studies, and so on. This also means that we have several different publication cultures. In this blog post, I will reflect on the differences between them.
The Paper Proceedings Culture
My professorship is in music technology. I don’t know if music technology should be considered a discipline; it might be better described as a community of communities.
March 16, 2017
New Centre of Excellence: RITMO
I am happy to announce that the Research Council of Norway has awarded funding to establish RITMO Centre of Excellence for Interdisciplinary Studies in Rhythm, Time and Motion. The centre is a collaboration between Departments of Musicology, Psychology and Informatics at University of Oslo.
Project summary
Rhythm is omnipresent in human life, as we walk, talk, dance and play; as we tell stories about our past; and as we predict the future.
September 7, 2016
New SMC paper: Optical or Inertial? Evaluation of Two Motion Capture Systems for Studies of Dancing to Electronic Dance Music
My colleague Ragnhild Torvanger Solberg and I presented a paper at the Sound and Music Computing conference in Hamburg last week called: “Optical or Inertial? Evaluation of Two Motion Capture Systems for Studies of Dancing to Electronic Dance Music”.
This is a methodological paper, trying to summarize our experiences with using our Qualisys motion capture system for group dance studies. We have two other papers in the pipeline that describe the actual data from the experiments in question.
February 25, 2014
New department video
As I have mentioned previously, life has been quite hectic over the last year, as I became Head of Department at the same time as my second daughter arrived. My research activities have therefore slowed down considerably, and so has the activity on this blog.
When it comes to blogging, I have focused on building up my Head of Department blog (in Norwegian), which I use to comment on things happening in the Department as well as relevant (university) political issues.
February 11, 2013
Head of Department!
Today I start as Head of Department of Musicology at the University of Oslo!
One of the things I promised in my application was to set up a Head of Department blog, and I am happy to announce that the first entry was posted this morning.
I am thrilled and excited to get the opportunity to lead the institution that I have both studied and worked at for more than a decade.
October 3, 2010
Survey on eMusicology
Many music researchers, myself included, are dependent on technology in and for their work. In fact, I think many of the most interesting research findings in musicology in recent years are based on the new potential from various types of technology, e.g. tools coming from the music information retrieval (MIR) community.
I am therefore puzzled when I meet music researchers who are not interested in, or even outspokenly negative about, the possibilities of new technologies for music research.
Tag: psychology
May 7, 2023
Different Publication Cultures
At RITMO, we have several different disciplines working together. The three core disciplines at RITMO are musicology, psychology, and informatics. In addition, we have people working in philosophy, physics, computer science, biology, dance studies, and so on. This also means that we have several different publication cultures. In this blog post, I will reflect on the differences between them.
The Paper Proceedings Culture
My professorship is in music technology. I don’t know if music technology should be considered a discipline; it might be better described as a community of communities.
March 16, 2017
New Centre of Excellence: RITMO
I am happy to announce that the Research Council of Norway has awarded funding to establish RITMO Centre of Excellence for Interdisciplinary Studies in Rhythm, Time and Motion. The centre is a collaboration between Departments of Musicology, Psychology and Informatics at University of Oslo.
Project summary
Rhythm is omnipresent in human life, as we walk, talk, dance and play; as we tell stories about our past; and as we predict the future.
Tag: writing
May 7, 2023
Different Publication Cultures
At RITMO, we have several different disciplines working together. The three core disciplines at RITMO are musicology, psychology, and informatics. In addition, we have people working in philosophy, physics, computer science, biology, dance studies, and so on. This also means that we have several different publication cultures. In this blog post, I will reflect on the differences between them.
The Paper Proceedings Culture
My professorship is in music technology. I don’t know if music technology should be considered a discipline; it might be better described as a community of communities.
December 16, 2022
Exploring Essay Writing with You.com
There has been much discussion about ChatGPT recently, a chat robot that can write meaningful answers to questions. I haven’t had time to test it out properly, and it was unavailable when I wanted to check it today. Instead, I have played around with YouWrite, a service that can write text based on limited input.
I thought it would be interesting to ask it to write about something I know well, so I asked it to write a text based on an abbreviated version of the abstract of my new book:
August 24, 2022
Still Standing Manuscript in Preparation
I sent off the final proofs for my Sound Actions book before the summer. I don’t know when it will actually be published, but since it is off my table, I have had time to work on new projects.
My new project AMBIENT will start soon, but I still haven’t been able to write up all the results from my two projects on music-related micro-motion: Sverm and MICRO. This will be the topic of the book I have started writing this summer, with the working title Still Standing: Exploring Human Micromotion.
July 23, 2021
Sound Actions Manuscript in Preparation
Ever since I finished my dissertation in 2007, I have thought about writing it up as a book. Parts of the dissertation were translated and extended in the Norwegian-language textbook Musikk og bevegelse (which, by the way, is out of print but freely available as an ebook). That book focused primarily on music-related body motion and was written for the course MUS2006 at the University of Oslo. However, my action-sound theory was only partially mentioned and never properly presented in a book format.
December 2, 2020
Meeting New Challenges
Life is always full of challenges, but those challenges are also what drives personal development. I am constantly reminded about that when I see this picture, which was made by my mother Grete Refsum when I started in school.
I think the symbolism in the image is great. The eager child is waiting with open arms for an enormous ball. Even though I am much older now, I think the feeling of starting on something new is always the same.
November 17, 2019
Some tips and tricks when writing academic papers
I have been teaching the course Research Methods, Tools and Issues in our MCT programme this semester. The last class was an “open clinic” in which I answered questions about academic writing. Here is a summary of some of the things I answered, which may hopefully also be useful for others.
Formatting
Your academic exam paper is not the place to experiment with fancy layout and formatting. Some basic tips:
January 8, 2013
Batch convert RTF files to TXT
Last year I decided to use plain text files (TXT) as the main file type for all my computer text input. There are several reasons for this, but perhaps the most important one was all the problems I experienced when trying to open other types of text-based files (RTF, DOC, etc.) on the various iOS and Android devices that I use daily. Another reason is to become independent of specific software solutions that force you to use particular software for something as basic as writing text on your computer or device.
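As a sketch of how such a batch conversion could be scripted in Python, here is a crude regex-based RTF stripper; the function names are mine, and it is not a full RTF parser, so a dedicated converter is safer for real archives:

```python
import re
from pathlib import Path

def rtf_to_txt(rtf: str) -> str:
    """Crude RTF-to-plain-text stripper; handles only simple documents."""
    text = re.sub(r"\\[a-z]+-?\d* ?", "", rtf)  # control words like \par, \fs24
    text = re.sub(r"\\.", "", text)             # control symbols like \{ and \~
    return re.sub(r"[{}]", "", text).strip()    # group delimiters

def batch_convert(folder: str) -> None:
    # Write a .txt file next to every .rtf file in the folder
    for rtf_file in Path(folder).glob("*.rtf"):
        txt = rtf_to_txt(rtf_file.read_text(errors="ignore"))
        rtf_file.with_suffix(".txt").write_text(txt)
```

On macOS, the built-in `textutil -convert txt file.rtf` does the same job more robustly.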
October 29, 2012
To footnote or not
By coincidence I have had several discussions about footnotes, endnotes and different types of citation styles recently. Such discussions often end up in “religious” wars, in which researchers from different disciplines argue why “their” system is the best. I often find myself agreeing with none or everyone in such discussions, since I am working in and between several different disciplines (the arts, humanities, technology, psychology, medicine), and publish my own work in journals that use different ways of handling citations and notes.
November 29, 2011
Application writing as example of stretchtext
I have been working on an ERC Starting Grant application over the last months. Besides the usual conceptual/practical challenges of writing funding applications, this particular application also posed the challenge of writing not only one proposal document, but two: one long (15 pages) and one short (5 pages). I am used to writing research papers and applications where you are dealing with three levels:
title, abstract, and content. But for the ERC application I had to handle four levels:
April 26, 2011
Comma after i.e. and e.g.?
I just discovered that it is common to use a comma before and after i.e. and e.g. in American English style of writing. In British English, a comma is inserted before but not after these abbreviations.
August 18, 2010
Writing complex documents
I have been using LaTeX for most of my more advanced writing needs for so many years that I tend to forget how few good options there are for writing what could be called “complex” documents, i.e., book-sized documents with a good portion of notes, pictures, links, etc.
I just had to help out in trying to create a large document based on 30+ individual documents in MS Word.
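For comparison, the LaTeX way of handling such a multi-file book is a short master document that pulls in the parts; the chapter file names here are just placeholders:

```latex
\documentclass{book}
\usepackage{graphicx}  % figures
\usepackage{hyperref}  % links

\begin{document}
\tableofcontents
% Each chapter lives in its own file and can be edited independently;
% adding \includeonly{chapter01} to the preamble compiles just that chapter.
\include{chapter01}
\include{chapter02}
\include{chapter03}
\end{document}
```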
Tag: ambient
April 1, 2023
Making 2D Images from 360-degree Videos
For my annual Still Standing project, I am recording 360 videos with audio and sensor data while standing still for 10 minutes.
I have started exploring how to visualize the sensor data best. Today, I am looking into visualization strategies for 360-degree images. I have written about how to pre-process 360-degree videos from Garmin VIRB and Ricoh Theta cameras previously.
The Theta records in a dual fisheye format like this:
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT
Although I am happy to have completed my sound actions project, I have enjoyed the ritual of doing something every day.
September 5, 2022
Starting up the AMBIENT project
Today, I am starting up my new research project AMBIENT: Bodily Entrainment to Audiovisual Rhythms. I have recruited a great team and today we will have our first meeting to discuss how to work together in the coming years. I will surely write much about this project on the blog. For now, here is a quick teaser to explain what it is all about:
July 13, 2022
Kayak motion analysis with video-based horizon leveling
Last year, I wrote about video-based motion analysis of kayaking. Those videos were recorded with a GoPro Hero 8 and I tested some of the video visualization methods of the Musical Gestures Toolbox for Python. This summer I am testing out some 360 cameras for my upcoming AMBIENT project. I thought I should take one of these, a GoPro Max, out for some kayaking in the Oslo fjord. Here are some impressions of the trip (and recording).
July 13, 2022
Removing audio hum using a highpass filter in FFmpeg
Today, I recorded Sound Action 194 - Rolling Dice as part of my year-long sound action project.
The idea has been to do as little processing as possible to the recordings. That is because I want to capture sounds and actions as naturally as possible. The recorded files will also serve as source material for both scientific and artistic explorations later. For that reason, I only trim the recordings non-destructively using FFmpeg.
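A sketch of the two kinds of FFmpeg invocations involved, written here as Python command builders; the file names, timestamps, and the 100 Hz cutoff are illustrative assumptions, not values from the project:

```python
def trim_cmd(src: str, dst: str, start: str, end: str) -> list[str]:
    # Non-destructive trim: -c copy cuts at the given timestamps without
    # re-encoding, so the audio data itself is untouched
    return ["ffmpeg", "-i", src, "-ss", start, "-to", end, "-c", "copy", dst]

def highpass_cmd(src: str, dst: str, cutoff_hz: int = 100) -> list[str]:
    # FFmpeg's highpass audio filter attenuates content below the cutoff,
    # which removes low-frequency hum such as mains rumble
    return ["ffmpeg", "-i", src, "-af", f"highpass=f={cutoff_hz}", dst]

# The commands would be executed with, e.g., subprocess.run(cmd, check=True)
```

Note that the highpass step re-encodes the audio, which is why it is kept separate from the lossless trim.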
February 16, 2022
Completing the MICRO project
I wrote up the final report on the project MICRO - Human Bodily Micromotion in Music Perception and Interaction before Christmas. Now I finally got around to wrapping up the project pages. With the touch of a button, the project’s web page now says “completed”. But even though the project is formally over, its results will live on.
Aims and objectives
The MICRO project sought to investigate the close relationships between musical sound and human bodily micromotion.
February 9, 2022
Recruiting for the AMBIENT project
I am happy to announce that I am recruiting for my new research project AMBIENT: Bodily Entrainment to Audiovisual Rhythms. The project will continue my line of research into the effects of sound and visuals on our bodies and minds and the creative use of such effects. Here is a short video in which I explain the motivation for the project:
https://www.youtube.com/watch?v=7BFFfIydL5U
Now hiring
The idea is to put together a multidisciplinary team of three early career researchers experienced with one or more of the following methods: sound analysis, video analysis, interviews, questionnaires, motion capture, physiological sensing, statistics, signal processing, machine learning, interactive (sound/music) systems.
Tag: eye-tracking
January 13, 2023
New MOOC: Pupillometry – The Eye as a Window Into the Mind
I am happy to announce a new online course from RITMO: Pupillometry – The Eye as a Window Into the Mind. This is the third so-called Massive Open Online Course (MOOC) I have been part of making, following Motion Capture and Music Moves. I am excited to get it started on Monday, 16 January.
Discover the applications of pupillometry research
Pupillometry is a relatively new research method within the sciences, and it has wide-ranging applications within psychology, neuroscience, and beyond.
October 26, 2021
MusicLab Copenhagen
After nearly three years of planning, we can finally welcome people to MusicLab Copenhagen. This is a unique “science concert” involving the Danish String Quartet, one of the world’s leading classical ensembles. Tonight, they will perform pieces by Bach, Beethoven, and Schnittke, as well as folk music, in a normal concert setting at Musikhuset in Copenhagen. However, the concert is anything but normal.
Live music research
During the concert, about twenty researchers from RITMO and partner institutions will conduct investigations and experiments informed by phenomenology, music psychology, complex systems analysis, and music technology.
October 30, 2020
MusicTestLab as a Testbed of Open Research
Many people talk about “opening” the research process these days. Due to initiatives like Plan S, much has happened when it comes to Open Access to research publications. There are also things happening when it comes to sharing data openly (or at least FAIR). Unfortunately, there is currently more talking about Open Research than doing. At RITMO, we are actively exploring different strategies for opening our research. The most extreme case is that of MusicLab.
October 9, 2017
And we're off: RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion
I am happy to announce that RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion officially started last week. This is a new centre of excellence funded by the Research Council of Norway.
Even though we have formally taken off, this mainly means that the management group has started to work. Establishing a centre with 50-60 researchers is not done in a few days, so we will more or less spend the coming year getting up to speed.
March 16, 2017
New Centre of Excellence: RITMO
I am happy to announce that the Research Council of Norway has awarded funding to establish RITMO Centre of Excellence for Interdisciplinary Studies in Rhythm, Time and Motion. The centre is a collaboration between Departments of Musicology, Psychology and Informatics at University of Oslo.
Project summary
Rhythm is omnipresent in human life, as we walk, talk, dance and play; as we tell stories about our past; and as we predict the future.
Tag: mooc
January 13, 2023
New MOOC: Pupillometry – The Eye as a Window Into the Mind
I am happy to announce a new online course from RITMO: Pupillometry – The Eye as a Window Into the Mind. This is the third so-called Massive Open Online Course (MOOC) I have been part of making, following Motion Capture and Music Moves. I am excited to get it started on Monday, 16 January.
Discover the applications of pupillometry research
Pupillometry is a relatively new research method within the sciences, and it has wide-ranging applications within psychology, neuroscience, and beyond.
January 28, 2022
Preparing videos for FutureLearn courses
This week we started up our new online course, Motion Capture: The Art of Studying Human Activity, and we are also rerunning Music Moves: Why Does Music Make You Move? for the seventh time. Most of the material for these courses is premade, but we record a new wrap-up video at the end of each week. This makes it possible to answer questions that have been posed during the week and add some new and relevant material.
January 7, 2022
New online course: Motion Capture
After two years in the making, I am happy to finally introduce our new online course: Motion Capture: The art of studying human activity.
The course will run on the FutureLearn platform and is for everyone interested in the art of studying human movement. It has been developed by a team of RITMO researchers in close collaboration with the pedagogical team and production staff at LINK – Centre for Learning, Innovation & Academic Development.
January 22, 2021
New run of Music Moves
I am happy to announce a new run (the 6th) of our free online course Music Moves: Why Does Music Make You Move?. Here is a 1-minute welcome video:
The course starts on Monday (25 January 2021) and will run for six weeks. You will learn about the psychology of music and movement and how researchers study music-related movements.
We developed the course 5 years ago, but the content is still valid.
January 22, 2019
Music Moves #4 has started
We have just kicked off the fourth round of Music Moves, the free, online course we have developed at University of Oslo. The course introduces a lot of the core theories, concepts and methodologies that we work with at RITMO. This time around we also have participants from both the MCT master’s programme and the NordicSMC Winter School taking the course as an introduction to further on-campus studies.
To help with running the course, we have recruited Ruby Topping, who is currently an exchange student at University of Oslo.
February 5, 2017
Music Moves on YouTube
We have been running our free online course Music Moves a couple of times on the FutureLearn platform. The course consists of a number of videos, as well as articles, quizzes, etc., all of which help create a great learning experience for the people that take part.
One great thing about the FutureLearn model (similar to Coursera, etc.) is that they focus on creating a complete course. There are many benefits to such a model, not least that it creates a virtual student group that interacts in a somewhat similar way to campus students.
January 24, 2016
New MOOC: Music Moves
Together with several colleagues, and with great practical and economic support from the University of Oslo, I am happy to announce that we will soon kick off our first free online course (a so-called MOOC) called Music Moves.
Music Moves: Why Does Music Make You Move? Learn about the psychology of music and movement, and how researchers study music-related movements, with this free online course.
[Go to course – starts 1 Feb](https://www.
Tag: pupillometry
January 13, 2023
New MOOC: Pupillometry – The Eye as a Window Into the Mind
I am happy to announce a new online course from RITMO: Pupillometry – The Eye as a Window Into the Mind. This is the third so-called Massive Open Online Course (MOOC) I have been part of making, following Motion Capture and Music Moves. I am excited to get it started on Monday, 16 January.
Discover the applications of pupillometry research
Pupillometry is a relatively new research method within the sciences, and it has wide-ranging applications within psychology, neuroscience, and beyond.
October 9, 2017
And we're off: RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion
I am happy to announce that RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion officially started last week. This is a new centre of excellence funded by the Research Council of Norway.
Even though we have formally taken off, this mainly means that the management group has started to work. Establishing a centre with 50-60 researchers is not done in a few days, so we will more or less spend the coming year getting up to speed.
March 16, 2017
New Centre of Excellence: RITMO
I am happy to announce that the Research Council of Norway has awarded funding to establish RITMO Centre of Excellence for Interdisciplinary Studies in Rhythm, Time and Motion. The centre is a collaboration between Departments of Musicology, Psychology and Informatics at University of Oslo.
Project summary Rhythm is omnipresent in human life, as we walk, talk, dance and play; as we tell stories about our past; and as we predict the future.
Tag: teaching
January 13, 2023
New MOOC: Pupillometry – The Eye as a Window Into the Mind
I am happy to announce a new online course from RITMO: Pupillometry – The Eye as a Window Into the Mind. This is the third so-called Massive Open Online Course (MOOC) I have been part of making, following Motion Capture and Music Moves. I am excited to get it started on Monday, 16 January.
Discover the applications of pupillometry research Pupillometry is a relatively new research method within the sciences, and it has wide-ranging applications within psychology, neuroscience, and beyond.
January 12, 2023
Running a workshop with a Jupyter Notebook presentation
Today, I ran a workshop called Video Visualization together with RITMO research assistant Joachim Poutaraud. The workshop was part of the Digital Scholarship Days 2023 organized by the University of Oslo Library, four days packed with hands-on tutorials on various useful topics.
Presentation slides made by Jupyter Notebook Joachim has done a fantastic job updating the Wiki with all the new things he has implemented in the toolbox. However, the Wiki is not the best thing to use in a workshop; it has too much information and would create information overload for the participants.
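As a side note, one way to turn a Jupyter Notebook into reveal.js slides is nbconvert's slides exporter (the post does not say exactly which tooling we used, and the notebook name below is a placeholder). This is a dry-run sketch that only prints the command:

```shell
# Sketch: export a notebook as a reveal.js slide deck and serve it locally.
# Cell slide types are set in Jupyter (View -> Cell Toolbar -> Slideshow)
# before exporting. This script only prints the command (dry run).
cmd='jupyter nbconvert workshop.ipynb --to slides --post serve'
echo "$cmd"
```

Running the printed command requires a Jupyter installation with nbconvert available.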
January 7, 2022
New online course: Motion Capture
After two years in the making, I am happy to finally introduce our new online course: Motion Capture: The art of studying human activity.
The course will run on the FutureLearn platform and is for everyone interested in the art of studying human movement. It has been developed by a team of RITMO researchers in close collaboration with the pedagogical team and production staff at LINK – Centre for Learning, Innovation & Academic Development.
January 22, 2021
New run of Music Moves
I am happy to announce a new run (the 6th) of our free online course Music Moves: Why Does Music Make You Move?. Here is a 1-minute welcome video:
The course starts on Monday (25 January 2021) and will run for six weeks. In the course, you will learn about the psychology of music and movement and how researchers study music-related movements.
We developed the course 5 years ago, but the content is still valid.
December 27, 2019
Teaching with a document camera
How does an “old-school” document camera work for modern-day teaching? Remarkably well, I think. Here are some thoughts on my experience over the last few years.
The reason I got started with a document camera was because I felt the need for a more flexible setup for my classroom teaching. Conference presentations with limited time are better done with linear presentation tools, I think, since the slides help with the flow.
November 17, 2019
Some tips and tricks when writing academic papers
I have been teaching the course Research Methods, Tools and Issues in our MCT programme this semester. The last class was an “open clinic” in which I answered questions about academic writing. Here is a summary of some of the things I answered, which may hopefully also be useful for others.
Formatting Your academic exam paper is not the place to experiment with fancy layout and formatting. Some basic tips:
June 21, 2019
Carpentries Train the Trainer
I have spent the last two days at a “Train the Trainers” workshop organized by the Carpentries project. Here I will summarize some thoughts on the workshop and things that I will take with me into my own teaching practice.
The Carpentries The Carpentries project comprises the Software Carpentry, Data Carpentry, and Library Carpentry communities, with a shared mission to teach foundational computational and data science skills to researchers. I have taken several Carpentries lessons over the last years, organized by volunteers here at the University of Oslo.
June 5, 2019
NIME publication: NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing
The MCT master’s programme has been running for a year now, and everyone involved has learned a lot. In parallel to the development of the programme, and teaching it, we are also running the research project SALTO. Here the idea is to systematically reflect on our educational practice, which again will feed back into better development of the MCT programme.
One outcome of the SALTO project is a paper that we presented at the NIME conference in Porto Alegre this week:
January 25, 2019
Testing reveal.js for teaching
I was at NTNU in Trondheim today, teaching a workshop on motion capture methodologies for the students in the Choreomundus master’s programme. This is an Erasmus Mundus Joint Master Degree (EMJMD) investigating dance and other movement systems (ritual practices, martial arts, games and physical theatre) as intangible cultural heritage. I am really impressed by this programme! It was a very nice and friendly group of students from all over the world, and they are experiencing a truly unique education run by the 4 partner universities.
November 25, 2018
Reflecting on some flipped classroom strategies
I was invited to talk about my experiences with flipped classroom methodologies at a seminar at the Faculty of Humanities last week. Preparing for the talk got me to revisit my own journey of working towards flipped teaching methodologies. This has also involved explorations of various types of audio/video recording. I will go through them in chronological order.
Podcasting Back in 2009-2011, I created “podcasts” of my lectures for a couple of semesters, such as in the course MUS2006 Music and Body Movements (which was at the time taught in Norwegian).
December 27, 2016
Starting afresh
After four years as Head of Department (of Musicology at UiO), I am going back to my regular associate professor position in January. It has been both a challenging and rewarding period as HoD, during which I have learned a lot about managing people, managing budgets, understanding huge organizations, developing strategies, talking to all sorts of people at all levels in the system, and much more.
I am happy to hand over a Department in growth to the new HoD (Peter Edwards).
July 15, 2016
New paper: NIMEhub: Toward a Repository for Sharing and Archiving Instrument Designs
At NIME we have a large archive of the conference proceedings, but we do not (yet) have a proper repository for instrument designs. For that reason, I took part in a workshop on Monday with the aim of laying the groundwork for a new repository:
NIMEhub: Toward a Repository for Sharing and Archiving Instrument Designs [PDF]
This workshop will explore the potential creation of a community database of digital musical instrument (DMI) designs.
January 24, 2016
New MOOC: Music Moves
Together with several colleagues, and with great practical and economic support from the University of Oslo, I am happy to announce that we will soon kick off our first free online course (a so-called MOOC) called Music Moves.
Music Moves: Why Does Music Make You Move? Learn about the psychology of music and movement, and how researchers study music-related movements, with this free online course.
[Go to course – starts 1 Feb](https://www.
July 15, 2013
New publication: An Action-Sound Approach to Teaching Interactive Music
My paper titled An action–sound approach to teaching interactive music has recently been published by Organised Sound. The paper is based on some of the theoretical ideas on action-sound couplings developed in my PhD, combined with how I designed the course Interactive Music based on such an approach to music technology.
Abstract: The conceptual starting point for an ‘action-sound approach’ to teaching music technology is the acknowledgment of the couplings that exist in acoustic instruments between sounding objects, sound-producing actions and the resultant sounds themselves.
September 5, 2012
Teaching in Aldeburgh
I am currently in beautiful Aldeburgh, a small town on the east coast of England, teaching at the Britten-Pears Young Artist Programme together with Rolf Wallin and Tansy Davies. This post is mainly to summarise the things I have been going through, and provide links for various things.
Theoretical stuff My introductory lectures went through some of the theory of an embodied understanding of the experience of music. One aspect of this theory that I find very relevant for the development of interactive works is what I call action-sound relationships.
September 3, 2010
PD introductions in Norwegian on YouTube
I am teaching two courses this semester:
Sound theory 1 (in English) and Sound analysis (in Norwegian, together with Rolf Inge Godøy). In both courses I use Pure Data (PD) to demonstrate various interesting phenomena (additive synthesis, beating, critical bands, etc.), and the students also get assignments to explore such things themselves. There are several PD introduction videos on YouTube in English, but I found it could be useful to also have something in Norwegian.
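To illustrate one of those phenomena outside PD (not from the course material): beating between two close sine tones follows from the identity sin a + sin b = 2 sin((a+b)/2) cos((a-b)/2), so two tones 2 Hz apart produce a slow amplitude envelope that beats at 2 Hz. A minimal Python sketch:

```python
import math

F1, F2 = 440.0, 442.0  # two close frequencies (Hz); beat rate is |F1 - F2| = 2 Hz

def mixture(t):
    """Sum of two equal-amplitude sine tones at time t (seconds)."""
    return math.sin(2 * math.pi * F1 * t) + math.sin(2 * math.pi * F2 * t)

def envelope(t):
    """Slow amplitude envelope, 2*cos(2*pi*(F1-F2)/2*t).
    The mixture equals envelope(t) times a carrier at (F1+F2)/2 = 441 Hz."""
    return 2 * math.cos(2 * math.pi * (F1 - F2) / 2 * t)
```

The envelope passes through zero every quarter second, which is what you hear as beating.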
Tag: reveal
January 12, 2023
Running a workshop with a Jupyter Notebook presentation
Today, I ran a workshop called Video Visualization together with RITMO research assistant Joachim Poutaraud. The workshop was part of the Digital Scholarship Days 2023 organized by the University of Oslo Library, four days packed with hands-on tutorials on various useful topics.
Presentation slides made by Jupyter Notebook Joachim has done a fantastic job updating the Wiki with all the new things he has implemented in the toolbox. However, the Wiki is not the best thing to use in a workshop; it has too much information and would create information overload for the participants.
January 25, 2019
Testing reveal.js for teaching
I was at NTNU in Trondheim today, teaching a workshop on motion capture methodologies for the students in the Choreomundus master’s programme. This is an Erasmus Mundus Joint Master Degree (EMJMD) investigating dance and other movement systems (ritual practices, martial arts, games and physical theatre) as intangible cultural heritage. I am really impressed by this programme! It was a very nice and friendly group of students from all over the world, and they are experiencing a truly unique education run by the 4 partner universities.
April 8, 2016
Finally moving from Apple's Keynote to LibreOffice Impress
Apple’s Keynote has been my preferred presentation tool for about a decade. For a long time, it felt like the ideal tool: easy to use, powerful, and flexible. But at some point, probably around the time when the iOS version of Keynote came along, the Mac version started losing features and became more limited than it used to be. Since then, I have experienced all sorts of problems, including incompatibility between new and old presentation file versions, problems with linked video files, crashes, etc.
Tag: workshop
January 12, 2023
Running a workshop with a Jupyter Notebook presentation
Today, I ran a workshop called Video Visualization together with RITMO research assistant Joachim Poutaraud. The workshop was part of the Digital Scholarship Days 2023 organized by the University of Oslo Library, four days packed with hands-on tutorials on various useful topics.
Presentation slides made by Jupyter Notebook Joachim has done a fantastic job updating the Wiki with all the new things he has implemented in the toolbox. However, the Wiki is not the best thing to use in a workshop; it has too much information and would create information overload for the participants.
Tag: acceleration
January 3, 2023
Testing Mobile Phone Motion Sensors
For my annual Still Standing project, I am recording sensor data from my mobile phone while standing still for 10 minutes at a time. This is a highly curiosity-driven and data-based project, and part of the exploration is to figure out what I can get out of the sensors. I have started sharing graphs of the linear acceleration of my sessions with the tag #StillStanding on Mastodon. However, I wondered if this is the sensor data that best represents the motion.
Tag: accelerometer
January 3, 2023
Testing Mobile Phone Motion Sensors
For my annual Still Standing project, I am recording sensor data from my mobile phone while standing still for 10 minutes at a time. This is a highly curiosity-driven and data-based project, and part of the exploration is to figure out what I can get out of the sensors. I have started sharing graphs of the linear acceleration of my sessions with the tag #StillStanding on Mastodon. However, I wondered if this is the sensor data that best represents the motion.
August 7, 2022
Analyzing Recordings of a Mobile Phone Lying Still
What is the background “noise” in the sensors of a mobile phone? In the fourMs Lab, we have a tradition of testing the noise levels of various devices. Over the last few years, we have been using mobile phones in multiple experiments, including the MusicLab app that has been used in public research concerts, such as MusicLab Copenhagen.
I have yet to conduct a systematic study of many mobile phones lying still, but today I tried recording my phone—a Samsung Galaxy Ultra S21—lying still on the table for ten minutes.
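A simple way to quantify such a noise floor (with made-up sample values here, not the actual recordings) is the per-axis mean and standard deviation: for a still phone, the z-axis mean should sit near gravity and the standard deviation is the sensor noise:

```python
from statistics import mean, stdev

# Hypothetical z-axis accelerometer samples (m/s^2) from a phone lying still.
z_samples = [9.807, 9.812, 9.805, 9.809, 9.806, 9.811]

noise_mean = mean(z_samples)   # should be close to 9.81 m/s^2 (gravity)
noise_sd = stdev(z_samples)    # sample standard deviation = noise floor
print(f"mean = {noise_mean:.3f} m/s^2, sd = {noise_sd:.4f} m/s^2")
```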
Tag: gyroscope
January 3, 2023
Testing Mobile Phone Motion Sensors
For my annual Still Standing project, I am recording sensor data from my mobile phone while standing still for 10 minutes at a time. This is a highly curiosity-driven and data-based project, and part of the exploration is to figure out what I can get out of the sensors. I have started sharing graphs of the linear acceleration of my sessions with the tag #StillStanding on Mastodon. However, I wondered if this is the sensor data that best represents the motion.
Tag: magnetometer
January 3, 2023
Testing Mobile Phone Motion Sensors
For my annual Still Standing project, I am recording sensor data from my mobile phone while standing still for 10 minutes at a time. This is a highly curiosity-driven and data-based project, and part of the exploration is to figure out what I can get out of the sensors. I have started sharing graphs of the linear acceleration of my sessions with the tag #StillStanding on Mastodon. However, I wondered if this is the sensor data that best represents the motion.
Tag: micro
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT Although I am happy about completing my sound actions project, I have enjoyed the ritual of doing something every day.
February 16, 2022
Completing the MICRO project
I wrote up the final report on the project MICRO - Human Bodily Micromotion in Music Perception and Interaction before Christmas. Now I finally got around to wrapping up the project pages. With the touch of a button, the project’s web page now says “completed”. But even though the project is formally over, its results will live on.
Aims and objectives The MICRO project sought to investigate the close relationships between musical sound and human bodily micromotion.
June 6, 2019
NIME publication and performance: Vrengt
My PhD student Cagri Erdem developed a performance together with dancer Katja Henriksen Schia. The piece was first performed together with Qichao Lan and myself during the RITMO opening and also during MusicLab vol. 3. See here for a teaser of the performance:
This week, Cagri, Katja, and I performed a version of the piece Vrengt at NIME in Porto Alegre.
We also presented a paper describing the development of the instrument/piece:
May 3, 2017
New publication: Sonic Microinteraction in the Air
I am happy to announce a new book chapter based on the artistic-scientific research in the Sverm and MICRO projects.
Citation: Jensenius, A. R. (2017). Sonic Microinteraction in “the Air.” In M. Lesaffre, P.-J. Maes, & M. Leman (Eds.), The Routledge Companion to Embodied Music Interaction (pp. 431–439). New York: Routledge.
Abstract: This chapter looks at some of the principles involved in developing conceptual methods and technological systems concerning sonic microinteraction, a type of interaction with sounds that is generated by bodily motion at a very small scale.
February 3, 2017
Starting up the MICRO project
I am super excited about starting up my new project - MICRO - Human Bodily Micromotion in Music Perception and Interaction - these days. Here is a short trailer explaining the main points of the project:
Now I have also been able to recruit two great researchers to join me, postdoctoral researcher Victor Evaristo Gonzalez Sanchez and PhD fellow Agata Zelechowska. Together we will work on human micromotion, how music influences such micromotion, and how we can get towards microinteraction in digital musical instruments.
Tag: musiclab
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT Although I am happy about completing my sound actions project, I have enjoyed the ritual of doing something every day.
February 6, 2022
MusicLab receives Danish P2 Prisen
Yesterday, I was in Copenhagen to receive the Danish Broadcasting Company’s P2 Prisen for “event of the year”. The prize was awarded to MusicLab Copenhagen, a unique “research concert” held last October after two years of planning.
The main person behind MusicLab Copenhagen is Simon Høffding, a former postdoc at RITMO, now an associate professor at The University of Southern Denmark. He has collaborated with the world-leading Danish String Quartet for a decade, focusing on understanding more about musical absorption.
October 30, 2020
MusicTestLab as a Testbed of Open Research
Many people talk about “opening” the research process these days. Due to initiatives like Plan S, much has happened when it comes to Open Access to research publications. There are also things happening when it comes to sharing data openly (or at least FAIR). Unfortunately, there is currently more talking about Open Research than doing. At RITMO, we are actively exploring different strategies for opening our research. The most extreme case is that of MusicLab.
November 29, 2019
Keynote: Experimenting with Open Research Experiments
Yesterday, I gave a keynote lecture at the Munin Conference on Scholarly Publishing in Tromsø. This is an annual conference that gathers librarians, research administrators, and publishers, but also some researchers and students. It was my first time at the conference, and I found it to be a very diverse, interesting, and welcoming group of people.
Abstract Is it possible to do experimental music research completely openly? And what can we gain by opening up the research process from beginning to end?
Tag: project
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT Although I am happy about completing my sound actions project, I have enjoyed the ritual of doing something every day.
September 5, 2022
Starting up the AMBIENT project
Today, I am starting up my new research project AMBIENT: Bodily Entrainment to Audiovisual Rhythms. I have recruited a great team and today we will have our first meeting to discuss how to work together in the coming years. I will surely write much about this project on the blog. For now, here is a quick teaser to explain what it is all about:
Tag: silence
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT Although I am happy about completing my sound actions project, I have enjoyed the ritual of doing something every day.
October 26, 2011
The act of standing still: stillness or standstill?
Plots of a neck marker from a 10-minute recording of standing still.
As mentioned previously (here and here), I have been doing some experiments on standing still in silence. One thing is to do it, another is to talk (or write) about it. Then I need to have words describing what I have been doing.
To start with the simple: the word silence seems to be quite clearly defined as the “lack of sound” and is similar to the Norwegian word stillhet.
Tag: sound actions
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT Although I am happy about completing my sound actions project, I have enjoyed the ritual of doing something every day.
December 31, 2022
365 Sound Actions
On 1 January this year, I set out to record one sound action per day. The idea was to test out the action–sound theory from my book Sound Actions. One thing is writing about action–sound couplings and mappings; another is to see how the theory works with real-world examples. As I commented on after one month, the project has been both challenging and inspiring. Below, I write about some of my experiences, but first, here is the complete list:
December 22, 2022
An object-action-context approach to writing alt text
I came across an interesting blog post by Alex Chen on how to write better image descriptions for web pages. They propose an “object-action-context” approach when writing image descriptions. I see that such an approach could also be helpful for my sound actions project.
Adding better descriptions I am soon getting to the end of my year-long project of recording one sound action daily. A sound action is a multimodal entity consisting of body motion and its resultant sound.
December 20, 2022
Open Sourcing My Sound Actions Book
Last week, my book was published by the MIT Press, and I am happy to announce that the source code is available on GitHub. Most people are probably mainly interested in the content of the book. If so, you should grab a free copy of the final version. This blog post explains why I have made the source code available.
License I was fortunate to secure funding from the University of Oslo to make the book freely available, which is often referred to as Open Access.
December 16, 2022
Exploring Essay Writing with You.com
There has been much discussion about ChatGPT recently, a chat robot that can write meaningful answers to questions. I haven’t had time to test it out properly, and it was unavailable when I wanted to check it today. Instead, I have played around with YouWrite, a service that can write text based on limited input.
I thought it would be interesting to ask it to write about something I know well, so I asked it to write a text based on an abbreviated version of the abstract of my new book:
December 13, 2022
New Book: Sound Actions - Conceptualizing Musical Instruments
I am happy to announce that my book Sound Actions - Conceptualizing Musical Instruments is now published! I am also thrilled that this is an open access book, meaning that it is free to download and read. You are, of course, also welcome to pick up a paper copy!
Here is a quick video summary of the book’s content:
In the book, I combine perspectives from embodied music cognition and interactive music technology.
July 13, 2022
Removing audio hum using a highpass filter in FFmpeg
Today, I recorded Sound Action 194 - Rolling Dice as part of my year-long sound action project.
The idea has been to do as little processing as possible to the recordings. That is because I want to capture sounds and actions as naturally as possible. The recorded files will also serve as source material for both scientific and artistic explorations later. For that reason, I only trim the recordings non-destructively using FFmpeg.
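For the curious, the two steps look roughly like this. This is a dry-run sketch that only prints the commands; the filenames, timestamps, and the 100 Hz cutoff are placeholders rather than the post's actual settings:

```shell
# Non-destructive trim: stream copy, so the audio is not re-encoded.
trim_cmd='ffmpeg -i raw.wav -ss 00:00:01.0 -to 00:00:05.0 -c copy trimmed.wav'
# Hum removal: FFmpeg's highpass audio filter; this step re-encodes,
# which is why it is kept separate from the minimal-processing trim.
hum_cmd='ffmpeg -i trimmed.wav -af "highpass=f=100" clean.wav'
echo "$trim_cmd"
echo "$hum_cmd"
```

A highpass around the hum frequency works for low-frequency rumble; mains hum at 50/60 Hz and its harmonics may need notch filters instead.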
January 31, 2022
One month of sound actions
One month has passed of the year and my sound action project. I didn’t know how it would develop when I started and have found it both challenging and inspiring. It has also engaged people around me more than I had expected.
Each day I upload one new video recording to YouTube and post a link on Twitter. If you want to look at the whole collection, it is probably better to check out this playlist:
January 1, 2022
2022, a Year of Sound Actions
Over the last few years, I have worked on a book project with the working title Sound Actions. The manuscript has been through peer reviewing and several rounds of editing and will be published by The MIT Press sometime in 2022.
Action-Sound Couplings and Mappings The book is based on the action-sound theory I developed as part of my dissertation. My main point is that we experience the world through action-sound couplings and mappings.
July 23, 2021
Sound Actions Manuscript in Preparation
Ever since I finished my dissertation in 2007, I have thought about writing it up as a book. Parts of the dissertation were translated and extended in the Norwegian-language textbook Musikk og bevegelse (which, by the way, is out of print but freely available as an ebook). That book focused primarily on music-related body motion and was written for the course MUS2006 at the University of Oslo. However, my action-sound theory was only partially mentioned and never properly presented in a book format.
January 5, 2008
Dissertation is printed!
My dissertation came from the printing company yesterday. Here’s a picture of some of them:
It feels a bit weird to see the final book lying there, being the result of a year of planning and three years of hard work. I wrote most of it last spring, submitting the manuscript in July. Now, about half a year later, I have a much more distant relationship to the whole thing. Seeing the final result is comforting, but it is also sad to let go.
Tag: sverm
January 1, 2023
2023, A Year of Still Standing
Yesterday, I completed my 365 Sound Actions project, during which I recorded one sound action per day as part of preparing for the launch of my book Sound Actions. Today, on 1 January 2023, I start this year’s project: recording myself standing still 10 minutes every day. You can follow the progress on Mastodon.
Starting up AMBIENT Although I am happy about completing my sound actions project, I have enjoyed the ritual of doing something every day.
February 16, 2022
Completing the MICRO project
I wrote up the final report on the project MICRO - Human Bodily Micromotion in Music Perception and Interaction before Christmas. Now I finally got around to wrapping up the project pages. With the touch of a button, the project’s web page now says “completed”. But even though the project is formally over, its results will live on.
Aims and objectives The MICRO project sought to investigate the close relationships between musical sound and human bodily micromotion.
June 6, 2019
NIME publication and performance: Vrengt
My PhD student Cagri Erdem developed a performance together with dancer Katja Henriksen Schia. The piece was first performed together with Qichao Lan and myself during the RITMO opening and also during MusicLab vol. 3. See here for a teaser of the performance:
This week, Cagri, Katja, and I performed a version of the piece Vrengt at NIME in Porto Alegre.
We also presented a paper describing the development of the instrument/piece:
May 3, 2017
New publication: Sonic Microinteraction in the Air
I am happy to announce a new book chapter based on the artistic-scientific research in the Sverm and MICRO projects.
Citation: Jensenius, A. R. (2017). Sonic Microinteraction in “the Air.” In M. Lesaffre, P.-J. Maes, & M. Leman (Eds.), The Routledge Companion to Embodied Music Interaction (pp. 431–439). New York: Routledge.
Abstract: This chapter looks at some of the principles involved in developing conceptual methods and technological systems concerning sonic microinteraction, a type of interaction with sounds that is generated by bodily motion at a very small scale.
January 2, 2013
Sverm video #4
The last of the four Sverm videos by Lavasir Nordrum has just been posted on Vimeo. The first short movie was titled Micromovements, then followed Microsounds and Excitation, and the last one is called Resonance. It has been exciting to work with the video medium in addition to the performances, and it has given a very different perspective on the project.
December 5, 2012
Sverm video #3
Video artist Lavasir Nordrum has just posted the third of four short movies created together with the Sverm group. The first short movie was titled Micromovements, and the second was titled Microsounds. This month’s short movie is called Excitation and focuses on the first half of an event or action. It will be followed by a short movie called Resonance, to be released on 1 January.
November 2, 2012
Sverm video #2
As I wrote about last month, the Sverm group has teamed up with video artist Lavasir Nordrum. The plan is that he will create four short and poetic videos documenting four of the main topics we have been working on in the Sverm project. The production plan for the videos is quite tight: we shoot content for the videos during a few hours in the middle of each month, and then Lavasir publishes the final video two weeks later.
October 30, 2012
Musikkteknologidagene 2012
Alexander holding a keynote lecture at Musikkteknologidagene 2012 (Photo: Nathan Wolek).
Last week I held a keynote lecture at the Norwegian music technology conference Musikkteknologidagene, by (and at) the Norwegian Academy of Music and NOTAM. The talk was titled: “Embodying the human body in music technology”, and was an attempt at explaining why I believe we need to put more emphasis on human-friendly technologies, and why the field of music cognition is very much connected to that of music technology.
Tag: LaTeX
December 30, 2022
Adding Title and Author to PDFs exported from Jupyter Notebook
I am doing some end-of-year cleaning on my hard drive and just uploaded the Jupyter Notebook I used in the analysis of a mobile phone lying still earlier this year.
For some future studies, I thought it would be interesting to explore the PDF export functionality of Jupyter. That worked very well, except that I didn’t get a title or author name at the top.
Then I found a solution on Stack Overflow.
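The usual fix, to my understanding, is to put "title" and "authors" into the notebook's top-level metadata, which nbconvert's LaTeX/PDF exporter reads. A minimal sketch with placeholder names, not necessarily the exact Stack Overflow recipe:

```python
import json

def add_title_and_author(path, title, author):
    """Write "title" and "authors" into a notebook's top-level metadata,
    where nbconvert's LaTeX/PDF exporter looks for them."""
    with open(path) as f:
        nb = json.load(f)
    nb.setdefault("metadata", {})
    nb["metadata"]["title"] = title
    nb["metadata"]["authors"] = [{"name": author}]
    with open(path, "w") as f:
        json.dump(nb, f, indent=1)
```

After updating the metadata, re-running `jupyter nbconvert --to pdf` on the notebook should pick the values up (assuming a reasonably recent nbconvert).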
May 13, 2022
Em-dash is not a hyphen
I have been doing quite a lot of manuscript editing recently and realize that many people—including academics—don’t understand the differences between the symbols hyphen, en-dash, and em-dash. So here is a quick explanation:
hyphen (-): used to join words (“music-related”). You type this character with the minus key on the keyboard, so it is the easiest one to use.
en-dash (–): used to express relationships between two concepts (“action–couplings”) or number ranges (0–100).
October 7, 2019
What tools do I use for writing?
Earlier today I was asked about what tools I use when writing. This is not something I have written about here on the blog before, although I do have very strong opinions on my own tools. I actually really enjoy reading about how other people work, so writing about it here may also be interesting to others.
Text editor: Atom
Most of my writing, whether it is e-mail drafts, meeting notes, or academic papers, is done in the form of plain text files.
February 3, 2013
Unofficial ERC Starting Grant LaTeX template
After I mentioned that I used LaTeX for an ERC Starting Grant application in a previous blog post, I have gotten several questions from people about what type of LaTeX template I used. Unfortunately, the ERC does not provide any LaTeX template, only templates for MS Word and OpenOffice. My scientific workflow is so dependent on LaTeX/BibTeX that I decided to recreate a LaTeX document setup that resembled the MS Word template.
February 7, 2012
LaTeX fonts in OSX
When creating figures for papers written in LaTeX, I have found it aesthetically unpleasing to have different fonts in the figures than in the text. Most figures I create in either OmniGraffle or Matlab, and here I have relied on regular OSX fonts.
Fortunately, I have discovered that it is possible to use LaTeX fonts in OSX. Apparently, this is now included as a feature in the latest version(s) of the MacTeX distribution (?
November 2, 2011
Compact bibliography list in LaTeX
I have already written about how to compact lists earlier today. Now it is time to compact the bibliography… This is what the regular bibliography in LaTeX looks like:
First I found a suggestion to use the setspace function, but it turns out that it is much easier to just use the bibsep option to natbib. Just add the following to the preamble:
\usepackage{natbib}
\setlength{\bibsep}{0.0pt}
and you will get something like this:
November 2, 2011
Compact lists in LaTeX
I have for a long time been struggling with making lists more compact in LaTeX. While the standard lists often look good, as seen in the example below, there are times when space limits, etc. makes it necessary to save some space.
Up until now I have been using things like the rather ugly \vspace{-7pt} command to remove space between list items. Now I finally decided to figure out a better solution.
June 30, 2011
Using MultiMarkDown
I tend to move between different computers/devices and OSes all the time, and have started to become very tired of storing text data in different formats that are either not compatible or tend to mess up the formatting between different applications (e.g. RTF files).
To avoid this I am now testing to write all my text-based documents (notes, memos, letters, etc.) using MultiMarkDown. This is a so-called Lightweight markup language, similar to e.
May 4, 2011
Remove chapter and part text from LaTeX documents
When using the \part and \chapter tags in LaTeX you will typically end up with parts and chapters that say “part” and “chapter” before the name you have written. Putting these lines in your preamble will remove this:
\renewcommand{\partname}{}
\renewcommand{\chaptername}{}
\renewcommand{\thechapter}{}
\renewcommand{\thesection}{}
April 18, 2011
Use Preview instead of Adobe Reader in TextMate
I just installed Adobe Reader on my new computer, only to discover that it hijacked the PDF preview window in TextMate when working on LaTeX documents. This also happened the last time I installed a new system, and I couldn’t remember what I did to change it back to using Preview as the default PDF viewer.
After googling around, I remembered that TextMate is just using the regular browser settings when it comes to displaying PDF files.
March 25, 2011
Avoid subscript in Matlab titles
I am working on some plots in Matlab, where I am using the filename as the title of the plot. In many of the files I am using underscores (_) as separator, and the result is that Matlab creates a subscript.
So for a file called b_staccato_004, each underscore turns the following character into a subscript, so the title comes out garbled instead of showing the plain filename.
After some googling I found that this is because Matlab by default treats such text strings as LaTeX code.
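The standard remedy is to turn off the TeX interpreter for the title (the filename and plot data here are illustrative):

```matlab
% Use a filename as a plot title without TeX interpretation
fname = 'b_staccato_004';
plot(rand(1, 100));
title(fname, 'Interpreter', 'none');  % underscores stay underscores
```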
October 1, 2010
LaTeX formatting issues
I am about to submit an article for review, and had to format in a special way. Here is a quick summary of what I did:
No paragraph indents:
\setlength{\parindent}{0in}
but a single line between:
\usepackage{parskip}
Left text justification:
\begin{flushleft} ... \end{flushleft}
Double spacing:
\renewcommand{\baselinestretch}{2}
August 18, 2010
Writing complex documents
I have been using LaTeX for most of my more advanced writing needs for so many years, that I tend to forget that there are so few other good options out there for writing what could be called “complex” documents, i.e. book-sized documents with a good portion of notes, pictures, links, etc.
I just had to help out in trying to create a large document based on 30+ individual documents in MS Word.
April 8, 2008
Writing in NeoOffice, dreaming of LaTeX
I am working on a paper for a journal that only accepts RTF documents, and to avoid the possible problems resulting from converting a LaTeX document into RTF (or possibly from PDF), I decided to try using a word processor from the beginning. For simple word processing I have grown very fond of Bean recently, a lightweight application slightly more advanced than TextEdit. I started out with Bean, but since I had to include endnotes in the document I ended up moving over to NeoOffice instead.
April 21, 2007
File search in Bibdesk
Bibdesk is just getting better! In version 1.3.4 they have included searching inside linked PDF files in the library. It is not as powerful as the AI search functionality in DevonThink, but it is still very, very useful.
April 3, 2007
Oxford Dictionary in TextMate
After working with TextMate for a couple of weeks, I have decided to stay there and leave TexShop behind for my LaTeX editing. Just found out that it even supports the ctrl-apple-d trick for getting definitions from the Oxford Dictionary in the text.
March 29, 2007
Drag and drop pictures in TextMate
After being convinced by Tim, I have started using TextMate for text editing things. Right now I am mostly interested in its many nice LaTeX features, and the best so far is that it will create the necessary code when dropping an image into the text. You can’t believe how much time and effort this saves me. Very, very handy!
March 14, 2007
LaTeX Columns
Found this little trick to make columns anywhere in a LaTeX document:
Put this line in the preamble:
\usepackage{palatino, url, multicol}
Then add this where you want the columns:
\begin{multicols}{2}
My text...
\end{multicols}
February 22, 2007
MSP tilde in LaTeX
I spent a couple of minutes trying to figure out how to create a nice tilde (~) for writing the name of Max/MSP externals in LaTeX (e.g. dac~), so I figured I could post the solution in case anyone else wonders. First I tried using \tilde{} and \widetilde{}, but they didn’t look nice. However, this little thing does the trick:
$\sim$
I guess you need the math environment to get this working.
February 21, 2007
Multiple bibliographies in LaTeX
I have been wondering how to make a separate bibliography of my own publications as an appendix in my dissertation. Vincent pointed me to multibib and its siblings, and then I came across this FAQ about all the glories of multiple bibliographies. Doesn’t look like the easiest thing to get going, but I’ll dive into it and see if I manage to get out of it successfully.
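As a sketch of what multibib involves (using its standard commands; the `own` suffix and file name are arbitrary):

```latex
\usepackage{multibib}
\newcites{own}{My publications} % creates \citeown, \nociteown, etc.

% ... in the document body, e.g. in an appendix:
\nociteown{*}                   % include every entry from the file
\bibliographystyleown{plain}
\bibliographyown{mypublications} % reads mypublications.bib
```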
January 11, 2007
Smart programs
I had a discussion about which software tools I use for my research, so here is a list of the most important (in no particular order):
Firefox: with adblock and mouse gestures.
NetNewsWire: for handling all the blogs I am reading.
MarsEdit: to write blog entries. Publishes directly to my WordPress driven blog.
OmniGraffle: for making diagrams. I even made my last conference poster with this program, works great also with photos.
December 29, 2006
LaTeX: Table of Contents tricks
As my dissertation draft grows bigger (and messier…), I see that I need to restrict the depth of the Table of Contents. These lines do the trick:
\setcounter{tocdepth}{1}
\tableofcontents
First I tried to use tocdepth 2, but that gave me three levels. I guess this is because it counts the chapter level as 0.
I have also been wondering why the bibliography hasn’t shown up in the table of contents. I haven’t found an explanation, but the solution is this:
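One common way to get the bibliography into the table of contents (a sketch, not necessarily the exact solution referred to here) is to add a contents line just before the bibliography:

```latex
\cleardoublepage
\addcontentsline{toc}{chapter}{\bibname}
\bibliography{references}
```

The tocbibind package offers a more automatic alternative.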
April 25, 2006
Word Attachments
I have received a number of Word attachments recently. Nowadays, I only touch MS Word when I am forced to by other people, as I rely on TextWrangler, TextEdit, OpenOffice and LaTeX for my various text related activities.
I started to summarize why I think people should avoid Word, especially as e-mail attachments, but then I found some web pages with more well-thought-out and well-rounded arguments:
- Manuel M T Chakravarty’s Attachments in Proprietary Formats Considered Harmful
February 13, 2006
PDFs with inline video/animation
I recently discovered that it is possible to generate PDFs with video/animation included in the file using LaTeX. Works like a charm, but unfortunately the files only work properly in Acrobat (not Preview).
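The post does not name the package; at the time, embedding media in a PDF from LaTeX was typically done with the movie15 package (since superseded by media9). A minimal sketch:

```latex
\usepackage{movie15}
% ...
\includemovie[poster, autoplay]{0.8\linewidth}{0.45\linewidth}{video.mp4}
```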
Tag: PDF
December 30, 2022
Adding Title and Author to PDFs exported from Jupyter Notebook
I am doing some end-of-the-year cleaning on my hard drive and just uploaded the Jupyter Notebook I used in the analysis of a mobile phone lying still earlier this year.
For some future studies, I thought it would be interesting to explore the PDF export functionality from Jupyter. That worked very well, except that I didn’t get any title or author name at the top:
Then I found a solution on Stack Overflow.
June 16, 2022
Export images from a PDF file
I have previously written about how to export each of the pages of a PDF file as an image. That works well for presentation slides that should go on a web page, for example. But sometimes there is a need to export only the images within a page. This can be achieved with a small command line tool called pdfimages.
One way of using it is:
pdfimages -p -png file.pdf image
This will export all images in file.
August 24, 2020
Improving the PDF files in the NIME archive
This blog post summarizes my experimentation with improving the quality of the PDF files in the proceedings of the annual International Conference on New Interfaces for Musical Expression (NIME).
Centralized archive
We have, over the last few years, worked hard on getting the NIME proceedings adequately archived. Previously, the files were scattered on each year’s conference web site. The first step was to create a central archive on nime.org. The list there is automagically generated from a collection of publicly available BibTeX files that serve as the master document of the proceedings archive.
November 29, 2019
Creating individual image files from presentation slides
How do you create full-screen images from each of the slides of a Google Docs presentation without too much manual work? For the previous blog post on my Munin keynote, I wanted to include some pictures from my 90-slide presentation. There is probably a point and click solution to this problem, but it is even more fun to use some command line tools to help out. These commands have been tested on Ubuntu 19.
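A sketch of the kind of command line approach this describes (assuming the slides have first been exported to PDF; pdftoppm is part of poppler-utils on Ubuntu):

```shell
# Render every page of slides.pdf as a 150 dpi PNG: slide-01.png, slide-02.png, ...
pdftoppm -png -r 150 slides.pdf slide
```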
December 27, 2016
Split PDF files easily using Ubuntu scripts
One of the fun parts of reinstalling an OS (yes, I think it is fun!), is to discover new software and new ways of doing things. As such, it works as a “digital shower”, getting rid of unnecessary stuff that has piled up.
Trying to also get rid of some physical mess, I am scanning some piles of paper documents. This leaves me with some large multi-page PDFs that I would like to split up easily.
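A sketch using another poppler-utils tool (assuming each scan is a single multi-page PDF):

```shell
# Split scan.pdf into one file per page: page-1.pdf, page-2.pdf, ...
pdfseparate scan.pdf page-%d.pdf
```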
June 29, 2016
Shell script for compressing PDF files on Ubuntu
Back on OSX one of my favourite small programs was called PDFCompress, which compressed a large PDF file into something more manageable. There are many ways of doing this on Ubuntu as well, but nothing really as smooth as what I used on OSX.
Finally I took the time to figure out how I could make a small shell script based on ghostscript. The whole script looks like this:
#!/bin/sh
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.
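A typical version of such a Ghostscript script looks like this (the exact flags are assumptions; tune -dPDFSETTINGS between /screen, /ebook, and /printer for quality versus size):

```shell
#!/bin/sh
# Usage: compresspdf input.pdf output.pdf
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
   -dNOPAUSE -dQUIET -dBATCH -sOutputFile="$2" "$1"
```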
May 22, 2011
Reducing PDF file size
I am working on finalizing an electronic version of a large PDF file (600 page NIME proceedings), and have had some problems optimizing the PDF file. This may not be so strange, since the file is an assembly of 130 individual PDF files all made by different people and using all sorts of programs and OSes.
Usually, PDFCompress works wonders when it comes to reducing PDF file sizes, but for the proceedings-file it choked at some of the fonts.
April 18, 2011
Use Preview instead of Adobe Reader in TextMate
I just installed Adobe Reader on my new computer, only to discover that it hijacked the PDF preview window in TextMate when working on LaTeX documents. This also happened the last time I installed a new system, and I couldn’t remember what I did to change it back to using Preview as the default PDF viewer.
After googling around, I remembered that TextMate is just using the regular browser settings when it comes to displaying PDF files.
August 9, 2010
PDF merge in Preview
After I began using PDFCompress for minimizing PDF files, the only reason I have had for using the full Adobe Acrobat has been to combine PDFs. Now I realize that since OS 10.5 this functionality has been built into Preview. I guess I should really start reading the release notes of OSes and applications a bit more carefully, since I managed to get to 10.6 before I found out about this feature.
July 16, 2008
The challenge of creating booklets
I have been trying to create a booklet out of a standing A4 paper (the booklet size should be 105 x 297 mm), but this has proven to be much more difficult than I would have originally thought. It is a while since I have been doing things like this, and I still remember how easy it was to do such things back in the days when I used to use MS Publisher 1.
Tag: accessibility
December 22, 2022
An object-action-context approach to writing alt text
I came across an interesting blog post by Alex Chen on how to write better image descriptions for web pages. They propose an “object-action-context” approach when writing image descriptions. I see that such an approach could also be helpful for my sound actions project.
Adding better descriptions
I am soon getting to the end of my year-long project of recording one sound action daily. A sound action is a multimodal entity consisting of body motion and its resultant sound.
Tag: alt text
December 22, 2022
An object-action-context approach to writing alt text
I came across an interesting blog post by Alex Chen on how to write better image descriptions for web pages. They propose an “object-action-context” approach when writing image descriptions. I see that such an approach could also be helpful for my sound actions project.
Adding better descriptions
I am soon getting to the end of my year-long project of recording one sound action daily. A sound action is a multimodal entity consisting of body motion and its resultant sound.
Tag: images
December 22, 2022
An object-action-context approach to writing alt text
I came across an interesting blog post by Alex Chen on how to write better image descriptions for web pages. They propose an “object-action-context” approach when writing image descriptions. I see that such an approach could also be helpful for my sound actions project.
Adding better descriptions
I am soon getting to the end of my year-long project of recording one sound action daily. A sound action is a multimodal entity consisting of body motion and its resultant sound.
June 16, 2022
Export images from a PDF file
I have previously written about how to export each of the pages of a PDF file as an image. That works well for presentation slides that should go on a web page, for example. But sometimes there is a need to export only the images within a page. This can be achieved with a small command line tool called pdfimages.
One way of using it is:
pdfimages -p -png file.pdf image
This will export all images in file.
February 3, 2022
Different 16:9 format resolutions
I often have to convert between different resolutions of videos and images and always forget the pixel dimensions that correspond to a 16:9 format. So here is a cheat-sheet:
2160p: 3840×2160
1440p: 2560×1440
1080p: 1920×1080
720p: 1280×720
540p: 960×540
480p: 854×480
360p: 640×360
240p: 426×240
120p: 213×120
I also came across this complete list of true 16:9 resolution combinations, but the ones above suffice for my usage. Happy converting!
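The widths follow from height × 16/9. A small Python helper (illustrative) shows which heights give exact 16:9 widths and which must be rounded:

```python
def width_16_9(height):
    """Return the 16:9 width for a given height, rounded to the nearest pixel."""
    return round(height * 16 / 9)

# Heights divisible by 9 give exact widths:
for h in (2160, 1440, 1080, 720, 540, 360):
    print(f"{h}p: {width_16_9(h)}x{h}")

# 480p and 240p are not exact: 480 * 16/9 = 853.33...,
# which is commonly bumped to 854 so the width stays even for video codecs.
print(width_16_9(480))
```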
January 2, 2021
Create timelapse video from images with FFmpeg
I take a lot of timelapse shots with a GoPro camera. Usually, I do this with the camera’s photo setting instead of the video setting. That is because I find it easier to delete unwanted pictures from the series that way. It also simplifies selecting individual photos when I want that. But then I need a way to create a timelapse video from the photos easily.
Here is an FFmpeg one-liner that does the job:
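A typical one-liner for this task (the frame rate and glob pattern are assumptions, not necessarily the exact flags used here):

```shell
# Turn a folder of GoPro stills into a 30 fps timelapse video
ffmpeg -framerate 30 -pattern_type glob -i '*.JPG' \
       -c:v libx264 -pix_fmt yuv420p timelapse.mp4
```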
Tag: open research
December 20, 2022
Open Sourcing My Sound Actions Book
Last week, my book was published by the MIT Press, and I am happy to announce that the source code is available on GitHub. Most people are probably mainly interested in the content of the book. If so, you should grab a free copy of the final version. This blog post explains why I have made the source code available.
License
I was fortunate to secure funding from the University of Oslo to make the book freely available, which is often referred to as Open Access.
November 21, 2022
Explaining the Norwegian Career Assessment Matrix (NOR-CAM)
The Norwegian Career Assessment Matrix (NOR-CAM) is a toolbox for recognition and rewards in academic careers that was launched by Universities Norway in May 2021. I was part of the working group developing the toolbox and have blogged about this experience previously.
There has been much interest in NOR-CAM and I have held numerous presentations about it since it was launched. Most of these presentations have been held live (and often on Zoom).
February 4, 2022
The status of FAIR in higher education
I participated in the closing event of the FAIRsFAIR project last week. For that, I was asked to share thoughts on the status of FAIR in higher education. This is a summary of the notes that I wrote for the event.
What is FAIR?
First of all, the FAIR principles state that data should be:
Findable: The first step in (re)using data is to find them. Metadata and data should be easy to find for both humans and computers.
December 21, 2021
Why I Don't Review for Elsevier Journals
This blog post is written to have a URL to send to Elsevier editors who ask me to review for their journals. I have declined to review for Elsevier journals for at least a decade, but usually haven’t given an explanation. Now I will start providing one alongside my decline.
My decision is based on a fundamental flaw in today’s commercial journal publishing ecosystem. This is effectively summarized by Scott Aaronson, in an analogy in his Review of The Access Principle by John Willinsky
December 12, 2021
New article: Best versus Good Enough Practices for Open Music Research
After a fairly long publication process, I am happy to finally announce a new paper: Best versus Good Enough Practices for Open Music Research in Empirical Musicology Review.
Summary
The abstract reads:
Music researchers work with increasingly large and complex data sets. There are few established data handling practices in the field and several conceptual, technological, and practical challenges. Furthermore, many music researchers are not equipped for (or interested in) the craft of data storage, curation, and archiving.
October 26, 2021
MusicLab Copenhagen
After nearly three years of planning, we can finally welcome people to MusicLab Copenhagen. This is a unique “science concert” involving the Danish String Quartet, one of the world’s leading classical ensembles. Tonight, they will perform pieces by Bach, Beethoven, Schnittke and folk music in a normal concert setting at Musikhuset in Copenhagen. However, the concert is anything but normal.
Live music research
During the concert, about twenty researchers from RITMO and partner institutions will conduct investigations and experiments informed by phenomenology, music psychology, complex systems analysis, and music technology.
October 18, 2021
From Open Research to Science 2.0
Earlier today, I presented at the national open research conference Hvordan endres forskningshverdagen når åpen forskning blir den nye normalen? The conference is organized by the Norwegian Forum for Open Research and is coordinated by Universities Norway. It has been great to follow the various discussions at the conference. One observation is that very few question the transition to Open Research. We have, finally, come to a point where openness is the new normal.
September 20, 2021
More research should be solid instead of novel
Novelty is often highlighted as the most important criterion for getting research funding. That a manuscript is novel is also a major concern for many conference/journal reviewers. While novelty may be good in some contexts, I find it more important that research is solid.
I started thinking about novelty versus solidity when I read through the (excellent) blog posts about the ISMIR 2021 Reviewing Experience. These blog posts deal with many topics, but the question about novelty caught my attention.
September 7, 2021
Open Research puzzle illustration
It is challenging to find good illustrations to use in presentations and papers. For that reason, I hope to help others by sharing some of the illustrations I have made myself. I will share them with a permissive license (CC-BY) to be easily reused for various purposes.
I start with the “puzzle” that I often use in presentations about Open Research. It outlines some of the various parts of the research process and how they can be made (more) open.
August 19, 2021
Why universities should care about employee web pages
Earlier this year, I wrote about my 23 tips to improve your web presence. Those tips were meant to encourage academics to care about how their employee web pages look at universities. Such pages look different from university to university. Still, in most places, they contain an image and some standard information on the top, followed by more or less structured information further down. For reference, this is an explanation of how my employee page is built up:
June 1, 2021
Launching NOR-CAM – A toolbox for recognition and rewards in academic careers
What is the future of academic career assessment? How can open research practices be included as part of a research evaluation? These were some of the questions we asked ourselves in a working group set up by Universities Norway. Almost two years later, the report is ready. Here I will share some of the ideas behind the suggested Norwegian Career Assessment Matrix (NOR-CAM) and some of the other recommendations coming out of the workgroup.
January 26, 2021
Some Thoughts on the Archival of Research Activities
Recently, I have been engaged in an internal discussion at the University of Oslo about our institutional web pages. This has led me to realize that a university’s web pages are yet another part of what I like to think of as an Open Research “puzzle”:
Cutting down on web pages
The discussion started when our university’s communication department announced that they wanted to reduce the number of web pages. One way of doing that is by unpublishing a lot of pages.
October 30, 2020
MusicTestLab as a Testbed of Open Research
Many people talk about “opening” the research process these days. Due to initiatives like Plan S, much has happened when it comes to Open Access to research publications. There are also things happening when it comes to sharing data openly (or at least FAIR). Unfortunately, there is currently more talking about Open Research than doing. At RITMO, we are actively exploring different strategies for opening our research. The most extreme case is that of MusicLab.
August 27, 2020
Why is open research better research?
I am presenting at the Norwegian Forskerutdanningskonferansen on Monday, which is a venue for people involved in research education. I have been challenged to talk about why open research is better research. In the spirit of openness, this blog post is an attempt to shape my argument. It can be read as an open notebook for what I am going to say.
Open Research vs Open Science
My first point in any talk about open research is to explain why I think “open research” is better than “open science”.
August 13, 2020
NIME Publication Ecosystem Workshop
During the NIME conference this year (which was run entirely online due to the coronavirus crisis), I led a workshop called NIME Publication Ecosystem Workshop. In this post, I will explain the background of the workshop, how it was run in an asynchronous+synchronous mode, and reflect on the results.
If you don’t want to read everything below, here is a short introduction video I made to explain the background (shot at my “summer office” up in the Hardangervidda mountain range in Norway):
January 9, 2020
Podcast on Open Research
I was in Tromsø to hold a keynote lecture at the Munin conference a month ago, and was asked to contribute to a podcast they are running called Open Science Talk. Now it is out, and I am happy to share:
Open Science Talk · #26 Music Research
In this episode, we talk about Music Research and how it is to practice open research within this field. Our guest is Alexander Jensenius, Associate Professor at the Department of Musicology (IMV) and the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo.
November 29, 2019
Keynote: Experimenting with Open Research Experiments
Yesterday I gave a keynote lecture at the Munin Conference on Scholarly Publishing in Tromsø. This is an annual conference that gathers librarians, research administrators and publishers, but also some researchers and students. It was my first time at the conference, and I found it to be a very diverse, interesting and welcoming group of people.
Abstract Is it possible to do experimental music research completely openly? And what can we gain by opening up the research process from beginning to end?
June 7, 2019
Workshop: Open NIME
This week I led the workshop “Open Research Strategies and Tools in the NIME Community” at NIME 2019 in Porto Alegre, Brazil. We had a very good discussion, which I hope can lead to more developments in the community in the years to come. Below is the material that we wrote for the workshop.
Workshop organisers:
Alexander Refsum Jensenius, University of Oslo
Andrew McPherson, Queen Mary University of London
Anna Xambó, NTNU Norwegian University of Science and Technology
Dan Overholt, Aalborg University Copenhagen
Guillaume Pellerin, IRCAM
Ivica Ico Bukvic, Virginia Tech
Rebecca Fiebrink, Goldsmiths, University of London
Rodrigo Schramm, Federal University of Rio Grande do Sul
Workshop description
The development of more openness in research has been in progress for a long time, and has recently received a lot more political attention through the Plan S initiative, The Declaration on Research Assessment (DORA), EU’s Horizon Europe, and so on.
March 22, 2019
Towards Convergence in Research Assessment
I have written a short article for the latest edition of LINK, the magazine of the European Association of Research Managers and Administrators. I am self-archiving a copy of the article here.
Towards Convergence in Research Assessment Open Science is on everyone’s lips these days. There are many reasons why this shift is necessary and wanted, and also several hurdles. One big challenge is the lack of incentives and rewards. Underlying this is the question of what we want to incentivize and reward, which ultimately boils down to the way we assess research and researchers.
December 22, 2018
Open Research vs Open Science
Open Science is on everyone’s lips these days. But why don’t we use Open Research more?
This is a question I have been asking regularly after I was named Norwegian representative in EUA’s Expert Group on Science 2.0 / Open Science committee earlier this year. For those who don’t know, the European University Association (EUA) represents more than 800 universities and national rectors’ conferences in 48 European countries. It is thus a very interesting organization when it comes to influencing the European higher education and research environment.
November 22, 2016
Participating in the opening of The Guild
I participated in the opening of the Guild of Research Universities in Brussels yesterday. The Guild is
a transformative network of research-led universities from across the European continent, formed to strengthen the voice of universities in Europe, and to lead the way through new forms of collaboration in research, innovation and education.
The topic of the opening symposium was Open Innovation, a hot topic these days, and something that the European Commission is putting a lot of emphasis on.
Tag: open source
December 20, 2022
Open Sourcing My Sound Actions Book
Last week, my book was published by the MIT Press, and I am happy to announce that the source code is available on GitHub. Most people are probably mainly interested in the content of the book. If so, you should grab a free copy of the final version. This blog post explains why I have made the source code available.
License
I was fortunate to secure funding from the University of Oslo to make the book freely available, which is often referred to as Open Access.
Tag: Research
December 16, 2022
Exploring Essay Writing with You.com
There has been much discussion about ChatGPT recently, a chat robot that can write meaningful answers to questions. I haven’t had time to test it out properly, and it was unavailable when I wanted to check it today. Instead, I have played around with YouWrite, a service that can write text based on limited input.
I thought it would be interesting to ask it to write about something I know well, so I asked it to write a text based on an abbreviated version of the abstract of my new book:
September 5, 2022
Starting up the AMBIENT project
Today, I am starting up my new research project AMBIENT: Bodily Entrainment to Audiovisual Rhythms. I have recruited a great team and today we will have our first meeting to discuss how to work together in the coming years. I will surely write much about this project on the blog. For now, here is a quick teaser to explain what it is all about:
February 3, 2017
Starting up the MICRO project
I am super excited about starting up my new project - MICRO - Human Bodily Micromotion in Music Perception and Interaction - these days. Here is a short trailer explaining the main points of the project:
Now I have also been able to recruit two great researchers to join me, postdoctoral researcher Victor Evaristo Gonzalez Sanchez and PhD fellow Agata Zelechowska. Together we will work on human micromotion, how music influences such micromotion, and how we can get towards microinteraction in digital musical instruments.
December 27, 2016
Starting afresh
After four years as Head of Department (of Musicology at UiO), I am going back to my regular associate professor position in January. It has been both a challenging and rewarding period as HoD, during which I have learned a lot about managing people, managing budgets, understanding huge organizations, developing strategies, talking to all sorts of people at all levels in the system, and much more.
I am happy to hand over a Department in growth to the new HoD (Peter Edwards).
November 5, 2014
My research on national TV
A couple of weeks ago, NRK, the Norwegian broadcasting company, screened a documentary about my research together with the physiotherapists at NTNU in the CIMA project. The short story is that we have developed the tools I first made for the Musical Gestures Toolbox during my PhD into a system aimed at detecting signs of cerebral palsy in infants.
The documentary was made for the science program Schrödingers Katt, and I am very happy that they spent so much time on developing the story, filming and editing.
July 12, 2012
Paper #1 at SMC 2012: Evaluation of motiongrams
Today I presented the paper Evaluating how different video features influence the visual quality of resultant motiongrams at the Sound and Music Computing conference in Copenhagen.
Abstract
Motiongrams are visual representations of human motion, generated from regular video recordings. This paper evaluates how different video features may influence the generated motiongram: inversion, colour, filtering, background, lighting, clothing, video size and compression. It is argued that the proposed motiongram implementation is capable of visualising the main motion features even with quite drastic changes in all of the above mentioned variables.
May 6, 2012
Visual overviews in MS Academic Search
I have been using Google Scholar as one of my main sources for finding academic papers and books, and find that it has improved considerably over the last few years.
A while ago, they also opened up for creating your own academic profile. It is fairly basic, but they have done a great job in managing to find most of my papers, citations, etc.
Now Microsoft has also jumped into academic search and launched its own service.
August 31, 2010
Interdisciplinarity in UiO's new strategy
I am happy to see that the first point in the new UiO strategy plan is interdisciplinarity, or more specifically: “Et grensesprengende universitet” (“a boundary-breaking university”). Interdisciplinarity is always easier in theory than in practice, and this is something I am debating in a feature article in the latest volume (pages 32-33) of Forskerforum, the journal of the Norwegian Association of Researchers (Forskerforbundet).
I have written about interdisciplinarity on this blog several times before (here, here and here).
July 3, 2010
GDIF recording and playback
Kristian Nymoen has updated the Jamoma modules for recording and playing back GDIF data in Max 5. The modules are based on the FTM library (beta 12; betas 13-15 do not work), and can be downloaded here.
We have also made available three use cases in the (soon to be expanded) fourMs database: simple mouse recording, sound saber and a short piano example. See the video below for a quick demonstration of how it works:
July 2, 2010
New motiongram features
Inspired by the work Static no. 12 by Daniel Crooks that I watched at the Sydney Biennale a couple of weeks ago, I have added the option of scanning a single column in the jmod.motiongram% module in Jamoma. Here is a video that shows how this works in practice:
About motiongrams: A motiongram is a way of displaying motion (e.g. human motion) in the time-domain, somehow similar to how we are used to working with time-representations of audio (e.
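The principle can be sketched in a few lines: each frame is turned into a motion image by frame differencing, each motion image is collapsed to a single column, and the columns are stacked over time. This is my own simplified illustration of the idea, not the actual jmod.motiongram% implementation; the function name and array layout are hypothetical.

```python
import numpy as np

def motiongram(frames):
    """Sketch of a horizontal motiongram from greyscale video frames.

    frames: array of shape (T, H, W) with values in [0, 1].
    Returns an image of shape (H, T-1) with time along the x-axis.
    """
    frames = np.asarray(frames, dtype=float)
    motion = np.abs(np.diff(frames, axis=0))  # (T-1, H, W) motion images
    columns = motion.mean(axis=2)             # collapse width: one column per frame
    return columns.T                          # rows keep the vertical spatial axis
```

Scanning a single column of the original video, as described above, would simply replace the averaging step with indexing one fixed column per frame.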
July 1, 2010
Quantity of motion of an arbitrary number of inputs
In video analysis I have been working with what is often referred to as “quantity of motion” (which should not be confused with momentum, the product of mass and velocity p=mv), i.e. the sum of all active pixels in a motion image. In this sense, QoM is 0 if there is no motion, and has a positive value if there is motion in any direction.
Working with various types of sensor and motion capture systems, I see the same need to know how much motion there is in the system, independent of the number of variables and dimensions in the system studied.
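As a sketch of the idea (function names and the threshold value are my own illustrative choices, not part of any particular implementation), QoM can be computed by frame differencing and counting the pixels that changed; normalising by the number of pixels then gives a measure that is comparable across video sizes, in the spirit of a measure independent of the number of variables in the system:

```python
import numpy as np

def quantity_of_motion(prev_frame, frame, threshold=0.1):
    """Sum of 'active' pixels in the motion image (the absolute frame
    difference). Frames are greyscale arrays with values in [0, 1];
    returns 0 when there is no motion, a positive value otherwise."""
    motion_image = np.abs(frame.astype(float) - prev_frame.astype(float))
    return int((motion_image > threshold).sum())

def normalised_qom(prev_frame, frame, threshold=0.1):
    """QoM divided by the number of pixels, comparable across sizes."""
    return quantity_of_motion(prev_frame, frame, threshold) / frame.size
```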
June 30, 2010
Springer books at UiO
The new University of Oslo contract with Springer means that staff and students get access to all Springer books (~15 000) published since 2005 (titles available here). Now I just need to get an iPad to start reading…
April 30, 2009
i.e. and e.g.
A quick observation this morning as I was brushing up on a couple of grammatical things over at Grammar Girl while finishing a book chapter: Concerning the abbreviations i.e. (that is) and e.g. (for example), most American English dictionaries seem to suggest that they should be followed by a comma, while in British English it is fine to leave the commas out.
April 27, 2009
Updated software
I was at the Musical Body conference at University of London last week and presented my work on visualisation of music-related movements. For my PhD I developed the Musical Gestures Toolbox as a collection of components and modules for Max/MSP/Jitter, and most of this has been merged into Jamoma. However, lots of potential users are not familiar with Max, so over the last couple of years I have decided to develop standalone applications for some of the main tasks.
February 9, 2009
human-computer confluence
A couple of weeks ago I came across the English word confluence. The Oxford dictionary informs me that this means “the junction of two rivers, esp. rivers of approximately equal width”. That sounds very poetic, and it gets even better when you combine it with humans and computers, as they have done in the call for FP7 FET projects:
The initiative aims to investigate and demonstrate new possibilities emerging at the confluence between the human and technological realms.
January 30, 2009
Lab wiki
We have set up a new wiki for the fourMs lab. It is mainly intended as a place for tutorials and various tricks and tips in using the software and hardware systems available. Hopefully, the wiki can also be useful for other researchers in the field.
October 28, 2008
Three workshops in a row
The last few weeks have been quite busy here in Oslo. We opened the new lab just about a month ago, and since then I have organised several workshops, guest lectures and concerts both at UiO and at NMH. I was planning to post some longer descriptions of what has been going on, but decided to go for a summary instead.
First, we had an embedded systems workshop, which I have retroactively renamed the RaPMIC workshop (Rapid Prototyping of Music Instruments and Controllers).
October 23, 2008
Some thoughts on data signal processing in Max
We are having a Jamoma workshop at the fourMs lab this week. Most of the time is being spent on making Jamoma 0.5 stable, but we are also discussing some other issues. Throughout these discussions, particularly about how to handle multichannel audio in Max, I have realised that we should also start thinking about data signals as a type in itself.
Jamoma is currently, as is Max, split into three different “types” of modules and processing: control, audio and video.
September 30, 2008
Logos robot orchestra
While in Gent a couple of weeks ago I had the chance to visit the Logos Foundation and hear a concert with the robot orchestra. It was very interesting to hear Godfried-Willem Raes talk about his instruments and his music making, which has been going on for almost 40 years.
September 28, 2008
On the news
A journalist from the national broadcaster NRK came to our opening on Friday, and he made a story which was shown on the “cultural news” Friday night. The clip can be seen here (in Norwegian). Below is an image of Rolf Inge Godøy being interviewed before the opening.
September 16, 2008
Lab opening
As I have blogged about over on our project page, we are going to have an official opening of the new lab next Friday. Please come over if you are in the neighbourhood.
September 9, 2008
Blog and spam
I had a spam attack in the comment fields of the blog a few weeks ago, leaving me with close to 30 000 comments that I had to moderate manually. So I have decided to turn off the comment feature for now. Too bad, since I think the ability to comment on other people’s blog entries is an important part of the democratisation of the web.
September 9, 2008
Entrainment
One of the groups at the ISSSM showed a video of metronome synchronisation shot at the Nonlinear dynamics and medical physics group at Lancaster University. This is an old physics trick, first described by Huygens, but it is still fascinating. Here is the video, which I found on YouTube:
September 9, 2008
Multimodal sensing
AppleInsider reports on a set of patents for multimodal sensing (i.e. using two or more senses at the same time). Multimodal sensing has been a hot research topic in human-computer interaction for several years, based on the knowledge that human perception and cognition are fundamentally multimodal. If we want computers to respond more efficiently to human communication, they will also have to use more than one modality in their sensing and communication.
August 26, 2008
Open lab
We have slowly been moving into our new lab spaces over the last few weeks. The official opening of the labs is scheduled for Friday 26 September, but we had a pre-opening “Open lab” for the new music students last week, and here are some of the pictures shot by Anne Cathrine Wesnes during the presentation.
Here I am telling the students a little about our new research group, and showing the main room:
July 17, 2008
Black box in the lab
Last week we started setting up a “black box” in the new lab space. It is great to finally have a more permanent motion lab set up that we can use for various types of observation studies and recording sessions.
July 17, 2008
Exporting references from Google Scholar
I have written about the (hidden) possibility of exporting references from Google Scholar before, but since several people have asked about this lately, I will post a more detailed description of how you can do that here. It is very simple:
1. Go to Google Scholar
2. Select the Scholar preferences:
3. At the bottom of the preferences page you find a menu where you can choose which reference format you prefer (BibTex, Endnote, Reference Manager, etc.
May 23, 2008
Presentation at Mobile Music Workshop
Last week I presented the paper Some Challenges Related to Music and Movement in Mobile Music Technology at the Mobile Music Workshop in Vienna. A PDF of the paper is available here. Not sure if the abstract justifies the fairly dense paper, but at least it is compact.
Mobile music technology opens many new opportunities in terms of location-aware systems, social interaction etc., but we should not forget that many challenges faced in “immobile” music technology research are also apparent in mobile computing.
May 15, 2008
Gumstix and PDa
Another post from the Mobile Music Workshop in Vienna. Yesterday I saw a demo of the Audioscape project by Mike Wozniewski (McGill). He was using the Gumstix, a really small system running a Linux version called OpenEmbedded. He was running PDa (a Pure Data clone) and was able to process sensor data and run audio off the small device.
May 12, 2008
Kickoff-seminar
Some pictures from the kickoff-seminar for the Sensing Music-related Actions project last week:
Project leader Rolf-Inge Godøy started with a short presentation of the new project.
Then Marcelo M. Wanderley (McGill, Montreal) gave an overview of various types of motion capture solutions, and the pros and cons of each of them. He stressed two main challenges he has faced over the years: synchronisation of various types of mo-cap data with audio, video, music notation, etc.
May 10, 2008
New lab
It has now been confirmed that the Sensing Music-related Actions project will move into new spaces in a building called Veglaboratoriet. The building, which used to house various types of chemical laboratories, is located next to the computer science building. The downside is that we will be farther away from the music department (10 minutes to walk…), but we will be one floor up from the robot lab of the ROBIN group, the partner in our new project.
May 8, 2008
Motion Capture System Using Accelerometers
I came across a student project from Cornell on motion capture using accelerometers, based on an Atmel controller. It is a nice overview of many of the challenges faced when working with accelerometers, and the implementation seems to work well.
May 7, 2008
Anechoic chamber at UiO
A couple of weeks ago we had an excursion to an anechoic chamber in the basement of the physics department at the University of Oslo. This room is a remnant from the time when there was an acoustics group in Oslo (which later moved to Trondheim), and it was pure coincidence that we discovered that the old room is still intact, largely thanks to Arnt Inge Vistnes. He also happens to be the person who first introduced me to the Fourier transform back when I studied physics, so he got the challenge of holding a guest lecture for our sound theory students on the topic of (…) the Fourier transform.
April 24, 2008
Sensing Music-related Actions
The web page for our new research project called Sensing Music-related Actions is now up and running. This is a joint research project of the departments of Musicology and Informatics, and has received external funding through the VERDIKT program of the Research Council of Norway. The project runs from July 2008 until July 2011.
The focus of the project will be on basic issues of sensing and analysing music-related actions, and creating various prototypes for testing the control possibilities of such actions in enactive devices.
February 15, 2008
Recordings in Casa Paganini
The location of the EyesWeb Week is the facilities of the DIST group in the beautiful Casa Paganini, including a large auditorium next to the laboratories. This allows for an ecological setting for experiments, since performers can actually perform on a real stage with a real audience. I wish we could have something like this in Oslo!
Here is a picture from an experimental setup in which we are looking at the synchronisation between the musicians in a string trio.
February 14, 2008
Harvard adopts Open Access
The Chronicle reports that Harvard University has adopted an Open Access policy for all publications made by its faculty. This is great, and a major step towards making research more publicly available.
We have an Open Access system at UiO (called DUO), but so far this is mainly used to publish master theses. I have tried to push for the option to upload other types of publications there too, and this is supposed to be possible now from the FRIDA system which we use to document all research activities.
February 13, 2008
Motiongrams in EyesWeb!
We had a programming session this morning, and Paolo Coletta implemented a block for creating motiongrams in EyesWeb. It will be available in the new EyesWeb XMI release which will happen in the end of this week. Great!
February 12, 2008
Free Software
I am participating in the EyesWeb Week in Genoa this week. This morning Nicola Bernardini held a lecture about Free Software. I have heard him talk on this topic several times before, but as I now have some more experience participating in a Free Software project (i.e. Jamoma), I got more out of his ideas.
Some main points from the talk:
Use Free Software! Freeware and shareware may have nothing to do with Free Software.
February 4, 2008
Press coverage
There has been quite a lot of media interest concerning my PhD disputation last week. A Norwegian news search engine reports some 80 appearances, and this is not counting all the radio interviews I have done… Here are some examples:
TV:

- NRK - Store Studio
- NRK - Østlandssendingen
- TV Budstikka
- Nettavisen
- ScanpixNTB TV

Newspapers:

- Forskning.no
- Dagens Næringsliv
- NRK.no
- Vårt land
- 40-something versions of the story that NTB (the national news agency) wrote: “Dr.
January 18, 2008
Open Sound Control
The newly refurbished OSC forum web site has sparked off some discussions on the OSC_dev mailing list. One interesting note was a reply from Andy W. Schmeder on how OSC should be spelled out correctly:
The short answer is, use “Open Sound Control”. The other form one may encounter is “OpenSound Control”, but we don’t use that anymore. Any additional forms you may encounter are probably unintentional.
I have been using various versions over the years (including OpenSoundControl), but I guess this is the official answer, since Andy works at CNMAT.
January 10, 2008
Paper version gone - Electronic version ready
The paper versions of my dissertation arrived late Friday, and I spent the following days burning 100 CD-ROMs to accompany them… The books were announced as available yesterday morning, and all were gone by around lunchtime.
If someone did not get their hands on the paper version, here is (finally) the link to the electronic version (8.1 MB). This is a file optimised for screen usage, so it is in RGB colours, and with internal and external hyperlinks.
January 5, 2008
Dissertation is printed!
My dissertation came from the printing company yesterday. Here’s a picture of some of them:
It feels a bit weird to see the final book lying there, being the result of a year of planning and three years of hard work. I wrote most of it last spring, submitting the manuscript in July. Now, about half a year later, I have a much more distant relationship to the whole thing. Seeing the final result is comforting, but it is also sad to let go.
December 17, 2007
Challenges with dissertation printing
The time has come to prepare my dissertation for official printing. Luckily, I had done most of the formatting when creating the manuscript for the committee, so I expected an easy process. It hasn’t been too bad, but some challenges have appeared:
CMYK: It has been several years since I last had to deal with professional printing, so I had totally forgotten about the need to prepare all colour images in CMYK. Similarly, the people at the printing office asked me to convert all images that are supposed to appear in b/w to grayscale.
December 11, 2007
Coordinate systems
I am updating the GDIF messaging in the jmod.mouse module in Jamoma. Trond suggested using the OpenGL convention for ranges and coordinate systems, which should give something like this:
This means that values on the vertical axis would fall between [-1 1], while values on the horizontal axis would depend on the size of the screen. For my screen (1280x800) this gives a range of [-1.6 1.
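The mapping can be sketched as follows (a hypothetical helper of my own; the screen size is just the example from the post):

```python
def mouse_to_opengl(x_px, y_px, width=1280, height=800):
    """Map pixel coordinates (origin top-left) to an OpenGL-style
    coordinate system: vertical axis in [-1, 1] with y pointing up,
    horizontal axis scaled by the aspect ratio of the screen."""
    aspect = width / height                # 1.6 for a 1280x800 screen
    x = (2 * x_px / width - 1) * aspect    # range [-aspect, aspect]
    y = 1 - 2 * y_px / height              # range [-1, 1]
    return x, y
```

With these defaults, the top-left corner maps to (-1.6, 1.0) and the bottom-right corner to (1.6, -1.0).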
November 26, 2007
PhD accepted for public disputation
I am happy to announce that my dissertation entitled “ACTION — SOUND: Developing Methods and Tools to Study Music-Related Body Movement” has been accepted for public disputation for the degree Philosophiae Doctor (Ph.D.) at the University of Oslo:
Trial lecture: Wednesday 30 January, 16:15-17:00, Salen, ZEB
Disputation: Thursday 31 January, 10:15, Gamle festsal, Sentrum

The dissertation will be available to download from this website in a couple of weeks, and a couple of weeks before the disputation you may also get hold of one of the printed copies (for free) from the administration office of the department.
November 6, 2007
Bug Labs: Lego-like computer modules
Bug Labs has announced new open-source, Lego-like computer modules running Linux. The idea is to create hardware that can easily be assembled in various ways. Looks neat!
October 25, 2007
Careers After Music Psychology
Richard Parncutt is asking for responses from ex-music psychology students for the Careers After Music Psychology survey.
If you have studied music psychology at any time (even if just one course), we would be grateful for about 20 minutes of your valuable time.
Please participate regardless of whether or not your current occupation involves music or psychology in any way.
This questionnaire aims:

- to document the careers of ex-students of music psychology
- to inform current students of music psychology about career opportunities
- to develop career-oriented strategies for teaching music psychology
- to promote music psychology among potential employers

I am very much looking forward to seeing the results of this research, and I hope (and expect) that they will find people ending up in a wide range of disciplines.
October 23, 2007
Music Performance Research
I heard about the initiative last year at Music & Gesture 2 in Manchester, and now I see that the new online journal Music Performance Research is actually up and running.
Music Performance Research is an international peer-reviewed journal that disseminates theoretical and empirical research on the performance of music. Its purpose is to disseminate research on the nature of music performance from both theoretical and empirical perspectives. The journal publishes contributions from all disciplines that are relevant to music performance, including archaeology, cultural studies, composition, computer science, education, ethnomusicology, history, medicine, music theory and analysis, musicology, philosophy, physics, psychology, neuroscience and sociology.
October 3, 2007
Radical Musicology
Radical Musicology is a peer-reviewed online journal produced in the International Centre for Music Studies at Newcastle University (UK). It was established to provide a forum for progressive thinking across the whole field of musical studies, and encourages work that draws on any and all relevant disciplinary and interdisciplinary perspectives.
Sounds good, but will it actually manage to live up to this goal? After browsing through the articles in the first volume, it does not appear particularly “radical”, but rather follows in the tradition of new musicology.
September 25, 2007
Idea, Concept, Product
Earlier today I went to the release seminar of a new book on creativity and idea development called Slagkraft - Håndbok i idéutvikling (a handbook of idea development) by Erik Lerdahl. In his introduction, Erik Lerdahl stressed that creativity is not something that happens at random, but rather a “muscle” that can be trained. Nice metaphor.
What I found most interesting during the seminar was the talk by Ragnar Johansen, the marketing director from Stabburet, a Norwegian food producer.
September 22, 2007
Doepfer USB64
The new Doepfer USB64 looks very interesting with its 64 analog (or digital) inputs and €125 price tag. I am not so excited about the MIDI plug, and wonder whether they intend to communicate some higher-resolution data through the USB plug.
September 19, 2007
Giant Music Ball
I have been preparing for Forskningstorget, an annual science fair in the city centre of Oslo, the last couple of days. Last year we made a Music Troll, and this year we are making a giant music ball for people to play with.
The ball is built from a huge boat buoy, 120 cm in diameter, made for tank boats and stormy weather. This makes it just perfect for a music installation which is supposed to survive some thousand children over the next couple of days…
August 26, 2007
Cognitive Load Theory
I have been sitting through a number of presentations over the last few days (and many more will follow…), and came to think about some key points from Cognitive Load Theory:
Working memory is only limited when you’re learning new information. Once information is in long-term memory, it can be brought back to working memory in very large amounts. In a classroom situation, only limited material is going to be retained, unless notes are taken or handed out.
August 26, 2007
Interview on ADHD
On Friday I appeared in an interview in Aftenposten, one of the larger newspapers in Norway. The interview describes a recently started collaboration between the Musical Gestures group and Terje Sagvolden’s group working on ADHD. More precisely, they are interested in using my Musical Gestures Toolbox and motiongrams for studying the movements of rats and children with ADHD.
August 16, 2007
Reflections on a PhD project 1
I am slowly adjusting to normal life after finishing my dissertation in July. Needless to say, completing a dissertation is a long physical and psychological experience. In the coming weeks I will write up some of the thoughts I have had during the final lap of the project.
Looking at my blog activity over the course of the project, it is interesting to note that it can also serve as a “measure” for my research activity.
May 16, 2007
Musikkteknologidagene 2007
Musikkteknologidagene 2007, a Norwegian contact meeting for people working in the field(s) of music technology, will be organised at the Norwegian Academy of Music on 10 and 11 October. I initiated the first of these meetings back in 2005, and am happy that we have managed to keep the concept alive. Both research on, and use of, music technology are growing rapidly in Norway, as everywhere else. However, while many of us working in the field have large international networks in our special branches of the music technology world, we often seem to know little about what is happening in our own country.
May 15, 2007
Journal of interdisciplinary music studies
There is a new music journal out titled Journal of interdisciplinary music studies, which seems to be freely available online. I was particularly pleased to read Richard Parncutt’s opening paper on the history and future of systematic musicology. While it has been overshadowed (and to some extent suppressed) by historical musicology for the last decade, there seems to be a growing interest in systematic musicology today.
However, as Parncutt argues, much of this research is carried out under other names and in other departments, e.
May 12, 2007
Skim v 0.3
I recently became aware of Skim, a PDF reader and note-taker for OS X made by the team behind BibDesk. Skim is designed to help with reading and annotating scientific papers in PDF, or as they say: “Stop printing and start skimming”.
While v0.2 of Skim didn’t contain much more functionality than what’s already available in Preview, v0.3 starts to become interesting. I particularly like the possibility of saving and printing the notes separately.
March 15, 2007
ISSSM 2007
Students in musicology, music cognition and technology should consider ISSSM 2007:
Following on the success of the first international summer school in systematic musicology (ISSSM 2006), the summer school will be held for the second time at IPEM, the research centre of the Department of Musicology of Ghent University (Belgium). This year courses will focus on current topics in the research field such as embodied music cognition, music information retrieval and music and interactive media.
March 14, 2007
EMMA: Extensible MultiModal Annotation markup language
Strange that I didn’t see this before. Apparently, W3C has made a draft for multimodal annotation called EMMA: Extensible MultiModal Annotation markup language. The abstract of the document reads:
The W3C Multimodal Interaction working group aims to develop specifications to enable access to the Web using multimodal interaction. This document is part of a set of specifications for multimodal systems, and provides details of an XML markup language for containing and annotating the interpretation of user input.
March 13, 2007
Export BibTex from Google Scholar
I just realised that it is possible to export BibTex entries directly from Google Scholar. This, and other bibliography entry formats, can be set at the bottom of the Scholar Preferences panel. I can’t tell you how much easier this makes my life!
March 12, 2007
Pareto principle
The Pareto principle (also known as the 80-20 rule, the law of the vital few and the principle of factor sparsity) states that, for many phenomena, 80% of the consequences stem from 20% of the causes.
[…]
Mathematically, where something is shared among a sufficiently large set of participants, there will always be a number k between 50 and 100 such that k% is taken by (100 - k)% of the participants; however, k may vary from 50 in the case of equal distribution to nearly 100 in the case of a tiny number of participants taking almost all of the resources.
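This can be illustrated with a small sketch (function name and method are mine, for illustration only): sort the shares in descending order and scan prefixes until the cumulative share held by the top (100 - k)% of participants reaches k%.

```python
def pareto_k(shares):
    """Return k (in percent) such that roughly k% of the total is held
    by the top (100 - k)% of participants. Scans prefixes of the sorted
    shares for the point where cumulative share plus participant
    fraction crosses 1."""
    shares = sorted(shares, reverse=True)
    total, n = sum(shares), len(shares)
    cum = 0.0
    for m, share in enumerate(shares, start=1):
        cum += share
        if cum / total + m / n >= 1:
            return 100 * cum / total
    return 100.0
```

An equal distribution gives k = 50, while a classic 80-20 distribution gives k = 80, matching the range stated above.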
March 8, 2007
Delete-button
I need to be better at using the delete-button. Computers have made it so easy to write and save lots of text, probably too easy. Since there are no limits, I tend to just shuffle things around in the documents (I have one document per chapter) that make up my dissertation draft.
Over the last few years, I have been collecting all these small snippets of comments, thoughts, and quotes, and I have started to think that now is the time to start using that delete-button rather than just keep moving things around.
February 23, 2007
jill/txt » the novelty of blogs is wearing off?
jill/txt is discussing whether the novelty of blogs is wearing off:
For the second semester running, I have not succeeded in getting my students enthused about blogging. […] And they’re smart interested students. Who are bizarrely enough writing papers about blogging while saying they don’t really understand blogging. Because you’ve only posted three posts to your own blog, I tell them, tearing my hair out.
I think the comment by Linn is right on the target:
February 20, 2007
Concordance
DevonThink Pro has a concordance function that counts all the words in my research database, currently containing a little more than one thousand documents. This might seem like a trivial function, but it really is an interesting read. First there is a bunch of standard words:
| Frequency | Word |
| --- | --- |
| 243220 | the |
| 111156 | and |
| 41456 | for |
| 38622 | that |
| 34588 | The |
| 25630 | with |
| 25591 | are |
| 18692 | this |
| 17210 | from |
| 15045 | can |
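A concordance of this kind is essentially a case-sensitive word count, which explains why “the” and “The” appear as separate entries. A minimal sketch (my own illustration, not DevonThink’s implementation):

```python
from collections import Counter

def concordance(texts):
    """Case-sensitive word frequency list, most frequent first."""
    counts = Counter()
    for text in texts:
        counts.update(text.split())  # split on whitespace, keep case
    return counts.most_common()
```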
February 20, 2007
Recording Hoax
Craig Sapp (formerly at CCARH now at CHARM) writes:
I have been analyzing the performances of Chopin Mazurkas and have been noticing an unusual occurrence: the performances of the same two pianists always matched whenever I do an analysis for a particular mazurka. In fact, they matched as well as two different re-releases of the same original recording.
The full story about how the tracks have been slightly time-stretched, panned and EQed before being rereleased is covered in a recent story in Gramophone.
February 20, 2007
Some thoughts on GDIF
We had a meeting about GDIF at McGill yesterday, and I realised that people had very different thoughts about what it is and what it can be used for.
While GDIF is certainly intended for formalising the way we code movement and gesture information for realtime usage in NIME using OSC, it is also supposed to be used for offline analysis. I think the best way of doing this is to have a three-level approach, as sketched here:
February 17, 2007
Movement, action, gesture
Ever since I started my PhD project I have been struggling with the word gesture. Now as I am working on a theory chapter for my dissertation, I have had to really try and decide on some terminology, and this is my current approach:
I use movement as the general term to describe the act of changing physical position of body parts related to music performance or perception. Action is used to denote goal-directed movements that form a separate unit.
February 17, 2007
On reading and writing blogs
I am spending quite a bit of time on reading and writing blogs, e-mail lists and forums every day. After talking to a person who thought this would be just a waste of time, I have been thinking about why it could be justified from a research perspective. While in many cases it could be considered a waste of time, in other cases it is really crucial for my research. Working in a fast-moving field, where quite a bit of the activity happens online and very little is available through traditional research channels (e.
February 17, 2007
Trond Lossius' fellowship report
I spent my flight to Montreal (which became much longer than I expected when I was rescheduled through Chicago) reading Trond Lossius’ report for the Fellowship in the arts program. He addresses a number of interesting topics:
Commenting on the necessity for carrying out research for instead of on art, he discusses the concept of “art as code”:
It is not only a question of developing tools. [..] Programming code becomes a meta-medium, and creating the program is creating the art work.
February 16, 2007
Mind maps
After reading Ola’s blog entry about content management, I decided to give MindManager a try. Except for the price tag (luckily they have educational discounts), I like it a lot. It is the first mindmapping software I find useful, and I particularly like the possibility to make notes on any entry. This makes it possible to really use it for mind mapping, and not only as a visualisation tool.
Previously, I have tested NovaMind, which creates some fancy-looking mindmaps, but the GUI is much too clumsy for me, and it seems focused on creating printable mindmaps.
February 12, 2007
Critical Thinking About Word and .doc
A comment on why university teachers should think critically about Word and .doc:
Many of us teach cultural analysis and critical thinking in our writing classes. Our first year readers are full of cultural commentary, and we use these texts to teach our students to question the status quo and understand more deeply the implications of the choices they make in this consumer culture.
Do writing teachers do the same when they tell students to submit their documents as .
February 8, 2007
Adding Disciplines to Two-dimensional Interdisciplinarity Sketch
It is always difficult to categorise things, since it is always possible to think of other ways of doing it. But here I have tried to include some of the various fields that my work touches upon in my two-axes sketch:
The idea is to include this in the introduction of my dissertation.
February 8, 2007
MSc in Music Tech at Georgia Tech
Georgia Tech has been hiring a young and interesting music tech faculty over the last years, and now they are starting a Master of Science program in music tech focusing on the design and development of novel enabling music technologies. This is yet another truly interdisciplinary music tech program to appear over the last couple of years, accepting students from a number of different backgrounds, including music, computing, and engineering.
February 8, 2007
Two-dimensional Interdisciplinarity Sketch
I am working on the introduction to my dissertation, and am trying to place my work in a context. Officially, I’m in a musicology program (Norwegian musicology ≈ science of music) in the Faculty of Humanities, but most of my interests are probably closer to psychology and computer science. Quite a lot of what I have been doing has also been used creatively (concerts and installations) although that is not really the focus of my current research.
February 5, 2007
PhD, ph.d. and other abbreviations
PhD degrees are new in Norway. Until a couple of years ago, each faculty had its own degrees: dr.art., dr.ing., etc. Now that Norwegian universities are awarding degrees entitled philosophiae doctor, I have gotten used to reading and writing the abbreviation as PhD. However, I just learned that the official Norwegian abbreviation is ph.d., with dots, no spaces, and uncapitalized letters.
February 5, 2007
Vancouver guidelines
As a member of the university’s research committee, I have been reading the Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Writing and Editing for Biomedical Publication (more popularly known as the Vancouver guidelines) as a basis for creating new and general guidelines for the university.
I find the section about authorship credit particularly interesting. Authors of a paper should meet the following three criteria:
Substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data.
January 24, 2007
Petition for guaranteed public access to publicly-funded research results
In January 2006 the European Commission published the Study on the Economic and Technical Evolution of the Scientific Publication Markets of Europe. The Study resulted from a detailed analysis of the current scholarly journal publication market, together with extensive consultation with all the major stakeholders within the scholarly communication process (researchers, funders, publishers, librarians, research policymakers, etc.). The Study noted that ‘dissemination and access to research results is a pillar in the development of the European Research Area’ and it made a number of balanced and reasonable recommendations to improve the visibility and usefulness of European research outputs.
January 14, 2007
iPhone sensing
As I have mentioned elsewhere, I am thrilled by the fact that various sensing technologies are getting so cheap that they are incorporated everywhere. As could be seen from the presentation of Apple’s new iPhone, it includes an accelerometer to sense the tilt of the device (and also movement, if they decide to use that for anything), a proximity sensor (ultrasound?) to turn off the display when the phone is put to the ear, and a light sensor to change the brightness of the screen (?
January 12, 2007
Vibrating Plates
Derek Kverno and Jim Nolen have studied the vibration of circular, square, and rectangular plates with unbound edges, and have posted some very nice images of radiation patterns of vibrating plates.
January 11, 2007
Gestures and technology
What I find most fascinating about Apple’s new iPhone is the shift from buttons to body. Getting away from the paradigm of pressing buttons to make a call or to navigate, the iPhone boasts a large multi-touch screen where the user will be able to interact by pointing at pictures and objects. Furthermore, the built-in rotation sensor will sense the direction of the device and rotate the screen accordingly, somewhat similar to how new digital cameras automatically rotate the pictures you take.
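The orientation sensing can be sketched with a toy example: given the gravity vector from a 3-axis accelerometer, pick the orientation from the dominant axis. This is only an illustration of the principle, not Apple’s actual algorithm, and the axis and sign conventions here are assumptions:

```python
# Hypothetical sketch: deriving screen orientation from gravity as
# measured by a 3-axis accelerometer (units of g). The axis and sign
# conventions are assumptions made for this illustration.

def screen_orientation(x, y):
    """Return the dominant orientation from the gravity components
    along the device's x (right) and y (up) axes."""
    if abs(y) >= abs(x):
        # Gravity mostly along the long axis of the device
        return "portrait" if y <= 0 else "portrait-upside-down"
    # Gravity mostly along the short axis
    return "landscape-left" if x > 0 else "landscape-right"
```

A real implementation would low-pass filter the signal and add hysteresis so the screen does not flicker between orientations near the 45-degree boundary.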
January 11, 2007
Smart programs
I had a discussion about which software tools I use for my research, so here is a list of the most important (in no particular order):
- Firefox: with Adblock and mouse gestures.
- NetNewsWire: for handling all the blogs I am reading.
- MarsEdit: to write blog entries. Publishes directly to my WordPress-driven blog.
- OmniGraffle: for making diagrams. I even made my last conference poster with this program; it works great with photos, too.
January 10, 2007
The Laws of Simplicity
John Maeda’s Laws of Simplicity:
- REDUCE – The simplest way to achieve simplicity is through thoughtful reduction
- ORGANIZE – Organization makes a system of many appear fewer
- TIME – Savings in time feel like simplicity
- LEARN – Knowledge makes everything simpler
- DIFFERENCES – Simplicity and complexity need each other
- CONTEXT – What lies in the periphery of simplicity is definitely not peripheral
- EMOTION – More emotions are better than less
- TRUST – In simplicity we trust
- FAILURE – Some things can never be made simple
- THE ONE – Simplicity is about subtracting the obvious, and adding the meaningful
January 2, 2007
How to Sell Your Book, CD, or DVD on Amazon
How to distribute things through Amazon.
- Get an ISBN (for a book) or a UPC (for a CD or DVD). For one book it costs $125; for one CD, $55; for one DVD, $89.
- Get a bar code based on the ISBN or UPC. Costs $10, or may be included in the UPC.
- Sign up with Amazon, $30 per year.
- Duplicate your stuff; include the bar code on the outside.
- Ship two copies to Amazon.
- Send a cover scan.
- Track sales.
- Register it (optional).
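As a quick sanity check, the up-front first-year cost for a single book, using the figures above (the yearly Amazon fee recurs):

```python
# Tally of the up-front costs listed above for selling one book.
# Figures are taken directly from the post.

isbn = 125        # ISBN for one book
bar_code = 10     # bar code based on the ISBN
amazon_fee = 30   # Amazon sign-up fee, per year

first_year_cost = isbn + bar_code + amazon_fee
print(first_year_cost)  # 165
```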
December 31, 2006
5 Ways to use Quicksilver
I came across Dave Parry’s blog academhack, with some interesting comments on Mac software in an academic context. I was particularly happy about his 5 Ways to use Quicksilver, which helped me get started using the web and dictionary search in Quicksilver.
December 20, 2006
Hatten's Musical Gestures
An interesting quote from Robert Hatten’s 2004 book on musical gestures:
Musical gesture is biologically and culturally grounded in communicative human movement. Gesture draws upon the close interaction (and intermodality) of a range of human perceptual and motor systems to synthesize the energetic shaping of motion through time into significant events with unique expressive force. The biological and cultural motivations of musical gesture are further negotiated within the conventions of a musical style, whose elements include both the discrete (pitch, rhythm, meter) and the analog (dynamics, articulation, temporal pacing).
December 20, 2006
How to Shut up and Get to Work!
Joel Spolsky writes about flow:
We all know that knowledge workers work best by getting into “flow”, also known as being “in the zone”, where they are fully concentrated on their work and fully tuned out of their environment. They lose track of time and produce great stuff through absolute concentration…trouble is that it’s so easy to get knocked out of the zone. Noise, phone calls, going out for lunch, having to drive 5 minutes to Starbucks for coffee, and interruptions by coworkers – especially interruptions by coworkers – all knock you out of the zone.
December 20, 2006
Linear presentations
I have been thinking about what I wrote about improvisation a couple of weeks ago. While preparing for a presentation last week, I was thinking about how linear my presentation software (Apple’s Keynote) is. It is as bad as PowerPoint when it comes to locking you into a linear presentation style. This is fine if you have a clear idea of what you would like to say and which order you want to say things in, but I often find that I have several sections that could be organized differently depending on the audience, time constraints, etc.
December 20, 2006
Movement-Sound Couplings
I am working on the theory chapter of my dissertation, and am trying to pin down some terminology. For a long time I have been using the concept of gesture-sound relationships to denote the intimate links between a physical movement and the resultant sound. However, since I am throwing away gesture for now, I also need to reconsider the rest of my vocabulary.
Hodgins (2004) uses the term music-movement structural correspondences, which I find problematic since it places music first.
December 20, 2006
Movement, Action, Gesture
I have been struggling with the word gesture for a while. I, and many others in the music cognition/technology community, have been using it to denote music-related actions (i.e. physical body movement).
Not only is the term confusing in the musicology community (e.g. the way Hatten writes about inner-musical qualities), but it is also a misleading term in behavioral and linguistics communities, where gesture usually denotes communicative hand movement or facial expressions.
December 18, 2006
Spectator-listener
Usually, we use the word listener when describing the perceiver in a musical context. This, however, does not fit well with the premise of my research which is that music cognition is multimodal in nature. I am reluctant to use the word listener, since it favours listening over the other modalities. The composite spectator-listener (as used by Fells in this paper) includes both the auditory and visual modalities, and is much better than only listener but still lacks the other modalities.
December 6, 2006
On Improvisation
Yesterday, someone commented that improvisation is all about being able to play some random stuff, in realtime. My experience is really the opposite. Learning to improvise on a musical instrument is really all about learning scales, phrases, motifs, and getting experienced in putting them together in a structured way. In realtime.
The same is true for improvised presentations and speeches. After holding a number of presentations on my research lately, I have been thinking about how similar the preparation process for a presentation is to a music performance.
December 5, 2006
CiteULike and BibDesk
I have started testing CiteULike for creating an online bibliography, and came across this blog post on using CiteULike and BibDesk. I would really love to be able to synchronize BibDesk with CiteULike but that doesn’t seem like an option thus far.
December 5, 2006
Why Blog for Documentary?
Adrian Miles writes about why blogging is interesting for documentary film makers, and summarizes the discussion into the following key points:
- to document, discuss, reflect and engage with your own practice
- to promote and build awareness around your current project
- to spread promotion and recognition across the life of the entire project, and not just post-release
- so you have a network identity (when someone Googles you, or your project, they find what you say about things first)
- to present work in progress (brief rough cuts, for example)
- to present parts or all of your footage that ends up on the floor
- to solicit, by invitation or discovery, new material (people find you - see 4) relevant to your project
- to develop your own network skills so that the leap from old to new is lessened
- transparency about your process, which complements the implicit ethics of documentary as a practice
- to provide another way of contributing to your community (of documentary filmmakers, and the subject or subjects of your documentary work)

I think these are equally interesting for all sorts of other projects, including my own research.
December 4, 2006
WiiMote used as a mouse on Windows
This video shows the WiiMote used as a mouse on Windows.
December 1, 2006
Guest lecture: Benoît Bardy
Benoît Bardy held a very interesting guest lecture on the topic “Perception-Action Dynamics Underlying Gesture Classification” yesterday.
An interesting opening remark was on terminology. He commented that in his field (kinesiology) they never use the term gesture at all, while in the ConGAS community no one seems to talk about movement. He suggested the following definitions for some key terms:
- Gesture: non-verbal communication, body language, sign, expressive movements
- Movement: change in position/orientation
- Action: goal-directed movement
- Skill: capacity to reach a goal with efficient performance

I have tried to understand whether there is a difference between movement and motion, but he couldn’t enlighten me there.
November 23, 2006
Profcasting
Adrian Miles coins the term “profcasting” about academic podcasting:
One of the reasons podcasting has had such an easy adoption within universities is that the form fits so comfortably within existing teaching models. […] The problems with it, […] It is asymmetric (I talk to you, you listen), it constructs the learner as passive, and it struggles to provide room for clarification and commentary (dialogue). On the other hand it can be very effective for those students who cannot attend the lecture […]
November 23, 2006
Thinking in graphics
I am very visually oriented and often prefer a graphic representation over text. Now, as I am entering the writing phase of my dissertation, I am looking at how to better incorporate visuals (and other media) as part of it. I will probably end up with a more or less traditionally formatted document, although I have been thinking about writing a hypertext document. However, I will probably make it an electronic document (PDF) with included audio and video, and of course plenty of graphics and images.
November 1, 2006
Making conference posters
InDesign used to be my program of choice for design work, but since it is super-slow on my MacIntel, I have been looking for another solution. OmniGraffle Pro has been my main tool for creating small vector graphics for a while, so I gave it a chance to make a full poster. I am very happy with the workflow, and the end result looks great. It handles pictures effortlessly (although I miss some simple photo-tweaking utilities and cropping), and the graphics look very crisp even in a large format.
October 30, 2006
Trond Lossius on sound art
In an interview, Trond Lossius discusses his take on sound art. He mentions how he treats video as an advanced spotlight, giving the eyes something to look at while listening to the sound:
I will mostly use video as advanced light sources. The idea is that they should invite the audience to move around the room, and thereby also explore how the sound varies in the space. The movements, textures, and colours in the videos can give the eye something to rest on, while also inviting connections to the qualities of the sound.
October 25, 2006
UB drivers for Phidgets
Phidgets just released a new library and drivers for Intel Macs. This was the last thing I had been missing since I got my new MacBook this summer.
October 11, 2006
Lego instruments
A group of German students are working on a project called Stekgreif, in which a number of popular sensors are built as Lego blocks. Adding power through the Lego bricks makes it possible to build instruments and other fun things entirely out of Lego.
October 9, 2006
Gypsy MIDI controller
Nick Rothwell reviews the Gypsy MIDI controller in Sound on Sound. An excerpt from his conclusion:
I know some artists who could build great live performances around a Gypsy MIDI suit, and others who would merely look like plonkers. As to the first question, here at Cassiel Central we’ve been through all manner of MIDI controllers and sensing systems, from fader boxes (motorised and not) through accelerometers, ultrasound systems, camera tracking, joysticks, game controllers and Buchla devices, and some common issues emerge.
September 29, 2006
Norwegian Science Fair
Last weekend we participated (again) with a stand at a big science fair down in the city centre of Oslo during the Norwegian Research Days.
The most interesting thing, and also what I have spent the most time on lately, was a “music troll” I have been making together with Einar Sneve Martinussen and Arve Voldsund. The troll is basically a box with four speakers on the sides and four arms sticking out, each ending in a head with built-in sensors.
September 19, 2006
Nokia 5500
Nokia 5500 is a new sports phone with a built-in pedometer and the ability to use gestures (well, only tapping so far) to control music playback. As accelerometers get cheaper, I expect to see lots of new gesture-controlled devices.
September 2, 2006
DevonThink
Steven Berlin Johnson has an interesting blog entry on his use of DevonThink Pro:
Over the past few years of working with this approach, I’ve learned a few key principles. The system works for three reasons:
1) The DevonThink software does a great job at making semantic connections between documents based on word frequency.
2) I have pre-filtered the results by selecting quotes that interest me, and by archiving my own prose.
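The word-frequency matching in 1) can be sketched with a toy example: cosine similarity over word counts. DevonThink’s actual algorithm is proprietary; this only illustrates the principle:

```python
# Toy sketch of connecting documents by word frequency, roughly the
# principle behind such semantic matching: cosine similarity between
# word-count vectors. (The real algorithm is proprietary and far more
# sophisticated; this is an illustration only.)
import math
from collections import Counter

def similarity(text_a, text_b):
    """Cosine similarity between word-frequency vectors of two texts."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Texts that share many words score close to 1; texts with no overlap score 0, which is why pre-filtering the archive (point 2) matters so much for the quality of the connections.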
September 2, 2006
Dissertation Calculator
Well, the Dissertation Calculator suggests how my PhD research could have been laid out.
August 19, 2006
A researcher's life
I overheard a conversation the other day where a person commented that university researchers have such a relaxed life, only sitting in their offices reading and writing books all the time. This claim involves (at least) two parts: 1) quiet/relaxed and 2) reading/writing. My own experience as a university research fellow tells a very different story:
Quiet/relaxed: Except for a couple of conferences, this summer was, indeed, quiet. That was mainly because I chose to work when everyone else was on vacation… But looking back at the last week, which happened to be the semester opening week (universities and schools start early here in Norway), I don’t think I ever had more than a couple of minutes of “quiet time” in between the rush of e-mails, telephones, meetings, lectures, concerts, etc.
August 2, 2006
Microsoft Live Labs: Photosynth
Researchers at Microsoft Live Labs are working on Photosynth based on Photo Tourism from the University of Washington. By structuring the photos based on their relative position to each other, it is possible to navigate in a large photo collection in a 3D style way. The system looks very responsive from the video, but I would be curious to see how it works in a real-world context.
It would be very interesting to create similar navigation tools for audio.
July 17, 2006
New book: New Digital Musical Instruments: Control and Interaction Beyond the Keyboard
Eduardo Miranda and Marcelo M. Wanderley have just released a new book called New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. The chapters are:
- Musical Gestures: Acquisition and Mapping
- Gestural Controllers
- Sensors and Sensor-to-Computer Interfaces
- Biosignal Interfaces
- Toward Intelligent Musical Instruments

So far, most publications in this field have been in conference proceedings, so it is great to have a book that can be used in teaching.
July 15, 2006
Electromyography
For some experiments we are conducting on piano playing I have been looking for a way of measuring muscle activity, or electromyography as it is more properly called:
Electromyography (EMG) is a medical technique for evaluating and recording physiologic properties of muscles at rest and while contracting. EMG is performed using an instrument called an electromyograph, to produce a record called an electromyogram. An electromyograph detects the electrical potential generated by muscle cells when these cells contract, and also when the cells are at rest.
July 15, 2006
vlog 3.0 [a blog about vogs] » Is Labsome a Place?
Adrian Miles has an interesting reflection on the lack of a “place” to work in traditional humanities:
Well, one way to approach this is to recognise that in trad. humanities (which I’d define as having a written-based and print-literate methodology and practice) place is rendered secondary to idea. We write, and what is written is always regarded as more important than the act of writing (the first separation of theory and practice in trad.
July 5, 2006
NIME paper on GDIF
Here is the poster I presented at NIME 2006 in Paris based on the paper Towards a Gesture Description Interchange Format.
The paper was written together with Tellef Kvifte, and the abstract reads:
This paper presents our need for a Gesture Description Interchange Format (GDIF) for storing, retrieving and sharing information about music-related gestures. Ideally, it should be possible to store all sorts of data from various commercial and custom made controllers, motion capture and computer vision systems, as well as results from different types of gesture analysis, in a coherent and consistent way.
June 27, 2006
Emotionally intelligent interfaces
Peter Robinson (University of Cambridge) is working on emotionally intelligent interfaces and has made a setup for a summer show at a science museum in London, where they can track 20 different types of emotional responses using computer vision:
Can you read minds? The answer is most likely ‘yes’. You may not consider it mind reading but our ability to understand what people are thinking and feeling from their facial expressions and gestures is just that.
June 21, 2006
ICMC papers
My paper entitled “Using motiongrams in the study of musical gestures” was accepted to ICMC 06 in New Orleans. The abstract is:
Navigating through hours of video material is often time-consuming, and it is similarly difficult to create good visualization of musical gestures in such a material. Traditional displays of time-sampled video frames are not particularly useful when studying single-shot studio recordings, since they present a series of still images and very little movement related information.
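The motiongram idea itself can be sketched in a few lines, assuming grayscale video frames stored as NumPy arrays: take the absolute inter-frame difference (the motion image) and collapse each one to a single row by averaging over the vertical axis, then stack the rows over time. This is a simplified illustration of the principle, not the exact implementation from the paper:

```python
# Minimal motiongram sketch: collapse each motion image (absolute
# inter-frame difference) to one row by averaging over height, and
# stack the rows over time. A simplified illustration only.
import numpy as np

def motiongram(frames):
    """frames: array of shape (time, height, width).
    Returns an array of shape (time - 1, width)."""
    motion = np.abs(np.diff(frames.astype(float), axis=0))
    return motion.mean(axis=1)  # average over height -> one row per frame

# Synthetic example: a bright spot moving horizontally across 5 frames
frames = np.zeros((5, 8, 8))
for t in range(5):
    frames[t, 4, t] = 1.0
mg = motiongram(frames)
```

Reading the resulting image left to right shows horizontal position over time; averaging over width instead would give the vertical counterpart.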
June 21, 2006
Interaction Design
We have started a collaboration between UiO and AHO, and some of the music technology students followed courses with the interaction designers at AHO this spring semester. This was a great success, and I was impressed with what came out of it.
Henrik Marstrander has worked on a table interface where he can control various musical parameters, and Jon Olav Eikenes and Marie Wennesland have made a multi-touch interface inspired by Jeff Han.
May 23, 2006
Nike+iPod
Apple and Nike have teamed up and released the Nike+iPod package, which allows using an iPod nano as a pedometer and sharing the training information online. It is based on a wireless accelerometer (1.37 x 0.95 x 0.30 inches, 0.23 ounces, using a proprietary protocol at 2.4 GHz) and a receiver that connects to the iPod (1.03 x 0.62 x 0.22 inches, 0.12 ounces). The suggested price is US$29, which is very cheap considering the included accelerometer.
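The pedometer part can be illustrated with a naive sketch: count upward crossings of a threshold in the acceleration magnitude. Real products use far more robust filtering; the threshold here is an arbitrary assumption:

```python
# Naive pedometer sketch: count upward threshold crossings in the
# acceleration magnitude. The 1.2 g threshold is an arbitrary
# assumption for illustration; real products filter much more robustly.
import math

def count_steps(samples, threshold=1.2):
    """samples: iterable of (x, y, z) accelerations in g."""
    steps, above = 0, False
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold and not above:
            steps += 1       # rising edge: one step
            above = True
        elif magnitude <= threshold:
            above = False    # reset once the impact has passed
    return steps

# Two simulated foot impacts separated by rest (gravity only = 1 g)
walk = [(0, 0, 1), (0, 0, 1.5), (0, 0, 1), (0, 0, 1.5), (0, 0, 1)]
```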
May 20, 2006
Sonic Visualiser
Sonic Visualiser from Queen Mary’s is yet another software tool for visualizing audio content. However, some features stand out:
- Cross-platform: available for OS X, Linux, and Windows
- GPL’ed
- Native support for AIFF, WAV, MP3 and Ogg (but what about AAC?)
- Annotations: support for adding labelled time points and defining segments, point values, and curves. The annotations can be overlaid on top of waveforms and spectrograms
- Time-stretch

Vamp plugins are at the core of Sonic Visualiser, and it seems like they want this to become a standard for non-realtime audio plugins.
May 17, 2006
Blogging
Katherine Wilson writes about how she underestimated blogging when she got started:
At the start I underestimated what it could be used for. It’s a database, a diary, a place to jot down notes that don’t fit anywhere else, a place to stake out your research territory, a self-promotion tool, an information bank, an ideas exchange, a support community, a progress-log, a device for self-discipline, confidence-tracker, a complaints department, a file storage system.
May 17, 2006
PDF reading
Marc Hedlund at O’Reilly summarizes the good things about PDF books:
- They are searchable.
- They are portable.
- They can often be bought and downloaded immediately.

I am still trying to decide what I think about this. In general, I prefer to have all articles and reference literature available as PDFs in my digital library, currently organized using DevonThink Pro. As computer screens finally get bigger, brighter, and higher in resolution (even the new MacBook sports 1280x800 pixels on its 13-inch screen), reading on screen is becoming increasingly pleasant.
May 13, 2006
Marnix de Nijs, media artist
The installation Spatial Sounds (100dB at 100km/h) by Marnix de Nijs and Edwin van der Heide was set up at Usine C during the Elektra festival.
A speaker is mounted on a metallic arm, rotating at different speeds depending on the people in the room. Ultrasonic sensors detect the distance to people in the space and change the sound being played as well as the speed of rotation (more technical info here).
May 9, 2006
Cycling '74: MaxMSP => Working with Max is not easy
I found an interesting thread on the Max list entitled Working with Max is not easy. But what is? Before we learn something, we find it difficult; once we know it, we find it easy. I guess a problem with Max, if it can be called a problem, is that its low entry level (at least compared to many other programming languages) might mislead the user into thinking that this is something that can be mastered in two weeks.
May 9, 2006
Frank A. Russo
I came across the web page of Frank A. Russo and found a very interesting paper, Hearing Aids and Music, discussing the auditory design of hearing aids:
Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true.
May 4, 2006
Online or Invisible? [Steve Lawrence; NEC Research Institute]
Steve Lawrence discusses the importance of online research papers in the paper Online or Invisible?:
The results are dramatic. There is a clear correlation between the number of times an article is cited, and the probability that the article is online. More highly cited articles, and more recent articles, are significantly more likely to be online.
[…]
Free online availability of scientific literature offers substantial benefits to science and society. To maximize impact, minimize redundancy, and speed scientific progress, authors and publishers should aim to make research easy to access.
May 3, 2006
Novint Falcon
We are currently working with Phantom Omni haptic devices at McGill, but unfortunately they are rather expensive. I have been looking forward to testing the Novint Falcon, which is supposed to sell for around $100, but after being in touch with the company, it seems they will not start shipping devices before next year.
I really think such devices will change the way we work with computers. The computer experience has been two-dimensional for way too long, and my initial testing of 3D haptic devices shows how much potential lies in this type of human-computer interaction.
May 1, 2006
Trigonometry
I had to brush up on my trigonometry to solve some mapping issues and found this nice overview. It is strange how much I have forgotten about these things; I really need to get back to my linear algebra books! I never really understood the point of learning those vector transformations back when I studied maths, but now that I have to implement some 3D gesture models, I see that they are actually very useful.
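As a small example of the kind of vector transformation involved, here is a rotation of a 3D point about the z-axis; it is just trigonometry arranged as a matrix product:

```python
# Rotating a 3-D point about the z-axis: the standard rotation matrix
#   [cos t, -sin t, 0; sin t, cos t, 0; 0, 0, 1]
# applied to (x, y, z). A simple stdlib sketch, no libraries needed.
import math

def rotate_z(point, theta):
    """Rotate (x, y, z) by theta radians around the z-axis."""
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

# Rotating the unit x-vector a quarter turn lands it on the y-axis
rotated = rotate_z((1.0, 0.0, 0.0), math.pi / 2)
```

The same pattern, with two more matrices for the x- and y-axes, covers any 3D orientation; chaining them is where the linear algebra really pays off.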
April 27, 2006
Sidney Fels lecture
Just went to a lecture by Sidney Fels from the Human Communication Technologies lab and MAGIC at the University of British Columbia (interestingly enough, located in the Forest Sciences Centre…). He was talking on the topic of intimate control of musical instruments and presented some different projects:
- GloveTalkII: “a system that translates hand gestures to speech through an adaptive interface.”
- Iamascope: a kaleidoscope-like thing, where users see themselves on a big screen while controlling a simple sound synthesis.
April 25, 2006
OSC - MIDI address space
My post over at the Open Sound Control forum:
I guess we are all trying to get rid of MIDI, but as long as we have tons of gear around, it would be good to have a generic way of describing MIDI information in OSC. Perhaps I am missing something obvious, but I have looked around and haven’t found any suggestions for a full implementation of MIDI messages as an OSC address space.
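To illustrate what I mean, here is one hypothetical way such an address space could look: unpack the MIDI status byte into message type and channel, and map the data bytes onto an OSC-style address string. The scheme is my own invention for this sketch, not an existing standard:

```python
# Hypothetical MIDI-to-OSC mapping sketch: decode a 3-byte MIDI
# channel message into an OSC-style address string. The address
# scheme (/midi/<channel>/<type>) is invented for illustration,
# not an agreed standard.

MIDI_TYPES = {0x8: "noteoff", 0x9: "noteon", 0xB: "cc", 0xE: "pitchbend"}

def midi_to_osc(status, data1, data2):
    """Return an OSC-style string for a 3-byte MIDI channel message."""
    msg_type = MIDI_TYPES.get(status >> 4, "unknown")  # high nibble
    channel = (status & 0x0F) + 1                      # 1-indexed channel
    return f"/midi/{channel}/{msg_type} {data1} {data2}"

# Note-on, channel 1, middle C, velocity 100
osc = midi_to_osc(0x90, 60, 100)
```

A full address space would also need the 2-byte messages (program change, channel pressure) and system messages, which is exactly the kind of complete mapping I have not found anywhere.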
April 25, 2006
Wired 11.09: PowerPoint Is Evil
Edward Tufte has an interesting Wired article entitled PowerPoint Is Evil. The main point is that PowerPoint forces people to create presentations in a certain way, and he especially comments on the problems of bullet points.
I have made quite a lot of PowerPoint presentations over the years, and I clearly see his point. It is, indeed, easy to fall into the habit of creating lots of bullet points covering everything you want to say.
April 23, 2006
WFS in electronic music
Today I went to a guest lecture by Marije Baalman on WaveFieldSynthesis (a spatial sound reproduction principle based on the Huygens principle) over at Concordia. I heard a demonstration of WFS at IRCAM a couple of years back, and it was good to (finally) get a good theoretical introduction to the field.
They usually test it with 24 speakers, but they are now going to build a permanent 900-speaker setup at the Technical University of Berlin to create a surround WFS system.
April 21, 2006
LibriVox
LibriVox is a volunteer project set up to record all books in the public domain and make them freely available in audio format on the internet. Besides the joy of having audiobooks, this is also very interesting from a speech/voice research perspective.
Another source for open-source text files is the French Incipit blog. Interestingly enough, I found a French version of Nicholas Cook’s introduction to music!
April 2, 2006
SPEAR
SPEAR is an application for audio analysis, editing and synthesis. The analysis procedure (which is based on the traditional McAulay-Quatieri technique) attempts to represent a sound with many individual sinusoidal tracks (partials), each corresponding to a single sinusoidal wave with time-varying frequency and amplitude.
It offers some great features, and I particularly like the possibility to easily select single partials and edit them directly. Most controls also work in realtime.
April 2, 2006
Teatrix
Last week I participated in the Teatrix workshop organized by BEK at USF Verftet in Bergen. The idea was to explore technology in a stage setting. The people participating were: Paola Tognazzi, H.C. Gilje, Gisle Frøysland, Marie Nerland, Trond Lossius, Thorolf Thuestad, Tim Place, Iver Findlay, Linda Birkedal, Alexander Refsum Jensenius, Georges Gagneré, Anders Gogstad.
The most interesting for me was the chance to work together with Tim Place and Trond Lossius on Jamoma, and during the week we had the chance to discuss and develop quite a lot.
April 2, 2006
VLDCMCaR
Bob L. Sturm at UC Santa Barbara:
VLDCMCaR (pronounced vldcmcar) is a MATLAB application for exploring concatenative audio synthesis using six independent matching criteria. The entire application is encompassed in a graphical user interface (GUI). Using this program a sound or composition can be concatenatively synthesized using audio segments from a corpus database of any size. Mahler can be synthesized using hours of Lawrence Welk; howling monkeys can approximate President Bush’s speech; and a Schoenberg string quartet can be remixed using Anthony Braxton playing alto saxophone.
March 30, 2006
Apple - Sound and Hearing
John Lazarro writes on the Auditory list:
Apple released a software update today for iPods that lets users set a maximum dB level for the device, and lets parents lock down the maximum dB level of their children’s iPod with a combination lock. Apple also put up a website on how to use the feature to limit long-term hearing damage.
March 29, 2006
Daniel Rozin Wooden Mirrors
Daniel Rozin has made some Wooden Mirrors from various materials. Any person standing in front of one of these pieces is instantly reflected on its surface. The mechanical mirrors all have video cameras, motors and computers on board and produce a soothing sound as the viewer interacts with them.
March 28, 2006
The Silent Speaker
Forbes.com writes about Charles Jorgensen who is working on what he calls subvocal speech recognition. He attaches a set of electrodes to the skin of his throat and his words are recognized by a computer even when he is not producing any sound.
March 24, 2006
Music and Gesture 2
Just got to know that I got a paper accepted at the Music and Gesture 2 conference. The presentation will focus on new techniques for representing musical gestures (i.e. physical movement) and how they have been implemented in the Musical Gestures Toolbox.
March 24, 2006
NIME 06 - IRCAM - Paris
I also recently got to know that two papers I have been co-authoring have been accepted to NIME in Paris. One is called “Towards a Coherent Terminology and Model of Instrument Description and Design” and the other “Towards a Gesture Description Interchange Format”. The idea in the latter is to develop a set of gestural descriptors as a GDIF to match the Sound Description Interchange Format (SDIF) which has been around for some years.
February 24, 2006
Membrane Switches and Linear Position Sensors
Mark just pointed me to the web page of Spectra Symbol, a company making membrane switches and linear position sensors. I particularly like the circular position sensor!
February 13, 2006
Instant replay may help to mould memories
Nature News writes about a recent discovery of how rats running through a maze tend to have a backwards replay of the route when resting:
As the rats ran along the track, the nerve cells fired in a very specific sequence. This is not surprising, because certain cells in this region are known to be triggered when an animal passes through a particular spot in a space. But the researchers were taken aback by what they saw when the rats were resting.
February 10, 2006
Metadata Hootenanny
Metadata Hootenanny is a tool for easily adding metadata (annotations and chapters) to QuickTime files. It also has a nice timeline function, showing the frames (or only keyframes) of the movie file, making it easy to navigate and add chapter information. It seems like a quick way of adding information to movie files, although it lacks the more advanced features found in real annotation software.
February 5, 2006
Video Annotation Software
A short overview of various video annotation software:
- Anvil by Michael Kipp is a Java-based program for storing several layers of annotations, like a text sequencer. It can only use AVI files and is intended for gesture research (understood as gestures used when talking).
- Transana from University of Wisconsin, Madison, is developed mainly as a tool for transcribing and describing video and audio content. Seems like it is mainly intended for behavioural studies.
February 2, 2006
HCI at Stanford University: d.tools
d.tools is a hardware and software system that enables designers to rapidly prototype the bits (the form) and the atoms (the interaction model) of physical user interfaces in concert. d.tools was built to support design thinking rather than implementation tinkering. With d.tools, designers place physical controllers (e.g., buttons, sliders), sensors (e.g., accelerometers), and output devices (e.g., LEDs, LCD screens) directly onto form prototypes, and author their behavior visually in our software workbench.
January 26, 2006
Stanford on iTunes
Stanford on iTunes provides access to a wide range of Stanford-related digital audio content via the iTunes Music Store, Apple’s popular music jukebox and online music store. The project includes two sites: a public site, targeted primarily at alumni, which includes Stanford faculty lectures, learning materials, music, sports, and more; and an access-restricted site for students, delivering course-based materials and advising content.
January 24, 2006
Integrated sensing display
Apple has patented a new Integrated sensing display:
On Jan. 12, the US Patent & Trademark Office revealed Apple’s new patent application titled “Integrated sensing display.” This is certainly the year of the integrated camera, as this patent presents.
An integrated sensing display is disclosed. The sensing display includes display elements integrated with image sensing elements. As a result, the integrated sensing device can not only output images (e.g., as a display) but also input images (e.
January 16, 2006
Intelligent MIDI Sequencing with Hamster Control
I first came across the Intelligent MIDI Sequencing with Hamster Control project a couple of years ago, and still find it very funny!
January 15, 2006
Converting MPEG-2 .MOD files
I have been struggling with figuring out the easiest way of converting MPEG-2 .MOD files coming out of a JVC Everio HD camera to something else, and finally found a good solution in Squared 5 - MPEG Streamclip which allows for converting these files to more or less all codecs that are available on the system. It is also a good idea to rename the .MOD files to .M2V or .
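The renaming step is easy to script. Here is a minimal Python sketch (the `rename_mod_files` function name and folder layout are just for illustration; the actual transcoding still happens in MPEG Streamclip):

```python
from pathlib import Path

def rename_mod_files(folder: str) -> list[Path]:
    """Rename every .MOD file in `folder` to .M2V so that video tools
    recognize the files as MPEG-2 streams. A sketch; adjust the target
    extension if your tools prefer something else."""
    renamed = []
    for mod in sorted(Path(folder).glob("*.MOD")):
        target = mod.with_suffix(".M2V")
        mod.rename(target)
        renamed.append(target)
    return renamed
```

Calling `rename_mod_files("/path/to/camera/files")` returns the list of renamed paths, which is handy for logging what was changed.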
January 15, 2006
retrievr - search by sketch
“retrievr is an experimental service which lets you search and explore in a selection of Flickr images by drawing a rough sketch.
[…]
retrievr is based on research conducted by Chuck Jacobs, Adam Finkelstein and David Salesin at the University of Washington: Fast Multiresolution Image Querying (1995).”
January 14, 2006
Digital thoughts by Paul Lansky
I came across the piece Notjustmoreidlechatter by composer Paul Lansky, showcasing a fascinating use of voice for creating musical rhythm and texture. And then I found the article Digital thoughts where he explains some of his compositional ideas throughout the years.
January 14, 2006
Philosophy in the Flesh
Philosophy in the Flesh by George Lakoff and Mark Johnson starts with these nice sentences:
The mind is inherently embodied.
Thought is mostly unconscious.
Abstract concepts are largely metaphorical.
January 12, 2006
Demonstrations of Auditory Illusions
I came across a nice site with demonstrations of auditory illusions. There is also Diana Deutsch’s page.
December 30, 2005
Flat Earth on Wikipedia
In a rather bizarre Wikipedia discussion about the flatness of Earth, I found an interesting statement:
- At the scale of less than 10^-9 meters or so, the earth has an undefinable shape. From about 10^-9 meters to about 10^4 meters, the earth is flat. From about 10^4 meters to about 10^9 meters, the earth is a sphere. At any scale greater than about 10^9 meters, the earth is a point.
December 30, 2005
Web Phases
I have been reading up on hypertext and hypermedia theory and looked around for papers on hypermusic. One of the few papers I found on the topic was by John Maxwell Hobbs describing his 1998 piece Web Phases.
December 29, 2005
Open Sound Control forum
The CNMAT people have made a forum at the Open Sound Control site. OSC is a way of communicating musical information between devices, much in the same way as MIDI, but without all the problems of MIDI (low resolution etc). Although OSC seems to have gained ground in the research community, I think we all have to support it more if it is ever going to be accepted by the commercial industry.
December 28, 2005
Mirror Neurons
The concept of mirror neurons was discovered at the University of Parma, Italy, some years back. It shows how we have the same neural activity whether we perform a movement ourselves or just watch someone else doing it. NOVA has made an excellent documentary about mirror neurons.
December 27, 2005
Academic English
Thomas Hylland Eriksen has some interesting thoughts on academic English:
“With the total dominance of Microsoft Word, the result is comparable to that of the total dominance of English (or, for most of us, EFL). Everything is compatible with everything else; yet, many of us feel, even if we cannot prove, that it shapes our thoughts in insidious ways.”
December 19, 2005
10 Tips on Writing the Living Web
10 Tips on Writing the Living Web is a good list of reminders for writing web pages:
- Write for a reason
- Write often
- Write tight
- Make good friends
- Find good enemies
- Let the story unfold
- Stand up, speak out
- Be sexy
- Use your archives
- Relax!
December 19, 2005
Project Xanadu
Looking for some references to nonlinear writing and hypertext, I ended up on the web page of Project Xanadu started by Ted Nelson in 1960. I read about it many years ago, when the web was still quite young, and it was fascinating to read more about the ideas of true nonlinear writing.
December 8, 2005
MPEG-7 & MPEG-21
Looking for frameworks for storing metadata, I am trying to understand more about the current state of MPEG-7, a “multimedia content description standard” and the MPEG-21 multimedia framework.
December 2, 2005
In-shoe dynamic pressure measuring
“The pedar system is an accurate and reliable pressure distribution measuring system for monitoring local loads between the foot and the shoe.”
www.novel.de
November 30, 2005
A Change of Heart
Some interesting thoughts on the meaning of a PhD from “Nathaniel Worther” the pseudonym of an engineer hunting for a job these days:
Chronicle Careers: A Change of Heart
Tag: book
December 13, 2022
New Book: Sound Actions - Conceptualizing Musical Instruments
I am happy to announce that my book Sound Actions - Conceptualizing Musical Instruments is now published! I am also thrilled that this is an open access book, meaning that it is free to download and read. You are, of course, also welcome to pick up a paper copy!
Here is a quick video summary of the book’s content:
In the book, I combine perspectives from embodied music cognition and interactive music technology.
August 24, 2022
Still Standing Manuscript in Preparation
I sent off the final proofs for my Sound Actions book before the summer. I don’t know when it will actually be published, but since it is off my table, I have had time to work on new projects.
My new project AMBIENT will start soon, but I still haven’t been able to write up all the results from my two projects on music-related micro-motion: Sverm and MICRO. This will be the topic of the book I have started writing this summer, with the working title Still Standing: Exploring Human Micromotion.
July 23, 2021
Sound Actions Manuscript in Preparation
Ever since I finished my dissertation in 2007, I have thought about writing it up as a book. Parts of the dissertation were translated and extended in the Norwegian-language textbook Musikk og bevegelse (which, by the way, is out of print but freely available as an ebook). That book focused primarily on music-related body motion and was written for the course MUS2006 at the University of Oslo. However, my action-sound theory was only partially mentioned and never properly presented in a book format.
December 2, 2020
Meeting New Challenges
Life is always full of challenges, but those challenges are also what drives personal development. I am constantly reminded about that when I see this picture, which was made by my mother Grete Refsum when I started in school.
I think the symbolism in the image is great. The eager child is waiting with open arms for an enormous ball. Even though I am much older now, I think the feeling of starting on something new is always the same.
March 10, 2017
New Book: A NIME Reader
I am happy to announce that Springer has now released a book that I have been co-editing with Michael J. Lyons: “A NIME Reader: Fifteen Years of New Interfaces for Musical Expression”. From the book cover:
What is a musical instrument? What are the musical instruments of the future? This anthology presents thirty papers selected from the fifteen year long history of the International Conference on New Interfaces for Musical Expression (NIME).
August 6, 2009
Book manuscript ready
Over the last year I have been working on a textbook based on my dissertation. It started out as a translation of my dissertation into Norwegian, but I quickly realized that an educational text is much more useful. So in practice I have written a totally new book, although it draws on research from my dissertation. The title of the book is Musikk og bevegelse (Music and movement) and that is exactly what it is about.
January 5, 2008
Dissertation is printed!
My dissertation came from the printing company yesterday. Here’s a picture of some of them:
It feels a bit weird to see the final book lying there, being the result of a year of planning and three years of hard work. I wrote most of it last spring, submitting the manuscript in July. Now, about half a year later, I have a much more distant relationship to the whole thing. Seeing the final result is comforting, but it is also sad to let go.
Tag: instruments
December 13, 2022
New Book: Sound Actions - Conceptualizing Musical Instruments
I am happy to announce that my book Sound Actions - Conceptualizing Musical Instruments is now published! I am also thrilled that this is an open access book, meaning that it is free to download and read. You are, of course, also welcome to pick up a paper copy!
Here is a quick video summary of the book’s content:
In the book, I combine perspectives from embodied music cognition and interactive music technology.
June 17, 2021
New publication: NIME and the Environment
This week I presented the paper NIME and the Environment: Toward a More Sustainable NIME Practice at the International Conference on New Interfaces for Musical Expression (NIME) in Shanghai/online with Raul Masu, Adam Pultz Melbye, and John Sullivan. Below is our 3-minute video summary of the paper.
And here is the abstract:
This paper addresses environmental issues around NIME research and practice. We discuss the formulation of an environmental statement for the conference as well as the initiation of a NIME Eco Wiki containing information on environmental concerns related to the creation of new musical instruments.
November 25, 2018
Lecture-performance setup
I have not been very good at blogging recently, primarily because I have been so busy starting up both RITMO and MCT. As things are calming down a bit now, I am also trying to do some digital cleaning up, archiving files, organizing photos, etc.
As part of the cleanup, I came across this picture of my setup for a lecture-performance held at the humanities library earlier this fall. It consists of a number of sound makers, both acoustic and electronic.
March 10, 2017
New Book: A NIME Reader
I am happy to announce that Springer has now released a book that I have been co-editing with Michael J. Lyons: “A NIME Reader: Fifteen Years of New Interfaces for Musical Expression”. From the book cover:
What is a musical instrument? What are the musical instruments of the future? This anthology presents thirty papers selected from the fifteen year long history of the International Conference on New Interfaces for Musical Expression (NIME).
Tag: nime
December 13, 2022
New Book: Sound Actions - Conceptualizing Musical Instruments
I am happy to announce that my book Sound Actions - Conceptualizing Musical Instruments is now published! I am also thrilled that this is an open access book, meaning that it is free to download and read. You are, of course, also welcome to pick up a paper copy!
Here is a quick video summary of the book’s content:
In the book, I combine perspectives from embodied music cognition and interactive music technology.
June 11, 2022
Adding subtitles to videos
In my ever-growing collection of FFmpeg-related blog posts, I will today show how to add subtitles to videos. These tricks are based on the need to create a captioned version of a video I made to introduce the Workshop on NIME Archiving for the 2022 edition of the International Conference on New Interfaces for Musical Expression (NIME). This is the video I discuss in this blog post:
Note that YouTube supports turning on and off the subtitles (CC button).
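For reference, the two usual FFmpeg approaches are muxing the subtitles as a separate, toggleable track, or burning them into the video frames. The sketch below only builds the command lines (the file names are placeholders, and the helper functions are mine):

```python
# Sketch: building the two common ffmpeg invocations for subtitling.
# Pass the resulting lists to subprocess.run(cmd) to actually execute.

def soft_subtitle_cmd(video: str, srt: str, out: str) -> list[str]:
    """Mux the .srt as a separate subtitle track (mov_text for MP4),
    leaving audio and video streams untouched (stream copy)."""
    return ["ffmpeg", "-i", video, "-i", srt,
            "-c", "copy", "-c:s", "mov_text", out]

def burned_in_cmd(video: str, srt: str, out: str) -> list[str]:
    """Re-encode the video with the subtitles drawn into the frames
    using the subtitles filter."""
    return ["ffmpeg", "-i", video, "-vf", f"subtitles={srt}", out]

print(" ".join(soft_subtitle_cmd("talk.mp4", "talk.srt", "talk-cc.mp4")))
```

The soft-subtitle route is fast since nothing is re-encoded, but players must support the subtitle track; burning in is slower and permanent, yet works everywhere.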
June 17, 2021
New publication: NIME and the Environment
This week I presented the paper NIME and the Environment: Toward a More Sustainable NIME Practice at the International Conference on New Interfaces for Musical Expression (NIME) in Shanghai/online with Raul Masu, Adam Pultz Melbye, and John Sullivan. Below is our 3-minute video summary of the paper.
And here is the abstract:
This paper addresses environmental issues around NIME research and practice. We discuss the formulation of an environmental statement for the conference as well as the initiation of a NIME Eco Wiki containing information on environmental concerns related to the creation of new musical instruments.
April 26, 2021
Strings On-Line installation
We presented the installation Strings On-Line at NIME 2020. It was supposed to be a physical installation at the conference to be held in Birmingham, UK.
Due to the corona crisis, the conference went online, and we decided to redesign the proposed physical installation into an online installation instead. The installation ran continuously from 21-25 July last year, and hundreds of people “came by” to interact with it.
I finally got around to editing a short (1-minute) video promo of the installation:
August 26, 2020
How long is a NIME paper?
Several people have argued that we should change from having a page limit (2/4/6 pages) for NIME paper submissions to a word limit instead. It has also been argued that references should not be counted as part of the text. However, what should the word limits be?
It is always good to look at the history, so I decided to check how long previous NIME papers have been. I started by exporting the text from all of the PDF files with the pdftotext command-line utility:
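Once pdftotext has produced plain-text files, counting words is straightforward. Here is a small Python sketch of the counting step (the directory layout and function names are my own, for illustration):

```python
from pathlib import Path

def word_count(text: str) -> int:
    """Whitespace-delimited word count, the same rough measure as `wc -w`."""
    return len(text.split())

def count_papers(txt_dir: str) -> dict[str, int]:
    """Word count per extracted paper, assuming pdftotext has written
    one .txt file per PDF into txt_dir."""
    return {p.name: word_count(p.read_text(errors="ignore"))
            for p in sorted(Path(txt_dir).glob("*.txt"))}

print(word_count("This paper presents a new interface."))  # 6
```

Note that this counts everything in the extracted text, including references; separating those out would require detecting the bibliography section first.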
August 24, 2020
Improving the PDF files in the NIME archive
This blog post summarizes my experimentation with improving the quality of the PDF files in the proceedings of the annual International Conference on New Interfaces for Musical Expression (NIME).
Centralized archive: We have, over the last few years, worked hard on getting the NIME proceedings adequately archived. Previously, the files were scattered across each year’s conference website. The first step was to create a central archive on nime.org. The list there is automagically generated from a collection of publicly available BibTeX files that serve as the master document of the proceedings archive.
August 13, 2020
NIME Publication Ecosystem Workshop
During the NIME conference this year (which was run entirely online due to the coronavirus crisis), I led a workshop called NIME Publication Ecosystem Workshop. In this post, I will explain the background of the workshop, how it was run in an asynchronous+synchronous mode, and reflect on the results.
If you don’t want to read everything below, here is a short introduction video I made to explain the background (shot at my “summer office” up in the Hardangervidda mountain range in Norway):
June 7, 2019
Workshop: Open NIME
This week I led the workshop “Open Research Strategies and Tools in the NIME Community” at NIME 2019 in Porto Alegre, Brazil. We had a very good discussion, which I hope can lead to more developments in the community in the years to come. Below is the material that we wrote for the workshop.
Workshop organisers:
- Alexander Refsum Jensenius, University of Oslo
- Andrew McPherson, Queen Mary University of London
- Anna Xambó, NTNU Norwegian University of Science and Technology
- Dan Overholt, Aalborg University Copenhagen
- Guillaume Pellerin, IRCAM
- Ivica Ico Bukvic, Virginia Tech
- Rebecca Fiebrink, Goldsmiths, University of London
- Rodrigo Schramm, Federal University of Rio Grande do Sul

Workshop description: The development of more openness in research has been in progress for a fairly long time, and has recently received a lot more political attention through the Plan S initiative, the Declaration on Research Assessment (DORA), EU’s Horizon Europe, and so on.
June 5, 2019
NIME publication: NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing
The MCT master’s programme has been running for a year now, and everyone involved has learned a lot. In parallel to the development of the programme, and teaching it, we are also running the research project SALTO. Here the idea is to systematically reflect on our educational practice, which again will feed back into better development of the MCT programme.
One outcome of the SALTO project is a paper that we presented at the NIME conference in Porto Alegre this week:
March 12, 2018
Nordic Sound and Music Computing Network up and running
I am super excited about our new Nordic Sound and Music Computing Network, which has just started up with funding from the Nordic Research Council.
This network brings together a group of internationally leading sound and music computing researchers from institutions in five Nordic countries: Aalborg University, Aalto University, KTH Royal Institute of Technology, University of Iceland, and University of Oslo. The network covers the field of sound and music from the “soft” to the “hard,” including the arts and humanities, and the social and natural sciences, as well as engineering, and involves a high level of technological competency.
March 10, 2017
New Book: A NIME Reader
I am happy to announce that Springer has now released a book that I have been co-editing with Michael J. Lyons: “A NIME Reader: Fifteen Years of New Interfaces for Musical Expression”. From the book cover:
What is a musical instrument? What are the musical instruments of the future? This anthology presents thirty papers selected from the fifteen year long history of the International Conference on New Interfaces for Musical Expression (NIME).
July 15, 2016
New NIME paper: Trends at NIME – Reflections on Editing 'A NIME Reader'
Michael J. Lyons and myself have been working on an edited collection of papers from the NIME conference over the last year, and we presented some reflections on this work at NIME yesterday.
Trends at NIME – Reflections on Editing “A NIME Reader” [PDF]
This paper provides an overview of the process of editing the forthcoming anthology “A NIME Reader—Fifteen years of New Interfaces for Musical Expression.” The selection process is presented, and we reflect on some of the trends we have observed in re-discovering the collection of more than 1200 NIME papers published throughout the 15-year-long history of the conference.
July 15, 2016
New paper: NIMEhub: Toward a Repository for Sharing and Archiving Instrument Designs
At NIME we have a large archive of the conference proceedings, but we do not (yet) have a proper repository for instrument designs. For that reason I took part in a workshop on Monday with the aim to lay the groundwork for a new repository:
NIMEhub: Toward a Repository for Sharing and Archiving Instrument Designs [PDF]
This workshop will explore the potential creation of a community database of digital musical instrument (DMI) designs.
June 30, 2014
New publication: To Gesture or Not (NIME 2014)
This week I am participating at the NIME conference, organised at Goldsmiths, University of London. I am doing some administrative work as chair of the NIME steering committee, and I am also happy to present a paper tomorrow:
Title
To Gesture or Not? An Analysis of Terminology in NIME Proceedings 2001–2013
Links
Paper (PDF)
Presentation (HTML)
Spreadsheet with summary of data (ODS)
OSX shell script used for analysis
Abstract
The term ‘gesture’ has represented a buzzword in the NIME community since the beginning of its conference series.
July 15, 2013
Documentation of the NIME project at Norwegian Academy of Music
From 2007 to 2011 I had a part-time research position at the Norwegian Academy of Music in a project called New Instruments for Musical Exploration, and with the acronym NIME. This project was also the reason why I ended up organising the NIME conference in Oslo in 2011.
The NIME project focused on creating an environment for musical innovation at the Norwegian Academy of Music, through exploring the design of new physical and electronic instruments.
May 31, 2013
NIME 2013
Back from a great NIME 2013 conference in Daejeon + Seoul! For Norwegian readers out there, I have written a blog post about the conference on my head of department blog. I would have loved to write some more about the conference in English, but I think these images from my Flickr account will have to do for now:
On the last day of the conference it was also announced that next year’s conference will be held in London and hosted by the Embodied AudioVisual Interaction Group at Goldsmiths.
May 28, 2013
Kinectofon: Performing with shapes in planes
Yesterday, Ståle presented a paper on mocap filtering at the NIME conference in Daejeon. Today I presented a demo on using Kinect images as input to my sonomotiongram technique.
Title
Kinectofon: Performing with shapes in planes
Links
Paper (PDF)
Poster (PDF)
Software
Videos (coming soon)
Abstract
The paper presents the Kinectofon, an instrument for creating sounds through free-hand interaction in a 3D space. The instrument is based on the RGB and depth image streams retrieved from a Microsoft Kinect sensor device.
May 27, 2013
Filtering motion capture data for real-time applications
We have three papers from our fourMs group at this year’s NIME conference in Daejeon. The first one was presented today by Ståle Skogstad, and is based on his work on trying to minimize the delay when filtering motion capture data.
Title
Filtering motion capture data for real-time applications
Links:
Paper (PDF)
Project page
Max/MSP implementation
Abstract:
In this paper we present some custom designed filters for real-time motion capture applications. Our target application is motion controllers, i.
April 29, 2013
NIME panel at CHI
This week the huge ACM SIGCHI Conference on Human Factors in Computing Systems (also known as CHI) is organised in Paris. This is the largest conference in the field of human-computer interaction, and is also the conference at which the NIME conference series started. I will participate in a panel session called “Music, Technology, and Human-Computer Interaction” on Wednesday. This is a great opportunity to show musical HCI to the broader HCI community, and I am very much looking forward to participating.
January 17, 2013
NIME 2013 deadline approaching
Here is a little plug for the submission deadline for this year’s NIME conference. I usually don’t write so much about deadlines here, but as the current chair of the international steering committee for the conference series, I feel that I should do my share in helping to spread the word. The NIME conference is a great place to meet academics, designers, technologists, and artists, all working on creating weird instruments and music.
May 31, 2012
Some pictures from NIME 2012
NIME 2012 was full of interesting presentations, posters, demos, installations, and concerts (including 4 papers from our group). I would have loved to write up a detailed report on everything I saw and heard, but just don’t have the time. Here is at least a selection of my photos, to give an impression of how it was:
May 24, 2012
Moog on Google
Probably a coincidence, but still a nice concurrence: on the last day of this year’s International Conference on New Interfaces for Musical Expression (NIME) in Ann Arbor, Michigan, Google celebrates Robert Moog’s 78th birthday.
The interesting thing is that Google not only has a picture of a Moog synthesizer, but they also have an interactive model up and running, where it is possible to play on the keyboard and tweak the knobs.
May 23, 2012
Music ball paper at NIME 2012
Yesterday I wrote about the 4 papers I was involved in at this year’s NIME conference in Ann Arbor, Michigan. The one I was the first author on was entitled The music ball project: Concept, design, development, performance, and is mainly a historic write-up of the work I have been doing on developing different types of music balls over the years, including various handheld music balls, the Music Troll, Big Buoy and the ADHD ball.
May 22, 2012
4 papers at NIME 2012
I was involved in no less than 4 papers at this year’s NIME conference in Ann Arbor, Michigan.
K. Nymoen, A. Voldsund, S. A. v. D. Skogstad, A. R. Jensenius, and J. Tørresen.
Comparing motion data from an iPod touch to a high-end optical infrared marker-based motion capture system [PDF]
The paper presents an analysis of the quality of motion data from an iPod Touch (4th gen.). Acceleration and orientation data derived from internal sensors of an iPod is compared to data from a high end optical infrared marker-based motion capture system (Qualisys) in terms of latency, jitter, accuracy and precision.
June 3, 2011
Chair of the NIME Steering Committee
On the last day of this year’s NIME conference in Oslo I was not only elected as a member of the international steering committee (SC) for the NIME conference series, but I was also elected as the new chair of the SC. This is exciting, particularly since I will be the first NIME SC chair ever. Since the start in 2001, the conference has seen rapid growth, and we now see that it is time to formalise the structure of the organisation a bit.
May 31, 2011
NIME 2011
It has been fairly quiet on this blog recently. This is not because I haven’t been doing anything; rather the opposite. We are now at the end of day 2 of the NIME conference, and there is one more day to go. Lots of great presentations, concerts and hundreds of cool NIME people in Oslo these days!
September 9, 2010
Call for participation: NIME 2011
I am chair for the 11th International Conference on New Interfaces for Musical Expression (NIME 2011), which will be organized 30 May - 1 June 2011 here in Oslo, Norway.
The official “call for participation” has just been posted here, and sent to various mailing lists. Please forward this to anyone that you think may be interested in participating.
June 6, 2008
uOSC
micro-OSC (uOSC) was made public yesterday at NIME:
micro-OSC (uOSC) is a firmware runtime system for embedded platforms designed to remain as small as possible while also supporting evolving trends in sensor interfaces such as regulated 3.3 Volt high-resolution sensors, mixed analog and digital multi-rate sensor interfacing, n > 8-bit data formats.
uOSC supports the Open Sound Control protocol directly on the microprocessor, and the completeness of this implementation serves as a functional reference platform for research and development of the OSC protocol.
June 6, 2008
Virtual slide guitar
Jyri Pakarinen just presented a paper on the Virtual Slide Guitar (VSG) here at NIME in Genova.
They used a commercial 6DOF head tracking solution from Naturalpoint called TrackIR 4 Pro. The manufacturer promises:
Experience real time 3D view control in video games and simulations just by moving your head! The only true 6DOF head tracking system of its kind. TrackIR takes your PC gaming to astonishing new levels of realism and immersion!
July 5, 2006
NIME paper on GDIF
Here is the poster I presented at NIME 2006 in Paris based on the paper Towards a Gesture Description Interchange Format.
The paper was written together with Tellef Kvifte, and the abstract reads:
This paper presents our need for a Gesture Description Interchange Format (GDIF) for storing, retrieving and sharing information about music-related gestures. Ideally, it should be possible to store all sorts of data from various commercial and custom made controllers, motion capture and computer vision systems, as well as results from different types of gesture analysis, in a coherent and consistent way.
June 1, 2006
Building low-cost music controllers
New publication on our Cheapstick music controller:
Reference:
A. R. Jensenius, R. Koehly, and M. M. Wanderley. Building low-cost music controllers. In R. Kronland-Martinet, T. Voinier, and S. Ystad, editors, CMMR 2005, LNCS 3902, pages 123–129. Berlin Heidelberg: Springer-Verlag, 2006. (PDF from Springer)
**Abstract:** This paper presents our work on building low-cost music controllers intended for educational and creative use. The main idea was to build an electronic music controller, including sensors and a sensor interface, on a “10 euro” budget.
Tag: publication
December 13, 2022
New Book: Sound Actions - Conceptualizing Musical Instruments
I am happy to announce that my book Sound Actions - Conceptualizing Musical Instruments is now published! I am also thrilled that this is an open access book, meaning that it is free to download and read. You are, of course, also welcome to pick up a paper copy!
Here is a quick video summary of the book’s content:
In the book, I combine perspectives from embodied music cognition and interactive music technology.
August 1, 2013
New publication: Non-Realtime Sonification of Motiongrams
Today I will present the paper Non-Realtime Sonification of Motiongrams at the Sound and Music Computing Conference (SMC) in Stockholm. The paper is based on a new implementation of my sonomotiongram technique, optimised for non-realtime use. I presented a realtime version of the sonomotiongram technique at ACHI 2012 and a Kinect version, the Kinectofon, at NIME earlier this year. The new paper presents the ImageSonifyer application and a collection of videos showing how it works.
January 14, 2013
New publication: Some video abstraction techniques for displaying body movement in analysis and performance
Today the MIT Press journal Leonardo has published my paper entitled “Some video abstraction techniques for displaying body movement in analysis and performance”. The paper is a summary of my work on different types of visualisation techniques of music-related body motion. Most of these techniques were developed during my PhD, but have been refined over the course of my post-doc fellowship.
The paper is available from the Leonardo web page (or MUSE), and will also be posted in the digital archive at UiO after the 6-month embargo period.
January 8, 2013
New publication: Performing the Electric Violin in a Sonic Space
I am happy to announce that a paper I wrote together with Victoria Johnson has just been published in Computer Music Journal. The paper is based on the experiences that Victoria and I gained while working on the piece Transformation for electric violin and live electronics (see video of the piece below).
Citation
A. R. Jensenius and V. Johnson. Performing the electric violin in a sonic space. Computer Music Journal, 36(4):28–39, 2012.
July 12, 2012
Paper #1 at SMC 2012: Evaluation of motiongrams
Today I presented the paper Evaluating how different video features influence the visual quality of resultant motiongrams at the Sound and Music Computing conference in Copenhagen.
Abstract
Motiongrams are visual representations of human motion, generated from regular video recordings. This paper evaluates how different video features may influence the generated motiongram: inversion, colour, filtering, background, lighting, clothing, video size and compression. It is argued that the proposed motiongram implementation is capable of visualising the main motion features even with quite drastic changes in all of the above mentioned variables.
November 1, 2006
Motiongrams
Challenge
Traditional keyframe displays of videos are not particularly useful when studying single-shot studio recordings of music-related movements, since they mainly show static postural information and no motion.
Using motion images of various kinds helps in visualizing what is going on in the image. Below can be seen (from left): motion image, with noise reduction, with edge detection, with “trails” and added to the original image.
Making Motiongrams
We are used to visualizing audio with spectrograms, and have been exploring different techniques for visualizing music-related movements in a similar manner.
June 1, 2006
Building low-cost music controllers
New publication on our Cheapstick music controller:
Reference:
A. R. Jensenius, R. Koehly, and M. M. Wanderley. Building low-cost music controllers. In R. Kronland-Martinet, T. Voinier, and S. Ystad, editors, CMMR 2005, LNCS 3902, pages 123–129. Berlin Heidelberg: Springer-Verlag, 2006. (PDF from Springer)
**Abstract:** This paper presents our work on building low-cost music controllers intended for educational and creative use. The main idea was to build an electronic music controller, including sensors and a sensor interface, on a “10 euro” budget.
Tag: .jpg
December 9, 2022
Optimizing JPEG files
I have previously written about how to resize all the images in a folder. That script was based on lossy compression of the files. However, there are also tools for optimizing image files losslessly. One approach is to use the [jpegoptim](https://github.com/tjko/jpegoptim) tool available on Ubuntu. Here is an excellent explanation of how it works.
Lossless optimization
As part of moving my blog to Hugo, I took the opportunity to optimize all the images in all my image folders.
September 18, 2022
Convert HEIC photos to .jpg
A quick note-to-self about how I managed to download a bunch of photos from an iPhone and convert them to .jpg on my laptop running Ubuntu 22.04.
As opposed to Android phones, iPhones do not show up as a regular disk with easy access to the DCIM folder storing photos. Fortunately, Rapid Photo Downloader managed to launch the iPhone and find all the images. Unfortunately, all the files were stored as HEIC files, using the High Efficiency Image File Format.
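The post does not include the conversion command itself; a minimal sketch, assuming the heif-convert tool from the libheif-examples package is installed:

```shell
# Convert every HEIC photo in the folder to JPEG using heif-convert
# from the libheif-examples package (sudo apt install libheif-examples).
for f in *.HEIC; do
  heif-convert "$f" "${f%.HEIC}.jpg"
done
```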
Tag: jpeg
December 9, 2022
Optimizing JPEG files
I have previously written about how to resize all the images in a folder. That script was based on lossy compression of the files. However, there are also tools for optimizing image files losslessly. One approach is to use the [jpegoptim](https://github.com/tjko/jpegoptim) tool available on Ubuntu. Here is an excellent explanation of how it works.
Lossless optimization
As part of moving my blog to Hugo, I took the opportunity to optimize all the images in all my image folders.
Tag: ubuntu
December 9, 2022
Optimizing JPEG files
I have previously written about how to resize all the images in a folder. That script was based on lossy compression of the files. However, there are also tools for optimizing image files losslessly. One approach is to use the [jpegoptim](https://github.com/tjko/jpegoptim) tool available on Ubuntu. Here is an excellent explanation of how it works.
Lossless optimization
As part of moving my blog to Hugo, I took the opportunity to optimize all the images in all my image folders.
September 18, 2022
Convert HEIC photos to .jpg
A quick note-to-self about how I managed to download a bunch of photos from an iPhone and convert them to .jpg on my laptop running Ubuntu 22.04.
As opposed to Android phones, iPhones do not show up as a regular disk with easy access to the DCIM folder storing photos. Fortunately, Rapid Photo Downloader managed to launch the iPhone and find all the images. Unfortunately, all the files were stored as HEIC files, using the High Efficiency Image File Format.
August 13, 2022
Convert a folder of LibreOffice .ODT files to .DOCX files
I don’t spend much time in traditional “word processors”, but when I do, it is usually in LibreOffice. Then I prefer to save the files in the native .ODT format. But it happens that I need to send a bunch of files to someone that prefers .DOCX files. Instead of manually converting all the files, here is a short one-liner that does the trick using the magical pandoc, the go-to tool for converting text documents.
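The one-liner itself is not shown in the excerpt; a sketch of the kind of pandoc loop that does this (file names are examples):

```shell
# Convert every .odt file in the folder to .docx with pandoc.
for f in *.odt; do
  pandoc "$f" -o "${f%.odt}.docx"
done
```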
August 9, 2022
Add fade-in and fade-out programmatically with FFmpeg
There is always a need to add fade-in and fade-out to audio tracks. Here is a way of doing it for a bunch of video files. It may come in handy with the audio normalization script I have shown previously. That script is based on continuously normalizing the audio, which may result in some noise in the beginning and end (because there is little/no sound in those parts, hence they are normalized more).
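The excerpt does not show the command; a hedged sketch of how audio fades can be added with FFmpeg's afade filter (clip length and fade durations are example values, not taken from the post):

```shell
# Add a 2-second audio fade-in and fade-out to a 30-second clip.
# The fade-out must start at (duration - fade length), here 30 - 2 = 28 s.
# Video is stream-copied; only the audio is re-encoded.
ffmpeg -i input.mp4 \
  -af "afade=t=in:st=0:d=2,afade=t=out:st=28:d=2" \
  -c:v copy output.mp4
```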
June 16, 2022
Export images from a PDF file
I have previously written about how to export each of the pages of a PDF file as an image. That works well for, for example, presentation slides that should go on a web page. But sometimes there is a need to export only the images within a page. This can be achieved with a small command line tool called pdfimages.
One way of using it is:
pdfimages -p -png file.pdf image
This will export all images in file.
June 11, 2022
Adding subtitles to videos
In my ever-growing collection of FFmpeg-related blog posts, I will today show how to add subtitles to videos. These tricks are based on the need to create a captioned version of a video I made to introduce the Workshop on NIME Archiving for the 2022 edition of the International Conference on New Interfaces for Musical Expression (NIME). This is the video I discuss in this blog post:
Note that YouTube supports turning on and off the subtitles (CC button).
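A sketch of two common FFmpeg approaches to attaching subtitles (file names are examples; the post's own commands are not in the excerpt):

```shell
# Soft subtitles: mux the SRT as a toggleable track, no re-encoding.
ffmpeg -i video.mp4 -i subs.srt -c copy -c:s mov_text video_subs.mp4
# Hard subtitles: burn the text into the image, which re-encodes the video.
ffmpeg -i video.mp4 -vf "subtitles=subs.srt" video_burned.mp4
```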
April 13, 2022
Programmatically resizing a folder of images
This is a note to self about how to programmatically resize and crop many images using ImageMagick.
It all started with a folder full of photos with different pixel sizes and ratios. That is because they had been captured with various cameras and had also been manually cropped. This could be verified by running this command to print their pixel sizes:
identify -format "%wx%h\n" *.JPG
Fortunately, all the images had a reasonably large pixel count, so I decided to go for a 5MP pixel count (2560x1920 in 4:3 ratio).
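The resize command itself is not in the excerpt; a sketch of how ImageMagick could resize and center-crop everything to the 2560x1920 target (the exact flags are my assumption, not the post's script):

```shell
# Fill the 2560x1920 box (the ^ flag scales to cover it), then crop
# the overflow around the center. Note that mogrify overwrites files.
mogrify -resize "2560x1920^" -gravity center -extent 2560x1920 *.JPG
```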
August 12, 2021
Soft drop shadows in LibreOffice Draw
My new book will be published Open Access, and I also aim only to use open-source tools as part of the writing process. The most challenging has been to figure out how to make nice-looking illustrations.
Parts of the book are based on the Ph.D. dissertation that I wrote a long time ago. I wrote that on a MacBook and made all the illustrations in OmniGraffle. While it was quite easy to make the switch to Ubuntu in general, OmniGraffle has been one of the few programs I have really missed in the Linux world.
March 18, 2021
Splitting audio files in the terminal
I have recently played with AudioStellar, a great tool for “sound object”-based exploration and musicking. It reminds me of CataRT, a great tool for concatenative synthesis. I used CataRT quite a lot previously, for example, in the piece Transformation. However, after I switched to Ubuntu and PD instead of OSX and Max, CataRT was no longer an option. So I got very excited when I discovered AudioStellar some weeks ago. It is lightweight and cross-platform and has some novel features that I would like to explore more in the coming weeks.
March 1, 2021
Flatten file names in the terminal
I am often dealing with folders with lots of files with weird file names. Spaces, capital letters, and so on, often cause problems. Instead of manually fixing such file names, here is a quick one-liner (found here) that can be run in the terminal (at least on Ubuntu) to solve the problem:
rename 'tr/ A-Z/-a-z/' -- *
It is based on a simple transliteration expression, replacing any spaces with hyphens and changing any capital letters to lower case.
January 2, 2021
Create timelapse video from images with FFmpeg
I take a lot of timelapse shots with a GoPro camera. Usually, I do this with the camera’s photo setting instead of the video setting. That is because I find it easier to delete unwanted pictures from the series that way. It also simplifies selecting individual photos when I want that. But then I need a way to create a timelapse video from the photos easily.
Here is an FFmpeg one-liner that does the job:
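The excerpt cuts off before the one-liner; a sketch of the kind of command that does this (frame rate and encoder settings are assumptions to adjust to taste):

```shell
# Assemble all JPG photos (in alphabetical order) into a 30 fps video.
ffmpeg -framerate 30 -pattern_type glob -i '*.JPG' \
  -c:v libx264 -pix_fmt yuv420p timelapse.mp4
```

At 30 fps, 300 photos give a 10-second clip.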
March 15, 2020
Flattening Ricoh Theta 360-degree videos using FFmpeg
I am continuing my explorations of the great terminal-based video tool FFmpeg. Now I wanted to see if I could “flatten” a 360-degree video recorded with a Ricoh Theta camera. These cameras contain two fisheye lenses, capturing two 180-degree videos next to each other. This results in video files like the one I show a screenshot of below.
These files are not very useful to watch or work with, so we need to somehow “flatten” them into a more meaningful video file.
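A hedged sketch of how such flattening can be done with FFmpeg's v360 filter (requires a build that includes the filter; the 190-degree field-of-view values are assumptions that may need tuning per camera):

```shell
# Remap a dual-fisheye Theta recording to an equirectangular video.
ffmpeg -i theta.mp4 \
  -vf "v360=input=dfisheye:output=equirect:ih_fov=190:iv_fov=190" \
  flattened.mp4
```

Each lens covers slightly more than 180 degrees, so the two views overlap at the seam.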
February 21, 2020
Creating image masks from video file
As part of my exploration in creating multi-exposure keyframe image displays with FFmpeg and ImageMagick, I tried out a number of things that did not help solve the initial problem but still could be interesting for other things. Most interesting was the automagic creation of image masks from a video file.
I will use a contemporary dance video from the AIST Dance Video Database as an example:
The first step is to extract keyframes from the video file using this one-liner ffmpeg command:
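The command itself is truncated from the excerpt; one common way to extract keyframes with FFmpeg looks like this (input name is an example):

```shell
# Export only the keyframes (I-frames) of a video as numbered PNG files.
# -skip_frame nokey makes the decoder drop everything but keyframes,
# and -vsync vfr keeps their original (variable) spacing.
ffmpeg -skip_frame nokey -i dance.mp4 -vsync vfr keyframes_%03d.png
```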
February 21, 2020
Creating multi-exposure keyframe image displays with FFmpeg and ImageMagick
While I was testing visualization of some videos from the AIST database earlier today, I wanted to also create some “keyframe image displays”. This can be seen as a way of doing multi-exposure photography, and should be quite straightforward to do. Still it took me quite some time to figure out exactly how to implement it. It may be that I was searching for the wrong things, but in case anyone else is looking for the same, here is a quick write up.
November 29, 2019
Creating individual image files from presentation slides
How do you create full-screen images from each of the slides of a Google Docs presentation without too much manual work? For the previous blog post on my Munin keynote, I wanted to include some pictures from my 90-slide presentation. There is probably a point and click solution to this problem, but it is even more fun to use some command line tools to help out. These commands have been tested on Ubuntu 19.
September 28, 2019
Installing Ubuntu on a HP Pavilion laptop
So I decided to install Ubuntu on my daughter’s new laptop, more specifically an HP Pavilion. I chose this particular laptop because it looked nice and had good specs for the money. It was only after the purchase that I read all the complaints people have about the weird UEFI implementation on HP laptops. So I started the install process with some worries.
Reading on various forums, people seemed to have been doing all sorts of strange things to be able to install Ubuntu on HP laptops, including modifying the UEFI setup, changing the BIOS, and so on.
September 28, 2019
Which Linux version to choose for a 9-year old?
My 9-year old daughter is getting her first laptop. But which OS should she get started with?
I have been using various versions of Ubuntu as my main OS for around 5 years now, currently using Ubuntu Studio on my main laptop. This distro is based on XFCE, a very lightweight yet versatile desktop environment. The reason for choosing Ubuntu Studio over the regular XUbuntu was to get a bunch of music apps by default.
May 19, 2019
Rotate lots of images on Ubuntu
I often find myself with a bunch of images that are not properly rotated. Many cameras write the rotation information to the EXIF header of the image file, but the file itself is not actually rotated. Some photo editors do this automagically when you import the files, but I prefer to copy files manually to my drive.
I therefore have a little one-liner that can rotate all the files in a folder:
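The one-liner is missing from the excerpt; a minimal sketch of how this can be done with ImageMagick:

```shell
# Rotate each image in place according to its EXIF orientation tag.
# mogrify overwrites the originals, so work on copies if unsure.
mogrify -auto-orient *.JPG
```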
November 25, 2018
Sort images based on direction (portrait/landscape)
I have lots and lots of photos on my computer (and servers!). Sometimes I have a pile of photos of which I want to find only the ones that are in portrait or landscape mode. This can be done manually for a few images, but browsing through thousands of them is more tricky. Then I often tend to use a nifty little shell script that I found here. It effectively sorts all images into two folders automagically.
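A sketch of the kind of script the post refers to (not the original), comparing pixel dimensions with ImageMagick's identify:

```shell
# Sort JPG files into portrait/ and landscape/ folders by dimensions.
mkdir -p portrait landscape
for f in *.jpg; do
  w=$(identify -format "%w" "$f")
  h=$(identify -format "%h" "$f")
  if [ "$h" -gt "$w" ]; then
    mv "$f" portrait/
  else
    mv "$f" landscape/
  fi
done
```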
May 18, 2018
Trim video files using FFmpeg
This is a note to self, and hopefully others, about how to easily and quickly trim videos without recompressing the file.
I often have long video recordings that I want to split or trim. Splitting and trimming are temporal transformations and should not be confused with the spatial transformation cropping. Cropping a video means cutting out parts of the image, and I have another blog post on cropping video files using FFmpeg.
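A minimal sketch of such a trim (timestamps are example values):

```shell
# Copy 90 seconds starting at 1:00 into a new file without re-encoding.
# Stream copy (-c copy) is fast, but the cut snaps to the nearest keyframe.
ffmpeg -ss 00:01:00 -i input.mp4 -t 00:01:30 -c copy trimmed.mp4
```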
January 3, 2017
Move windows between screens on Ubuntu
As part of the fun of reinstalling an OS, you need to set up all the small things again (and you also get rid of all the small things you had set up and that you don’t need any longer…). This message is mainly a note to self about how to move windows between screens on Ubuntu with a key combination, found at stackexchange:
1. Install CompizConfig Settings Manager: sudo apt install compizconfig-settings-manager compiz-plugins-extra
2. Run Compiz from the dash
3. Click Window Management
4. Enable the Put plug-in (select the check-box)
5. Click on Put
6. Configure the shortcut for Put to next Output (click enable)
January 3, 2017
Remove standard bookmarks in Nautilus
Yet another note to self on how to fix things in Ubuntu after a fresh install, found at askubuntu, this time to remove the standard bookmarks in the Nautilus file browser. I use a different setup of folders, and don’t really need these unused bookmarks. I wish it could have been easier to just right-click and delete to remove them (like for your own bookmarks), but it turns out to be a bit more tricky.
December 27, 2016
Split PDF files easily using Ubuntu scripts
One of the fun parts of reinstalling an OS (yes, I think it is fun!), is to discover new software and new ways of doing things. As such, it works as a “digital shower”, getting rid of unnecessary stuff that has piled up.
Trying to also get rid of some physical mess, I am scanning some piles of paper documents. This leaves me with some large multi-page PDFs that I would like to split up easily.
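One straightforward way to split such files (not necessarily the script the post settles on) is the pdfseparate tool from poppler-utils:

```shell
# Split a multi-page scan into one PDF per page with poppler-utils.
# The output pattern must contain %d, which becomes the page number.
pdfseparate scan.pdf page-%d.pdf
```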
December 27, 2016
Starting afresh
After four years as Head of Department (of Musicology at UiO), I am going back to my regular associate professor position in January. It has been both a challenging and rewarding period as HoD, during which I have learned a lot about managing people, managing budgets, understanding huge organizations, developing strategies, talking to all sorts of people at all levels in the system, and much more.
I am happy to hand over a Department in growth to the new HoD (Peter Edwards).
June 29, 2016
Shell script for compressing PDF files on Ubuntu
Back on OSX, one of my favourite small programs was called PDFCompress, which compressed a large PDF file into something more manageable. There are many ways of doing this on Ubuntu as well, but nothing quite as smooth as what I was used to on OSX.
Finally I took the time to figure out how I could make a small shell script based on ghostscript. The whole script looks like this:
#!/bin/sh
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.
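The excerpt truncates the script; a full version along these lines (the preset and output naming are assumptions, not necessarily the post's exact script):

```shell
#!/bin/sh
# Compress the PDF given as the first argument with Ghostscript.
# /ebook is a medium-quality preset; /screen gives smaller files and
# /prepress higher quality.
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
   -dNOPAUSE -dQUIET -dBATCH \
   -sOutputFile="compressed_$1" "$1"
```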
April 8, 2016
Finally moving from Apple's Keynote to LibreOffice Impress
Apple’s Keynote has been my preferred presentation tool for about a decade. For a long time it felt like the ideal tool, easy to use, powerful and flexible. But at some point, probably around the time when the iOS version of Keynote came along, the Mac version of Keynote started losing features and became more limited than it used to be. Since then, I have experienced all sorts of problems, including non-compatibility of new and old presentation file versions, problems with linked video files, crashes, etc.
August 3, 2015
Add date to files in Ubuntu
Even though I have been running Ubuntu as my main OS for more than a year now, I am still trying to figure out a good workflow. One thing I have been missing from my former OSX setup was the ability to quickly and easily prepend the date to a number of files. Having moved my files between many different OSes, hard drives, network drives, etc. over many years, I know that the files’ creation dates will break at some point.
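A sketch of the kind of one-liner that can do this on Ubuntu (my assumption, not the post's final solution):

```shell
# Prepend each file's modification date (YYYY-MM-DD) to its name.
# GNU date -r prints a file's last-modification time.
for f in *; do
  mv "$f" "$(date -r "$f" +%Y-%m-%d)-$f"
done
```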
April 12, 2015
Simple video editing in Ubuntu
I have been using Ubuntu as my main OS for the past year, but have often relied on my old MacBook for doing various things that I haven’t easily figured out how to do in Linux. One of those things is to trim video files non-destructively. This is quite simple to do in QuickTime, although Apple now forces you to save the file with a QuickTime container (.mov) even though there is still only MPEG-4 compression in the file (h.
June 4, 2010
Boot problems Ubuntu 10.04
Just as I started to believe that Ubuntu had matured to become a super-stable and grandma-friendly OS, I got an unexpected black screen on boot of Ubuntu 10.04 on a Dell Latitude D400. After some googling, I found a solution that works:
On boot, hit the ‘e’ key when the grub menu shows up. Then add the following after “quiet splash”: i915.modeset=1
If this works and you get into the system, you can do this procedure to change the grub loader permanently:
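The permanent procedure is not in the excerpt; a hedged sketch of the usual way to make a kernel option stick on Ubuntu:

```shell
# Append the kernel option to the default boot parameters and
# regenerate the grub configuration; back up /etc/default/grub first.
sudo sed -i 's/quiet splash/quiet splash i915.modeset=1/' /etc/default/grub
sudo update-grub
```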
January 12, 2009
Triple boot on MacBook
I am back at work after a long vacation, and one of the first things I started doing this year was to reinstall several of my computers. There is nothing like a fresh start once in a while, with the added benefits of some extra hard disk space (not reinstalling all those programs I never use anyway) and performance benefits (incredible how fast a newly installed computer boots up!).
I have been testing Ubuntu on an Asus eee for a while, and have been impressed by how easy it was to install and use.
Tag: blog
December 1, 2022
Moving from WordPress to Hugo
I have been running my blog with a self-hosted WordPress install for almost two decades, and I finally decided to move to a static website solution. The relatively rapid shift was triggered by a WordPress update that caused trouble with the paths to images throughout my blog. However, since my site was hacked a couple of years ago, I have pondered alternative solutions for running the blog.
I have been tired of all the updates and security issues with a server-based content management system and figured that a static website solution would be more straightforward in the long run.
October 25, 2010
When and where to post?
At any point in time I always have 20 or more blog post drafts stored in MarsEdit. Whenever I get an idea for something I want to blog about, I try to write it down. But then, for whatever reason, I decide not to post it right away. This may be because it was an underdeveloped idea, one that I feel I have to think more about before I actually post it.
August 4, 2010
What to choose: Browser plugin, web interface, desktop application?
Nowadays I have a hard time deciding on what type of application to use. Only a few years back I would use desktop applications for most things, but with the growing number of decent web 2.0 “applications” I notice that I have slowly moved towards doing more and more online.
Let me use this blog as an example. It is based on WordPress, which now offers a good and efficient web interface.
May 18, 2008
Tags and categories
I have been remodelling my web page today, installing the latest version of Wordpress, and testing out a new theme and organisational structure. I have been using categories for a while in my blog, but have not used the tags feature because I didn’t really understand the difference before I read this:
Categories can be tags, sure, but not all categories are tags, and not all tags should be categories. I think of categories as a table of contents and tags as the index page of a book.
Tag: hugo
December 1, 2022
Moving from WordPress to Hugo
I have been running my blog with a self-hosted WordPress install for almost two decades, and I finally decided to move to a static website solution. The relatively rapid shift was triggered by a WordPress update that caused trouble with the paths to images throughout my blog. However, since my site was hacked a couple of years ago, I have pondered alternative solutions for running the blog.
I have been tired of all the updates and security issues with a server-based content management system and figured that a static website solution would be more straightforward in the long run.
Tag: pelican
December 1, 2022
Moving from WordPress to Hugo
I have been running my blog with a self-hosted WordPress install for almost two decades, and I finally decided to move to a static website solution. The relatively rapid shift was triggered by a WordPress update that caused trouble with the paths to images throughout my blog. However, since my site was hacked a couple of years ago, I have pondered alternative solutions for running the blog.
I have been tired of all the updates and security issues with a server-based content management system and figured that a static website solution would be more straightforward in the long run.
Tag: web
December 1, 2022
Moving from WordPress to Hugo
I have been running my blog with a self-hosted WordPress install for almost two decades, and I finally decided to move to a static website solution. The relatively rapid shift was triggered by a WordPress update that caused trouble with the paths to images throughout my blog. However, since my site was hacked a couple of years ago, I have pondered alternative solutions for running the blog.
I have been tired of all the updates and security issues with a server-based content management system and figured that a static website solution would be more straightforward in the long run.
August 19, 2021
Why universities should care about employee web pages
Earlier this year, I wrote about my 23 tips to improve your web presence. Those tips were meant to encourage academics to care about how their employee web pages look at universities. Such pages look different from university to university. Still, in most places, they contain an image and some standard information on the top, followed by more or less structured information further down. For reference, this is an explanation of how my employee page is built up:
March 17, 2021
23 tips to improve your web presence
I was challenged to say a few words about how employees at the University of Oslo can improve their personal web pages. This led to a short talk titled 23 tips to improve your web presence. The presentation was based on experiences with keeping my own personal page up to date, but hopefully, the tips can be useful for others.
Why should you care about your employee page? Some of my reasons include:
January 26, 2021
Some Thoughts on the Archival of Research Activities
Recently, I have been engaged in an internal discussion at the University of Oslo about our institutional web pages. This has led me to realize that a university’s web pages are yet another part of what I like to think of as an Open Research “puzzle”:
Cutting down on web pages
The discussion started when our university’s communication department announced that they wanted to reduce the number of web pages. One way of doing that is by unpublishing a lot of pages.
August 5, 2011
Beautifying directory listings using .htaccess
Sometimes the easiest way of sharing files is to just put them in a open web directory. I came across this very detailed blog post about how to change the looks of an apache directory listing by editing the .htaccess file.
August 18, 2010
UiO adds social media buttons on web pages
A few weeks ago I mentioned that University of Oslo now openly supports RSS- and Twitter-feeds from the official employee web sites. Now I see that social linking has also been embedded in the new profile, as can be seen for example here.
These types of links have been around for some years, but many academic institutions seem to have been very reluctant when it comes to jumping on the web 2.
June 21, 2010
UiO goes social and opens for blogging
University of Oslo is brushing up the web pages this year, and now the turn has come to my department. When I updated my official profile I found (to my big surprise) that it is possible to include RSS and Twitter feeds. Wow, not bad, not bad at all! I am very happy that the university sees the possibilities in promoting blogging and social fora among the staff.
Another good thing is that publications are now automatically extracted from Frida, the Norwegian publication database that we have to use.
Tag: wordpress
December 1, 2022
Moving from WordPress to Hugo
I have been running my blog with a self-hosted WordPress install for almost two decades, and I finally decided to move to a static website solution. The relatively rapid shift was triggered by a WordPress update that caused trouble with the paths to images throughout my blog. However, since my site was hacked a couple of years ago, I have pondered alternative solutions for running the blog.
I have been tired of all the updates and security issues with a server-based content management system and figured that a static website solution would be more straightforward in the long run.
August 9, 2010
Opened for comments (again)
I have opened for comments on the blog again! The comment option was closed a year ago after having received a couple of hundred thousand comments in a couple of days. Now I have updated to the latest version of WordPress, and have activated new spam filters. Hopefully, this can keep the spam out this time. At least it is worth a try.
Happy commenting!
May 18, 2008
Tags and categories
I have been remodelling my web page today, installing the latest version of Wordpress, and testing out a new theme and organisational structure. I have been using categories for a while in my blog, but have not used the tags feature because I didn’t really understand the difference before I read this:
Categories can be tags, sure, but not all categories are tags, and not all tags should be categories. I think of categories as a table of contents and tags as the index page of a book.
Tag: nor-cam
November 21, 2022
Explaining the Norwegian Career Assessment Matrix (NOR-CAM)
The Norwegian Career Assessment Matrix (NOR-CAM) is a toolbox for recognition and rewards in academic careers that was launched by Universities Norway in May 2021. I was part of the working group developing the toolbox and have blogged about this experience previously.
There has been much interest in NOR-CAM and I have held numerous presentations about it since it was launched. Most of these presentations have been held live (and often on Zoom).
June 1, 2021
Launching NOR-CAM – A toolbox for recognition and rewards in academic careers
What is the future of academic career assessment? How can open research practices be included as part of a research evaluation? These were some of the questions we asked ourselves in a working group set up by Universities Norway. Almost two years later, the report is ready. Here I will share some of the ideas behind the suggested Norwegian Career Assessment Matrix (NOR-CAM) and some of the other recommendations coming out of the workgroup.
Tag: research assessment
November 21, 2022
Explaining the Norwegian Career Assessment Matrix (NOR-CAM)
The Norwegian Career Assessment Matrix (NOR-CAM) is a toolbox for recognition and rewards in academic careers that was launched by Universities Norway in May 2021. I was part of the working group developing the toolbox and have blogged about this experience previously.
There has been much interest in NOR-CAM and I have held numerous presentations about it since it was launched. Most of these presentations have been held live (and often on Zoom).
Tag: life
November 19, 2022
Leaving Twitter
Today, I decided to leave Twitter. I have been in doubt for a while; I wanted to see how the platform would develop after Musk’s take-over. Unfortunately, things have been steadily declining, and I am now at a point where I don’t want to support the company any longer.
I leave the platform with mixed feelings. I have used Twitter as my primary social media platform after I decided to say Goodbye to Facebook some years ago.
January 2, 2014
Goodbye to Facebook
I am happy to say that I have already completed my first and only new year’s resolution this year: getting rid of my Facebook account.
It turned out to be much easier than expected, as there is a separate, easily accessible delete account page on Facebook. I just had to type my password and a captcha and that was it. Now my Facebook account is disabled, and will be permanently deleted after 14 days.
Tag: mastodon
November 19, 2022
Leaving Twitter
Today, I decided to leave Twitter. I have been in doubt for a while; I wanted to see how the platform would develop after Musk’s take-over. Unfortunately, things have been steadily declining, and I am now at a point where I don’t want to support the company any longer.
I leave the platform with mixed feelings. I have used Twitter as my primary social media platform after I decided to say Goodbye to Facebook some years ago.
Tag: politics
November 19, 2022
Leaving Twitter
Today, I decided to leave Twitter. I have been in doubt for a while; I wanted to see how the platform would develop after Musk’s take-over. Unfortunately, things have been steadily declining, and I am now at a point where I don’t want to support the company any longer.
I leave the platform with mixed feelings. I have used Twitter as my primary social media platform after I decided to say Goodbye to Facebook some years ago.
December 21, 2021
Why I Don't Review for Elsevier Journals
This blog post is written to have a URL to send to Elsevier editors who ask me to review for their journals. I have declined to review for Elsevier journals for at least a decade, but usually haven’t given an explanation. Now I will start including one alongside my decline.
My decision is based on a fundamental flaw in today’s commercial journal publishing ecosystem. This is effectively summarized by Scott Aaronson in an analogy in his review of The Access Principle by John Willinsky.
January 2, 2014
Goodbye to Facebook
I am happy to say that I have already completed my first and only new year’s resolution this year: getting rid of my Facebook account.
It turned out to be much easier than expected, as there is a separate, easily accessible delete account page on Facebook. I just had to type my password and a captcha and that was it. Now my Facebook account is disabled, and will be permanently deleted after 14 days.
Tag: social media
November 19, 2022
Leaving Twitter
Today, I decided to leave Twitter. I have been in doubt for a while; I wanted to see how the platform would develop after Musk’s take-over. Unfortunately, things have been steadily declining, and I am now at a point where I don’t want to support the company any longer.
I leave the platform with mixed feelings. I have used Twitter as my primary social media platform after I decided to say Goodbye to Facebook some years ago.
January 2, 2014
Goodbye to Facebook
I am happy to say that I have already completed my first and only new year’s resolution this year: getting rid of my Facebook account.
It turned out to be much easier than expected, as there is a separate, easily accessible delete account page on Facebook. I just had to type my password and a captcha and that was it. Now my Facebook account is disabled, and will be permanently deleted after 14 days.
Tag: twitter
November 19, 2022
Leaving Twitter
Today, I decided to leave Twitter. I have been in doubt for a while; I wanted to see how the platform would develop after Musk’s take-over. Unfortunately, things have been steadily declining, and I am now at a point where I don’t want to support the company any longer.
I leave the platform with mixed feelings. I have used Twitter as my primary social media platform after I decided to say Goodbye to Facebook some years ago.
June 11, 2022
Adding subtitles to videos
In my ever-growing collection of FFmpeg-related blog posts, I will today show how to add subtitles to videos. These tricks are based on the need to create a captioned version of a video I made to introduce the Workshop on NIME Archiving for the 2022 edition of the International Conference on New Interfaces for Musical Expression (NIME). This is the video I discuss in this blog post:
Note that YouTube supports turning on and off the subtitles (CC button).
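The excerpt does not include the commands themselves; a minimal sketch of how subtitles could be muxed into an MP4 as a separate, toggleable track with FFmpeg (filenames and the output naming are my assumptions):

```shell
# add_subs: mux an SRT subtitle file into an MP4 as a soft-subtitle track.
# mov_text is the subtitle codec that MP4 containers support.
add_subs() {
  local in="$1" srt="$2"
  local out="${in%.mp4}_subbed.mp4"   # e.g. talk.mp4 -> talk_subbed.mp4
  ffmpeg -i "$in" -i "$srt" -c:v copy -c:a copy -c:s mov_text \
         -metadata:s:s:0 language=eng "$out"
}
```

Players (and YouTube) can then turn the track on and off, unlike subtitles burned into the image.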
November 16, 2010
Why I prefer open to closed (and Twitter to Facebook)
I am not a huge fan of Facebook, and the future of Facebook makes me even more sceptical. Besides all the technological lock-in issues, I have a major problem with how Facebook makes people forget that they are communicating private things in a (semi-)public space.
This is also the reason I have turned off my Facebook wall. I consider Facebook a public communication channel, but I have experienced that many others use it for more private communication.
October 25, 2010
When and where to post?
At any point in time I always have 20 or more blog post drafts stored in MarsEdit. Whenever I get an idea for something I want to blog about, I try to write it down. But then, for whatever reason, I decide not to post it right away. This may be because it was an underdeveloped idea, one that I feel I have to think more about before I actually post it.
August 18, 2010
UiO adds social media buttons on web pages
A few weeks ago I mentioned that the University of Oslo now openly supports RSS and Twitter feeds from the official employee web sites. Now I see that social linking has also been embedded in the new profile, as can be seen for example here.
These types of links have been around for some years, but many academic institutions seem to have been very reluctant when it comes to jumping on web 2.0.
June 21, 2010
UiO goes social and opens for blogging
The University of Oslo is brushing up its web pages this year, and now the turn has come to my department. When I updated my official profile I found (to my great surprise) that it is possible to include RSS and Twitter feeds. Wow, not bad, not bad at all! I am very happy that the university sees the possibilities in promoting blogging and social fora among the staff.
Another good thing is that publications are now automatically extracted from Frida, the Norwegian publication database that we have to use.
Tag: heic
September 18, 2022
Convert HEIC photos to .jpg
A quick note-to-self about how I managed to download a bunch of photos from an iPhone and convert them to .jpg on my laptop running Ubuntu 22.04.
As opposed to Android phones, iPhones do not show up as a regular disk with easy access to the DCIM folder storing photos. Fortunately, Rapid Photo Downloader managed to launch the iPhone and find all the images. Unfortunately, all the files were stored as HEIC files, using the High Efficiency Image File Format.
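A sketch of how such a batch conversion could look, assuming the heif-convert tool from the libheif-examples package (not necessarily the tool used in the post):

```shell
# heic2jpg: convert all HEIC photos in the current folder to JPEG.
heic2jpg() {
  for f in *.HEIC *.heic; do
    [ -e "$f" ] || continue          # skip globs that match nothing
    heif-convert "$f" "${f%.*}.jpg"  # IMG_0001.HEIC -> IMG_0001.jpg
  done
}
```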
Tag: libreoffice
August 13, 2022
Convert a folder of LibreOffice .ODT files to .DOCX files
I don’t spend much time in traditional “word processors”, but when I do, it is usually in LibreOffice. Then I prefer to save the files in the native .ODT format. But it happens that I need to send a bunch of files to someone that prefers .DOCX files. Instead of manually converting all the files, here is a short one-liner that does the trick using the magical pandoc, the go-to tool for converting text documents.
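The one-liner itself is not quoted in this excerpt; a minimal sketch of how the conversion could look with pandoc (the loop form and naming are my assumptions):

```shell
# odt2docx: convert every .odt file in the current folder to .docx with pandoc.
odt2docx() {
  for f in *.odt; do
    [ -e "$f" ] || continue
    pandoc "$f" -o "${f%.odt}.docx"  # report.odt -> report.docx
  done
}
```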
August 12, 2021
Soft drop shadows in LibreOffice Draw
My new book will be published Open Access, and I also aim only to use open-source tools as part of the writing process. The most challenging has been to figure out how to make nice-looking illustrations.
Parts of the book are based on the Ph.D. dissertation that I wrote a long time ago. I wrote that on a MacBook and made all the illustrations in OmniGraffle. While it was quite easy to make the switch to Ubuntu in general, OmniGraffle has been one of the few programs I have really missed in the Linux world.
April 8, 2016
Finally moving from Apple's Keynote to LibreOffice Impress
Apple’s Keynote has been my preferred presentation tool for about a decade. For a long time it felt like the ideal tool, easy to use, powerful and flexible. But at some point, probably around the time when the iOS version of Keynote came along, the Mac version of Keynote started losing features and became more limited than it used to be. Since then, I have experienced all sorts of problems, including non-compatibility of new and old presentation file versions, problems with linked video files, crashes, etc.
Tag: linux
August 13, 2022
Convert a folder of LibreOffice .ODT files to .DOCX files
I don’t spend much time in traditional “word processors”, but when I do, it is usually in LibreOffice. Then I prefer to save the files in the native .ODT format. But it happens that I need to send a bunch of files to someone that prefers .DOCX files. Instead of manually converting all the files, here is a short one-liner that does the trick using the magical pandoc, the go-to tool for converting text documents.
August 9, 2022
Add fade-in and fade-out programmatically with FFmpeg
There is always a need to add fade-in and fade-out to audio tracks. Here is a way of doing it for a bunch of video files. It may come in handy with the audio normalization script I have shown previously. That script is based on continuously normalizing the audio, which may result in some noise in the beginning and end (because there is little/no sound in those parts, hence they are normalized more).
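A sketch of how such fades could be added with FFmpeg’s afade filter; the two-second fade length and the output naming are assumptions:

```shell
# add_fades: add an audio fade-in and fade-out to a video, copying the video stream.
# The fade-out start time is the file duration minus the fade length.
add_fades() {
  local in="$1" fade=2
  local dur st
  dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$in")
  st=$(awk -v d="$dur" -v f="$fade" 'BEGIN { printf "%.2f", d - f }')
  ffmpeg -i "$in" -c:v copy \
         -af "afade=t=in:st=0:d=${fade},afade=t=out:st=${st}:d=${fade}" \
         "${in%.mp4}_faded.mp4"
}
```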
July 13, 2022
Removing audio hum using a highpass filter in FFmpeg
Today, I recorded Sound Action 194 - Rolling Dice as part of my year-long sound action project.
The idea has been to do as little processing as possible to the recordings. That is because I want to capture sounds and actions as naturally as possible. The recorded files will also serve as source material for both scientific and artistic explorations later. For that reason, I only trim the recordings non-destructively using FFmpeg.
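For the hum removal itself, a minimal sketch using FFmpeg’s highpass filter; the 100 Hz cutoff is an assumption and should be tuned to the recording:

```shell
# remove_hum: filter out low-frequency rumble below the cutoff frequency.
remove_hum() {
  local in="$1"
  ffmpeg -i "$in" -af "highpass=f=100" "${in%.*}_nohum.${in##*.}"
}
```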
June 16, 2022
Export images from a PDF file
I have previously written about how to export each of the pages of a PDF file as an image. That works well for, for example, presentation slides that should go on a web page. But sometimes there is a need to export only the images within a page. This can be achieved with a small command line tool called pdfimages.
One way of using it is:
pdfimages -p -png file.pdf image

This will export all images in file.pdf.
May 13, 2022
Em-dash is not a hyphen
I have been doing quite a lot of manuscript editing recently and realize that many people—including academics—don’t understand the differences between the symbols hyphen, en-dash, and em-dash. So here is a quick explanation:
- hyphen (-): is used to join words (“music-related”). You type this character with the Minus key on the keyboard, so it is the easiest one to use.
- en-dash (–): is used to explain relationships between two concepts (“action–couplings”) or in number series (0–100).
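One way to tell the three characters apart is by their Unicode code points; printing them with POSIX printf octal escapes:

```shell
# The three dash characters and their Unicode code points.
printf 'hyphen-minus: -  U+002D\n'
printf 'en-dash:      \342\200\223  U+2013\n'   # UTF-8 bytes E2 80 93
printf 'em-dash:      \342\200\224  U+2014\n'   # UTF-8 bytes E2 80 94
```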
January 2, 2021
Create timelapse video from images with FFmpeg
I take a lot of timelapse shots with a GoPro camera. Usually, I do this with the camera’s photo setting instead of the video setting. That is because I find it easier to delete unwanted pictures from the series that way. It also simplifies selecting individual photos when I want that. But then I need a way to create a timelapse video from the photos easily.
Here is an FFmpeg one-liner that does the job:
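The one-liner is cut from this excerpt; a sketch of how such a command could look, assuming JPG stills and a 30 fps output (both assumptions):

```shell
# make_timelapse: assemble sequential JPGs into an H.264 timelapse video.
make_timelapse() {
  ffmpeg -framerate 30 -pattern_type glob -i '*.JPG' \
         -c:v libx264 -pix_fmt yuv420p timelapse.mp4
}
```

At 30 fps, 900 stills yield a 30-second clip.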
November 3, 2019
Converting MXF files to MP4 with FFmpeg
We have a bunch of Canon XF105 at RITMO, a camera that records MXF files. This is not a particularly useful file format (unless for further processing). Since many of our recordings are just for documentation purposes, we often see the need to convert to MP4. Here I present two solutions for converting MXF files to MP4, both as individual files and a combined file from a folder. These are shell scripts based on the handy FFmpeg.
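A minimal sketch of the per-file conversion; the encoder settings are assumptions, not necessarily the post’s exact flags:

```shell
# mxf2mp4: convert every MXF file in the current folder to MP4.
mxf2mp4() {
  for f in *.MXF *.mxf; do
    [ -e "$f" ] || continue
    ffmpeg -i "$f" -c:v libx264 -c:a aac "${f%.*}.mp4"  # clip.MXF -> clip.mp4
  done
}
```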
September 28, 2019
Installing Ubuntu on a HP Pavilion laptop
So I decided to install Ubuntu on my daughter’s new laptop, more specifically an HP Pavilion. I chose this particular laptop because it looked nice and had good specs for the money. It was only after the purchase that I read all the complaints people have about the weird UEFI implementation on HP laptops. So I started the install process with some worries.
Reading on various forums, people seemed to have been doing all sorts of strange things to be able to install Ubuntu on HP laptops, including modifying the UEFI setup, changing the BIOS, and so on.
September 28, 2019
Which Linux version to choose for a 9-year old?
My 9-year old daughter is getting her first laptop. But which OS should she get started with?
I have been using various versions of Ubuntu as my main OS for around 5 years now, currently using Ubuntu Studio on my main laptop. This distro is based on XFCE, a very lightweight yet versatile desktop environment. The reason for choosing Ubuntu Studio over the regular XUbuntu was to get a bunch of music apps by default.
May 19, 2019
Rotate lots of images on Ubuntu
I often find myself with a bunch of images that are not properly rotated. Many cameras write the rotation information to the EXIF header of the image file, but the file itself is not actually rotated. Some photo editors do this automagically when you import the files, but I prefer to copy files manually to my drive.
I therefore have a little one-liner that can rotate all the files in a folder:
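The one-liner is not quoted in the excerpt; one common approach uses ImageMagick’s mogrify to apply the EXIF orientation in place (an assumption, not necessarily the post’s exact command):

```shell
# autorotate: rotate all JPGs in a folder according to their EXIF orientation tag.
# Takes an optional folder argument; defaults to the current directory.
autorotate() {
  ( cd "${1:-.}" && mogrify -auto-orient *.jpg )
}
```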
June 29, 2016
Shell script for compressing PDF files on Ubuntu
Back on OS X, one of my favourite small programs was called PDFCompress, which compressed a large PDF file into something more manageable. There are many ways of doing this on Ubuntu as well, but nothing quite as smooth as what I was used to on OS X.
Finally I took the time to figure out how I could make a small shell script based on ghostscript. The whole script looks like this:
#!/bin/sh
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4
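The excerpt cuts the script short; a fuller sketch of a Ghostscript-based compressor, where the /ebook quality preset and the output naming are my assumptions:

```shell
# compresspdf: shrink a PDF with Ghostscript.
# /screen gives smaller files, /printer higher quality than /ebook.
compresspdf() {
  local in="$1"
  local out="${in%.pdf}_compressed.pdf"   # thesis.pdf -> thesis_compressed.pdf
  gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
     -dNOPAUSE -dQUIET -dBATCH -sOutputFile="$out" "$in"
}
```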
April 8, 2016
Finally moving from Apple's Keynote to LibreOffice Impress
Apple’s Keynote has been my preferred presentation tool for about a decade. For a long time it felt like the ideal tool, easy to use, powerful and flexible. But at some point, probably around the time when the iOS version of Keynote came along, the Mac version of Keynote started losing features and became more limited than it used to be. Since then, I have experienced all sorts of problems, including non-compatibility of new and old presentation file versions, problems with linked video files, crashes, etc.
February 16, 2009
Asus eee tricks
When I got my Asus eee a few months ago I tested the built-in OS for about an hour and then decided to install Ubuntu eee (later renamed to Easypeasy) instead. I felt the Xandros OS was too limiting and wanted to test out something more powerful. One of the reasons for buying the eee in the first place was to test whether it would be useful for laptop performance, and then I needed an OS where it was possible to install Chuck, PD and SC3 without any problems.
May 15, 2008
Gumstix and PDa
Another post from the Mobile Music Workshop in Vienna. Yesterday I saw a demo on the Audioscape project by Mike Wozniewski (McGill). He was using the Gumstix, a really small system running a Linux version called OpenEmbedded. He was running PDa (a Pure Data clone) and was able to process sensor data and run audio off of the small device.
Tag: pandoc
August 13, 2022
Convert a folder of LibreOffice .ODT files to .DOCX files
I don’t spend much time in traditional “word processors”, but when I do, it is usually in LibreOffice. Then I prefer to save the files in the native .ODT format. But it happens that I need to send a bunch of files to someone that prefers .DOCX files. Instead of manually converting all the files, here is a short one-liner that does the trick using the magical pandoc, the go-to tool for converting text documents.
Tag: word
August 13, 2022
Convert a folder of LibreOffice .ODT files to .DOCX files
I don’t spend much time in traditional “word processors”, but when I do, it is usually in LibreOffice. Then I prefer to save the files in the native .ODT format. But it happens that I need to send a bunch of files to someone that prefers .DOCX files. Instead of manually converting all the files, here is a short one-liner that does the trick using the magical pandoc, the go-to tool for converting text documents.
Tag: terminal
August 9, 2022
Add fade-in and fade-out programmatically with FFmpeg
There is always a need to add fade-in and fade-out to audio tracks. Here is a way of doing it for a bunch of video files. It may come in handy with the audio normalization script I have shown previously. That script is based on continuously normalizing the audio, which may result in some noise in the beginning and end (because there is little/no sound in those parts, hence they are normalized more).
July 13, 2022
Removing audio hum using a highpass filter in FFmpeg
Today, I recorded Sound Action 194 - Rolling Dice as part of my year-long sound action project.
The idea has been to do as little processing as possible to the recordings. That is because I want to capture sounds and actions as naturally as possible. The recorded files will also serve as source material for both scientific and artistic explorations later. For that reason, I only trim the recordings non-destructively using FFmpeg.
June 16, 2022
Export images from a PDF file
I have previously written about how to export each of the pages of a PDF file as an image. That works well for, for example, presentation slides that should go on a web page. But sometimes there is a need to export only the images within a page. This can be achieved with a small command line tool called pdfimages.
One way of using it is:
pdfimages -p -png file.pdf image

This will export all images in file.pdf.
March 31, 2022
Merge multiple MP4 files
I have been doing several long recordings with GoPro cameras recently. The cameras automatically split the recordings into 4GB files, which leaves me with a myriad of files to work with. I have therefore made a script to help with the pre-processing of the files.
This is somewhat similar to the script I made to convert MXF files to MP4, but with better handling of the temp file for storing information about the files to merge:
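A sketch of how such a merge could work with FFmpeg’s concat demuxer; mktemp handles the temp list file, and the output name is an assumption:

```shell
# merge_mp4: losslessly concatenate the MP4 segments given as arguments.
merge_mp4() {
  local list
  list=$(mktemp)
  for f in "$@"; do
    printf "file '%s'\n" "$f" >> "$list"   # one entry per segment
  done
  ffmpeg -f concat -safe 0 -i "$list" -c copy merged.mp4
  rm -f "$list"
}
```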
November 17, 2021
Preparing video for Matlab analysis
Typical video files, such as MP4 files with H.264 compression, are usually small in size and with high visual quality. Such files are suitable for visual inspection but do not work well for video analysis. In most cases, computer vision software prefers to work with raw data or other compression formats.
The Musical Gestures Toolbox for Matlab works best with these file types:
Video: use MJPEG (Motion JPEG) as the compression format.
June 17, 2021
Normalize audio in video files
We are organizing the Rhythm Production and Perception Workshop at RITMO next week. As mentioned in another blog post, we have asked presenters to send us pre-recorded videos. They are all available on the workshop page.
During the workshop, we will play sets of videos in sequence. When doing a test run today, we discovered that the sound levels differed wildly between files. There is clearly the need for normalizing the sound levels to create a good listener experience.
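A sketch of per-file loudness normalization with FFmpeg’s loudnorm filter; the -16 LUFS target is an assumed value, common for streaming:

```shell
# normalize_audio: normalize a video's audio loudness, copying the video stream.
normalize_audio() {
  local in="$1"
  ffmpeg -i "$in" -c:v copy -af loudnorm=I=-16:LRA=11:TP=-1.5 \
         "${in%.mp4}_norm.mp4"
}
```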
June 15, 2021
Making 100 video poster images programmatically
We are organizing the Rhythm Production and Perception Workshop 2021 at RITMO a week from now. Like many other conferences these days, this one will also be run online. Presentations have been pre-recorded (10 minutes each) and we also have short poster blitz videos (1 minute each).
Pre-recorded videos People have sent us their videos in advance, but they all have different first “slides”. So, to create some consistency among the videos, we decided to make an introduction slide for each of them.
February 21, 2020
Creating image masks from video file
As part of my exploration in creating multi-exposure keyframe image displays with FFmpeg and ImageMagick, I tried out a number of things that did not help solve the initial problem but still could be interesting for other things. Most interesting was the automagic creation of image masks from a video file.
I will use a contemporary dance video from the AIST Dance Video Database as an example:
The first step is to extract keyframes from the video file using this one-liner ffmpeg command:
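The command itself is cut from the excerpt; a sketch of keyframe extraction with FFmpeg (the output naming is an assumption):

```shell
# extract_keyframes: save only the I-frames of a video as numbered PNGs.
extract_keyframes() {
  ffmpeg -skip_frame nokey -i "$1" -vsync vfr keyframe_%03d.png
}
```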
February 21, 2020
Creating multi-exposure keyframe image displays with FFmpeg and ImageMagick
While I was testing visualization of some videos from the AIST database earlier today, I wanted to also create some “keyframe image displays”. This can be seen as a way of doing multi-exposure photography, and should be quite straightforward to do. Still it took me quite some time to figure out exactly how to implement it. It may be that I was searching for the wrong things, but in case anyone else is looking for the same, here is a quick write up.
November 3, 2019
Converting MXF files to MP4 with FFmpeg
We have a bunch of Canon XF105 at RITMO, a camera that records MXF files. This is not a particularly useful file format (unless for further processing). Since many of our recordings are just for documentation purposes, we often see the need to convert to MP4. Here I present two solutions for converting MXF files to MP4, both as individual files and a combined file from a folder. These are shell scripts based on the handy FFmpeg.
May 18, 2018
Trim video files using FFmpeg
This is a note to self, and hopefully others, about how to easily and quickly trim videos without recompressing the file.
I often have long video recordings that I want to split or trim. Splitting and trimming are temporal transformations and should not be confused with the spatial transformation cropping. Cropping a video means cutting out parts of the image, and I have another blog post on cropping video files using FFmpeg.
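A minimal sketch of such a lossless trim with stream copy; the time arguments and output naming are assumptions:

```shell
# trim_video: cut from start to end time without re-encoding.
# Usage: trim_video input.mp4 00:01:00 00:02:30
trim_video() {
  local in="$1" start="$2" end="$3"
  ffmpeg -ss "$start" -to "$end" -i "$in" -c copy "${in%.mp4}_trim.mp4"
}
```

Note that with -c copy the cut points snap to the nearest keyframes.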
Tag: Sensors
August 7, 2022
Analyzing Recordings of a Mobile Phone Lying Still
What is the background “noise” in the sensors of a mobile phone? In the fourMs Lab, we have a tradition of testing the noise levels of various devices. Over the last few years, we have been using mobile phones in multiple experiments, including the MusicLab app that has been used in public research concerts, such as MusicLab Copenhagen.
I have yet to conduct a systematic study of many mobile phones lying still, but today I tried recording my phone—a Samsung Galaxy S21 Ultra—lying still on the table for ten minutes.
October 16, 2017
Working with an Arduino Mega 2560 in Max
I am involved in a student project which uses some Arduino Mega 2560 sensor interfaces in an interactive device. It has been a while since I worked with Arduinos myself, as I am mainly working with Belas these days. Also, I have never worked with the Mega before, so I had to look around a little to figure out how to set it up with Cycling ‘74’s Max.
I have previously used Maxuino for interfacing Arduinos with Max.
June 6, 2008
uOSC
micro-OSC (uOSC) was made public yesterday at NIME:
micro-OSC (uOSC) is a firmware runtime system for embedded platforms designed to remain as small as possible while also supporting evolving trends in sensor interfaces such as regulated 3.3 Volt high-resolution sensors, mixed analog and digital multi-rate sensor interfacing, n > 8-bit data formats.
uOSC supports the Open Sound Control protocol directly on the microprocessor, and the completeness of this implementation serves as a functional reference platform for research and development of the OSC protocol.
May 15, 2008
Gumstix and PDa
Another post from the Mobile Music Workshop in Vienna. Yesterday I saw a demo on the Audioscape project by Mike Wozniewski (McGill). He was using the Gumstix, a really small system running a Linux version called OpenEmbedded. He was running PDa (a Pure Data clone) and was able to process sensor data and run audio off of the small device.
May 8, 2008
Motion Capture System Using Accelerometers
Came across a student project from Cornell on doing motion capture using accelerometers, based on the Atmel controller. It is a nice overview of many of the challenges faced when working with accelerometers, and the implementation seems to work well.
April 24, 2008
Sensing Music-related Actions
The web page for our new research project called Sensing Music-related Actions is now up and running. This is a joint research project of the departments of Musicology and Informatics, and has received external funding through the VERDIKT program of the The Research Council of Norway. The project runs from July 2008 until July 2011.
The focus of the project will be on basic issues of sensing and analysing music-related actions, and creating various prototypes for testing the control possibilities of such actions in enactive devices.
November 6, 2007
Bug Labs: Lego-like computer modules
Bug Labs has announced a new open source, Lego-like computer modules running Linux. The idea is to create hardware that can easily be assembled in various ways. Looks neat!
September 22, 2007
Doepfer USB64
The new Doepfer USB64 looks very interesting with its 64 analog (or digital) inputs and €125 price tag. I am not so excited about the MIDI plug, and wonder whether they intend to communicate some higher-resolution data through the USB plug.
September 19, 2007
Giant Music Ball
I have been preparing for Forskningstorget, an annual science fair in the city centre of Oslo, the last couple of days. Last year we made a Music Troll, and this year we are making a giant music ball for people to play with.
The ball is built from a huge boat buoy, 120 cm in diameter, made for tank boats and stormy weather. This makes it just perfect for a music installation which is supposed to survive some thousand children over the next couple of days…
January 14, 2007
iPhone sensing
As I have mentioned elsewhere, I am thrilled by the fact that various sensing technologies are getting so cheap that they are incorporated everywhere. As could be seen from the presentation of Apple’s new iPhone, it includes an accelerometer to sense tilt of the device (and also movement, if they decide to use that for anything), a proximity sensor (ultrasound?) to turn off the display when the phone is put to the ear, and a light sensor to change the brightness of the screen.
December 4, 2006
WiiMote used as a mouse on Windows
This video shows the WiiMote being used as a mouse on Windows.
October 25, 2006
USB drivers for Phidgets
Phidgets just released a new library and drivers for Intel Macs. This was the last thing I had really been missing since I got my new MacBook this summer.
October 11, 2006
Lego instruments
A group of German students are working on a project called Stekgreif that includes a number of popular sensors built as Lego blocks. Adding power through the Lego bricks makes it possible to build instruments and other fun things entirely out of Lego.
October 9, 2006
Gypsy MIDI controller
Nick Rothwell reviews the Gypsy MIDI controller in Sound on Sound. An excerpt from his conclusion:
I know some artists who could build great live performances around a Gypsy MIDI suit, and others who would merely look like plonkers. As to the first question, here at Cassiel Central we’ve been through all manner of MIDI controllers and sensing systems, from fader boxes (motorised and not) through accelerometers, ultrasound systems, camera tracking, joysticks, game controllers and Buchla devices, and some common issues emerge.
September 29, 2006
Norwegian Science Fair
Last weekend we participated (again) with a stand at a big science fair down in the city centre of Oslo during the Norwegian Research Days.
The most interesting thing, and also what I have spent the most time on lately, was a “music troll” I have been making together with Einar Sneve Martinussen and Arve Voldsund. The troll is basically a box with four speakers on the sides and four arms sticking out, each ending in a head with built-in sensors.
September 19, 2006
Nokia 5500
Nokia 5500 is a new sport phone with a built-in pedometer and the ability to use gestures (well, only tapping so far) for controlling music playback. As accelerometers get cheaper, I expect to see lots of new gesture-controlled devices.
July 17, 2006
New book: New Digital Musical Instruments: Control and Interaction Beyond the Keyboard
Eduardo Miranda and Marcelo M. Wanderley have just released a new book called New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. The chapters are:
- Musical Gestures: Acquisition and Mapping
- Gestural Controllers
- Sensors and Sensor-to-Computer Interfaces
- Biosignal Interfaces
- Toward Intelligent Musical Instruments

So far most publications in this field have been in conference proceedings, so it is great to have a book that can be used in teaching.
July 15, 2006
Electromyography
For some experiments we are conducting on piano playing I have been looking for a way of measuring muscle activity, or electromyography as it is more properly called:
Electromyography (EMG) is a medical technique for evaluating and recording physiologic properties of muscles at rest and while contracting. EMG is performed using an instrument called an electromyograph to produce a record called an electromyogram. An electromyograph detects the electrical potential generated by muscle cells when these cells contract, and also when the cells are at rest.
May 23, 2006
Nike+iPod
Apple and Nike have teamed up and released the Nike+iPod package, which allows for using an iPod Nano as a pedometer and sharing the training information online. It is based on a wireless accelerometer (1.37 x 0.95 x 0.30 inches, 0.23 ounce, using a proprietary protocol at 2.4GHz) and a receiver that connects to the iPod (size: 1.03 x 0.62 x 0.22 inches, 0.12 ounce). The suggested price is US$29, which is very cheap considering the included accelerometer.
May 13, 2006
Marnix de Nijs, media artist
The installation Spatial Sounds (100dB at 100km/h) by Marnix de Nijs and Edwin van der Heide was set up at Usine-C during the Elektra festival.
A speaker is mounted on a metallic arm, rotating at different speeds depending on the people in the room. Ultrasonic sensors detect the distance to people in the space and change the sound being played as well as the speed of rotation (more technical info here).
March 29, 2006
Daniel Rozin Wooden Mirrors
Daniel Rozin has made some Wooden Mirrors from various materials. Any person standing in front of one of these pieces is instantly reflected on its surface. The mechanical mirrors all have video cameras, motors and computers on board and produce a soothing sound as the viewer interacts with them.
March 28, 2006
The Silent Speaker
Forbes.com writes about Charles Jorgensen who is working on what he calls subvocal speech recognition. He attaches a set of electrodes to the skin of his throat and his words are recognized by a computer even when he is not producing any sound.
February 24, 2006
Membrane Switches and Linear Position Sensors
Mark just pointed me to the web page of Spectra Symbol, a company making membrane switches and linear position sensors. I particularly like the circular position sensor!
February 2, 2006
HCI at Stanford University: d.tools
d.tools is a hardware and software system that enables designers to rapidly prototype the bits (the form) and the atoms (the interaction model) of physical user interfaces in concert. d.tools was built to support design thinking rather than implementation tinkering. With d.tools, designers place physical controllers (e.g., buttons, sliders), sensors (e.g., accelerometers), and output devices (e.g., LEDs, LCD screens) directly onto form prototypes, and author their behavior visually in our software workbench.
December 2, 2005
In-shoe dynamic pressure measuring
“The pedar system is an accurate and reliable pressure distribution measuring system for monitoring local loads between the foot and the shoe.”
www.novel.de
December 13, 2001
Laser dance
Working with choreographer Mia Habib, I created the piece Laser Dance, which was shown on 30 November–1 December 2001 at the Norwegian Academy of Ballet and Dance in Oslo.
The theme of the piece was “Light”, and the choreographer wanted to use direct light sources as the point of departure for the interaction. Mia had decided to work with laser beams, one along the backside of the stage and one on the diagonal, facing towards the audience.
Tag: motion analysis
July 17, 2022
Video visualizations of mountain walking
After exploring some visualizations of kayaking, I was eager to see how a similar approach could work for walking. On a trip to the Norwegian mountains, specifically at Haugastøl, halfway between Oslo and Bergen, I strapped a GoPro Hero 10 Black on my chest and walked up and down a nearby hill called Storevarden. The walk took approximately 25 minutes up and down, and a fast-forward version of the video can be seen here:
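A fast-forward version like the one mentioned can be made with FFmpeg by rescaling the presentation timestamps. A minimal sketch, assuming a placeholder filename `walk.mp4` and a guessed 8x speed-up (not necessarily what was used for the actual video):

```shell
speedup=8
filter="setpts=PTS/${speedup}"   # play frames 8x faster; audio is dropped with -an
# Only run when ffmpeg and the placeholder input file are actually present.
if command -v ffmpeg >/dev/null 2>&1 && [ -f walk.mp4 ]; then
  ffmpeg -i walk.mp4 -vf "$filter" -an walk_fast.mp4
fi
```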
Tag: filters
July 13, 2022
Removing audio hum using a highpass filter in FFmpeg
Today, I recorded Sound Action 194 - Rolling Dice as part of my year-long sound action project.
The idea has been to do as little processing as possible to the recordings. That is because I want to capture sounds and actions as naturally as possible. The recorded files will also serve as source material for both scientific and artistic explorations later. For that reason, I only trim the recordings non-destructively using FFmpeg.
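The non-destructive trimming and the hum removal from the post title can both be done from the command line. A minimal sketch with placeholder filenames, arbitrary trim points, and a guessed 100 Hz cutoff:

```shell
# Trim non-destructively: keep 10 s starting 2 s in, copying the stream as-is.
start=2; duration=10
hpf="highpass=f=100"   # cutoff frequency is a guess, not the post's setting
if command -v ffmpeg >/dev/null 2>&1 && [ -f raw.wav ]; then
  ffmpeg -ss "$start" -t "$duration" -i raw.wav -c copy trimmed.wav
  # Removing a low-frequency hum requires re-encoding through a filter:
  ffmpeg -i trimmed.wav -af "$hpf" clean.wav
fi
```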
Tag: kayaking
July 13, 2022
Kayak motion analysis with video-based horizon leveling
Last year, I wrote about video-based motion analysis of kayaking. Those videos were recorded with a GoPro Hero 8 and I tested some of the video visualization methods of the Musical Gestures Toolbox for Python. This summer I am testing out some 360 cameras for my upcoming AMBIENT project. I thought I should take one of these, a GoPro Max, out for some kayaking in the Oslo fjord. Here are some impressions of the trip (and recording).
Tag: captions
June 11, 2022
Adding subtitles to videos
In my ever-growing collection of FFmpeg-related blog posts, I will today show how to add subtitles to videos. These tricks are based on the need to create a captioned version of a video I made to introduce the Workshop on NIME Archiving for the 2022 edition of the International Conference on New Interfaces for Musical Expression (NIME). This is the video I discuss in this blog post:
Note that YouTube supports turning on and off the subtitles (CC button).
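With FFmpeg there are two common ways of adding subtitles, sketched below with placeholder filenames: burning them into the pixels (always visible), or muxing them in as a soft track that players such as YouTube can toggle with the CC button:

```shell
sub="captions.srt"   # placeholder subtitle file
if command -v ffmpeg >/dev/null 2>&1 && [ -f talk.mp4 ] && [ -f "$sub" ]; then
  # Hard subtitles: rendered into the video frames.
  ffmpeg -i talk.mp4 -vf "subtitles=${sub}" hard.mp4
  # Soft subtitles: kept as a separate, toggleable track (mov_text for MP4).
  ffmpeg -i talk.mp4 -i "$sub" -c copy -c:s mov_text soft.mp4
fi
```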
September 3, 2020
Embed YouTube video with subtitles in different languages
This is primarily a note-to-self post, but it could hopefully also be useful for others. At least, I spent a little too long figuring out how to embed a YouTube video with a specific subtitle language.
The starting point is that I had this project video that I wanted to embed on a project website:
However, then I found that you can add info about the specific language you want to use by adding this snippet after the URL:
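For reference, YouTube's embed URLs accept caption parameters: `cc_load_policy=1` forces captions on, and `cc_lang_pref` selects the language. A sketch with a placeholder video ID (the actual snippet from the post is not shown in this excerpt):

```shell
video_id="VIDEO_ID"   # placeholder, not the actual project video
url="https://www.youtube.com/embed/${video_id}?cc_load_policy=1&cc_lang_pref=no"
echo "$url"
```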
Tag: subtitles
June 11, 2022
Adding subtitles to videos
In my ever-growing collection of FFmpeg-related blog posts, I will today show how to add subtitles to videos. These tricks are based on the need to create a captioned version of a video I made to introduce the Workshop on NIME Archiving for the 2022 edition of the International Conference on New Interfaces for Musical Expression (NIME). This is the video I discuss in this blog post:
Note that YouTube supports turning on and off the subtitles (CC button).
September 3, 2020
Embed YouTube video with subtitles in different languages
This is primarily a note-to-self post, but it could hopefully also be useful for others. At least, I spent a little too long figuring out how to embed a YouTube video with a specific subtitle language.
The starting point is that I had this project video that I wanted to embed on a project website:
However, then I found that you can add info about the specific language you want to use by adding this snippet after the URL:
Tag: youtube
June 11, 2022
Adding subtitles to videos
In my ever-growing collection of FFmpeg-related blog posts, I will today show how to add subtitles to videos. These tricks are based on the need to create a captioned version of a video I made to introduce the Workshop on NIME Archiving for the 2022 edition of the International Conference on New Interfaces for Musical Expression (NIME). This is the video I discuss in this blog post:
Note that YouTube supports turning on and off the subtitles (CC button).
May 7, 2022
Running a disputation on YouTube
Last week, Ulf Holbrook defended his dissertation at RITMO. I was in charge of streaming the disputation, and here are some reflections on the technical setup and streaming.
Zoom Webinars vs YouTube Streaming
I have previously written about running a hybrid disputation using a Zoom webinar. We have used variations of that setup also for other events. For example, last year, we ran RPPW as a hybrid conference. There are some benefits of using Zoom, particularly when having many presenters.
September 3, 2020
Embed YouTube video with subtitles in different languages
This is primarily a note-to-self post, but it could hopefully also be useful for others. At least, I spent a little too long figuring out how to embed a YouTube video with a specific subtitle language.
The starting point is that I had this project video that I wanted to embed on a project website:
However, then I found that you can add info about the specific language you want to use by adding this snippet after the URL:
August 16, 2012
fourMs videos
Over the years we have uploaded various videos of our fourMs lab activities to YouTube. Some of these videos have been uploaded using a shared YouTube user, others by myself or colleagues. I just realised that a good solution for gathering all the different videos is to create a playlist and add all relevant videos there. Then it should also be possible to embed this playlist in web pages, like below:
Tag: html
May 13, 2022
Em-dash is not a hyphen
I have been doing quite a lot of manuscript editing recently and realize that many people—including academics—don’t understand the differences between the symbols hyphen, en-dash, and em-dash. So here is a quick explanation:
- hyphen (-): used to join words (“music-related”). You type this character with the Minus key on the keyboard, so it is the easiest one to use.
- en-dash (–): used to explain relationships between two concepts (“action–couplings”) or in number series (0–100).
Tag: language
May 13, 2022
Em-dash is not a hyphen
I have been doing quite a lot of manuscript editing recently and realize that many people—including academics—don’t understand the differences between the symbols hyphen, en-dash, and em-dash. So here is a quick explanation:
- hyphen (-): used to join words (“music-related”). You type this character with the Minus key on the keyboard, so it is the easiest one to use.
- en-dash (–): used to explain relationships between two concepts (“action–couplings”) or in number series (0–100).
Tag: mac
May 13, 2022
Em-dash is not a hyphen
I have been doing quite a lot of manuscript editing recently and realize that many people—including academics—don’t understand the differences between the symbols hyphen, en-dash, and em-dash. So here is a quick explanation:
- hyphen (-): used to join words (“music-related”). You type this character with the Minus key on the keyboard, so it is the easiest one to use.
- en-dash (–): used to explain relationships between two concepts (“action–couplings”) or in number series (0–100).
Tag: markdown
May 13, 2022
Em-dash is not a hyphen
I have been doing quite a lot of manuscript editing recently and realize that many people—including academics—don’t understand the differences between the symbols hyphen, en-dash, and em-dash. So here is a quick explanation:
- hyphen (-): used to join words (“music-related”). You type this character with the Minus key on the keyboard, so it is the easiest one to use.
- en-dash (–): used to explain relationships between two concepts (“action–couplings”) or in number series (0–100).
October 7, 2019
What tools do I use for writing?
Earlier today I was asked about what tools I use when writing. This is not something I have written about here on the blog before, although I do have very strong opinions on my own tools. I actually really enjoy reading about how other people work, so writing about it here may perhaps also be interesting to others.
Text editor: Atom
Most of my writing, whether it is e-mail drafts, meeting notes, or academic papers, is done in the form of plain text files.
July 18, 2011
Taking notes
I used to use Journler for taking notes on my computer, and when Journler died I moved on to MacJournal. However, nowadays I constantly find myself using different computers (Mac, Windows, Linux) and various mobile devices (iOS and Android) every day, and have found it to be problematic to be locked into an OSX/iOS application for note taking/access. There are some cross-platform note-taking applications out there, most notably Evernote, which I have tried to become friends with several times, without success.
Tag: windows
May 13, 2022
Em-dash is not a hyphen
I have been doing quite a lot of manuscript editing recently and realize that many people—including academics—don’t understand the differences between the symbols hyphen, en-dash, and em-dash. So here is a quick explanation:
- hyphen (-): used to join words (“music-related”). You type this character with the Minus key on the keyboard, so it is the easiest one to use.
- en-dash (–): used to explain relationships between two concepts (“action–couplings”) or in number series (0–100).
January 12, 2009
Triple boot on MacBook
I am back at work after a long vacation, and one of the first things I started doing this year was to reinstall several of my computers. There is nothing like a fresh start once in a while, with the added benefits of some extra hard disk space (not reinstalling all those programs I never use anyway) and performance benefits (incredible how fast a newly installed computer boots up!).
I have been testing Ubuntu on an Asus eee for a while, and have been impressed by how easy it was to install and use.
Tag: hybrid
May 7, 2022
Running a disputation on YouTube
Last week, Ulf Holbrook defended his dissertation at RITMO. I was in charge of streaming the disputation, and here are some reflections on the technical setup and streaming.
Zoom Webinars vs YouTube Streaming
I have previously written about running a hybrid disputation using a Zoom webinar. We have used variations of that setup also for other events. For example, last year, we ran RPPW as a hybrid conference. There are some benefits of using Zoom, particularly when having many presenters.
November 12, 2021
Running a hub-based conference
For the last couple of days, I have participated in the NordicSMC conference. It was organized by a team of Ph.D. fellows from Aalborg University Copenhagen, supported by the Nordic Sound and Music Computing network. UiO is happy to be a partner in this network, together with colleagues in Copenhagen (AAU), Stockholm (KTH), Helsinki (Aalto), and Reykjavik (UoI).
Choosing a conference format
When we began discussing the conference earlier this year, it quickly became apparent that it was unrealistic to meet in person.
September 17, 2021
Running a hybrid disputation in a Zoom Webinar
I have been running the disputation of Guilherme Schmidt Câmara today. At RITMO, we have accepted that “hybrid mode” will be the new normal. This also holds for disputations. Fortunately, we already had many years of experience with video conferencing before the corona crisis hit. We have also gained lots of experience from running the Music, Communication and Technology master’s programme for some years.
In another blog post, I summarized some experiences of running our first hybrid disputation.
December 12, 2020
Running a hybrid disputation on Zoom
Yesterday, I wrote about Agata Zelechowska’s disputation. We decided to run it as a hybrid production, even though there was no audience present. It would, of course, have been easier to run it as an online-only event. However, we expect that hybrid is the new “normal” for such events, and therefore thought that it would be good to get started exploring the hybrid format right away. In this blog post, I will write up some of our experiences.
Tag: streaming
May 7, 2022
Running a disputation on YouTube
Last week, Ulf Holbrook defended his dissertation at RITMO. I was in charge of streaming the disputation, and here are some reflections on the technical setup and streaming.
Zoom Webinars vs YouTube Streaming
I have previously written about running a hybrid disputation using a Zoom webinar. We have used variations of that setup also for other events. For example, last year, we ran RPPW as a hybrid conference. There are some benefits of using Zoom, particularly when having many presenters.
November 12, 2021
Running a hub-based conference
For the last couple of days, I have participated in the NordicSMC conference. It was organized by a team of Ph.D. fellows from Aalborg University Copenhagen, supported by the Nordic Sound and Music Computing network. UiO is happy to be a partner in this network, together with colleagues in Copenhagen (AAU), Stockholm (KTH), Helsinki (Aalto), and Reykjavik (UoI).
Choosing a conference format
When we began discussing the conference earlier this year, it quickly became apparent that it was unrealistic to meet in person.
September 17, 2021
Running a hybrid disputation in a Zoom Webinar
I have been running the disputation of Guilherme Schmidt Câmara today. At RITMO, we have accepted that “hybrid mode” will be the new normal. This also holds for disputations. Fortunately, we already had many years of experience with video conferencing before the corona crisis hit. We have also gained lots of experience from running the Music, Communication and Technology master’s programme for some years.
In another blog post, I summarized some experiences of running our first hybrid disputation.
June 27, 2021
Running a hybrid conference
There are many ways to run conferences. Here is a summary of how we ran the Rhythm Production and Perception Workshop 2021 at RITMO this week. RPPW is called a workshop, but it is really a full-blown conference. Almost 200 participants enjoy 100 talks and posters, 2 keynote speeches, and 3 music performances spread across 4 days.
A hybrid format
We started planning RPPW as an on-site event back in 2019.
June 27, 2021
Running a successful Zoom Webinar
I have been involved in running some Zoom Webinars over the last year, culminating with the Rhythm Production and Perception Workshop 2021 this week. I have written a general blog post about the production. Here I will write a little more about some lessons learned on running large Zoom Webinars.
In previous Webinars, such as the RITMO Seminars by Rebecca Fiebrink and Sean Gallagher, I ran everything from my office. These were completely online events, based on each person sitting with their own laptop.
December 12, 2020
Running a hybrid disputation on Zoom
Yesterday, I wrote about Agata Zelechowska’s disputation. We decided to run it as a hybrid production, even though there was no audience present. It would, of course, have been easier to run it as an online-only event. However, we expect that hybrid is the new “normal” for such events, and therefore thought that it would be good to get started exploring the hybrid format right away. In this blog post, I will write up some of our experiences.
Tag: imagemagick
April 13, 2022
Programmatically resizing a folder of images
This is a note to self about how to programmatically resize and crop many images using ImageMagick.
It all started with a folder full of photos with different pixel sizes and ratios. That is because they had been captured with various cameras and had also been manually cropped. This could be verified by running this command to print their pixel sizes:
identify -format "%wx%h\n" *.JPG
Fortunately, all the images had a reasonably large pixel count, so I decided to go for a 5MP pixel count (2560x1920 in 4:3 ratio).
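One way of doing the resize-and-crop in ImageMagick is the fill-area geometry (`^`) followed by a centred crop. A sketch, assuming JPG files in the current folder and writing the results to a `resized/` subfolder:

```shell
size="2560x1920"   # 5MP target in 4:3 ratio, as in the post
if command -v mogrify >/dev/null 2>&1 && ls ./*.JPG >/dev/null 2>&1; then
  mkdir -p resized
  # "^" scales to fill the target area; -extent then crops centred to exact size.
  mogrify -path resized -resize "${size}^" -gravity center -extent "$size" ./*.JPG
fi
```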
June 15, 2021
Making 100 video poster images programmatically
We are organizing the Rhythm Production and Perception Workshop 2021 at RITMO a week from now. Like many other conferences these days, this one will also be run online. Presentations have been pre-recorded (10 minutes each) and we also have short poster blitz videos (1 minute each).
Pre-recorded videos
People have sent us their videos in advance, but they all have different first “slides”. So, to create some consistency among the videos, we decided to make an introduction slide for each of them.
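One way to generate such introduction slides programmatically is a loop over the presentation titles with ImageMagick's `caption:` operator, which auto-wraps long text. A sketch with made-up titles (the workshop's actual template and title list are not shown here):

```shell
i=0
printf '%s\n' "First talk title" "Second talk title" | while read -r title; do
  i=$((i+1))
  out="slide_$(printf '%03d' "$i").png"
  if command -v convert >/dev/null 2>&1; then
    # White 1920x1080 canvas with the title centred and auto-wrapped.
    convert -size 1920x1080 -background white -fill black \
      -gravity center caption:"$title" "$out"
  fi
done
```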
March 1, 2020
Creating different types of keyframe displays with FFmpeg
In some recent posts I have explored the creation of motiongrams and average images, multi-exposure displays, and image masks. In this blog post I will explore different ways of generating keyframe displays using the very handy command line tool FFmpeg.
As in the previous posts, I will use a contemporary dance video from the AIST Dance Video Database as an example:
The first attempt is to create a 3x3 grid image by just sampling frames from the original image.
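The frame-sampling grid can be built with FFmpeg's `select` and `tile` filters. A sketch with a placeholder filename; sampling every 100th frame is a guess that should be adjusted to the clip length:

```shell
vf="select='not(mod(n\,100))',scale=320:-1,tile=3x3"
if command -v ffmpeg >/dev/null 2>&1 && [ -f dance.mp4 ]; then
  # -vsync vfr keeps only the selected frames; -frames:v 1 writes one grid image.
  ffmpeg -i dance.mp4 -vf "$vf" -vsync vfr -frames:v 1 grid.png
fi
```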
February 21, 2020
Creating image masks from video file
As part of my exploration in creating multi-exposure keyframe image displays with FFmpeg and ImageMagick, I tried out a number of things that did not solve the initial problem but could still be interesting for other purposes. Most interesting was the automagic creation of image masks from a video file.
I will use a contemporary dance video from the AIST Dance Video Database as an example:
The first step is to extract keyframes from the video file using this one-liner ffmpeg command:
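The one-liner itself is cut off in this excerpt; a common way to extract only keyframes with FFmpeg (not necessarily the exact command used in the post) is to skip all non-key frames at the decoder:

```shell
pattern="keyframe_%03d.png"
if command -v ffmpeg >/dev/null 2>&1 && [ -f dance.mp4 ]; then
  # -skip_frame nokey decodes only I-frames; -vsync vfr avoids duplicate frames.
  ffmpeg -skip_frame nokey -i dance.mp4 -vsync vfr "$pattern"
fi
```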
February 21, 2020
Creating multi-exposure keyframe image displays with FFmpeg and ImageMagick
While I was testing visualization of some videos from the AIST database earlier today, I wanted to also create some “keyframe image displays”. This can be seen as a way of doing multi-exposure photography, and should be quite straightforward to do. Still it took me quite some time to figure out exactly how to implement it. It may be that I was searching for the wrong things, but in case anyone else is looking for the same, here is a quick write up.
Tag: interdisciplinarity
March 31, 2022
A new figure of the disciplinarities: intra, cross, multi, inter, trans
Back in 2012, I published what has become my (by far) most-read blog post: Disciplinarities: intra, cross, multi, inter, trans. There I introduced a figure that I regularly receive permission requests to republish (which I always give, in the spirit of open research).
The challenge with the previous blog post has been that I based my figure on a combination of a textual description by Stember and a more limited figure by Zeigler.
November 19, 2021
Rigorous Empirical Evaluation of Sound and Music Computing Research
At the NordicSMC conference last week, I was part of a panel discussing the topic Rigorous Empirical Evaluation of SMC Research. This was the original description of the session:
The goal of this session is to share, discuss, and appraise the topic of evaluation in the context of SMC research and development. Evaluation is a cornerstone of every scientific research domain, but is a complex subject in our context due to the interdisciplinary nature of SMC coupled with the subjectivity involved in assessing creative endeavours.
March 12, 2012
Disciplinarities: intra, cross, multi, inter, trans
For some papers I am currently working on, I have taken up my interest in definitions of different types of disciplinarities (see blog post from a couple of years ago). Since then, I think talk about the need for interdisciplinary work has only increased, but there still seem to be no real incentives for making genuinely interdisciplinary work possible. This holds when working within an academic setting, and it is even more complicated when trying to bridge academic and artistic disciplines.
August 31, 2010
Interdisciplinarity in UiO's new strategy
I am happy to see that the first point in the new UiO strategy plan is interdisciplinarity, or more specifically: “Et grensesprengende universitet”. Interdisciplinarity is always easier in theory than in practice, and this is something I discuss in a feature article in the latest volume (pages 32–33) of Forskerforum, the journal of the Norwegian Association of Researchers (Forskerforbundet).
I have written about interdisciplinarity on this blog several times before (here, here and here).
July 10, 2009
Multi-, cross- and interdisciplinarity
While reading in The biophysical foundations of human movement, I came across a nice illustration (adapted from Zeigler 1990) of the relationships between multi-, cross- and interdisciplinarity. These terms are often used, and I think it helps to have a visual guide for separating them.
The idea of the model is that when a field becomes more multidisciplinary it can eventually move towards becoming more cross-disciplinary and finally interdisciplinary. Thinking about the two fields that I feel most associated with, i.
February 8, 2007
Adding Disciplines to Two-dimensional Interdisciplinarity Sketch
It is always difficult to categorise things, since it is always possible to think of other ways of doing it. But here I have tried to include some of the various fields that my work touches upon in my two-axis sketch:
The idea is to include this in the introduction of my dissertation.
February 8, 2007
Two-dimensional Interdisciplinarity Sketch
I am working on the introduction to my dissertation, and am trying to place my work in a context. Officially, I’m in a musicology program (Norwegian musicology ≈ science of music) in the Faculty of Humanities, but most of my interests are probably closer to psychology and computer science. Quite a lot of what I have been doing has also been used creatively (concerts and installations) although that is not really the focus of my current research.
Tag: merge
March 31, 2022
Merge multiple MP4 files
I have been doing several long recordings with GoPro cameras recently. The cameras automatically split the recordings into 4GB files, which leaves me with a myriad of files to work with. I have therefore made a script to help with the pre-processing of the files.
This is somewhat similar to the script I made to convert MXF files to MP4, but with better handling of the temp file for storing information about the files to merge:
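A script along these lines can be built around FFmpeg's concat demuxer, which joins files without re-encoding as long as they share codec settings (which the GoPro segments do). The `GX*.MP4` pattern is an assumption about the camera's file naming:

```shell
list="merge_list.txt"
: > "$list"   # start with an empty temp file
for f in GX*.MP4; do
  [ -f "$f" ] && printf "file '%s'\n" "$f" >> "$list"
done
if command -v ffmpeg >/dev/null 2>&1 && [ -s "$list" ]; then
  # -safe 0 allows arbitrary paths in the list; -c copy avoids re-encoding.
  ffmpeg -f concat -safe 0 -i "$list" -c copy merged.mp4
fi
```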
Tag: digital competency
March 7, 2022
Digital competency
What are the digital competencies needed in the future? Our head of department has challenged me to talk about this topic at an internal seminar today. Here is a summary of what I said.
Competencies vs skills
First, I think it is crucial to separate competencies from skills. The latter relates to how you do something. There has been much focus on teaching skills, mainly teaching people how to use various software or hardware.
Tag: digitalisation
March 7, 2022
Digital competency
What are the digital competencies needed in the future? Our head of department has challenged me to talk about this topic at an internal seminar today. Here is a summary of what I said.
Competencies vs skills
First, I think it is crucial to separate competencies from skills. The latter relates to how you do something. There has been much focus on teaching skills, mainly teaching people how to use various software or hardware.
Tag: Technology
March 7, 2022
Digital competency
What are the digital competencies needed in the future? Our head of department has challenged me to talk about this topic at an internal seminar today. Here is a summary of what I said.
Competencies vs skills
First, I think it is crucial to separate competencies from skills. The latter relates to how you do something. There has been much focus on teaching skills, mainly teaching people how to use various software or hardware.
March 12, 2018
Nordic Sound and Music Computing Network up and running
I am super excited about our new Nordic Sound and Music Computing Network, which has just started up with funding from the Nordic Research Council.
This network brings together a group of internationally leading sound and music computing researchers from institutions in five Nordic countries: Aalborg University, Aalto University, KTH Royal Institute of Technology, University of Iceland, and University of Oslo. The network covers the field of sound and music from the “soft” to the “hard,” including the arts and humanities, and the social and natural sciences, as well as engineering, and involves a high level of technological competency.
December 13, 2017
Come study with me! New master's programme: Music, Communication and Technology
It has been fairly quiet here on the blog recently. One reason for this is that I am spending quite some time on setting up the new Music, Communication and Technology master’s programme. This is an exciting collaborative project with our colleagues at NTNU. The whole thing is focused around network-based communication, and the students will use, learn about, develop and evaluate technologies for musical communication between the two campuses in Oslo and Trondheim.
June 22, 2017
New Master's Programme: Music, Communication & Technology
We are happy to announce that “Music, Communication & Technology” will be the very first joint degree between NTNU and UiO, the two biggest universities in Norway. The programme is now approved by the UiO board and will soon be approved by the NTNU board.
www.uio.no/mct-master
www.ntnu.edu/studies/mct
This is a different Master’s programme. Music is at the core, but the scope is larger. The students will be educated as technological humanists, with technical, reflective and aesthetic skills.
December 13, 2014
New publication: From experimental music technology to clinical tool
I have written a chapter called From experimental music technology to clinical tool in the newly published anthology Music, Health, Technology and Design, edited by Karette A. Stensæth from the Norwegian Academy of Music. Here is the summary of the book:
This anthology presents a compilation of articles that explore the many intersections of music, health, technology and design. The first and largest part of the book includes articles deriving from the multidisciplinary research project called RHYME (www.
August 5, 2010
Many applications that do few things or a few applications doing everything?
To follow up on my previous post about the differences between browser plugins, web interfaces and desktop applications, here is another post about my current rethinking of computer habits.
In fact, I started writing this post a couple of months ago, when I decided to move back to using Apple Mail as my main e-mail application again. I had used Mail for a few years when I decided to test out Thunderbird last year.
August 4, 2010
What to choose: Browser plugin, web interface, desktop application?
Nowadays I have a hard time deciding on what type of application to use. Only a few years back I would use desktop applications for most things, but with the growing amount of decent web 2.0 “applications” I notice that I have slowly moved towards doing more and more online.
Let me use this blog as an example. It is based on WordPress, which now offers a good and efficient web interface.
June 4, 2010
Boot problems Ubuntu 10.04
Just as I started to believe that Ubuntu had matured to become a super-stable and grandma-friendly OS, I got an unexpected black screen on boot of Ubuntu 10.04 on a Dell Latitude D400. After some googling I have found a solution that works:
On boot, hit the `e` key when the grub menu shows up. Then add the following after “quiet splash”: i915.modeset=1
If this works and you get into the system, you can do this procedure to change the grub loader permanently:
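The permanent version of the fix edits the default kernel command line and regenerates the grub configuration. Shown here as echoed commands rather than executed, since they require root and modify the bootloader:

```shell
param="i915.modeset=1"
# Run these two steps as root on the affected machine:
echo "sed -i 's/quiet splash/quiet splash ${param}/' /etc/default/grub"
echo "update-grub"
```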
May 26, 2010
Plugins, plugins, plugins
The world is becoming a huge collection of plugins. I hadn’t tried Google Chrome in a while, and just realized that not only has it become much more stable since the last time I battled with it, but more or less all my favourite Firefox extensions have also been ported. This blog post is a test to see how ScribeFire behaves under Chrome. While I am at it, I also installed the WPtouch extension to my WordPress install to see if that could help make my web page more accessible on mobile devices.
September 9, 2008
Full sync with Google calendar
Google recently added CalDAV support to the calendar, and this tutorial explains how to set it up with various programs, including iCal, Outlook, Sunbird and Thunderbird.
May 18, 2008
Tags and categories
I have been remodelling my web page today, installing the latest version of WordPress and testing out a new theme and organisational structure. I have been using categories in my blog for a while, but have not used the tags feature because I didn’t really understand the difference before I read this:
Categories can be tags, sure, but not all categories are tags, and not all tags should be categories. I think of categories as a table of contents and tags as the index page of a book.
May 15, 2008
Gumstix and PDa
Another post from the Mobile Music Workshop in Vienna. Yesterday I saw a demo of the Audioscape project by Mike Wozniewski (McGill). He was using the Gumstix, a really small system running a Linux version called OpenEmbedded. He was running PDa (a Pure Data clone) and was able to process sensor data and run audio off the small device.
May 12, 2008
Optitrack motion capture
I held a guest lecture at the speech, music and hearing group at KTH in Stockholm a couple of weeks ago, and got a tour of the lab afterwards. There I got a demonstration of the Optitrack optical motion capture system, which, compared to other similar systems, is an amazingly cheap solution, starting at $4999. Obviously, it has lower accuracy and precision than the larger systems, but then it also costs 1/20 of the price… However, 100 Hz speed and millimeter precision is decent for a USB-based system, and the cameras are really portable (around 10x5 cm each).
May 8, 2008
Motion Capture System Using Accelerometers
Came across a student project from Cornell on doing motion capture using accelerometers, based on the Atmel controller. It is a nice overview of many of the challenges faced when working with accelerometers, and the implementation seems to work well.
May 5, 2008
Softkinetic
Dutch company Softkinetic offers what they call natural interfaces, i.e. interfaces where you don’t have to put on any sensors to interact:
Softkinetic operates with a single depth sensing camera, requires no marker (no gamepad, no wiimote, no special gloves or clothing, no headset - nothing), and works under all lighting conditions and scene settings (at home, in a fitness center, an amusement park, a classroom, a game cafe, an industrial simulation room - anywhere).
April 24, 2008
Sensing Music-related Actions
The web page for our new research project called Sensing Music-related Actions is now up and running. This is a joint research project of the departments of Musicology and Informatics, and has received external funding through the VERDIKT program of the Research Council of Norway. The project runs from July 2008 until July 2011.
The focus of the project will be on basic issues of sensing and analysing music-related actions, and creating various prototypes for testing the control possibilities of such actions in enactive devices.
April 8, 2008
Writing in NeoOffice, dreaming of LaTeX
I am working on a paper for a journal that only accepts RTF documents, and to avoid the possible problems resulting from converting a LaTeX document into RTF (or possibly from PDF), I decided to try using a word processor from the beginning. For simple word processing I have grown very fond of Bean recently, a lightweight application slightly more advanced than TextEdit. I started out with Bean, but since I had to include endnotes in the document I ended up moving over to NeoOffice instead.
February 25, 2008
Apple tries to patent gestures
Wired reports that Apple has filed around 200 patent applications related to multitouch and gesture control:
Yet it appears that the company is not trying to patent the entire multitouch concept, but rather trying to protect certain uses of it – specifically the methods to interpret gestures, and in some cases, the gestures themselves.
It is interesting to see that they mention the interpretation of a gesture. This means that they distinguish between gesture and action, i.
February 14, 2008
Syncing Movement and Audio using a VST-plugin
I just heard Esteban Maestre from UPF present his project on creating a database of instrumental actions of bowed instruments, for use in the synthesis of score-based material. They have come up with a very interesting solution to the recording and synchronisation of audio with movement data: Building a VST plugin which implements recording of motion capture data from a Polhemus Liberty, together with bow sensing through an Arduino. This makes it possible to load the VST-plugin inside regular audio sequencing software and do the recording from there.
February 14, 2008
TRIL centre, Emobius and Shimmer
I just heard a presentation by a group of researchers from the Tril centre (Technology Research for Independent Living) in Dublin. They have developed Emobius (or EyesWeb Mobius), a set of blocks for various types of biomedical processing, as well as a graphical front-end to the forthcoming EyesWeb XMI. It is fascinating to see how the problems they are working on in applications for older persons are so similar to what we are dealing with in music research.
February 12, 2008
Free Software
I am participating in the EyesWeb Week in Genoa this week. This morning Nicola Bernardini held a lecture about Free Software. I have heard him talk on this topic several times before, but as I have now some more experience on participating in a Free Software project (i.e. Jamoma), I got more out of his ideas.
Some main points from the talk:
Use Free Software! Freeware and shareware may have nothing to do with Free Software.
January 18, 2008
Open Sound Control
The newly refurbished OSC forum web site has sparked off some discussions on the OSC_dev mailing list. One interesting note was a reply from Andy W. Schmeder on how OSC should be spelled out correctly:
The short answer is, use “Open Sound Control”. The other form one may encounter is “OpenSound Control”, but we don’t use that anymore. Any additional forms you may encounter are probably unintentional.
I have been using various versions over the years (including OpenSoundControl). I guess this is an official answer, then, since Andy works at CNMAT.
January 7, 2008
Time Machine
I had my first go at restoring a file using Time Machine today. Looking for a Keynote presentation, I realised that I had kept only the PDF of the presentation and not the original presentation file. Not really sure how that happened, but, anyway, the file was lost.
I have had Time Machine running on my computer ever since I upgraded to X.5, and have been wondering whether it would be worth the extra CPU peaks that appear every hour or so when it activates and copies changed files.
November 6, 2007
Bug Labs: Lego-like computer modules
Bug Labs has announced new open source, Lego-like computer modules running Linux. The idea is to create hardware that can easily be assembled in various ways. Looks neat!
September 28, 2007
Eduroam
I just learned about Eduroam:
Eduroam, which stands for Education Roaming, is a RADIUS-based infrastructure that uses 802.1X security technology to allow for inter-institutional roaming. Being part of eduroam allows users visiting another institution connected to eduroam to log on to the WLAN using the same credentials (username and password) the user would use if he were at his home institution. Depending on local policies at the visited institutions, eduroam participants may also have additional resources at their disposal.
September 22, 2007
Doepfer USB64
The new Doepfer USB64 looks very interesting with its 64 analog (or digital) inputs and €125 price tag. I am not so excited about the MIDI plug, and wonder whether they intend to communicate some higher resolution data through the USB plug.
September 10, 2007
eLearning getting to UiO
I have been complaining about the poor support for eLearning solutions at the University of Oslo for some years. I have tried Fronter, but find it too closed and rigid for what I want to do. I like that course information is open and easily available for everyone, but so far the standard course pages have been very much focused on basic information only.
I recently discovered that things have improved a lot under the surface, and that it is now possible to give students access to add information to folders under the course web sites.
May 10, 2007
Björk to tour with Reactable
The MTG group at Pompeu Fabra reports that Björk will use the Reactable in her upcoming tour:
With her first tour concert at the Coachella Festival in California, the Icelandic singer Björk introduced the reactable for the first time to a mainstream audience. Our instrument will form a key element of the artist’s current world tour “Volta” which will appear at numerous music festivals during the next 18 months.
I have tried the Reactable at various conferences and it is great that this innovative collaborative instrument gets some attention outside the music tech community.
May 2, 2007
Surface computing
A Microsoft demo of surface computing, showing several prototypes of “gesture control” (what I would call action control) in software.
April 16, 2007
My website as a graph
Made a visualisation of the structure of my web page with this DOM Visualizer Applet.
March 24, 2007
SD USB card
There are very few items that make me happy every time I use them, but my SanDisk SD memory card is one of them. I have had it for around a year, and it seems to be one of the most ingenious pieces of industrial design of recent years. It makes cables unnecessary, as I simply flip it open and connect it to a USB port. Brilliant! In the beginning I was afraid that it would break, but I have been using it a lot over the last year without any problems.
March 21, 2007
Technical Parameters
I have been thinking a lot about GUIs, namespaces and control parameters over the last couple of days. One of the big challenges we are facing is how to make technology more human-friendly. Often it seems that technology controls us more than we control the technology.
Creating a user interface of any kind is very similar to how we think about mapping in musical instruments. In essence, any type of control is one, or several, layers of mapping from one set of parameters to another.
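As a toy illustration of such layered mapping (the function names and ranges below are my own, not taken from any particular system), a sensor-to-sound chain might be built from two layers: one normalising the raw reading, and one scaling it to a synthesis parameter:

```python
def normalize(raw, raw_min=0, raw_max=1023):
    """Layer 1: map a raw 10-bit sensor reading to the 0.0-1.0 range."""
    return (raw - raw_min) / (raw_max - raw_min)

def to_cutoff(norm, low_hz=100.0, high_hz=8000.0):
    """Layer 2: map the normalized value to a filter cutoff frequency in Hz."""
    return low_hz + norm * (high_hz - low_hz)

# Chaining the layers: a mid-range sensor reading lands mid-range in the cutoff.
cutoff = to_cutoff(normalize(511))
```

Keeping the layers separate means either end can be swapped out — a different sensor, or a different synthesis parameter — without rewriting the whole mapping.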
February 28, 2007
AudioPint
The AudioPint project at MIT aims at creating a computer based system that is as portable and stable as hardware gear:
Consider a system that is small, lightweight, tough, able to be powered up, plugged in, and used immediately - but with sounds that can be controlled by any computer-compatible input device, opening wide the space of expressive possibilities. Devices supported include midi controllers, joysticks, mice, touchpads, or any other custom controller that can connect to a computer!
February 28, 2007
Jon Olav Eikenes' Diploma Project
Jon Olav Eikenes has posted information about his diploma project on control of sound spatialisation at the department of interaction design at the Oslo School of Architecture and Design. As a co-adviser, I think it is great to see an interdisciplinary project working so well. I hope we can see more collaborative projects of this type in the future.
February 27, 2007
MIT: MAS.960 Principles of Electronic Music Controllers
Came across the web site of MIT course MAS.960 Principles of Electronic Music Controllers, which has some interesting references and links to various resources on NIME development. It is also worth checking out many of the student projects.
February 20, 2007
Recording Hoax
Craig Sapp (formerly at CCARH now at CHARM) writes:
I have been analyzing the performances of Chopin Mazurkas and have been noticing an unusual occurrence: the performances of the same two pianists always matched whenever I do an analysis for a particular mazurka. In fact, they matched as well as two different re-releases of the same original recording.
The full story about how the tracks have been slightly time-stretched, panned and EQed before being rereleased is covered in a recent story in Gramophone.
February 17, 2007
Bob Ludwig on Surround Mixing
I went to a speech on surround mixing (5.1) last night by Bob Ludwig of Gateway Mastering. He spent a lot of time talking about gear and technicalities of mastering, and also discussed the different stages in mastering for various formats (SACD, DVD-Audio, etc.). An interesting thing he commented on is the fact that when Dolby Digital is downmixed to stereo in consumer gear, the LFE channel is left out. So he advised to use the LFE (.
February 15, 2007
Tag Clouds
TagCrowd is an online tool for creating tag clouds from any text to visualize word frequency. Tag clouds have become popular on Flickr and a number of other social web sites.
I really like the idea of tag clouds since they can quickly visualise the content of a text by summarising (and quantifying relationships among) its most important words.
Does anyone know of standalone software that could create tag clouds from large texts?
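In the meantime, the counting behind a tag cloud is simple to sketch. Here is a minimal example (my own illustration, not any specific tool) that extracts the most frequent words from a text — the sizing and layout of the cloud would then follow from the counts:

```python
from collections import Counter
import re

def word_frequencies(text, top_n=20, min_length=4):
    """Return the top_n most frequent words of at least min_length characters."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if len(w) >= min_length)
    return counts.most_common(top_n)

sample = "tag clouds visualise word frequency; frequent words appear larger in tag clouds"
print(word_frequencies(sample, top_n=3))
```

In a real tag cloud each word's display size would be scaled relative to its count, typically with a minimum word length and a stopword list to filter out function words.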
February 12, 2007
Critical Thinking About Word and .doc
A comment on why university teachers should think critically about Word and .doc:
Many of us teach cultural analysis and critical thinking in our writing classes. Our first year readers are full of cultural commentary, and we use these texts to teach our students to question the status quo and understand more deeply the implications of the choices they make in this consumer culture.
Do writing teachers do the same when they tell students to submit their documents as .
February 8, 2007
MSc in Music Tech at Georgia Tech
Georgia Tech has been hiring young and interesting music tech faculty over the last years, and they are now starting a Master of Science program in music tech with a focus on the design and development of novel enabling music technologies. This is yet another truly interdisciplinary music tech program to appear over the last couple of years, accepting students from a number of different backgrounds, including music, computing and engineering.
February 8, 2007
Windows Vista soundscape
I wrote this blog entry several months ago, but never posted it because I thought I would have time to go back and evaluate the sounds more. Since I don’t see that happening any time before I finish my dissertation, I will just go ahead and post it now:
Microsoft has posted some info and examples of the Vista soundscape. The sounds are designed by Robert Fripp and will soon be among the most well-known sounds on the planet.
February 4, 2007
YouTube - Web 2.0 ... The Machine is Us/ing Us
A great little movie about the internet (html, xml, hypertext, etc.) by Michael Wesch, an Assistant Professor of Cultural Anthropology from Kansas State University.
January 16, 2007
NOVINT Falcon
NOVINT has finally gotten around to releasing the Falcon, the much-awaited first cheap haptic controller. I have my doubts about how solid the thing is, at least when I know how fragile the many times more expensive Phantoms are. Nevertheless, the Falcon will finally introduce haptics to everyone.
January 14, 2007
iPhone sensing
As I have mentioned elsewhere, I am thrilled by the fact that various sensing technologies are getting so cheap that they are incorporated everywhere. As could be seen from the presentation of Apple’s new iPhone, it includes an accelerometer to sense tilt of the device (and also movement if they decide to use that for anything), a proximity sensor (ultrasound?) to turn off the display when the phone is put to the ear and a light sensor to change the brightness of the screen (?
January 11, 2007
DropDMG
DropDMG is a wonderful little program that will create a disk image from any type of file, folder, CD or DVD you drop on it. If the rest of the world could be just this easy…
January 11, 2007
Gestures and technology
What I find most fascinating about Apple’s new iPhone, is the shift from buttons to body. Getting away from the paradigm of pressing buttons to make a call or to navigate, the iPhone boasts a large multi-touch screen where the user will be able to interact by pointing at pictures and objects. Furthermore, the built-in rotation sensor will sense the direction of the device and rotate the screen accordingly, somehow similar to how new digital cameras rotate the pictures you take automatically.
January 11, 2007
Smart programs
I had a discussion about which software tools I use for my research, so here is a list of the most important (in no particular order):
Firefox: with Adblock and mouse gestures.
NetNewsWire: for handling all the blogs I am reading.
MarsEdit: to write blog entries; publishes directly to my WordPress-driven blog.
OmniGraffle: for making diagrams. I even made my last conference poster with this program; works great also with photos.
January 6, 2007
Tim Place on parameter control
Gregory Taylor has made an interview with Tim Place about Hipno. It is interesting how he comments about the Hipnoscope control:
The Hipnoscope does something that I’m quite proud of, which is that it allows you to quickly audition a plug-in and some of its possibilities. But at the same time it really rewards those patient explorers who spend time really focusing on the subtleties it offers. I still find myself surprised at the results I get sometimes - the Hipnoscope creates this palette where there is an almost infinite range of subtlety with some of the plug-ins.
December 31, 2006
5 Ways to use Quicksilver
I came across Dave Parry’s blog academhack, with some interesting comments on Mac software in an academic context. I was particularly happy about his 5 Ways to use Quicksilver, which helped me get started using the web and dictionary search in Quicksilver.
December 31, 2006
Noise
If you ever wanted some nice, pink noise in the background while working on your computer, Noise is the tool! Apparently, lots of people use this to try to shut out more distracting sounds. While I would prefer a program doing noise-cancelling (which would probably be tricky using the built-in microphone, since it would also pick up your own sounds while typing on the keyboard), this actually works ok.
December 4, 2006
Nettradio
Gustav pointed me to Nettradio, a great little tool for adding a bunch of Norwegian radio stations to iTunes.
December 4, 2006
WiiMote used as a mouse on Windows
This video shows the WiiMote used as a mouse on Windows.
December 4, 2006
YouOS: A Web Operating System
Jamie just pointed me to YouOS, an operating system running entirely within a web browser:
YouOS and its applications run entirely within a web browser, but have the look and feel of desktop applications. An application’s code and data reside remotely but are executed and modified locally. This model allows for a great deal of freedom. You can edit a document at home in a text editor and then go to school or work and instantly access the same text editor and document.
November 16, 2006
M-AUDIO - MidAir
M-Audio has released MidAir, a wireless MIDI transmitter and receiver system.
The system is also able to synchronize between several performers.
I just wish that some of these large companies would start to use OSC one day…
November 8, 2006
Auto-Completion in OS X
Just learned that it is possible to get auto-completion with the Esc and F5 keys in all Cocoa applications. Just start typing, hit either key, and you will get a list of matches.
November 8, 2006
Set headphone volume level on Intel Macs
Macworld: Mac OS X Hints: Set headphone volume level on Intel Macs
If you’ve got a new Intel-powered Mac, here’s a feature you may not have even known you had. For years, all Macs have had the ability to have different volume levels for different inputs. Plug in a USB-powered iMic, for instance, and you can set its output volume level independently of that of your internal speakers.
November 2, 2006
Arduino
Seems like the Arduino community is growing quickly.
Arduino is an open-sou