Below you will find pages that utilize the taxonomy term “video”
August 2, 2023
Finding duration and pixel dimensions for a bunch of video files
As part of my #StillStanding project I need to handle a lot of video files on a daily basis. Today, I wanted to check the duration and pixel dimensions of a bunch of files in different folders. As always, I turned to FFmpeg, or more specifically FFprobe, for help. However, figuring out all the details of how to get out the right information is tricky. So I decided to ask ChatGPT for help.
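For reference, a minimal FFprobe sketch of this kind of query (the filename is a placeholder, not from the original post):

```shell
# Print duration (in seconds) plus pixel width and height for a video file.
# input.mp4 is a placeholder; point it at your own files.
ffprobe -v error \
  -show_entries format=duration:stream=width,height \
  -of default=noprint_wrappers=1 \
  input.mp4
```

Wrapping this in a `for f in */*.mp4` loop covers a whole set of folders.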
July 4, 2023
Sound and Light vs Audio and Video
People often refer to “sound and video” as a concept pair. That is confusing because, in my thinking, “sound” and “video” refer to very different things. In this post, I will explain the difference.
Sound and Audio
In a previous blog post, I have written about the difference between sound and audio. The short story is that “sound” refers to the physical phenomenon of vibrating molecules, such as sound waves moving through air.
May 26, 2023
The Art of Flying
I participated in the conference The Aesthetics of Absence in Music of the Twenty-First Century at the Department of Musicology the last couple of days. Judith Lochhead started her keynote lecture with a clip from the movie The Art of Flying by Jan van IJken. This is a beautiful short film based on clips of flocking birds:
The art of flying from Jan van IJken on Vimeo.
Of course, I wanted to see how some video visualizations would work, so I reached for the Musical Gestures Toolbox for Python.
May 25, 2023
Understanding the GoPro Max's File Formats
I use a GoPro Max 360-degree camera in my annual #StillStanding project. That has also given me an excellent chance to work with GoPro files and try to understand their inner logic. In this blog post, I will summarize some of my findings.
What is recorded?
Recording “a video” with a GoPro Max results in multiple files. For example, each of my daily 10-minute recordings ends up with something like this:
May 20, 2023
The effect of skipping frames for video visualization
I have been exploring different video visualizations as part of my annual #StillStanding project. Some of these I post as part of my daily Mastodon updates, while others I only test for future publications.
Most of the video visualizations and analyses are made with the Musical Gestures Toolbox for Python and structured as Jupyter Notebooks. I have been pondering whether skipping frames is a good idea. The 360-degree videos that I create visualizations from are shot at 25 fps.
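A hedged sketch of one way to skip frames with FFmpeg before visualization (the values here are assumptions, not the post's own settings):

```shell
# Keep every 5th frame of a 25 fps video, then rewrite timestamps so the
# result plays as a regular (shorter) video rather than one with gaps.
ffmpeg -i input.mp4 \
  -vf "select='not(mod(n,5))',setpts=N/FRAME_RATE/TB" \
  -an skipped.mp4
```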
May 10, 2023
Visualization of Musique de Table
Musique de Table is a wonderful piece written by Thierry de Mey. I have seen it performed live several times, and here I came across a one-shot video recording that I thought would be interesting to analyse:
I tested it with some video visualization tools in the Musical Gestures Toolbox for Python.
For running the commands below, you first need to import the toolbox in Python:
import musicalgestures as mg
I started the process by importing the source video:
April 1, 2023
Making 2D Images from 360-degree Videos
For my annual Still Standing project, I am recording 360 videos with audio and sensor data while standing still for 10 minutes.
I have started exploring how to visualize the sensor data best. Today, I am looking into visualization strategies for 360-degree images. I have written about how to pre-process 360-degree videos from Garmin VIRB and Ricoh Theta cameras previously.
The Theta records in a dual fisheye format like this:
December 31, 2022
365 Sound Actions
On 1 January this year, I set out to record one sound action per day. The idea was to test the action–sound theory from my book Sound Actions. It is one thing to write about action–sound couplings and mappings; another is to see how the theory works with real-world examples. As I commented after one month, the project has been both challenging and inspiring. Below I write about some of my experiences, but first, here is the complete list:
July 17, 2022
Video visualizations of mountain walking
After exploring some visualizations of kayaking, I was eager to see how a similar approach could work for walking. On a trip to the Norwegian mountains, specifically at Haugastøl, situated halfway between Oslo and Bergen, I strapped a GoPro Hero 10 Black on my chest and walked up and down a nearby hill called Storevarden. The walk was approximately 25 minutes up and down, and a fast-forward version of the video can be seen here:
July 13, 2022
Kayak motion analysis with video-based horizon leveling
Last year, I wrote about video-based motion analysis of kayaking. Those videos were recorded with a GoPro Hero 8 and I tested some of the video visualization methods of the Musical Gestures Toolbox for Python. This summer I am testing out some 360 cameras for my upcoming AMBIENT project. I thought I should take one of these, a GoPro Max, out for some kayaking in the Oslo fjord. Here are some impressions of the trip (and recording).
June 11, 2022
Adding subtitles to videos
In my ever-growing collection of FFmpeg-related blog posts, I will today show how to add subtitles to videos. These tricks are based on the need to create a captioned version of a video I made to introduce the Workshop on NIME Archiving for the 2022 edition of the International Conference on New Interfaces for Musical Expression (NIME). This is the video I discuss in this blog post:
Note that YouTube supports turning the subtitles on and off (CC button).
May 7, 2022
Running a disputation on YouTube
Last week, Ulf Holbrook defended his dissertation at RITMO. I was in charge of streaming the disputation, and here are some reflections on the technical setup and streaming.
Zoom Webinars vs YouTube Streaming
I have previously written about running a hybrid disputation using a Zoom webinar. We have used variations of that setup for other events as well. For example, last year, we ran RPPW as a hybrid conference. There are some benefits of using Zoom, particularly when having many presenters.
March 31, 2022
Merge multiple MP4 files
I have been doing several long recordings with GoPro cameras recently. The cameras automatically split the recordings into 4GB files, which leaves me with a myriad of files to work with. I have therefore made a script to help with the pre-processing of the files.
This is somewhat similar to the script I made to convert MXF files to MP4, but with better handling of the temp file for storing information about the files to merge:
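The script itself is not shown in this excerpt; a minimal sketch of the same idea, using FFmpeg's concat demuxer with a temp file (filenames are assumptions):

```shell
# List the chapter files in order, then concatenate without re-encoding.
for f in GX*.MP4; do
  echo "file '$f'" >> filelist.txt   # temp file describing what to merge
done
ffmpeg -f concat -safe 0 -i filelist.txt -c copy merged.mp4
rm filelist.txt                      # clean up the temp file
```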
February 12, 2022
Edit video rotation metadata in FFmpeg
I am recording a lot of short videos these days for my sound actions project. Sometimes the recordings end up being rotated, which is based on the orientation sensor (probably the gyroscope) of my mobile phone. This rotation is not part of the recorded video data, it is just information written into the header of the MPEG file. That also means that it is possible to change the rotation without recoding the file.
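A minimal sketch of such a header-only rotation change in FFmpeg (this uses the classic `rotate` metadata tag; newer FFmpeg builds handle rotation through a display matrix instead, so treat this as an assumption about the setup):

```shell
# Copy both streams untouched; only the rotation flag in the header changes.
ffmpeg -i input.mp4 -c copy -metadata:s:v:0 rotate=90 output.mp4
```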
February 3, 2022
Different 16:9 format resolutions
I often have to convert between different resolutions of videos and images and always forget the pixel dimensions that correspond to a 16:9 format. So here is a cheat-sheet:
2160p: 3840×2160
1440p: 2560×1440
1080p: 1920×1080
720p: 1280×720
540p: 960×540
480p: 854×480
360p: 640×360
240p: 426×240
120p: 213×120
I also came across this complete list of true 16:9 resolution combinations, but the ones above suffice for my usage. Happy converting!
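Since all of these follow from the same ratio, a width can be computed for any height; a small sketch (rounding to an even number, as most codecs require):

```shell
# For a 16:9 frame: width = height * 16 / 9, rounded up to an even integer.
height=480
width=$(( (height * 16 / 9 + 1) / 2 * 2 ))
echo "${width}x${height}"
```

For a height of 480 this prints 854x480, matching the cheat-sheet entry.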
January 31, 2022
One month of sound actions
One month has passed of the year and my sound action project. I didn’t know how it would develop when I started and have found it both challenging and inspiring. It has also engaged people around me more than I had expected.
Each day I upload one new video recording to YouTube and post a link on Twitter. If you want to look at the whole collection, it is probably better to check out this playlist:
January 28, 2022
Preparing videos for FutureLearn courses
This week we started up our new online course, Motion Capture: The Art of Studying Human Activity, and we are also rerunning Music Moves: Why Does Music Make You Move? for the seventh time. Most of the material for these courses is premade, but we record a new wrap-up video at the end of each week. This makes it possible to answer questions that have been posed during the week and add some new and relevant material.
January 9, 2022
Frame differencing with FFmpeg
I often want to create motion videos, that is, videos that only show what changed between frames. Such videos are nice to look at, and so-called “frame differencing” is also the starting point for many computer vision algorithms.
We have made several tools for creating motion videos (and more) at the University of Oslo: the standalone VideoAnalysis app (Win/Mac) and the different versions of the Musical Gestures Toolbox. These are all great tools, but sometimes it is also nice to be able to create motion videos directly in the terminal using FFmpeg.
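A minimal terminal sketch of frame differencing with FFmpeg's tblend filter (filenames are placeholders):

```shell
# Each output frame is the pixel-wise difference between successive frames,
# so only what moved between frames remains visible.
ffmpeg -i input.mp4 -vf tblend=all_mode=difference motion.mp4
```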
January 7, 2022
Try not to headbang challenge
I recently came across a video of the so-called Try not to headbang challenge, where the idea is to, well, not headbang while listening to music. This immediately caught my attention. After all, I have been researching music-related micromotion for several years and have run the Norwegian Championship of Standstill since 2012.
Here is an example of Nath & Johnny trying the challenge:
https://www.youtube.com/watch?v=-I4CBsDT37I
As seen in the video, they are doing OK, although they are far from sitting still.
December 21, 2021
Pre-processing Garmin VIRB 360 recordings with FFmpeg
I have previously written about how it is possible to “flatten” a Ricoh Theta recording using FFmpeg. Now, I have spent some time exploring how to process some recordings from a Garmin VIRB camera.
Some hours of recordings
The starting point was a bunch of recordings from our recent MusicLab Copenhagen featuring the amazing Danish String Quartet. A team of RITMO researchers went to Copenhagen and captured the quartet in both rehearsal and performance.
December 17, 2021
Flamenco video analysis
I continue my testing of the new Musical Gestures Toolbox for Python. One thing is to use the toolbox on controlled recordings with stationary cameras and non-moving backgrounds (see examples of visualizations of AIST videos). But it is also interesting to explore “real world” videos (such as the Bergensbanen train journey).
I came across a great video of flamenco dancer Selene Muñoz, and wondered how I could visualize what is going on there:
December 15, 2021
Kayaking motion analysis
Like many others, I bought a kayak during the pandemic, and I have had many nice trips in the Oslo fjord over the last year. Working at RITMO, I think a lot about rhythm these days, and the rhythmic nature of kayaking made me curious to investigate the pattern a little more.
Capturing kayaking motion
My spontaneous investigations into kayak motion began with simply recording a short video of myself kayaking.
November 17, 2021
Preparing video for Matlab analysis
Typical video files, such as MP4 files with H.264 compression, are usually small in size and with high visual quality. Such files are suitable for visual inspection but do not work well for video analysis. In most cases, computer vision software prefers to work with raw data or other compression formats.
The Musical Gestures Toolbox for Matlab works best with these file types:
Video: use MJPEG (Motion JPEG) as the compression format.
November 13, 2021
Releasing the Musical Gestures Toolbox for Python
After several years in the making, we finally “released” the Musical Gestures Toolbox for Python at the NordicSMC Conference this week. The toolbox is a collection of modules targeted at researchers working with video recordings.
Below is a short video in which Bálint Laczkó and I briefly describe the toolbox:
https://youtu.be/tZVX_lDFrwc
About MGT for Python
The Musical Gestures Toolbox for Python includes video visualization techniques such as creating motion videos, motion history images, and motiongrams.
October 27, 2021
Rotate video using FFmpeg
Here is another FFmpeg-related blog post, this time to explain how to rotate a video using the command-line tool FFmpeg. There are two ways of doing this, and I will explain both in the following.
Rotation in metadata
A good first attempt is to rotate by only modifying the metadata in the file. This does not work for all file types, but should work for some.
October 26, 2021
Crop video files with FFmpeg
I have previously written about how to trim video files with FFmpeg. It is also easy to crop a video file. Here is a short how-to guide for myself and others.
Cropping is not the same as trimming
This may be basic, but I often see the concepts of cropping and trimming used interchangeably. So, to clarify: trimming a video file means making it shorter by removing frames at the beginning and/or end.
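A hedged sketch of the crop filter itself (the numbers are arbitrary examples):

```shell
# crop=width:height:x:y, where x,y is the top-left corner of the kept region.
ffmpeg -i input.mp4 -filter:v "crop=640:360:100:50" -c:a copy cropped.mp4
```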
October 13, 2021
Converting a .WAV file to .AVI
Sometimes, there is a need to convert an audio file into a blank video file with an audio track. This can be useful if you are on a system that does not have a dedicated audio player but has a video player (yes, rare, but I work with odd technologies…). Here is a quick recipe.
FFmpeg to the rescue
When it comes to converting from one media format to another, I always turn to FFmpeg.
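A minimal sketch of one way to do this conversion (the black dummy picture, sizes, and codecs are assumptions, not necessarily the post's recipe):

```shell
# Generate a black video track, pair it with the audio, stop at audio end.
ffmpeg -f lavfi -i color=c=black:s=640x360:r=25 -i input.wav \
  -c:v mpeg4 -c:a pcm_s16le -shortest output.avi
```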
June 27, 2021
Running a successful Zoom Webinar
I have been involved in running some Zoom Webinars over the last year, culminating with the Rhythm Production and Perception Workshop 2021 this week. I have written a general blog post about the production. Here I will write a little more about some lessons learned on running large Zoom Webinars.
In previous Webinars, such as the RITMO Seminars by Rebecca Fiebrink and Sean Gallagher, I ran everything from my office. These were completely online events, based on each person sitting with their own laptop.
June 17, 2021
Normalize audio in video files
We are organizing the Rhythm Production and Perception Workshop at RITMO next week. As mentioned in another blog post, we have asked presenters to send us pre-recorded videos. They are all available on the workshop page.
During the workshop, we will play sets of videos in sequence. When doing a test run today, we discovered that the sound levels differed wildly between files. There is clearly the need for normalizing the sound levels to create a good listener experience.
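A minimal sketch of loudness normalization with FFmpeg's EBU R128 loudnorm filter (the target values are assumptions):

```shell
# Normalize audio loudness; the video stream is copied untouched.
ffmpeg -i input.mp4 -af loudnorm=I=-16:TP=-1.5:LRA=11 \
  -c:v copy -c:a aac normalized.mp4
```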
June 15, 2021
Making 100 video poster images programmatically
We are organizing the Rhythm Production and Perception Workshop 2021 at RITMO a week from now. Like many other conferences these days, this one will also be run online. Presentations have been pre-recorded (10 minutes each) and we also have short poster blitz videos (1 minute each).
Pre-recorded videos
People have sent us their videos in advance, but they all have different first “slides”. So, to create some consistency among the videos, we decided to make an introduction slide for each of them.
May 11, 2021
Combining audio and video files with FFmpeg
When working with various types of video analysis, I often end up with video files without audio. So I need to add the audio track by copying either from the source video file or from a separate audio file. There are many ways of doing this. Many people would probably reach for a video editor, but the problem is that you would most likely end up recompressing both the audio and video file.
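To avoid recompression, FFmpeg can copy the video stream while muxing in the audio. A minimal sketch (filenames and the AAC choice are assumptions):

```shell
# Video from the first input, audio from the second; video is stream-copied.
ffmpeg -i video.mp4 -i audio.wav -map 0:v -map 1:a \
  -c:v copy -c:a aac -shortest combined.mp4
```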
February 10, 2021
Some thoughts on microphones for streaming and recording
Many people have asked me about what types of microphones to use for streaming and recording. This is really a jungle, with lots of devices and things to think about. I have written some blog posts about such things previously, such as tips for doing Skype job interviews, testing simple camera/mic solutions, running a Hybrid Disputation, and how to work with plug-in-power microphones.
Earlier today I held a short presentation about microphones at RITMO.
January 28, 2021
Analyzing a double stroke drum roll
Yesterday, PhD fellow Mojtaba Karbassi presented his research on impedance control in robotic drumming at RITMO. I will surely get back to discussing more of his research later. Today, I wanted to share the analysis of one of the videos he showed. Mojtaba is working on developing a robot that can play a double stroke drum roll. To explain what this is, he showed this video he had found online, made by John Wooton:
January 24, 2021
Convert between video containers with FFmpeg
In my ever-growing collection of smart FFmpeg tricks, here is a way of converting from one container format to another. Here I will convert from a QuickTime (.mov) file to a standard MPEG-4 (.mp4), but the recipe should work between other formats too.
If you came here to just see the solution, here you go:
ffmpeg -i infile.mov -acodec copy -vcodec copy outfile.mp4
In the following, I will explain everything in a little more detail.
January 2, 2021
Create timelapse video from images with FFmpeg
I take a lot of timelapse shots with a GoPro camera. Usually, I do this with the camera’s photo setting instead of the video setting. That is because I find it easier to delete unwanted pictures from the series that way. It also simplifies selecting individual photos when I want that. But then I need a way to create a timelapse video from the photos easily.
Here is an FFmpeg one-liner that does the job:
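The one-liner itself is cut off in this excerpt; a typical recipe of this kind (glob pattern and frame rate are assumptions, not necessarily the post's exact command) looks like:

```shell
# Turn a folder of JPG photos into a 25 fps timelapse video.
ffmpeg -framerate 25 -pattern_type glob -i '*.JPG' \
  -c:v libx264 -pix_fmt yuv420p timelapse.mp4
```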
December 11, 2020
PhD disputation of Agata Zelechowska
I am happy to announce that Agata Zelechowska yesterday successfully defended her PhD dissertation during a public disputation. The dissertation is titled Irresistible Movement: The Role of Musical Sound, Individual Differences and Listening Context in Movement Responses to Music and has been carried out as part of my MICRO project at RITMO.
The dissertation is composed of five papers and an extended introduction. The abstract reads:
This dissertation examines the phenomenon of spontaneous movement responses to music.
November 6, 2020
Visual effect of the different tblend functions in FFmpeg
FFmpeg is a fantastic resource for doing all sorts of video manipulations from the terminal. However, it has a lot of features, and it is not always easy to understand what they all mean.
I was interested in understanding more about how the tblend function works. This is a function that blends successive frames in 30 different ways. To get a visual understanding of how the different operations work, I decided to try them all out on the same video file.
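A hedged sketch of how such a comparison can be scripted (the mode list here is only a small subset of the available tblend modes):

```shell
# Render one clip per blend mode so the visual effects can be compared.
for mode in average difference multiply screen lighten darken; do
  ffmpeg -i input.mp4 -vf "tblend=all_mode=$mode" "tblend_$mode.mp4"
done
```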
September 3, 2020
Embed YouTube video with subtitles in different languages
This is primarily a note-to-self post, but it could hopefully also be useful for others. At least, I spent a little too long figuring out how to embed a YouTube video with a specific language for the subtitles.
The starting point is that I had this project video that I wanted to embed on a project website:
However, then I found that you can add info about the specific language you want to use by adding this snippet after the URL:
March 20, 2020
Pixel array images of long videos in FFmpeg
Continuing my explorations of FFmpeg for video visualization, today I came across this very nice blog post on creating “pixel array” images of videos. Here the idea is to reduce every single frame to only one pixel and plot these next to each other along a line. Of course, I wanted to try this out myself.
I find that creating motiongrams or videograms is a good way to visualize the content of videos.
March 19, 2020
Convert MPEG-2 files to MPEG-4
This is a note to self, and could potentially also be useful to others in need of converting “old-school” MPEG-2 files into more modern MPEG-4 files using FFmpeg.
In the fourMs lab we have a bunch of Canon XF105 video cameras that record .MXF files with MPEG-2 compression. This is not a very useful format for other things we are doing, so I often have to recompress them to something else.
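A minimal sketch of such a recompression (codec and quality settings are assumptions, not the post's exact script):

```shell
# Recompress MPEG-2 video to H.264; MXF audio is typically PCM, so it is
# encoded to AAC for the MP4 container.
ffmpeg -i input.mxf -c:v libx264 -crf 18 -c:a aac output.mp4
```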
March 18, 2020
Simple tips for better video conferencing
Very many people are currently moving to video-based meetings. For that reason, I have written up some quick advice on how to improve your setup. This is based on my interview advice, but grouped differently.
Network
The first important thing is to have as good a network as you can. Video conferencing requires a lot of bandwidth, so even though your e-mail and regular browsing works fine, it may still not be sufficient for good video transmission.
March 15, 2020
Flattening Ricoh Theta 360-degree videos using FFmpeg
I am continuing my explorations of the great terminal-based video tool FFmpeg. Now I wanted to see if I could “flatten” a 360-degree video recorded with a Ricoh Theta camera. These cameras contain two fisheye lenses, capturing two 180-degree videos next to each other. This results in video files like the one I show a screenshot of below.
These files are not very useful to watch or work with, so we need to somehow “flatten” them into a more meaningful video file.
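Recent FFmpeg versions have a v360 filter for exactly this kind of remapping; a hedged sketch (the Theta's exact lens alignment may require additional v360 options):

```shell
# Remap the side-by-side dual-fisheye input to an equirectangular projection.
ffmpeg -i theta.mp4 -vf "v360=input=dfisheye:output=equirect" flattened.mp4
```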
February 21, 2020
Creating image masks from video file
As part of my exploration in creating multi-exposure keyframe image displays with FFmpeg and ImageMagick, I tried out a number of things that did not help solve the initial problem but still could be interesting for other things. Most interesting was the automagic creation of image masks from a video file.
I will use a contemporary dance video from the AIST Dance Video Database as an example:
The first step is to extract keyframes from the video file using this one-liner ffmpeg command:
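The excerpt cuts off before the command itself; a common keyframe-extraction recipe of this kind (not necessarily the post's exact one-liner) is:

```shell
# Export only the I-frames (keyframes) as numbered JPEG images.
ffmpeg -i input.mp4 -vf "select='eq(pict_type,I)'" -vsync vfr keyframe_%03d.jpg
```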
February 21, 2020
Creating multi-exposure keyframe image displays with FFmpeg and ImageMagick
While I was testing visualization of some videos from the AIST database earlier today, I wanted to also create some “keyframe image displays”. This can be seen as a way of doing multi-exposure photography, and should be quite straightforward to do. Still, it took me quite some time to figure out exactly how to implement it. It may be that I was searching for the wrong things, but in case anyone else is looking for the same, here is a quick write-up.
February 21, 2020
Visualizing some videos from the AIST Dance Video Database
Researchers from AIST have released an open database of dance videos, and I got very excited to try out some visualization methods on some of the files. This was also a good chance to test out some new functionality in the Musical Gestures Toolbox for Matlab that we are developing at RITMO. The AIST collection contains a number of videos. I selected one hip-hop dance video based on a very steady rhythmic pattern, and a contemporary dance video that is more fluid in both motion and music.
February 14, 2020
Testing simple camera and microphone setups for quick interviews
We just started a new run of our free online course Music Moves. Here we have a tradition of recording wrap-up videos every Friday, in which some of the course educators answer questions from the learners. We have recorded these in many different ways, from using high-end cameras and microphones to just using a handheld phone. We have found that using multiple cameras and microphones is too time-consuming, both in setup and editing.
December 27, 2019
Teaching with a document camera
How does an “old-school” document camera work for modern-day teaching? Remarkably well, I think. Here are some thoughts on my experience over the last few years.
The reason I got started with a document camera was that I felt the need for a more flexible setup for my classroom teaching. Conference presentations with limited time are better done with linear presentation tools, I think, since the slides help with the flow.
November 3, 2019
Converting MXF files to MP4 with FFmpeg
We have a bunch of Canon XF105 cameras at RITMO, which record MXF files. This is not a particularly useful file format (except for further processing). Since many of our recordings are just for documentation purposes, we often need to convert to MP4. Here I present two solutions for converting MXF files to MP4, both as individual files and as one combined file from a folder. These are shell scripts based on the handy FFmpeg.
October 23, 2019
Tips for doing your job interview over Skype
I have been interviewing a lot of people for various types of university positions over the years. Most often these interviews are conducted using a video-conferencing system. Here I provide some tips to help people prepare for a video-based job interview:
We (and many others) typically use Skype for interviews, not because it is the best system out there (of commercial platforms I prefer Zoom), but because it is the most widespread solution.
November 25, 2018
Reflecting on some flipped classroom strategies
I was invited to talk about my experiences with flipped classroom methodologies at a seminar at the Faculty of Humanities last week. Preparing for the talk got me to revisit my own journey of working towards flipped teaching methodologies. This has also involved explorations of various types of audio/video recording. I will go through them in chronological order.
Podcasting
Back in 2009–2011, I created “podcasts” of my lectures for a couple of semesters, such as in the course MUS2006 Music and Body Movements (which was at the time taught in Norwegian).
September 28, 2018
Musical Gestures Toolbox for Matlab
Yesterday I presented the Musical Gestures Toolbox for Matlab in the late-breaking demo session at the ISMIR conference in Paris.
The Musical Gestures Toolbox for Matlab (MGT) aims at assisting music researchers with importing, preprocessing, analyzing, and visualizing video, audio, and motion capture data in a coherent manner within Matlab.
Most of the concepts in the toolbox are based on the Musical Gestures Toolbox that I first developed for Max more than a decade ago.
June 18, 2018
Testing Blackmagic Web Presenter
We are rapidly moving towards the start of our new Master’s programme Music, Communication & Technology. This is a unique programme in that it is split between two universities (in Oslo and Trondheim), 500 kilometres apart. We are working on setting up a permanent high-quality, low-latency connection that will be used as the basis for our communication. But in addition to this permanent setup we need solutions for quick and easy communication.
May 18, 2018
Trim video files using FFmpeg
This is a note to self, and hopefully others, about how to easily and quickly trim videos without recompressing the file.
I often have long video recordings that I want to split or trim. Splitting and trimming are temporal transformations and should not be confused with the spatial transformation cropping. Cropping a video means cutting out parts of the image, and I have another blog post on cropping video files using FFmpeg.
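A minimal sketch of such a lossless trim (the times are arbitrary; note that stream-copied cuts snap to the nearest keyframe):

```shell
# -ss/-to select the time range; -c copy keeps both streams untouched.
ffmpeg -ss 00:01:00 -to 00:02:30 -i input.mp4 -c copy trimmed.mp4
```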
November 22, 2016
From Basic Music Research to Medical Tool
The Research Council of Norway is evaluating the research being done in the humanities these days, and all institutions were given the task of submitting cases of societal impact. Obviously, basic research is by definition not aiming at societal impact in the short run, and my research definitely falls into that category. Still, it is interesting to see that some of my basic research is, indeed, on the verge of making a societal impact in the sense that policy makers like to think about.
April 12, 2015
Simple video editing in Ubuntu
I have been using Ubuntu as my main OS for the past year, but have often relied on my old MacBook for doing various things that I haven’t easily figured out how to do in Linux. One of those things is to trim video files non-destructively. This is quite simple to do in QuickTime, although Apple now forces you to save the file with a QuickTime container (.mov) even though there is still only MPEG-4 compression in the file (h.
February 25, 2014
New department video
As I have mentioned previously, life has been quite hectic over the last year, becoming Head of Department at the same time as having my second daughter. So my research activities have slowed down considerably, and so has the activity on this blog.
When it comes to blogging, I have focused on building up my Head of Department blog (in Norwegian), which I use to comment on things happening in the Department as well as relevant (university) political issues.
February 25, 2014
New fourMs video
Not only do we have a new Department video, but we have also made a short video documentary about our fourMs group. It is in Norwegian (subtitles coming soon), but even if you do not understand the language, the video has lots of nice shots from the labs, and the background music is made by Professor Rolf Inge Godøy.
August 1, 2013
New publication: Non-Realtime Sonification of Motiongrams
Today I will present the paper Non-Realtime Sonification of Motiongrams at the Sound and Music Computing Conference (SMC) in Stockholm. The paper is based on a new implementation of my sonomotiongram technique, optimised for non-realtime use. I presented a realtime version of the sonomotiongram technique at ACHI 2012 and a Kinect version, the Kinectofon, at NIME earlier this year. The new paper presents the ImageSonifyer application and a collection of videos showing how it works.
July 19, 2013
Calculating duration of QuickTime movie files
I have been doing video analysis on QuickTime (.mov) files for several years, but have never really had the need to use the time information of the movie files. For a project, I now needed to get the timecode in seconds out of the files, and this turned out to be a little trickier than first expected. Hence this little summary for other people who may be in the same situation.
July 15, 2013
Documentation of the NIME project at Norwegian Academy of Music
From 2007 to 2011 I had a part-time research position at the Norwegian Academy of Music in a project called New Instruments for Musical Exploration, and with the acronym NIME. This project was also the reason why I ended up organising the NIME conference in Oslo in 2011.
The NIME project focused on creating an environment for musical innovation at the Norwegian Academy of Music, through exploring the design of new physical and electronic instruments.
June 26, 2013
Visualisations of a timelapse video
Yesterday, I posted a blog entry on my TimeLapser application, and how it was used to document the working process of the making of the sculpture Hommage til kaffeselskapene by my mother. The final timelapse video looks like this:
Now I have run this timelapse video through my VideoAnalysis application, to see what types of analysis material can come out of such a video.
The average image displays a “summary” of the entire video recording, somewhat similar to an “open shutter” in traditional photography.
June 25, 2013
Timelapser
I have recently started moving my development efforts over to GitHub, to keep everything in one place. Now I have also uploaded a small application I developed for a project by my mother, Norwegian sculptor Grete Refsum. She wanted to create a timelapse video of her making a new sculpture, “Hommage til kaffeselskapene”, for her installation piece Tante Vivi, fange nr. 24 127 Ravensbrück.
There is lots of timelapse software available, but none of it fitted my needs.
May 28, 2013
Kinectofon: Performing with shapes in planes
Yesterday, Ståle presented a paper on mocap filtering at the NIME conference in Daejeon. Today I presented a demo on using Kinect images as input to my sonomotiongram technique.
Title
Kinectofon: Performing with shapes in planes
Links
Paper (PDF) | Poster (PDF) | Software | Videos (coming soon)
Abstract
The paper presents the Kinectofon, an instrument for creating sounds through free-hand interaction in a 3D space. The instrument is based on the RGB and depth image streams retrieved from a Microsoft Kinect sensor device.
April 6, 2013
ImageSonifyer
Earlier this year, before I started as head of department, I was working on a non-realtime implementation of my sonomotiongram technique (a sonomotiongram is a sonic display of motion from a video recording, created by sonifying a motiongram). Now I finally found some time to wrap it up and make it available as an OSX application called ImageSonifyer. The Max patch is also available, for those that want to look at what is going on.
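The post does not spell out how the sonification itself works. One common mapping, and the one sketched below, is to treat each image row as the amplitude envelope of a sine oscillator, with vertical position mapped to frequency. This is only an illustrative NumPy sketch, not the ImageSonifyer/Max implementation; the function and parameter names are my own.

```python
import numpy as np

def sonify_image(img, sr=44100, duration=2.0, fmin=100.0, fmax=5000.0):
    """Sonify a greyscale image with a bank of sine oscillators.

    Each image row drives the amplitude of one oscillator; the row's
    vertical position maps (top = high) onto a frequency between fmin
    and fmax, and its pixel values are stretched over the duration as
    an amplitude envelope.

    img: HxW uint8 (or float 0-255) array.
    Returns a mono float32 signal normalised to [-1, 1].
    """
    h, w = img.shape
    n = int(sr * duration)
    t = np.arange(n) / sr
    freqs = np.linspace(fmax, fmin, h)  # top row gets the highest pitch
    x = np.linspace(0, w - 1, n)        # sample positions along each row
    out = np.zeros(n)
    for row, f in zip(img.astype(np.float64) / 255.0, freqs):
        env = np.interp(x, np.arange(w), row)  # stretch row over time
        out += env * np.sin(2 * np.pi * f * t)
    peak = np.abs(out).max()
    return (out / peak if peak > 0 else out).astype(np.float32)
```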
January 22, 2013
KinectRecorder
I am currently working on a paper describing some further exploration of the sonifyer technique and module that I have previously published on. The new thing is that I am now using the input from a Kinect device as the source material for the sonification, which also makes it possible to use the depth image as an element in the process.
To be able to create figures for the paper, I needed to record the input from a Kinect to a regular video file.
January 14, 2013
New publication: Some video abstraction techniques for displaying body movement in analysis and performance
Today the MIT Press journal Leonardo has published my paper entitled “Some video abstraction techniques for displaying body movement in analysis and performance”. The paper is a summary of my work on different types of visualisation techniques of music-related body motion. Most of these techniques were developed during my PhD, but have been refined over the course of my post-doc fellowship.
The paper is available from the Leonardo web page (or MUSE), and will also be posted in the digital archive at UiO after the 6 month embargo period.
January 8, 2013
New publication: Performing the Electric Violin in a Sonic Space
I am happy to announce that a paper I wrote together with Victoria Johnson has just been published in Computer Music Journal. The paper is based on the experiences that Victoria and I gained while working on the piece Transformation for electric violin and live electronics (see video of the piece below).
Citation
A. R. Jensenius and V. Johnson. Performing the electric violin in a sonic space. Computer Music Journal, 36(4):28–39, 2012.
January 2, 2013
Sverm video #4
The last of the four Sverm videos by Lavasir Nordrum has just been posted on Vimeo. The first short movie was titled Micromovements, followed by Microsounds and Excitation, and the last one is called Resonance. It has been exciting to work with the video medium in addition to the performances, and it has given a very different perspective on the project.
December 5, 2012
Sverm video #3
Video artist Lavasir Nordrum has just posted the third of four short movies created together with the Sverm group. The first short movie was titled Micromovements, and the second Microsounds. This month’s short movie is called Excitation, and focuses on the first half of an event or action. It will be followed by a short movie called Resonance, to be released on 1 January.
November 2, 2012
Sverm video #2
As I wrote about last month, the Sverm group has teamed up with video artist Lavasir Nordrum. The plan is that he will create four short and poetic videos documenting four of the main topics we have been working on in the Sverm project. The production plan for the videos is quite tight: we shoot content for the videos during a few hours in the middle of each month, and then Lavasir publishes the final video two weeks later.
October 10, 2012
Sverm video #1
For the last couple of years I have been involved in an artistic research project called Sverm, in which we investigate the artistic potential of bodily micromovements and microsound. We are currently working towards a series of intimate lab performances at the end of November.
As a side-project to the performances, we are also working with video artist Lavasir Nordrum, on the making of four short videos documenting the four main parts of the project: micromovement, microsound, excitation, resonance.
September 11, 2012
McLaren's Dots
I am currently working on some extensions to my motiongram-sonifyer, and came across this beautiful little film by Norman McLaren from 1940:
The sounds heard in the film are entirely synthetic, created by drawing in the sound-track part of the film. McLaren explained this in a 1951 BBC interview:
I draw a lot of little lines on the sound-track area of the 35-mm. film. Maybe 50 or 60 lines for every musical note.
September 5, 2012
Teaching in Aldeburgh
I am currently in beautiful Aldeburgh, a small town on the east coast of England, teaching at the Britten-Pears Young Artist Programme together with Rolf Wallin and Tansy Davies. This post is mainly to summarise the things I have been going through, and provide links for various things.
Theoretical stuff
My introductory lectures went through some of the theory of an embodied understanding of the experience of music. One aspect of this theory that I find very relevant for the development of interactive works is what I call action-sound relationships.
August 16, 2012
fourMs videos
Over the years, we have uploaded various videos of our fourMs lab activities to YouTube. Some of these videos were uploaded through a shared YouTube account, others by myself and other lab members. I just realised that a good solution for gathering all the different videos is to create a playlist and add all relevant videos there. It should then also be possible to embed this playlist in web pages, like below:
July 12, 2012
Paper #1 at SMC 2012: Evaluation of motiongrams
Today I presented the paper Evaluating how different video features influence the visual quality of resultant motiongrams at the Sound and Music Computing conference in Copenhagen.
Abstract
Motiongrams are visual representations of human motion, generated from regular video recordings. This paper evaluates how different video features may influence the generated motiongram: inversion, colour, filtering, background, lighting, clothing, video size and compression. It is argued that the proposed motiongram implementation is capable of visualising the main motion features even with quite drastic changes in all of the above mentioned variables.
June 25, 2012
Record videos of sonification
I got a question the other day about how it is possible to record a sonified video file based on my sonification module for Jamoma for Max. I wrote about my first experiments with the sonifyer module here, and also published a paper at this year’s ACHI conference about the technique.
It is quite straightforward to record a video file with the original video + audio using the jit.vcr object in Max.
August 5, 2011
Flickr introduces long photos
Flickr has opened up for uploading videos, or, rather, what they call “long photos”. As such, they are not trying to compete with YouTube or Vimeo, but rather making it possible to upload videos that are closer to a photograph than to a movie (i.e. something with a narrative). I like this approach, and it resonates with how I often record a video as if it were a photograph.
The difference between what I could call a photo video and a movie video can be seen as analogous to the difference between music composition/production and soundscaping.
June 17, 2011
Hurtigruten
One of the more bizarre TV programs ever may be the current screening of Hurtigruten by Norwegian public broadcaster NRK. Following the success of the screening of the train ride from Bergen to Oslo, they are now filming the entire (5+ days) journey of the boat trip from Bergen to Kirkenes.
Here is some info on how and why they are doing this, or you can just follow the journey live here.
August 27, 2010
Screen recording in QuickTime X
I just discovered that QuickTime X has built-in support for screen recording. I have been using iShowU for screen recordings for a while, and while it has the advantage of recording only a portion of the screen, the QT approach seems easier and quicker to work with. Short tutorial below:
August 9, 2010
Evaluating a semester of podcasting
Earlier this year I wrote a post about how I was going to try out podcasting during the course MUS2006 Musikk og bevegelse this spring semester. As I am preparing for new courses this fall, now is the time to evaluate my podcasting experience and decide whether I am going to continue doing this.
Why podcasting?
The first question I should ask myself is why I would be interested in setting up a podcast from my lectures.
July 2, 2010
New motiongram features
Inspired by the work Static no. 12 by Daniel Crooks that I watched at the Sydney Biennale a couple of weeks ago, I have added the option of scanning a single column in the jmod.motiongram% module in Jamoma. Here is a video that shows how this works in practice:
About motiongrams
A motiongram is a way of displaying motion (e.g. human motion) in the time-domain, somehow similar to how we are used to working with time-representations of audio (e.
May 12, 2010
NTNU PhD defense
Two weeks ago Lars Adde defended his PhD entitled Prediction of cerebral palsy in young infants: Computer-based assessment of general movements at NTNU in Trondheim. I have contributed to this research through the development of the General Movement Toolbox, a variant of my Musical Gestures Toolbox. He has used this toolbox to analyse video material of children with fidgety movements, with the aim of being able to predict cerebral palsy at an early stage.
July 17, 2008
Black box in the lab
Last week we started setting up a “black box” in the new lab space. It is great to finally have a more permanent motion lab set up that we can use for various types of observation studies and recording sessions.
June 17, 2008
AudioVideoAnalysis
To allow everyone to watch their own synchronised spectrograms and motiongrams, I have made a small application called AudioVideoAnalysis.
Download AudioVideoAnalysis for OS X (8MB)

It currently has the following features:

- Draws a spectrogram from any connected microphone
- Draws a motiongram/videogram from any connected camera
- Press the escape button to toggle fullscreen mode

Built with Max/MSP by Cycling ’74 on OS X 10.5. I will probably make a Windows version at some point, but haven’t gotten that far yet.
May 15, 2008
Sonification of Traveling Landscapes
I just heard a talk called “Real-Time Synaesthetic Sonification of Traveling Landscapes” (PDF) by Tim Pohle and Peter Knees from the Department of Computational Perception (great name!) in Linz. They have made an application that creates music from a moving video camera. The implementation is based on grabbing a one-pixel-wide column from the video, plotting these columns, and sonifying the resulting image. Interestingly enough, the images they get out of this (see below) are very close to the motiongrams and videograms I have been working on.
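The column-grabbing idea they describe is easy to reproduce. Here is a minimal NumPy sketch (my own illustration, not their implementation): take a one-pixel-wide column from each frame and stack the columns side by side, so that time runs along the horizontal axis.

```python
import numpy as np

def slit_scan(frames, column=None):
    """Stack one pixel-wide column from each frame into an image.

    frames: iterable of HxWx3 uint8 arrays.
    column: index of the column to grab; defaults to the centre column.
    Returns an H x n_frames x 3 array (time runs left to right).
    """
    cols = []
    for frame in frames:
        c = frame.shape[1] // 2 if column is None else column
        cols.append(frame[:, c, :])  # one HxC slice per frame
    return np.stack(cols, axis=1)
```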
November 14, 2007
GeoVision MPEG4 Codec
I recently received a video file with some material I am supposed to analyse. The problem was that I couldn’t figure out what type of codec was used. VLC told me that it uses a codec called GMP4. After some research, I have found that this means an MPEG-4 codec developed by GeoVision. I have found a Windows version of this codec, but nothing for OS X. If anyone has any ideas, please shout out.
October 15, 2007
Flash Movie Conversion on OS X
Looking for a solution to make Flash movies on OS X, I came across this nice tutorial based on ffmpegX. In terms of video quality I prefer to create videos with H.264 compression using MPEG Streamclip, but since Flash seems to be the de facto standard on the web these days, I will try to use this for an upcoming project.
September 14, 2007
Video broadcasting
Vegard mentioned QuickTime Broadcaster in a blog entry yesterday. While QT broadcaster is certainly easy to set up and use, I have found it even easier to use some of the video broadcasting solutions in Max/MSP/Jitter. The jit.qt.broadcast object allows for QT streaming, but I have found the jit.broadcast object using RTSP to be somewhat more stable. Using Jitter also opens for all sorts of image manipulation, text overlays etc. as we are used to in the Max/MSP world.
April 5, 2007
Choosing the Right Video Format
The discussion about video standards for live processing has been summarised as:
- Codec: Motion JPEG (for interlaced footage) or Photo JPEG.
- Compression ratio/quality: Quality 80 is a decent baseline for JPEG, though you can crank it as high as 97 to improve quality.
- Keyframes: Encode a keyframe on every frame so it’s “scratch-ready”.
- Alpha channels: For video containing alpha channels, PNG is the format of choice.

Sounds like more or less the same conclusion that has been reached in the Jitter forum, where this question comes up once in a while.
January 29, 2007
Optical Illusions and Visual Phenomena
Talking about optical illusions, here’s a bunch of them. Great that many of them are implemented in JavaScript, so that the effects can be changed.
Video: http://youtube.com/w/?v=_dIya1aJJKA
November 1, 2006
Motiongrams
Challenge
Traditional keyframe displays of videos are not particularly useful when studying single-shot studio recordings of music-related movements, since they mainly show static postural information and no motion.
Using motion images of various kinds helps in visualizing what is going on in the image. Below can be seen (from left): motion image, with noise reduction, with edge detection, with “trails” and added to the original image.
Making Motiongrams
We are used to visualizing audio with spectrograms, and have been exploring different techniques for visualizing music-related movements in a similar manner.
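In rough terms, a motiongram can be computed by frame differencing (giving the motion images described above) followed by collapsing each motion image over one spatial axis. The following NumPy sketch is my own simplified illustration, not the Musical Gestures Toolbox implementation:

```python
import numpy as np

def motiongram(frames, axis="horizontal", threshold=10):
    """Compute a simple motiongram from a greyscale frame sequence.

    Each motion image is the absolute difference between consecutive
    frames, thresholded to suppress noise. Collapsing every motion
    image across one spatial axis gives one row per frame; stacking
    the rows over time yields the motiongram.

    frames: iterable of HxW uint8 arrays (greyscale).
    axis: 'horizontal' collapses across width, so time runs downwards.
    Returns an (n_frames - 1) x H float array for 'horizontal'.
    """
    frames = [f.astype(np.int16) for f in frames]  # allow negative diffs
    rows = []
    for prev, cur in zip(frames, frames[1:]):
        motion = np.abs(cur - prev)
        motion[motion < threshold] = 0  # crude noise reduction
        # collapse across width (axis=1) or height (axis=0)
        rows.append(motion.mean(axis=1 if axis == "horizontal" else 0))
    return np.stack(rows)
```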
November 1, 2006
Sony HDR-SR1
An interesting review of the new Sony HDR-SR1 HDD-based HD video camera. Except for the fact that there is no decent video software to edit this type of video format, and the lack of support for OS X, this looks like a great camera.
August 18, 2006
Lasse - Hyperactive
Lasse - Hyperactive is a very simple and low-cost videomusic production, but also very powerful and funny.
August 18, 2006
Moving towards HDD video cameras
I have been using the JVC Everio GZMC500, one of the first hard-drive-based video cameras with a decent price tag and OK features, for more than half a year, and my general impressions are very positive.
Positive things:

- No tapes!!!
- 3CCD, excellent for recording in dark concert/lecture halls
- Very small and handy

Negative things:

- No microphone/line input (this was a major drawback with this model, but luckily the built-in stereo microphone is not too bad…)
- Stores files in an MPEG-2 format, which is probably good for writing directly to DVD, but a hassle to work with on a computer (at least on Macs), since they have to be re-encoded to something that is more easily playable in QuickTime.
June 21, 2006
ICMC papers
My paper entitled “Using motiongrams in the study of musical gestures” was accepted to ICMC 06 in New Orleans. The abstract is:
Navigating through hours of video material is often time-consuming, and it is similarly difficult to create good visualization of musical gestures in such a material. Traditional displays of time-sampled video frames are not particularly useful when studying single-shot studio recordings, since they present a series of still images and very little movement related information.
April 24, 2006
Visual Scratch
Jesse Kriss has developed Visual Scratch, a realtime visualization of scratch DJ performance, built using Processing, Max/MSP, Ms. Pinky, and MaxLink.
April 22, 2006
Palindrome
Found some interesting dance/performance examples at the web site of German/American performance company Palindrome. They are also developing the EyeCon video software for interactive performance.
March 29, 2006
Daniel Rozin Wooden Mirrors
Daniel Rozin has made some Wooden Mirrors from various materials. Any person standing in front of one of these pieces is instantly reflected on its surface. The mechanical mirrors all have video cameras, motors and computers on board, and produce a soothing sound as the viewer interacts with them.
March 27, 2006
MøB
I’m participating in a workshop in Bergen, and got to meet Gisle Frøysland, who is developing MøB, software for installations and realtime manipulation of digital media in GNU/Linux-based networks. I am looking forward to seeing it in action during the course of the workshop.
March 24, 2006
Fogscreen
The FogScreen is a new invention which makes objects seem to appear and move in thin air! It is a screen you can walk through! The FogScreen is created using a suspended fog-generating device; there is no frame around the screen. The installation is easy: just replace the conventional screen with the FogScreen. You don’t need to change anything else; it works with standard video projectors. The fog used is dry, so it doesn’t make you wet even if you stay under the FogScreen device for a long time.
March 17, 2006
sCrAmBlEd?HaCkZ!
sCrAmBlEd?HaCkZ! is a Realtime-Mind-Music-Video-Re-De-Construction-Machine. It is a conceptual software which makes it possible to work with samples in a completely new way by making them available in a manner that does justice to their nature as concrete musical memories.
February 20, 2006
dbv
dbv is a customizable VJ tool built with Max/MSP/Jitter. Simple, but with some nice implementation details. I particularly like the way it displays video thumbnails and adds extra pages if you have more videos than there is space for in the preview pane.
February 20, 2006
traer.physics
traer.physics is a particle system physics engine for the Processing programming environment. The user community of Processing seems to be growing rapidly these days, and from my few tests of the language it seems to be stable and efficient.
Would be interesting to see if it is possible to combine Processing with Max/MSP/Jitter. OSC is one option, but it would be nice if someone made a wrapper so that it could be possible to run Processing from a Max object.
February 5, 2006
Video Annotation Software
A short overview of various video annotation software:
- Anvil by Michael Kipp is a Java-based program for storing several layers of annotations, like a text sequencer. It only supports AVI files. Intended for gesture research (understood as gestures used when talking).
- Transana from University of Wisconsin, Madison, is developed mainly as a tool for transcribing and describing video and audio content. Seems like it is mainly intended for behavioural studies.
January 15, 2006
Converting MPEG-2 .MOD files
I have been struggling with figuring out the easiest way of converting MPEG-2 .MOD files coming out of a JVC Everio HD camera to something else, and finally found a good solution in Squared 5 - MPEG Streamclip which allows for converting these files to more or less all codecs that are available on the system. It is also a good idea to rename the .MOD files to .M2V or .
December 14, 2005
MP4 to WMV
I have been struggling with creating video files that are easily playable on both OS X and Windows. Of course it is possible to make an avi with some “ancient” video codec, but that is not very tempting when the new H.264 codec is so nice. Of course, it would be nice if Windows users could use QuickTime, but for those who decline to do so, I found an easy way for converting MPEG-4 files to WMV.
November 29, 2005
JVC GZ-MC500
I have been thinking about buying a new video camera. As I am starting to get very tired of working with DV tapes, I was curious to check out some of the new HD cameras. It seems a bit early still, as I guess the market will change next year, although this JVC GZ-MC500 camcorder looks very sweet.
A 4GB microdrive seems too small, though, and it is unfortunately disqualified since it doesn’t sport a microphone input.
November 28, 2001
Master exam concert
Last week I performed my master exam concert at the Department of Music and Theatre, University of Oslo. The program consisted of improvisations for piano and live electronics. Different MIDI, audio, and video processing techniques were used. Here I describe the different pieces.
Performa
It is incredible how many exciting sounds one can get from a piano, and mallets are a nice change from playing on the keys. The computer helps with temporal adjustments and background sounds.