Splitting audio files in the terminal

I have recently played with AudioStellar, a great tool for “sound object”-based exploration and musicking. It reminds me of CataRT, an excellent tool for concatenative synthesis. I used CataRT quite a lot previously, for example, in the piece Transformation. However, after I switched from OSX and Max to Ubuntu and PD, CataRT was no longer an option. So I got very excited when I discovered AudioStellar some weeks ago. It is lightweight, cross-platform, and has some novel features that I would like to explore more in the coming weeks.

Samples and sound objects

In today’s post, I will describe how to prepare short audio files to load into AudioStellar. The software is based on loading a collection of “samples”. I always find the term “sample” confusing. In digital signal processing terms, a sample is literally one sample: a number describing the signal’s amplitude at a specific moment in time. In music production, however, a “sample” usually refers to a fairly short sound file, often in the range of 0.5 to 5 seconds. This is what, in the tradition of the composer-researcher Pierre Schaeffer, would be called a sound object. So I prefer that term when referring to coherent, short snippets of sound.

AudioStellar relies on loading short sound files. They suggest that for the best experience, one should load files that are shorter than 3 seconds. I have some folders with such short sound files, but I have many more folders with longer recordings that contain multiple sound objects in one file. The beauty of CataRT was that it would analyse such long files and identify all the sound objects within the files. That is not possible in AudioStellar (yet, I hope). So I have to chop up the files myself. This can be done manually, of course, and I am sure some expensive software also does the job. But this was a good excuse to dive into SoX (Sound eXchange).

SoX for sound file processing

SoX is branded as “the Swiss Army knife of audio manipulation”. I have tried it a couple of times, but I usually rely on FFmpeg for basic conversion tasks. FFmpeg is mainly targeted at video applications, but it handles many audio-related tasks well. Converting from .AIFF to .WAV or compressing to .MP3 or .AAC can easily be handled in FFmpeg. There are even some basic audio visualization tools available in FFmpeg.
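For example, such conversions are one-liners in FFmpeg. The file names below are just placeholders, and the exact codec flags depend on how your FFmpeg build is configured (MP3 encoding, for instance, requires libmp3lame):

ffmpeg -i input.aiff output.wav
ffmpeg -i input.aiff -b:a 192k output.mp3
ffmpeg -i input.aiff -c:a aac -b:a 192k output.m4a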

However, for some more specialized audio jobs, SoX comes in handy. I find that the man pages are not very intuitive, and there are relatively few examples of its usage online, at least compared to the numerous FFmpeg examples. I was therefore happy to find the nice blog of Mads Kjelgaard, who has written a short set of SoX tutorials. It was the tutorial on how to remove silence from sound files that caught my attention.

Splitting sound files based on silence

The task is to chop up long sound files containing multiple sound objects. The description of SoX’s silence function is somewhat cryptic. In addition to the above-mentioned blog post, I also came across another blog post with some more examples of how the SoX silence function works. And lo and behold, one of the example scripts managed to very nicely chop up one of my long sound files of bird sounds:

sox birds_in.aif birds_out.wav silence 1 0.1 1% 1 0.1 1% : newfile : restart
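To unpack the arguments, as far as I understand them: the first triplet (1 0.1 1%) tells SoX to start writing output once it detects at least 0.1 seconds of audio above a 1% amplitude threshold, and the second triplet tells it to stop again once 0.1 seconds of audio fall below that threshold. The : newfile : restart part then writes each non-silent chunk to its own numbered file (birds_out001.wav, birds_out002.wav, and so on) and restarts the effect chain.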

The result is a folder of short sound files, each containing a sound object. Note that I started with an .AIFF file but converted it to .WAV along the way since that is the preferred format of AudioStellar.

SoX managed to quickly split up a long sound file of bird chirps into individual files, each containing one sound object.

To scale this up a bit, I made a small script that will do the same thing on a folder of files:

#!/bin/bash

# Split every AIFF file in the current folder on silence,
# writing the chunks as numbered .wav files
for i in *.aif; do
    # strip the .aif extension to get the base name
    name="${i%.aif}"
    sox "$i" "${name}.wav" silence 1 0.1 1% 1 0.1 1% : newfile : restart
done
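If you save this as, say, split_on_silence.sh (the name is just an example), you can make it executable and run it from the folder containing the AIFF files:

chmod +x split_on_silence.sh
./split_on_silence.sh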

And this managed to chop up 20 long sound files into approximately 2000 individual sound files.

The batch script split up 20 long sound files into approximately 2000 short sound files in just a few seconds.

There were some very short sound files and some very long. I could have tweaked the script a little to remove these. However, it was quicker to sort the files by file size and delete the smallest and largest files. That left me with around 1500 sound files to load into AudioStellar. More on that exploration later.
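If you wanted to prune the files programmatically instead, a small sketch like this could work. The 0.2 and 5 second limits are arbitrary examples, and soxi is SoX’s file-info utility:

for f in *.wav; do
    dur=$(soxi -D "$f")   # duration in seconds
    # delete clips that are shorter than 0.2 s or longer than 5 s
    if (( $(echo "$dur < 0.2" | bc -l) )) || (( $(echo "$dur > 5" | bc -l) )); then
        rm "$f"
    fi
done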

Loading 1500 animal sound objects into AudioStellar.

All in all, I was happy to (re)discover SoX and will explore it more in the future. The settings above worked well for sound recordings with clear silent parts. Some initial testing on more complex sound recordings was not equally successful, so understanding how to tweak the settings will be important for future usage.

INTIMAL Documentary

It feels fitting to share this new short documentary about the project INTIMAL: Interfaces for Relational Listening – Body, Memory, Migration, Telematics today, on 8 March. The project ran from 2017 to 2019 at RITMO, under the direction of sound artist Ximena Alarcon. It was funded by a Marie Sklodowska-Curie grant from the EU, and I was fortunate enough to mentor the project.

The main aim of INTIMAL was to explore the body as an “interface” for keeping and transforming the memory of place in migratory contexts. This was done through the development and exploration of a physical-virtual “embodied system” for relational listening. The short documentary excellently describes the project and explains its outcomes.

Some more information about the project from the INTIMAL web page:

The project invites people to listen to their migrations, in order to expand their sense of place and sense of presence. In its first stage, Ximena proposes two questions: 1) how the body becomes an interface that keeps memory of place, and 2) how to improvise and transmit the experience of an embodied migratory journey using non-screen based interfaces. As a case study, she involved Colombian women who have migrated to European countries, in the exploration of their migratory journeys, via listening, improvised voice and body movement, in co-located and telematic settings. She used Deep Listening practice and Embodied Music Cognition methods to develop the INTIMAL System: a physical-virtual “embodied” system for Relational Listening, to be used in telematic sonic performance, in the context of human migration.

Visualising a Bach prelude played on Boomwhackers

I came across a fantastic performance of a Bach prelude played on Boomwhackers by Les Objets Volants.

It is really incredible how they manage to coordinate the sticks and make it into a beautiful performance. Given my interest in the visual aspects of music performance, I reached for the Musical Gestures Toolbox to create some video visualisations.

I started with creating an average image of the video:

Average image of the video.

This image is not particularly interesting. The performers moved around quite a bit, so the average image mainly shows the stage. An alternative spatial summary is the creation of a keyframe history image of the video file. This is created by extracting the keyframes of the video (approximately 50 frames) and combining these into one image:

Keyframe history image.
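The toolbox handles the keyframe extraction internally, but if you want to experiment with something similar on the command line, FFmpeg can pull out the keyframes (I-frames) of a video, and ImageMagick’s montage tool can tile them into a single image. The file names and the 10-column tiling below are just examples:

ffmpeg -i performance.mp4 -vf "select='eq(pict_type,I)'" -vsync vfr keyframe_%03d.png
montage keyframe_*.png -tile 10x -geometry +0+0 keyframe_history.png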

The keyframe history image summarizes how the performers moved around on stage and shows the spatial distribution of activity over time. But to get more into the temporal distribution of motion, we need to look at a spatiotemporal visualization. This is where motiongrams are useful:

Motiongram of vertical motion (time from left to right)
Motiongram of horizontal motion (time from top to bottom)

If you click on the images above, you can zoom in to look at the visual beauty of the performance.

Analyzing a double stroke drum roll

Yesterday, PhD fellow Mojtaba Karbassi presented his research on impedance control in robotic drumming at RITMO. I will surely get back to discussing more of his research later. Today, I wanted to share the analysis of one of the videos he showed. Mojtaba is working on developing a robot that can play a double stroke drum roll. To explain what this is, he showed this video he had found online, made by John Wooton:

The double stroke roll is a standard technique for drummers, but not everyone manages to perform it as evenly as in this example. I was eager to have a look at the actions in a little more detail. We are currently beta-testing the next release of the Musical Gestures Toolbox for Python, so I thought this video would be a nice test case.

Motion video

I started the analysis by extracting the part of the video where he is showing the complete drum roll. Next, I generated a motion video of this segment:

This is already fascinating to look at. Since the background is removed, only the motion is visible. Obviously, the framerate of the video cannot fully capture the speed at which he plays. I was therefore curious about the level of detail I could achieve in the further analysis.

Audio visualization

Before delving into the visualization of the video file, I made a spectrogram of the sound:

If you are used to looking at spectrograms, you can quite clearly see the change in frequency as the drummer is speeding up and then slowing down again. However, a tempogram of the audio is even clearer:

Here you can really see the change in both the frequency and the onset strength. The audio is sampled at a much higher frequency (44.1 kHz) than the video (25 fps). Is it possible to see some of the same effects in the motion?

Motiongrams

I then moved on to create a motiongram of the video:

There are two problems with this motiongram. First, the recording is composed of alternating shots from two different camera angles, and these changes between shots can clearly be seen in the motiongram (marked with Camera 1 and 2). Second, this horizontal motiongram only reveals the vertical motion in the video image. Since we are averaging over each row in the image, the left-hand and right-hand motion end up on top of each other. For such a recording, it is therefore more relevant to look at the vertical motiongram, which shows the horizontal motion:

In this motiongram, we can more clearly see the patterns of each hand. Still, we have the problem of the alternating shots. If we “zoom” in on the part called Camera 2b, it is possible to see the evenness of the motion in the most rapid part:

I also find it fascinating to “zoom” in on the part called Camera 2c, which shows the gradual slow-down of motion:

Finally, let us consider the slowest part of the drum roll (Camera 1d):

Here it is possible to see the beauty of the double strokes very clearly.

Convert between video containers with FFmpeg

In my ever-growing collection of smart FFmpeg tricks, here is a way of converting from one container format to another. I will convert from a QuickTime (.mov) file to a standard MPEG-4 (.mp4) file, but the recipe should work for other container formats too.

If you came here to just see the solution, here you go:

ffmpeg -i infile.mov -acodec copy -vcodec copy outfile.mp4

In the following I will explain everything in a little more detail.

Container formats

One of the confusing things about video files is that they have both a container and a compression format. The container is usually what determines the file suffix. Apple introduced the .mov format for QuickTime files, and Microsoft used to use .avi files.

Nowadays, there seems to be a convergence towards using MPEG containers and .mp4 files. However, both Apple and Microsoft software (and others) still output other formats. This is confusing and can also lead to various playback issues. For example, many web browsers are not able to play these formats natively.

Compression formats

The compression format denotes how the video data is organized inside a container. Here, too, there are many different formats. The most common today is to use H.264 for video and AAC for audio. These are both part of the MPEG-4 standard and can be embedded in .mp4 containers. However, both H.264 and AAC can also be embedded in other containers, such as .mov and .avi files.

The important thing to notice is that both .mov and .avi files may contain H.264 video and AAC audio. In those cases, the content of such files is identical to that of a .mp4 file. But since the container is different, it may still be unplayable in certain software. That is why I would like to convert from one container format to another. In practice, that means converting from .mov or .avi to .mp4 files.
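If you just want a quick overview of which codecs are inside a given file, ffprobe (which ships with FFmpeg) can list them. The file name here is just a placeholder:

ffprobe -v error -show_entries stream=codec_type,codec_name -of default=noprint_wrappers=1 infile.mov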

Lossless conversion

There are many ways of converting video files. In most cases, you would end up with a lossy conversion. That means that the video content will be altered. The file size may be smaller, but the quality may also be worse. The general rule is that you want to compress a file as few times as possible.

For all sorts of video conversion/compression jobs, I have ended up turning to FFmpeg. If you haven’t tried it already, FFmpeg is a collection of tools for doing all sorts of audio/video manipulations in the terminal. Working in the terminal may be intimidating at first, but you will never look back once you get the hang of it.

Converting a file from .mov to .mp4 is as simple as typing this little command in a terminal:

ffmpeg -i infile.mov outfile.mp4

This will change from a .mov container to a .mp4 container, which is what we want. But it will also re-compress the video and audio using default settings. That is why it is always smart to look at the content of your original file before converting it. You can do this by typing:

ffmpeg -i infile.mov

For my example file, this returns the following metadata:

  Metadata:
    major_brand     : qt  
    minor_version   : 0
    compatible_brands: qt  
    creation_time   : 2016-08-10T10:47:30.000000Z
    com.apple.quicktime.make: Apple
    com.apple.quicktime.model: MacBookPro11,1
    com.apple.quicktime.software: Mac OS X 10.11.6 (15G31)
    com.apple.quicktime.creationdate: 2016-08-10T12:45:43+0200
  Duration: 00:00:12.76, start: 0.000000, bitrate: 5780 kb/s
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1844x1160 [SAR 1:1 DAR 461:290], 5243 kb/s, 58.66 fps, 60 tbr, 6k tbn, 50 tbc (default)
    Metadata:
      creation_time   : 2016-08-10T10:47:30.000000Z
      handler_name    : Core Media Video
      encoder         : H.264
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 269 kb/s (default)
    Metadata:
      creation_time   : 2016-08-10T10:47:30.000000Z
      handler_name    : Core Media Audio

There is quite a lot of information there, so we need to look for the important stuff. The first line we want to look for is the one with information about the video content:

Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1844x1160 [SAR 1:1 DAR 461:290], 5243 kb/s, 58.66 fps, 60 tbr, 6k tbn, 50 tbc (default)

Here we can see that this .mov file contains video that is already compressed with H.264. Another thing we can see is that it uses a non-standard frame size (1844×1160). The bit rate of the file is 5243 kb/s, which says something about how large the file will be in the end. It is also interesting to see that it uses a framerate of 58.66 fps, which is a bit odd.

Similarly, we can look at the content of the audio stream of the file:

Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 269 kb/s (default)

Here we can see that the audio is already compressed with AAC at a standard sampling rate of 44.1 kHz and a somewhat non-standard bit rate of 269 kb/s.

The main point of investigating the file before we do the conversion is to avoid re-compressing the content of the file. After all, the content is already in the right formats (H.264 and AAC) even though it is in an unwanted container (.mov).

Today’s little trick is how to convert from one container format to another without modifying the content of the file, only the container. That can be achieved with the command shown at the top:

ffmpeg -i original.mov -acodec copy -vcodec copy outfile.mp4
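As far as I can tell, the shorthand -c copy does the same thing, copying all streams (video, audio, and any subtitle or data streams) without re-encoding:

ffmpeg -i original.mov -c copy outfile.mp4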

There are several benefits of doing it this way:

  1. Quality. It avoids an unnecessary re-compression, which would only degrade the content.
  2. Preservation. The pixel dimensions, sampling rates, etc. of the original are kept. Most video software will impose standard settings for these, and since I often work with various types of non-standard video files, it is nice to preserve this information.
  3. Speed. Since no re-compression is needed, the content is only copied from one container to another. This is much, much faster than re-compressing it.

All in all, this long explanation of a short command may help to improve your workflows and save some time.