Add fade-in and fade-out programmatically with FFmpeg

There is always a need to add fade-in and fade-out to audio tracks. Here is a way of doing it for a bunch of video files. It may come in handy with the audio normalization script I have shown previously. That script is based on continuously normalizing the audio, which may result in some noise in the beginning and end (because there is little/no sound in those parts, hence they are normalized more).

It is easy to add a fade-in to the beginning of a file using FFmpeg’s afade filter. From the documentation, you can do a 15-second fade-in like this:
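Sketching from the afade filter documentation, with placeholder file names:

```shell
ffmpeg -i input.mp4 -c:v copy -af "afade=t=in:d=15" output.mp4
```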


And a 25-second fade-out like this:
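Again sketching from the afade documentation; the file names and the start time (st=35, for a 60-second clip) are placeholder values:

```shell
ffmpeg -i input.mp4 -c:v copy -af "afade=t=out:st=35:d=25" output.mp4
```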


Unfortunately, the latter requires that you specify when to start the fade-out. That doesn’t work well in general, and particularly not for batch processing.

A neat trick

Searching for solutions, I found a neat trick that solved the problem. First, you create the normal fade-in. Then you make the fade-out by reversing the audio stream, applying a fade-in, and then reversing again. The whole thing looks like this:

ffmpeg -i input.mp4 -c:v copy -af "afade=d=5, areverse, afade=d=5, areverse" output.mp4

A hack, but it works like a charm! And you don’t need to re-encode the video (hence the -c:v copy option above).

Putting it together

If you want to run this on a folder of files and run a normalization in the same go (so you avoid recompressing more than once), then you can use this bash script:


#!/bin/bash
shopt -s nullglob
for i in *.mp4 *.MP4 *.mov *.MOV *.flv *.webm *.m4v; do
   name="${i%.*}"
   ffmpeg -i "$i" -c:v copy -af "loudnorm=I=-16:LRA=11:TP=-1.5,afade=d=5,areverse,afade=d=5,areverse" "${name}_norm.mp4"
done

Save, run, and watch the magic!

Video visualizations of mountain walking

After exploring some visualizations of kayaking, I was eager to see how a similar approach could work for walking. On a trip to the Norwegian mountains, specifically at Haugastøl, situated halfway between Oslo and Bergen, I strapped a GoPro Hero 10 Black on my chest and walked up and down a nearby hill called Storevarden. The walk was approximately 25 minutes up and down, and a fast-forward version of the video can be seen here:

What can one get from the audio and video of such a trip? Here are some results generated with various functions from the Musical Gestures Toolbox for Python.

Static visualizations

The first trial was to create some static visualizations from the video recording.

A keyframe image display shows nine sampled images from the video. The first ones mainly show the path since I was leaning forward while walking upward, and the last ones show the scenery.
An average image of the whole video does not tell much in this case, and I guess it shows that (on average) I looked up most of the time. Hence the horizon can be seen toward the bottom of the image.

Since the average image is not particularly interesting in this case, it may be better to create a history video that averages images over a shorter period, such as in this video:

A history video averages over several seconds of video footage.

Still quite shaky, but it creates an interesting soft-focus rendition of the video. This may resemble how I perceived the scenery as I walked up and down.
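The sliding-average idea behind a history video can be sketched in a few lines of numpy. This is a hypothetical sketch of the technique, not the toolbox's actual implementation:

```python
import numpy as np
from collections import deque

def history_video(frames, window=30):
    """Yield frames where each output frame is the mean of up to the last
    `window` input frames -- a sliding-average "history" effect."""
    buf = deque(maxlen=window)
    for frame in frames:
        buf.append(frame.astype(np.float32))
        # average whatever is in the buffer so far
        yield (sum(buf) / len(buf)).astype(np.uint8)
```

Feeding it frames decoded with any video library (e.g. OpenCV) and re-encoding the output would produce the soft-focus effect shown above.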


Better visualizations, then, are the videograms, which give more information about the spatiotemporal features of the video recording.

A horizontal videogram of the 25-minute walking sequence reveals the spatiotemporal differences in the recording: first walking upward facing the ground, then having a short break on the top, and then walking downward facing the scenery.
A vertical videogram is less interesting in this case.


The videograms are based on collapsing the original images in the video sequence. Motiongrams, on the other hand, collapse the motion image sequence, clearly showing what changed between frames.

A horizontal motiongram reveals the same information as the videogram and clearly shows the break I took (the black part in the middle).
A vertical motiongram is not particularly relevant.
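The collapsing idea behind both displays can be sketched in numpy. The function names here are my own, and this is a simplified sketch rather than the toolbox's implementation:

```python
import numpy as np

def videogram(frames, axis="horizontal"):
    """Collapse each frame to a single line and stack the lines over time.

    A horizontal videogram averages over image rows, leaving one line of
    width W per frame; time then runs along the stacking dimension.
    """
    collapse_axis = 0 if axis == "horizontal" else 1
    lines = [f.mean(axis=collapse_axis) for f in frames]
    return np.stack(lines).astype(np.uint8)

def motiongram(frames, axis="horizontal", threshold=10):
    """Same idea, but collapse the frame-difference (motion) images instead."""
    collapse_axis = 0 if axis == "horizontal" else 1
    lines, prev = [], None
    for f in frames:
        g = f.astype(np.int16)
        if prev is not None:
            diff = np.abs(g - prev)
            diff[diff < threshold] = 0  # suppress low-level noise
            lines.append(diff.mean(axis=collapse_axis))
        prev = g
    return np.stack(lines).astype(np.uint8)
```

Stationary content averages out in the motiongram, which is why only what changed between frames (my motion) remains visible.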

Audio analysis

What can one get out of the audio recording of walking? The waveform does not tell much, except that the average levels look higher in the second half (where I was walking down).

A waveform of the audio that I recorded during the 25-minute walking.
The sonogram shows a lot of energy throughout the spectrum, and my break at the top can be seen a little over halfway through. A peculiar black line at 8.7 kHz has to come from the GoPro, and the camera also cuts all sound above approximately 13 kHz.
The tempogram also reveals the break in the middle and estimates the tempo of my walking at almost 120 BPM.

It is fascinating how the estimated tempo of my walking was almost 120 BPM, which happens to be similar to the 2 Hz frequency found in many studies of walking and everyday activities. It will be interesting to try a similar approach for other walking videos.
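As an illustration of the idea behind such tempo estimates, a dominant walking tempo can be read off the autocorrelation of the signal's amplitude envelope. This is a rough sketch of the principle, not the toolbox's tempogram implementation:

```python
import numpy as np

def estimate_tempo(signal, sr, min_bpm=60, max_bpm=180):
    """Estimate a dominant tempo in BPM from the autocorrelation
    of the signal's amplitude envelope."""
    # smooth the rectified signal with a 50 ms moving average
    win = max(1, int(0.05 * sr))
    env = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
    env = env - env.mean()
    # autocorrelation for non-negative lags
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    # search only lags that correspond to plausible tempi
    min_lag = int(sr * 60 / max_bpm)
    max_lag = int(sr * 60 / min_bpm)
    lag = min_lag + int(np.argmax(ac[min_lag:max_lag]))
    return 60.0 * sr / lag
```

A regular 2 Hz footstep pattern produces an autocorrelation peak at a lag of half a second, which maps to the 120 BPM figure mentioned above.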

Removing audio hum using a highpass filter in FFmpeg

Today, I recorded Sound Action 194 – Rolling Dice as part of my year-long sound action project.

The idea has been to do as little processing as possible to the recordings. That is because I want to capture sounds and actions as naturally as possible. The recorded files will also serve as source material for both scientific and artistic explorations later. For that reason, I only trim the recordings non-destructively using FFmpeg.

Recording the dice example, however, I noticed an unfortunate low-frequency hum in the original recording:

The original recording has an unfortunate low-frequency hum.

I like the rest of the recording, so I thought it would be a pity to skip publishing this sound action only because of the hum. So I decided to break my rule of not processing the sound and apply a simple highpass filter to remove the noise.

Fortunately, FFmpeg, as always, comes to the rescue. It has myriad audio filters that can be combined in various ways. I only needed to add a highpass filter, which can be accomplished using this one-liner:

ffmpeg -i input.mp4 -c:v copy -af highpass=400 output.mp4

Here I use -c:v copy to copy the video stream directly. This avoids re-compressing the file and saves time. Then I use -af highpass=400 to add the highpass filter to the audio stream with a cutoff frequency of 400 Hz. This is relatively high but works well for this example.

The recording with highpass-filtered audio.

Adding a filter means that the audio stream needs to be re-compressed. So it breaks with the original (conceptual and technical) idea. However, the result sounds more like how I experienced it. I didn’t notice the hum while recording, and this project is focused on foreground sounds, not the background. However, this example is relevant for my upcoming project, AMBIENT, in which I will focus on the background sound of various indoor environments.

Kayak motion analysis with video-based horizon leveling

Last year, I wrote about video-based motion analysis of kayaking. Those videos were recorded with a GoPro Hero 8, and I tested some of the video visualization methods of the Musical Gestures Toolbox for Python. This summer I am testing out some 360 cameras for my upcoming AMBIENT project. I thought I should take one of these, a GoPro Max, out for some kayaking in the Oslo fjord. Here are some impressions of the trip (and recording).

Horizon leveling

I stumbled upon the feature “horizon leveling” by accident when going through the settings on the GoPro Max. The point is that it will stabilize the recorded image so that the horizon will always be leveled. I haven’t found any tech details about the feature, but I assume that it uses a built-in gyroscope for the leveling. As it turns out, this feature also appears to be included on the GoPro Hero 9 and 10.

This feature works amazingly well, as can be seen from an excerpt of my kayaking adventure below:

Video visualizations

The Musical Gestures Toolbox for Python is in active development and I, therefore, thought it could be interesting to test some video visualization methods on the kayaking video. The whole video recording is 1.5 hours long, which is a good starting point for exploring some of the video visualization techniques in the toolbox. After all, one of the points of the toolbox was to develop solutions for visualizing long video recordings without scene changes.

My recordings of music or dance performances are similar to my kayaking videos in that they are based on continuous single-camera recordings. So I tested some of the basic visualization techniques.

A “keyframe” display based on sampling 9 images from the recording. These give “snapshots” of the scenery but don’t tell much about the motion.
A horizontal videogram of the whole recording (time running from left to right) shows more about what happened. Here you can really see that the horizon leveling worked flawlessly throughout. It is interesting to see the “ascending” lines at various intervals. These are due to the fact that I kayaked around an island and kept turning right.
The vertical videogram shows the sideways motion. It is, perhaps, less informative than the horizontal videogram, but more beautiful. The yellow line in the middle is the kayak.
An average image of the whole recording blurs out all the details but leaves the essential information: the kayak, the fjord, and the horizon.

Audio analysis

Kayaking is a rhythmic activity, so I was interested in also looking at whether I could find any patterns in the audio signal. For now, I have only calculated a tempogram, which estimates the tempo of my kayaking strokes at 114 BPM. I am not sure if that is good or bad (I am only a recreational kayaker) but will try to make some new recordings and compare.

Tempogram of the audio from the kayaking video. It is made from a resampled video file, hence the short duration (the original video is 1.5 hours long).

Solid toolbox

After testing the Musical Gestures Toolbox for Python more actively over the last few weeks, I see that we have now managed to get it to a state where it really works well. Most of the core functions are stable and they allow for combinations in various ways, just like a toolbox should. There are still some optimization issues to sort out, particularly when it comes to improving the creation of motion videos. But overall it is a highly versatile package.

Adding subtitles to videos

In my ever-growing collection of FFmpeg-related blog posts, I will today show how to add subtitles to videos. These tricks are based on the need to create a captioned version of a video I made to introduce the Workshop on NIME Archiving for the 2022 edition of the International Conference on New Interfaces for Musical Expression (NIME).

The video I discuss in this blog post. YouTube supports turning on and off the subtitles (CC button).

Why add subtitles to videos?

I didn’t think much about subtitles previously but have become increasingly aware of their importance. Firstly, adding subtitles is essential from an accessibility point of view. In fact, at UiO, it is now mandatory to add subtitles to all videos we upload to the university web pages. The main reason is that people who have trouble hearing the content can read it instead.

Also, for people who can hear the sound in the video, it is helpful to have subtitles available. On Twitter, for example, videos will play automatically when you hover over them. However, the sound will usually be off, so without subtitles, it is impossible to hear what is said. There are also times when you may want to only watch some content without listening, for example, if you don’t have headphones available in a public setting.

I guess that having subtitles also helps search engines find your content more efficiently, which may lead to better dissemination of the content.

Creating subtitles

There are numerous ways of doing this, but I usually rely on some machine listening service as the first step these days. At UiO, we have a service called Autotekst that will create a subtitle text file from an audio recording. The nice thing about that service is that it supports both English and Norwegian and two people talking. It is pretty ok but does require some manual cleanup. I typically do that in a text editor while checking the video.

YouTube offers a more streamlined approach for videos uploaded to their service. It has machine listening built in that works quite well for one person talking in English. It also has a friendly GUI where you can go through and check the text and align it with the audio.

YouTube has a nice window for adding/editing subtitles.

I typically use YouTube for all my public videos in English and UiO’s service for material in Norwegian and with multiple people.

Closed Caption formats

Various services and platforms support at least 25 different subtitle formats. I have found that SRT (SubRip) and VTT (Web Video Text Tracks) are the two most common formats. Both are supported by YouTube, while Twitter prefers SRT (although I still haven’t been able to upload one such file successfully). The video player on UiO prefers VTT, so I often have to convert between these two formats.

Fortunately, FFmpeg comes to the rescue (again). Converting from one subtitle format to another is as simple as writing this one-liner:

ffmpeg -i caption.srt caption.vtt

This will convert an SRT file to VTT in a split second. As the screenshot below shows, there is not much difference between the two formats.

A few lines of the same subtitle file in VTT format (left) and SRT (right).
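The gist of the difference: a VTT file starts with a WEBVTT header and uses a dot before the milliseconds, while SRT numbers each cue and uses a comma. A hypothetical cue in both formats:

```
VTT:

WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome to the workshop.

SRT:

1
00:00:01,000 --> 00:00:04,000
Welcome to the workshop.
```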

Playing back videos with subtitles

Web players will play subtitles associated with them, but what about on a local machine? If the subtitle file is named the same as the video file, most video players will use the subtitles when playing the file. Here is how it looks in VLC on Ubuntu:

VLC will show the subtitles automatically if it finds an SRT file with the same name as a video file.

It is also possible to go into the settings to turn the subtitles on and off. I guess it is also possible to have multiple files available to add additional language support, and that would be interesting to explore another time.

The benefit of having subtitles as a separate text file is that they can be turned on and off.

Embedding subtitles in video files

We are exploring using PubPub for the NIME conference, a modern publication platform developed by The MIT Press. There are many good things to say about PubPub, but some features are still missing. Adding a subtitle file to uploaded videos is one missing feature. I, therefore, started exploring whether it is possible to embed the subtitles inside the video file.

A video file is built as a “container” that holds different content, of which video and audio are two (or more) “tracks” within the file. The nice thing about working with FFmpeg is that one quickly understands how such containers are constructed. And, just as I expected, it is possible to embed an SRT file (and probably others too) inside of a video container.

As discussed in this thread, many things can go wrong when you try to do this. I ended up with this one-liner:

ffmpeg -i video.mp4 -i subs.srt -c copy -c:s mov_text video_cc.mp4

The trick is to think of the subtitle file as just another input file that should be added to the container. The result is a video file with subtitles built in, as shown below:

It may have been a long shot, but the PubPub player didn’t support such embedded subtitles either. Then I started exploring a more old-school approach, “burning” the text into the video file. I feared that I had to do this within a video editor, but, again, it turned out that FFmpeg could do the trick:

ffmpeg -i video.mp4 -vf subtitles=subs.srt video_cc.mp4

This “burns” the text into the video, which is not the best way of doing it, I think. After all, the nice thing about having the subtitles in text form is that they can be turned on and off and adjusted in size. Still, having some subtitles may be better than nothing.

The video with the subtitle text “burned” into the video content.
The video with the subtitles in a separate text layer.
The video is embedded on the workshop page, with subtitles.

Posting on Twitter

After having gone through all that trouble, I wanted to post the video on Twitter. This turned out to be more difficult than expected. Three problems arose.

First, Twitter does not support 4K videos, so I had to downsample to Full HD. Fair enough, that is easily done with FFmpeg. Second, Twitter only supports videos shorter than 2:20 minutes; mine is 2:34. Fortunately, I could easily cut out the video’s first and last sentence, and it still made sense. However, this also leads to trouble with the subtitles. The subtitles are based on the timing of the original video. So if I were to trim the video, I would also need to edit the subtitle file to adjust all the timings (Happy to get input on tools for doing that!).
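Shifting all timestamps in an SRT file is simple enough to script. Here is a minimal Python sketch (the function name and handling are my own; a real tool should also renumber the remaining cues and clip partially trimmed ones):

```python
import re

def shift_srt(text, offset_s):
    """Shift all SRT timestamps by offset_s seconds (negative = earlier).
    Cues that would start before 0 are dropped."""
    ts = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

    def to_ms(m):
        h, mnt, s, ms = map(int, m.groups())
        return ((h * 60 + mnt) * 60 + s) * 1000 + ms

    def fmt(ms):
        h, rem = divmod(ms, 3600000)
        mnt, rem = divmod(rem, 60000)
        s, ms = divmod(rem, 1000)
        return f"{h:02d}:{mnt:02d}:{s:02d},{ms:03d}"

    out = []
    for block in text.strip().split("\n\n"):
        lines = block.split("\n")
        for i, line in enumerate(lines):
            # the timing line looks like "00:00:11,000 --> 00:00:14,500"
            if "-->" in line:
                stamps = [to_ms(m) + int(offset_s * 1000) for m in ts.finditer(line)]
                if stamps and stamps[0] < 0:
                    lines = None  # cue was trimmed away
                    break
                lines[i] = f"{fmt(stamps[0])} --> {fmt(stamps[1])}"
        if lines:
            out.append("\n".join(lines))
    return "\n\n".join(out) + "\n"
```

For a video trimmed with -ss 00:00:11, calling shift_srt(text, -11) on the subtitle file would realign the remaining cues.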

After spending too much time on this project, I reverted to the “burned” text approach. Writing the text into the video and trimming it would ensure some text together with the video. While preparing the one-liner, I wondered whether FFmpeg would be smart enough to also “trim” the subtitles when trimming the video file:

ffmpeg -i video.mp4 -vf "subtitles=subs.srt,scale=1920:1080,fps=30" -ss 00:00:11 -t 00:02:17 video_hd_cc.mp4

The command above does it all in one go: downscale from 4K to HD, add the subtitles, and trim the video to the desired duration. Unfortunately, this command kept text from the sentence that was trimmed out in the beginning:

When trimming a captioned video, you get some text that is not part of the video.

The extra words at the beginning of the video are perhaps not the biggest problem. I would still be interested to hear thoughts on how to avoid this in the future. After all, subtitles are here to stay.