Convert HEIC photos to JPEG

A quick note-to-self about how I managed to download a bunch of photos from an iPhone and convert them to JPEG on my laptop running Ubuntu 22.04.

Unlike Android phones, iPhones do not show up as a regular disk with easy access to the DCIM folder storing the photos. Fortunately, Rapid Photo Downloader managed to connect to the iPhone and find all the images. Unfortunately, all the files were stored as HEIC files, using the High Efficiency Image File Format. This format is technically good but practically tricky to work with, so I wanted to convert the files to JPEG.

With some searching, I found help in the guide How to Open or Convert iOS HEIC Photos to JPEG and PNG in Ubuntu 20.04 | 22.04. The trick is to install a small collection of tools for working with HEIF files:

sudo apt-get install libheif-examples

Then I ran this one-liner to convert a folder of HEIC files:

for file in *.heic; do heif-convert "$file" "${file/%.heic/.jpg}"; done
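
My files all had lowercase .heic extensions. If your camera writes uppercase .HEIC instead, a case-insensitive variant of the same loop (a small sketch, assuming bash) should do the job:

# Match .heic and .HEIC alike; nullglob keeps the loop from running on a literal pattern if nothing matches.
shopt -s nullglob nocaseglob
for file in *.heic; do heif-convert "$file" "${file%.*}.jpg"; done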

That’s it!

Convert a folder of LibreOffice .ODT files to .DOCX files

I don’t spend much time in traditional “word processors”, but when I do, it is usually in LibreOffice, and I prefer to save the files in the native .ODT format. But sometimes I need to send a bunch of files to someone who prefers .DOCX files. Instead of converting all the files manually, here is a short one-liner that does the trick using the magical pandoc, the go-to tool for converting text documents.

for i in *.odt; do name="${i%.odt}"; pandoc "$i" -o "${name}.docx"; done

Paste it into a terminal window opened in the directory of choice and watch the magic!
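
If pandoc is not at hand, LibreOffice itself can do batch conversion from the command line. This is a sketch assuming the libreoffice command is on your PATH (on some systems it is called soffice instead), and it may refuse to run while a LibreOffice window is already open:

# Convert every .odt in the current directory to .docx using LibreOffice's own filters.
libreoffice --headless --convert-to docx *.odt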

Add fade-in and fade-out programmatically with FFmpeg

There is always a need to add fade-in and fade-out to audio tracks. Here is a way of doing it for a bunch of video files. It may come in handy with the audio normalization script I have shown previously. That script continuously normalizes the audio, which may result in some noise at the beginning and end (because there is little or no sound in those parts, so they are boosted more).

It is easy to add a fade-in to the beginning of a file using FFmpeg’s afade filter. From the documentation, you can do a 15-second fade-in like this:

afade=t=in:ss=0:d=15

And a 25-second fade-out like this:

afade=t=out:st=875:d=25

Unfortunately, the latter requires that you specify when the fade-out should start, which depends on the duration of each file. That doesn’t work well in general, and particularly not for batch processing.
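
In principle, you could look up each file’s duration with ffprobe and compute the start time yourself. Here is a rough sketch of that approach (input.mp4 and the 25-second fade length are just placeholders, and it assumes bc is installed):

# Find the duration in seconds, then start the 25-second fade-out 25 seconds before the end.
duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4)
start=$(echo "$duration - 25" | bc)
ffmpeg -i input.mp4 -c:v copy -af "afade=t=out:st=${start}:d=25" output.mp4

It works, but it adds bookkeeping to every single command.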

A neat trick

Searching for solutions, I found a neat trick that solved the problem. First, you create the normal fade-in. Then you make the fade-out by reversing the audio stream, applying a fade-in, and then reversing again. The whole thing looks like this:

ffmpeg -i input.mp4 -c:v copy -af "afade=d=5, areverse, afade=d=5, areverse" output.mp4

A hack, but it works like a charm! And you don’t need to re-encode the video (hence the -c:v copy flag above).

Putting it together

If you want to run this on a folder of files and normalize the audio in the same pass (so you avoid re-encoding more than once), you can use this bash script:

#!/bin/bash

# Process common video file extensions; nullglob makes unmatched patterns expand to nothing.
shopt -s nullglob
for i in *.mp4 *.MP4 *.mov *.MOV *.flv *.webm *.m4v; do
   name="${i%.*}"
   # Normalize loudness, then fade in and out by 5 seconds using the reverse trick.
   ffmpeg -i "$i" -c:v copy -af "loudnorm=I=-16:LRA=11:TP=-1.5, afade=d=5, areverse, afade=d=5, areverse" "${name}_norm.mp4"
done

Save, run, and watch the magic!
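
For completeness, running it typically looks something like this (fade_norm.sh is just a name I made up for the script):

chmod +x fade_norm.sh
./fade_norm.sh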

Video visualizations of mountain walking

After exploring some visualizations of kayaking, I was eager to see how a similar approach could work for walking. On a trip to the Norwegian mountains, at Haugastøl, halfway between Oslo and Bergen, I strapped a GoPro Hero 10 Black on my chest and walked up and down a nearby hill called Storevarden. The walk took approximately 25 minutes up and down, and a fast-forward version of the video can be seen here:

What can one get from the audio and video of such a trip? Here are some results generated with various functions from the Musical Gestures Toolbox for Python.

Static visualizations

The first trial was to create some static visualizations from the video recording.

A keyframe image display shows nine sampled images from the video. The first ones mainly show the path, since I was leaning forward while walking upward, and the last ones show the scenery.
An average image of the whole video does not tell much in this case. I guess it shows that (on average) I looked up most of the time, hence the horizon can be seen toward the bottom of the image.

The average image is not particularly interesting in this case. Instead, it may be better to create a history video that averages images over a shorter period, such as in this video:

A history video averages over several seconds of video footage.

Still quite shaky, but it creates an interesting soft-focus rendition of the video. This may resemble how I perceived the scenery as I walked up and down.

Videograms

Better visualizations, then, are the videograms, which give more information about the spatiotemporal features of the video recording.

A horizontal videogram of the 25-minute walking sequence reveals the spatiotemporal differences in the recording: first walking upward facing the ground, then having a short break on the top, and then walking downward facing the scenery.
A vertical videogram is less interesting in this case.

Motiongrams

The videograms are based on collapsing the original images in the video sequence. Motiongrams, on the other hand, collapse the motion image sequence, clearly showing what changed between frames.

A horizontal motiongram reveals the same information as the videogram and clearly shows the break I took (the black part in the middle).
A vertical motiongram is not particularly relevant.

Audio analysis

What can one get out of the audio recording of walking? The waveform does not tell much, except that the average levels look higher in the second half (where I was walking down).

A waveform of the audio that I recorded during the 25-minute walk.
The sonogram shows a lot of energy throughout the frequency spectrum, and my break at the top can be seen a little over halfway through. A peculiar black line at 8.7 kHz must come from the GoPro itself, and the camera also cuts all sound above approximately 13 kHz.
The tempogram also reveals the break in the middle and estimates my walking tempo at almost 120 BPM.

It is fascinating how the estimated tempo of my walking was almost 120 BPM, which happens to be similar to the 2 Hz frequency found in many studies of walking and everyday activities. It will be interesting to try a similar approach for other walking videos.