Create timelapse video from images with FFmpeg

I take a lot of timelapse shots with a GoPro camera. Usually, I do this with the camera’s photo setting instead of the video setting, because I find it easier to delete unwanted pictures from the series that way. It also makes it easier to pick out individual photos when I need them. But then I need an easy way to create a timelapse video from the photos.

Here is an FFmpeg one-liner that does the job:

ffmpeg -r 10 -pattern_type glob -i "*.JPG" -s 1920x1440 -vcodec libx264 output.mp4

To break down the different parameters a little:

  • -r 10: sets the frame rate to 10 frames per second
  • -pattern_type glob: allows selecting all the JPGs with the wildcard pattern "*.JPG"
  • -s 1920x1440: downscales the images to 1920x1440, a 4:3 format with full-HD width
  • -vcodec libx264: forces the use of the H.264 (libx264) codec
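
One thing to be aware of: JPEG files often carry a pixel format that some players (QuickTime, web browsers) refuse to play when it is passed straight through to the MP4. If the resulting video does not play everywhere, adding -pix_fmt yuv420p should help; this is the same one-liner with only that flag added:

ffmpeg -r 10 -pattern_type glob -i "*.JPG" -s 1920x1440 -vcodec libx264 -pix_fmt yuv420p output.mp4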

Visual effect of the different tblend functions in FFmpeg

FFmpeg is a fantastic resource for doing all sorts of video manipulations from the terminal. However, it has a lot of features, and it is not always easy to understand what they all mean.

I was interested in understanding more about how the tblend filter works. It blends successive video frames, and it can do so in 30 different ways (blend modes). To get a visual understanding of how the different modes work, I decided to try them all out on the same video file. I started from this dance video:

Then I ran this script:
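
A minimal sketch of such a script loops over a set of blend modes and renders one output file per mode. Here dance.mp4 is a placeholder filename, and the mode list would need to be extended to cover all the modes listed in the FFmpeg documentation for the blend filter:

# Render one video per tblend blend mode (extend the list as needed)
for mode in addition average darken difference lighten multiply negation screen; do
    ffmpeg -i dance.mp4 -vf "tblend=all_mode=$mode" "tblend_${mode}.mp4"
done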

This created 30 video files, each showing the effect of one of the tblend modes. Here is a playlist with all the resulting videos:

Instead of watching each of them independently, I also wanted to make a grid of all 30 videos. This can be done manually in a video editor, but I wanted to check how it can be done with FFmpeg. I came across this nice blog post with an example that almost matched my needs. With a little bit of tweaking, I came up with this script:
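
One way to build such a grid is FFmpeg's xstack filter, which places several inputs in a grid within a single filtergraph. A reduced sketch with four of the generated files (the filenames are assumptions) shows the idea; a 30-video grid follows the same pattern with a longer input list and layout string:

# 2x2 grid of four tblend videos; extend the inputs and layout for larger grids
ffmpeg -i tblend_addition.mp4 -i tblend_average.mp4 -i tblend_darken.mp4 -i tblend_difference.mp4 \
    -filter_complex "[0:v][1:v][2:v][3:v]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[grid]" \
    -map "[grid]" tblend_grid_2x2.mp4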

The final result is a 30-video grid with all the different tblend modes placed next to each other (in alphabetical order from the top left, I think). Consult the playlist to see the individual videos.

Pixel array images of long videos in FFmpeg

Continuing my explorations of FFmpeg for video visualization, today I came across this very nice blog post on creating “pixel array” images of videos. The idea is to reduce every single frame to just one pixel and plot these pixels next to each other on a line. Of course, I wanted to try this out myself.

I find that creating motiongrams or videograms is a good way to visualize the content of videos. They are abstract representations, but still reveal some of what is going on. However, for longer videos, motiongrams may be a bit tricky to look at, and they also take a lot of time to generate (hours, or even days). For that reason I was excited to see how pixel array images would work on some of my material.
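
In FFmpeg terms, the basic recipe is to scale every frame down to a single pixel (roughly its average colour) and then tile those pixels into one long row. A sketch of such a command could look like this, where the fps value is an assumption and should be chosen so that the duration of the video times the fps roughly matches the desired image width (640 pixels here):

ffmpeg -i dance.mp4 -vf "fps=5,scale=1:1,tile=640x1" -frames:v 1 pixel_array.png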

First I tried it on my “standard” dance video:

which gives this pixel array image:

Pixel array image (640 pixels wide) of the dance video above.

Yes, that is mainly a blue line, since the average colour stays blue throughout the entire video.

Then I tried it with one of the videos from the AIST Dance Video Database:

which results in this pixel array image:

Pixel array image (640 pixels wide) of the dance video above.

And, yes, that is mainly a gray line… I realize that this method does not work very well with single-shot videos.

To try something very different, I also decided to make a pixel array image of Bergensbanen, a 7-hour TV production of the train journey between Oslo and Bergen. I made videograms of this recording some years ago, and they turned out quite nice. So I was excited to see how a pixel array image would work. The end result looks like this (1920 pixels wide):

Pixel array image (1920 pixels wide) of the 7-hour TV production Bergensbanen

As you see, not much is changing, but that also represents the slowness of the train ride. While I originally thought this would be a smart representation, I still think that my videograms were more informative, such as this one:

Videogram of Bergensbanen

The big difference between the two visualizations is that the videogram preserves vertical information for each frame, while the pixel array image only displays a single pixel per frame. That said, it took only a few minutes to generate the pixel array image, whereas I recall spending several days generating the videogram.
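
As a rough FFmpeg-only approximation of that idea, each frame can instead be scaled to a one-pixel-wide column, keeping the vertical dimension, before tiling. The fps and height values in this sketch are assumptions:

ffmpeg -i dance.mp4 -vf "fps=5,scale=1:240,tile=640x1" -frames:v 1 videogram_like.png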

To sum up, I think pixel array images are probably most useful for movies and other video material with lots of changes throughout, which suit such a heavy reduction technique better. For my videos, in which I always use single-shot, stationary cameras, motiongrams and videograms may still be the preferred solution.

Convert MPEG-2 files to MPEG-4

Canon XF105

This is a note to self, which could also be useful to others who need to convert “old-school” MPEG-2 files into more modern MPEG-4 files using FFmpeg.

In the fourMs lab we have a bunch of Canon XF105 video cameras that record .MXF files with MPEG-2 compression. This is not a very useful format for the other things we are doing, so I often have to recompress the files to something else.

Inspecting one of the files, I also discovered that the camera records the audio as two mono streams:

Stream #0:0: Video: mpeg2video (4:2:2), yuv422p(tv, bt709, top first), 1920x1080 [SAR 1:1 DAR 16:9], 50000 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc

Stream #0:1: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s

Stream #0:2: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s
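
A stream listing like this can be printed by running ffprobe (or ffmpeg -i) on the file without any output options:

ffprobe input.mxf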

So I also want to merge these two mono tracks (which carry the left and right inputs of the camera) into one stereo track. FFmpeg comes in handy (as always), and I figured out that this little one-liner will do the trick:

ffmpeg -i input.mxf -vf yadif -vcodec libx264 -q:v 3 -filter_complex "[0:a:0][0:a:1]amerge,channelmap=channel_layout=stereo[st]" -map 0:v -map "[st]" output.mp4

An explanation of some of these settings:

  • yadif: deinterlaces the video
  • libx264: this is probably unnecessary, but it forces the use of the better H.264 (MPEG-4 AVC) encoder
  • q:v 3: I find this to be a good quality setting for the video encoder
  • filter_complex: this string (courtesy of Reddit) merges the two mono sources into one stereo track
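
Since there are usually many files to convert in one go, the one-liner can also be wrapped in a small shell loop. Here is a sketch, assuming the .MXF files sit in the current folder:

# Batch-convert all .MXF files in the current folder with the settings above
for f in *.MXF; do
    ffmpeg -i "$f" -vf yadif -vcodec libx264 -q:v 3 \
        -filter_complex "[0:a:0][0:a:1]amerge,channelmap=channel_layout=stereo[st]" \
        -map 0:v -map "[st]" "${f%.MXF}.mp4"
done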

I will probably try to add this to MGT-terminal at some point, but this blog post will suffice for now.

“Flattening” Ricoh Theta 360-degree videos using FFmpeg

Ricoh Theta 360-degree camera.

I am continuing my explorations of the great terminal-based video tool FFmpeg. This time I wanted to see if I could “flatten” a 360-degree video recorded with a Ricoh Theta camera. These cameras have two fisheye lenses, capturing two 180-degree videos next to each other. This results in video files like the one shown in the screenshot below.

Screenshot from a video recorded with a Ricoh Theta.

These files are not very useful to watch or work with, so we need to somehow “flatten” them into a more meaningful video file. I find it cumbersome to do this in the Ricoh mobile phone apps, so I have been looking for a simple solution that works on my computer.

I see that the FFmpeg developers are working on native support for various 360-degree video formats. This is implemented in the v360 filter, but since it is not in the stable version of FFmpeg yet, I decided to look for something that works right now. Then I came across this blog post, which shows how to do the flattening based on two so-called PGM files that contain information about how the video should be mapped:

ffmpeg -i ricoh_input.mp4 -i xmap_thetaS_1920x960v3.pgm -i ymap_ThetaS_1920x960v3.pgm -q 0 -lavfi "format=pix_fmts=rgb24,remap" remapped.mp4

The end result is a flattened video file, as shown below:

Screenshot from a “flattened” 360-degree video.
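
For future reference, once the v360 filter mentioned above makes it into a stable FFmpeg release, the flattening should be possible without the PGM map files. A sketch of what that could look like (the filter options and output filename are assumptions, and field-of-view options may need tuning for the Theta lenses):

ffmpeg -i ricoh_input.mp4 -vf "v360=input=dfisheye:output=equirect" flattened_v360.mp4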

Exactly where to split up the video (it is a continuous 360-degree recording, after all) is something I will have to investigate later.