Rotate video using FFmpeg

Here is another FFmpeg-related blog post, this time explaining how to rotate a video using the command-line tool FFmpeg. There are two ways of doing this, and I will explain both below.

Rotation in metadata

The best first attempt is to rotate the video by only modifying the metadata in the file. This does not work for all file types, but it should work for some (including .mp4 files).

ffmpeg -i input.mp4 -metadata:s:v rotate="-90" -codec copy output.mp4

The nice thing here is that it is very fast and non-destructive, since only the metadata is changed.

Rotation with compression

If the above does not work, you will need to recompress the file. That is not ideal, but will do the trick. To rotate a movie by 90 degrees, you can do:

ffmpeg -i input.mp4 -vf "transpose=1" -c:a copy output.mp4

The trick here is to pick the right transpose value:

  • 0 – Rotate by 90 degrees counter-clockwise and flip vertically. This is the default.
  • 1 – Rotate by 90 degrees clockwise.
  • 2 – Rotate by 90 degrees counter-clockwise.
  • 3 – Rotate by 90 degrees clockwise and flip vertically.

I often record with cameras hanging upside down. Then I want to rotate 180 degrees, which can be done like this:

ffmpeg -i input.mp4 -vf "transpose=2,transpose=2" output.mp4

Even though the video is re-compressed using this method, you can force the audio to be copied without compression by adding the -c:a copy flag:

ffmpeg -i input.mp4 -vf "transpose=1" -c:a copy output.mp4

Of course, these commands can also be included in a chain of other FFmpeg commands.
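
As a sketch of such a chain, the rotation can be combined with other filters in a single pass. The file names and the target width below are placeholders, and the filtergraph is echoed so you can see exactly what would be passed to -vf:

```shell
# Combine rotation with a downscale in one filtergraph.
# transpose=1 rotates 90 degrees clockwise; scale=1280:-2 resizes to
# 1280 pixels wide, with the height following the aspect ratio
# (rounded to an even number). File names are placeholders.
VF="transpose=1,scale=1280:-2"
echo "$VF"
# ffmpeg -i input.mp4 -vf "$VF" -c:a copy output.mp4
```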

Crop video files with FFmpeg

I have previously written about how to trim video files with FFmpeg. It is also easy to crop a video file. Here is a short how-to guide for myself and others.

Cropping is not the same as trimming

This may be basic, but I often see the concepts of cropping and trimming used interchangeably. So, to clarify, trimming a video file means making it shorter by removing frames in the beginning and/or end. That is not the same as cropping a video file, which only selects a particular part of the video for export.

The one-liner

If you want to get it done, here is the one-liner:

ffmpeg -i input.avi -vf crop=1920:1080:0:200 output.avi

There is more information about this command in many places. The most critical parts of the above command are:

  • -vf is the flag for “video filter”. It can also be spelt out as -filter:v.
  • crop=1920:1080:0:200 means to make a 1920×1080 video starting from the leftmost side of the source image (=0 pixels horizontally) and 200 pixels down from the top.
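
If you want the crop centred instead of anchored near the top, the offsets can be computed from the source dimensions. A minimal sketch using shell arithmetic, assuming a 1920×1080 source (in practice, you could read the dimensions with ffprobe):

```shell
# Assumed source dimensions (read them with ffprobe in practice).
IN_W=1920; IN_H=1080
# Target crop size.
W=1280; H=720
# Centre the crop window in the source frame.
X=$(( (IN_W - W) / 2 ))
Y=$(( (IN_H - H) / 2 ))
echo "crop=${W}:${H}:${X}:${Y}"
# ffmpeg -i input.avi -vf "crop=${W}:${H}:${X}:${Y}" output.avi
```

Note that the crop filter can also do this arithmetic itself: if you leave out the offsets entirely (crop=1280:720), FFmpeg centres the crop window by default.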

Of course, this command can be combined with other FFmpeg options.

MusicLab Copenhagen

After nearly three years of planning, we can finally welcome people to MusicLab Copenhagen. This is a unique “science concert” involving the Danish String Quartet, one of the world’s leading classical ensembles. Tonight, they will perform pieces by Bach, Beethoven, Schnittke and folk music in a normal concert setting at Musikhuset in Copenhagen. However, the concert is anything but normal.

Live music research

During the concert, about twenty researchers from RITMO and partner institutions will conduct investigations and experiments informed by phenomenology, music psychology, complex systems analysis, and music technology. The aim is to answer some big research questions, like:

  • What is musical complexity?
  • What is the relation between musical absorption and empathy?
  • Is there such a thing as a shared zone of absorption, and is it measurable?
  • How can musical texture be rendered visually?

The concert will be live-streamed (on YouTube and Facebook) and it will also be aired on Danish radio. There will also be a short film documenting the whole process.

Researchers and staff from RITMO (and friends) in front of the concert venue.

Real-world Open Research

This concert will be the biggest and most complex MusicLab event to date. Still, all the normal “ingredients” of a MusicLab will be in place. The core is a spectacular performance. We will capture a lot of data using state-of-the-art technologies, but in a way that is as unobtrusive as possible for performers and the audience. After the concert, both performers and researchers will talk about the experience.

Of course, being a flagship Open Research project, all the collected data will be shared openly. The researchers will show glimpses of data processing procedures as part of the “data jockeying” at the end of the event. However, the real data processing can only start once all the data has been properly uploaded and pre-processed. All the involved researchers will dig into their respective data. But since everything is openly available, anyone can go in and work on the data as they wish.

Proper preparation

Due to the corona situation, the event has been postponed several times. That has been unfortunate and stressful for everyone involved. On the positive side, it has also meant that we have been able to rehearse and prepare very well. Already a year ago we ran a full rehearsal of the technical setup of the concert. We even live-streamed the whole preparation event, in the spirit of “slow TV”:

I am quite confident that things will run smoothly during the concert. Of course, there are always obstacles. For example, one of our eye-trackers broke in one of the last tests. And it is always exciting to wait for Apple and Google to approve updates of our MusicLab app in their respective app stores.

Want to see how it went? Have a look here.

From Open Research to Science 2.0

Earlier today, I presented at the national open research conference Hvordan endres forskningshverdagen når åpen forskning blir den nye normalen? (“How does everyday research change when open research becomes the new normal?”). The conference is organized by the Norwegian Forum for Open Research and is coordinated by Universities Norway. It has been great to follow the various discussions at the conference. One observation is that very few question the transition to Open Research. We have, finally, come to a point where openness is the new normal. Instead, the discussions have focused on how we can move forward. Having many active researchers in the panels also led to a focus on solutions instead of policy.

Openness leads to better research

In my presentation, I began by explaining why I believe opening the research process leads to better research:

  • Opening the process makes the researcher document everything more carefully. For example, nobody wants to make messy data or code available. Adding metadata and descriptions also helps improve the quality of what is made available. It also helps in removing irrelevant content.
  • Making the different parts openly available is important for ensuring transparency in the research process. This allows reviewers (and others) to check claims in published papers. It also allows for others to replicate results or use data and methods in other research.
  • This openness and accessibility will ultimately lead to better quality control. Some people complain that we make available lots of irrelevant information. True, not everything that is made available will be checked or used. The same is the case for most other things on the web. That does not mean that nobody will ever be interested. We also need to remember that research is a slow activity. It may take years for research results to be used.

Of course, we face many challenges when trying to work openly. As I have described previously, we particularly struggle with privacy and copyright issues. We also don’t have the technical solutions we need. That led me to my main point in the talk.

Connecting the blocks

The main argument in my presentation was that we need to think about connecting the various blocks in the Open Research puzzle. There has, over the last few years, been a lot of focus on individual blocks. First, making publications openly available (Open Access). Nowadays, there is a lot of discussion about Open Data and how to make data FAIR (Findable, Accessible, Interoperable, Reusable). There is also some development in the other building blocks. What is lacking today is a focus on how the different blocks are connected.

There is now a need to connect the different blocks. Dark blue blocks are part of the research process, while the light blue blocks focus on applications and assessment.

By developing individual blocks without thinking sufficiently about their interconnectedness, I fear that we lose out on some of the main points of opening everything. Moving towards Open Research is not only about making things open; it is about rethinking the way we research. That is the idea of the concept of Science 2.0 (or Research 2.0, as I would prefer to call it).

There is much to do before we can properly connect the blocks. But some elements are essential:

  • Persistent identifiers (PID): Having unique and permanent digital references that make it possible to find and reuse digital material is essential. This could be DOIs for data, ORCID for researchers, and so on.
  • Timestamping: Many researchers are concerned about who did something first. For example, many people delay releasing their data because they want to publish an article first. That is because the data (currently) does not have any “value” in itself. In my thinking, if data had PIDs and timestamps, they would also be citable. This should also be combined with proper recognition of such contributions.
  • Version control: It has been common to archive various research results when the research is done. This is based on pre-digital workflows. Today, it is much better to provide solutions for proper version control of everything we are doing.

Fortunately, things move in the right direction. It is great to see more researchers try to work openly. That also exposes the current “holes” in infrastructures and policies.

Converting a .WAV file to .AVI

Sometimes, there is a need to convert an audio file into a blank video file with an audio track. This can be useful if you are on a system that does not have a dedicated audio player but does have a video player (yes, rare, but I work with odd technologies…). Here is a quick recipe.

FFmpeg to the rescue

When it comes to converting from one media format to another, I always turn to FFmpeg. It requires “coding” in the terminal, but usually it is only necessary to write a one-liner. When it comes to converting an audio file (say, in .WAV format) to a blank video file (for example, an .AVI file), this is how I would do it:

ffmpeg -i infile.wav -c copy outfile.avi

The “-c copy” part of this command preserves the original audio content. The new blank video file will contain a copy of the original .WAV file content. If you are okay with compressing the audio, you can instead run this command:

ffmpeg -i infile.wav outfile.avi

Then FFmpeg will (by default) compress the audio using the MP3 codec. This may or may not be what you are after, but it will at least create a substantially smaller output file.

Of course, you can easily vary the above conversion. For example, if you want to go from .AIFF to .MP4, you would just do:

ffmpeg -i infile.aiff outfile.mp4
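
If you have a whole folder of files to convert, the one-liner extends naturally to a shell loop. Here is a sketch with placeholder file names, shown as a dry run (uncomment the ffmpeg line to actually convert):

```shell
# Loop over audio files and derive each output name from the input.
# The file names are placeholders; in a real run you would probably
# loop over ./*.wav instead. Shown as a dry run.
for f in first.wav second.wav; do
  out="${f%.wav}.avi"          # swap the file extension
  echo "would convert $f -> $out"
  # ffmpeg -i "$f" "$out"      # uncomment to actually convert
done
```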

Happy converting!