I have previously written about how to trim video files with FFmpeg. It is also easy to crop a video file. Here is a short how-to guide for myself and others.
Cropping is not the same as trimming
This may be basic, but I often see the concepts of cropping and trimming used interchangeably. So, to clarify: trimming a video file means making it shorter by removing frames at the beginning and/or end. That is not the same as cropping a video file, which selects only a particular spatial region of the video for export.
If you want to get it done, here is the one-liner:
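A minimal sketch, with placeholder filenames and geometry (the crop filter takes out_w:out_h:x:y, that is, the output size followed by the top-left corner of the region):

```shell
# Select a 1280x720 region with its top-left corner at (0, 0).
# Syntax: crop=out_w:out_h:x:y (all values in pixels).
ffmpeg -i input.mp4 -vf "crop=1280:720:0:0" -c:a copy output.mp4
```

Re-encoding the video is unavoidable here, since cropping changes the pixel data, but the `-c:a copy` flag at least passes the audio through untouched.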
After nearly three years of planning, we can finally welcome people to MusicLab Copenhagen. This is a unique “science concert” involving the Danish String Quartet, one of the world’s leading classical ensembles. Tonight, they will perform pieces by Bach, Beethoven, and Schnittke, as well as folk music, in a normal concert setting at Musikhuset in Copenhagen. However, the concert is anything but normal.
Live music research
During the concert, about twenty researchers from RITMO and partner institutions will conduct investigations and experiments informed by phenomenology, music psychology, complex systems analysis, and music technology. The aim is to answer some big research questions, like:
What is musical complexity?
What is the relation between musical absorption and empathy?
Is there such a thing as a shared zone of absorption, and is it measurable?
How can musical texture be rendered visually?
The concert will be live-streamed (on YouTube and Facebook) and it will also be aired on Danish radio. There will also be a short film documenting the whole process.
Real-world Open Research
This concert will be the biggest and most complex MusicLab event to date. Still, all the normal “ingredients” of a MusicLab will be in place. The core is a spectacular performance. We will capture a lot of data using state-of-the-art technologies, but in a way that is as unobtrusive as possible for both performers and audience. After the concert, both performers and researchers will talk about the experience.
Of course, being a flagship Open Research project, all the collected data will be shared openly. The researchers will show glimpses of data processing procedures as part of the “data jockeying” at the end of the event. However, the data processing proper can only start once all the data has been uploaded and pre-processed. All the involved researchers will dig into their respective data. But since everything is openly available, anyone can go in and work on the data as they wish.
Due to the corona situation, the event has been postponed several times. That has been unfortunate and stressful for everyone involved. On the positive side, it has also meant that we have been able to rehearse and prepare very well. Already a year ago we ran a full rehearsal of the technical setup of the concert. We even live-streamed the whole preparation event, in the spirit of “slow TV”:
I am quite confident that things will run smoothly during the concert. Of course, there are always obstacles. For example, one of our eye-trackers broke in one of the last tests. And it is always exciting to wait for Apple and Google to approve updates of our MusicLab app in their respective app stores.
Earlier today, I presented at the national open research conference Hvordan endres forskningshverdagen når åpen forskning blir den nye normalen? (“How does everyday research change when open research becomes the new normal?”). The conference is organized by the Norwegian Forum for Open Research and is coordinated by Universities Norway. It has been great to follow the various discussions at the conference. One observation is that very few question the transition to Open Research. We have, finally, come to a point where openness is the new normal. Instead, the discussions have focused on how we can move forward. Having many active researchers in the panels also led to a focus on solutions instead of policy.
Opening up the process pushes researchers to document everything more carefully. For example, nobody wants to make messy data or code available. Adding metadata and descriptions also helps improve the quality of what is made available, and it helps weed out irrelevant content.
Making the different parts openly available is important for ensuring transparency in the research process. This allows reviewers (and others) to check claims in published papers. It also allows for others to replicate results or use data and methods in other research.
This openness and accessibility will ultimately lead to better quality control. Some people complain that we make available lots of irrelevant information. True, not everything that is made available will be checked or used. The same is the case for most other things on the web. That does not mean that nobody will ever be interested. We also need to remember that research is a slow activity. It may take years for research results to be used.
Of course, we face many challenges when trying to work openly. As I have described previously, we particularly struggle with privacy and copyright issues. We also don’t have the technical solutions we need. That led me to my main point in the talk.
Connecting the blocks
The main argument in my presentation was that we need to think about connecting the various blocks in the Open Research puzzle. There has, over the last few years, been a lot of focus on individual blocks. First, making publications openly available (Open Access). Nowadays, there is a lot of discussion about Open Data and how to make data FAIR (Findable, Accessible, Interoperable, Reusable). There is also some development in the other building blocks. What is lacking today is a focus on how the different blocks are connected.
By developing individual blocks without thinking sufficiently about their interconnectedness, I fear that we lose out on some of the main points of opening everything. Moving towards Open Research is not only about making things open; it is about rethinking the way we research. That is the idea of the concept of Science 2.0 (or Research 2.0, as I would prefer to call it).
There is much to do before we can properly connect the blocks. But some elements are essential:
Persistent identifiers (PIDs): Having unique and permanent digital references that make it possible to find and reuse digital material is essential. Examples include DOIs for data, ORCID iDs for researchers, and so on.
Timestamping: Many researchers are concerned about who did something first. For example, many people hold back their data because they want to publish an article first. That is because the data (currently) has no “value” in itself. In my thinking, if data had PIDs and timestamps, they would also be citable. This should be combined with proper recognition of such contributions.
Version control: It has been common to archive research results only once the research is done. That practice is based on pre-digital workflows. Today, it is much better to provide solutions for proper version control of everything we do.
Fortunately, things are moving in the right direction. It is great to see more researchers trying to work openly. That also exposes the current “holes” in infrastructures and policies.
Sometimes, there is a need to convert an audio file into a blank video file with an audio track. This can be useful if you are on a system that does not have a dedicated audio player but does have a video player (yes, rare, but I work with odd technologies…). Here is a quick recipe.
FFmpeg to the rescue
When it comes to converting from one media format to another, I always turn to FFmpeg. It requires “coding” in the terminal, but usually it is only necessary to write a one-liner. When it comes to converting an audio file (say, in .WAV format) to a blank video file (for example, an .AVI file), this is how I would do it:
ffmpeg -i infile.wav -c copy outfile.avi
The “-c copy” part of this command preserves the original audio content. The new blank video file will have a copy of the original .WAV file content. If you are okay with compressing the audio, you can instead run this command:
ffmpeg -i infile.wav outfile.avi
Then FFmpeg will (by default) compress the audio using the mp3 algorithm. This may or may not be what you are after, but it will at least create a substantially smaller output file.
Of course, you can easily vary the above conversion. For example, if you want to go from .AIFF to .MP4, you would just do:
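Presumably following the same pattern as above, and letting FFmpeg compress the audio with its default encoder for the MP4 container (AAC), the filenames here being placeholders:

```shell
# Convert an AIFF file to an (audio-only) MP4 file.
ffmpeg -i infile.aiff outfile.mp4
```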