I have been exploring different video visualizations as part of my annual stillstanding project. Some of these I post as part of my daily Mastodon updates, while others I only test for future publications.
Most of the video visualizations and analyses are made with the Musical Gestures Toolbox for Python and structured as Jupyter Notebooks. I have been pondering whether skipping frames is a good idea. The 360-degree videos that I create visualizations from are shot at 25 fps. That is on the lower end for capturing fast motion but plenty for studying myself standing still or rooms with basically "nothing" going on. In such cases, it makes sense to downsample the video, which can easily be done in MGT by passing the skip argument when loading a video file:
video = mg.MgVideo(video_fn, skip=5)
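As a rough sketch of what frame skipping amounts to (the exact semantics of MGT's skip argument may differ slightly; this assumes every fifth frame is kept):

```python
import numpy as np

fps = 25
n_frames = fps * 10                # a 10-second clip at 25 fps
frames = np.arange(n_frames)       # stand-in frame indices

skip = 5
kept = frames[::skip]              # keep every 5th frame
new_fps = fps / skip               # 5.0 fps after downsampling
```

Since every downstream operation touches each kept frame once, processing time scales roughly with the number of frames that remain.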
This will reduce the video to 5 fps, which also cuts the computational time for subsequent operations by a similar factor. A change like that does not matter much when generating an average image, which can be accomplished with the following command:
and will look like this:
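As a toy sketch of why an average image is robust to frame skipping, here is the per-pixel mean computed with NumPy arrays standing in for real video frames (MGT does this on the actual file):

```python
import numpy as np

# Stand-in for a mostly static scene: a fixed background plus tiny noise.
rng = np.random.default_rng(0)
background = rng.integers(0, 256, size=(4, 4)).astype(float)
frames = background + rng.normal(0, 1, size=(100, 4, 4))

# The average image is the per-pixel mean over all frames.
average_full = frames.mean(axis=0)

# Keeping only every 5th frame gives nearly the same average for a
# static scene, which is why skipping costs so little here.
average_skip = frames[::5].mean(axis=0)
```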
However, what happens when creating videograms or motiongrams from a downsampled video file? There is not much going on during most of my standstill sessions, which means that the videograms and motiongrams contain little information. Still, during some sessions something happens. One example is from #Stillstanding 136, when a person passed me while I was standing still at the National Library. Someone passing you in a public space is usually not a big deal; it happens all the time. Nevertheless, when standing still, such an event is something I typically remember for the whole session. A random person passing by becomes an event of its own. That is also why I want to capture it in my visualizations.
So how does a brief event like a random person passing by show up in my visualizations? Starting from the 360-degree video above, I get a videogram like this when running this command on the original video file:
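Conceptually, a videogram collapses each frame along one spatial axis and stacks the results over time, so a person walking through the frame leaves a trace at their position. A minimal NumPy sketch of that idea (toy arrays, not MGT's actual implementation):

```python
import numpy as np

# Toy frames with shape (time, height, width).
frames = np.zeros((50, 8, 8))
frames[20:25, :, 3] = 255          # a "person" passing through column 3

# Horizontal videogram: average over the height axis,
# giving one row per frame (time x width).
videogram_h = frames.mean(axis=1)

# The event appears as a bright patch at frames 20-24, column 3.
```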
This is a reduced file (640x640px); the original file is available here for comparison.
Here is a cropped-out part of the moving person:
Interestingly, the passing shows up better than expected in the videogram created from the frame-skipped video (again reduced to a 640x640px image):
Cropping in shows that there is something there, although less clearly:
What, then, about the motiongrams? These are based on running this command on the source video:
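A motiongram differs from a videogram in that it collapses motion images (the differences between consecutive frames) rather than the frames themselves, so only change shows up. A toy sketch with NumPy (illustrative arrays, not MGT's actual pipeline):

```python
import numpy as np

# Toy frames (time, height, width) with a "person" at column 3.
frames = np.zeros((50, 8, 8))
frames[20:25, :, 3] = 255

# Motion images: absolute difference between consecutive frames.
motion = np.abs(np.diff(frames, axis=0))   # shape (49, 8, 8)

# Vertical motiongram: average each motion image over the width axis,
# giving one column per frame (time x height). Only the onset and
# offset of the event register, since nothing changes in between.
motiongram = motion.mean(axis=2)
```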
Then we get a motiongram that clearly shows the motion (even in a reduced version):
It also shows up clearly in the motiongram based on the video with skipped frames:
In sum, skipping frames (at least up to 5) has little impact on visualizations of regular human motion. That is good to know and will save me some processing time.
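A back-of-the-envelope check of why this works, assuming (hypothetically) that a passing person is visible for about one second:

```python
fps = 25
event_duration_s = 1.0                       # assumed duration of the pass
event_frames = int(fps * event_duration_s)   # 25 frames at full rate

skip = 5
# Keeping every 5th frame still leaves several samples of the event,
# which is why it survives the downsampling in the visualizations.
samples_left = event_frames // skip
```

Only events much shorter than the skip interval (here, under roughly 0.2 seconds) risk disappearing entirely.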