We are rapidly moving towards the start of our new Master’s programme Music, Communication & Technology. This is a unique programme in that it is split between two universities (in Oslo and Trondheim), 500 kilometres apart. We are working on setting up a permanent high-quality, low-latency connection that will be used as the basis for our communication. But in addition to this permanent setup we need solutions for quick and easy communication. We have been (and will be) testing a lot of different software and hardware solutions, and in a series of blog posts I will describe some of the pros and cons of these.
Today I have been testing the Blackmagic Web Presenter. This is a small box with two video inputs (one HDMI and one SDI) and two audio inputs (one XLR and one stereo RCA). The box functions as a very basic video/audio mixer, but the most interesting thing is that it shows up as a normal web camera on the computer (even in Ubuntu, without drivers!). This means that it can be used in most communication platforms, including Skype, Teams, Hangouts, Appear.in, Zoom, etc., and be the centrepiece of a slightly more advanced communication setup.
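Since the box enumerates as a standard UVC webcam, Ubuntu's built-in V4L2 stack picks it up without extra drivers. Here is a quick way to check that it has been detected; this is my sketch, not from the original setup, and it assumes the v4l-utils package is installed and that the box lands on /dev/video0:

```shell
# Check whether a V4L2 capture device (e.g. the Web Presenter) is present.
# DEV is an assumption; the box may enumerate on a different node.
DEV=/dev/video0

if command -v v4l2-ctl >/dev/null 2>&1 && [ -e "$DEV" ]; then
    v4l2-ctl --list-devices                 # name every capture device
    v4l2-ctl -d "$DEV" --list-formats-ext   # resolutions/frame rates offered
else
    echo "v4l2-ctl or $DEV not available on this machine"
fi
```

Any application that talks to a regular webcam (Cheese, Skype, Zoom, browsers) should then see the same device.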
My main interest in testing it now was to see if I could connect a regular camera (Canon XF105) and a document camera (Lumens DC193) to the device. As you can see in the video below, this worked flawlessly, and I was able to do a quick recording using the built-in video recorder (Cheese) in Ubuntu.
So to the verdict:
No-frills setup, even on Ubuntu!
Very positive that it scales the video correctly. My camera was running 1080i and the document camera 720p, and the scaling worked flawlessly (you need identical input formats for the video transition effects to work, though, but that is not really a problem for my usage).
Hardware encoding makes it easy to connect it even to fairly modest PCs.
Nice price tag (~$500).
Most people have HDMI devices, but SDI is rare. We have a lot of SDI stuff, so it works fine for our use.
No phantom power for the XLR. This is perhaps the biggest problem, I think. You can use a dynamic microphone, but I would have preferred a condenser. I ended up connecting a wireless lavalier microphone with a line-level XLR connection in the receiver. It is also possible to use a mixer, but the whole point of this box is to have a small, portable and easy setup.
720p output is ok for many things we will use it for, but is not particularly future-proof.
It has a fan. It makes a little more noise than my laptop fan does when it kicks in, but it is not noticeable if the box is moved a metre away.
Not perfect, but for its intended use I think it works very nicely. For meetings and teaching where you need a little more than a plain web camera, it does its job well.
This is a note to self, and hopefully to others, about how to easily and quickly trim videos without recompression.
Often I end up with long video recordings that I want to split or trim. On a side note, sometimes people call this “cropping”, but in my world cropping is to cut out parts of the image, that is, a spatial transformation. Splitting and trimming are temporal transformations.
You can of course both split and trim in most video editing software, but these will typically also recompress the file on export. This reduces the quality of the video, and it also takes a long time. A much better solution is to trim losslessly, and fortunately there is a way to do this with the wonder-tool FFmpeg. Being a command-line utility (available on most platforms) it has a ton of different options, and I never remember these. So here goes, this is what I use (on Ubuntu) to trim out parts of a long video file:
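The exact one-liner did not survive in this copy of the post, so here is a reconstruction of the kind of command described; the filenames are hypothetical, and the timestamps match the example discussed here:

```shell
# Lossless trim with FFmpeg (reconstructed sketch; filenames are made up).
# -ss = start time, -to = end time, -c copy = copy the audio/video streams
# as-is without re-encoding, which is why the cut snaps to the nearest
# keyframes (hence "about" 1h19min rather than an exact frame).
IN=long-recording.mp4
OUT=trimmed.mp4
CMD="ffmpeg -i $IN -ss 01:19:00 -to 02:18:00 -c copy $OUT"
echo "$CMD"   # inspect the command, then run it with: eval "$CMD"
```

Because no re-encoding happens, the quality is untouched and the operation is limited only by disk speed.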
This will cut out the section from about 1h19min to 2h18min losslessly, and will only take a few seconds to run.
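For splitting (as opposed to trimming) there is also a lossless route. This particular recipe is not from the post itself, but FFmpeg's segment muxer is the usual tool, again with stream copy so nothing is re-encoded; filenames and segment length are my assumptions:

```shell
# Split a long recording into ~10-minute pieces without re-encoding.
# Segment boundaries snap to keyframes, so piece lengths are approximate.
SEGTIME=600   # seconds per piece (hypothetical choice)
CMD="ffmpeg -i long-recording.mp4 -c copy -f segment -segment_time $SEGTIME out%03d.mp4"
echo "$CMD"

# Sanity check: a 59-minute (3540 s) file gives ceil(3540/600) pieces.
PIECES=$(( (3540 + SEGTIME - 1) / SEGTIME ))
echo "$PIECES pieces"
```

The `out%03d.mp4` pattern numbers the output files out000.mp4, out001.mp4, and so on.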
This network brings together a group of internationally leading sound and music computing researchers from institutions in five Nordic countries: Aalborg University, Aalto University, KTH Royal Institute of Technology, University of Iceland, and University of Oslo. The network covers the field of sound and music from the “soft” to the “hard,” including the arts and humanities, and the social and natural sciences, as well as engineering, and involves a high level of technological competency.
At the University of Oslo we have one open PhD fellowship connected to the network, with application deadline 4 April 2018. We invite PhD proposals that focus on sound/music interaction with periodic/rhythmic human body motion (walking, running, training, etc.). The appointed candidate is expected to carry out observation studies of human body motion in real-life settings, using different types of mobile motion capture systems (full-body suit and individual trackers). Results from the analysis of these observation studies should form the basis for the development of prototype systems for using such periodic/rhythmic motion in musical interaction.
The appointed candidate will benefit from the combined expertise within the NordicSMC network, and is expected to carry out one or more short-term scientific missions to the other partners. At UiO, the candidate will be affiliated with RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This interdisciplinary centre focuses on rhythm as a structuring mechanism for the temporal dimensions of human life. RITMO researchers span the fields of musicology, psychology and informatics, and have access to state-of-the-art facilities in sound/video recording, motion capture, eye tracking, physiological measurements, various types of brain imaging (EEG, fMRI), and rapid prototyping and robotics laboratories.
We have done several sound-tracing studies before at the University of Oslo, and here is a new one focusing on free-hand sound-tracings of melodies. I am happy to say that this is a gold open access publication, and that all the data are also available. So it is both free and “free”!
In this paper, we report on a free-hand motion capture study in which 32 participants ‘traced’ 16 melodic vocal phrases with their hands in the air in two experimental conditions. Melodic contours are often thought of as correlated with vertical movement (up and down) in time, and this was also our initial expectation. We did find an arch shape for most of the tracings, although this did not correspond directly to the melodic contours. Furthermore, representation of pitch in the vertical dimension was but one of a diverse range of movement strategies used to trace the melodies. Six different mapping strategies were observed, and these strategies have been quantified and statistically tested. The conclusion is that metaphorical representation is much more common than a ‘graph-like’ rendering for such a melodic sound-tracing task. Other findings include a clear gender difference for some of the tracing strategies and an unexpected representation of melodies in terms of a small object for some of the Hindustani music examples. The data also show a tendency of participants moving within a shared ‘social box’.
I recently mentioned that I have been busy setting up the new MCT master’s programme. But I have been even busier preparing the launch of our new Centre of Excellence, the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. This is a large undertaking, and a collaboration between researchers from musicology, psychology and informatics. A visual “abstract” of the centre can be seen in the figure to the right.
Now we are recruiting lots of new people for the centre, so please apply or forward to people you think may be interested: