Motiongrams of rhythmic chimpanzee swaying

I came across a very interesting study on "Rhythmic swaying induced by sound in chimpanzees". The authors have shared the videos recorded in the study (Open Research is great!), so I was eager to try out some analyses with the Musical Gestures Toolbox for Matlab.

Here is an example of one of the videos from the collection:

The video quality is not very good, so I had my doubts about what I could find. It is particularly challenging that the camera is moving slightly over time. There is also a part where the camera zooms in towards the end. A good rule of thumb is to always use a tripod and no zoom/pan/tilt when recording video for analysis.

Still, I managed to create a couple of interesting visualizations. Here I include two motiongrams, one horizontal and one vertical:

This horizontal motiongram shows the up-and-down motion of the chimpanzee. Time runs from left to right.
This vertical motiongram reveals the sideways motion of the chimpanzee. Time runs from top to bottom.

Despite the poor input quality, I was happy to see that the motiongrams are quite illustrative of what we see in the video. They clearly reveal the rhythmic pattern of the chimpanzee's motion. It would have been interesting to have some longer recordings to do a more detailed analysis of the correspondences between sound and motion!

If you are interested in making such visualisations yourself, have a look at our collection of tools in the Musical Gestures Toolbox.

Embed YouTube video with subtitles in different languages

This is primarily a note-to-self post, but it could hopefully also be useful for others. At least, I spent a little too long figuring out how to embed a YouTube video with subtitles in a specific language.

The starting point is that I had this project video that I wanted to embed on a project website:

However, I also wanted the subtitles to show up in a specific language. It turns out that you can specify this by adding this snippet after the URL:

?hl=en&cc_lang_pref=en&cc=1

Here, ?hl=en sets the language of the player controls, &cc_lang_pref=en sets the language of the subtitles, and &cc=1 turns the subtitles on. The complete block is:

<iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/qGN2zbic3JM?hl=en&cc_lang_pref=en&cc=1" width="560"></iframe>

And the embedded video looks like this:

To play the same video with Norwegian subtitles on the Norwegian web page, I use this block:

<iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/qGN2zbic3JM?hl=no&cc_lang_pref=no&cc=1" width="560"></iframe>

And this looks like:

Simple when you have found the solution!

Why is open research better research?

I am presenting at the Norwegian Forskerutdanningskonferansen on Monday, which is a venue for people involved in research education. I have been challenged to talk about why open research is better research. In the spirit of openness, this blog post is an attempt to shape my argument. It can be read as an open notebook for what I am going to say.

Open Research vs Open Science

My first point in any talk about open research is to explain why I think “open research” is better than “open science”. Please take a look at a previous blog post for details. The short story is that “open research” feels more inclusive for people from the arts and humanities, who may not identify as “scientists”.

Why not?

I find it strange that in 2020 it is necessary to explain why we believe open research is a good idea. Instead, I would rather suggest that others explain why they do not support the principles of open research. Or, put differently, “why is closed research better research”?

One of the main points of doing research is to learn more and expand our shared knowledge about the world. This is not possible if we do not share the very same knowledge. Sharing has also been a core principle of research/science for centuries. After all, publications are a way of sharing.

The problem is that a lot of today's publishing is a relic from a pre-digital era, and does not take into account all the possibilities afforded by new technologies. The idea of "Science 2.0" is to utilize the potential of web-based tools in research. Furthermore, this does not only relate to the final publications. A complete open research paradigm involves openness at all levels.

What is Open Research?

There are many definitions of open research, and I will not attempt to come up with the ultimate one here. Instead, I will point to some of the building blocks in an open research paradigm:

One can always argue about the naming of these, and what they include. The most important thing is to show that all parts of the research process could, in fact, be open.

How does open research help make better research?

To answer the original question, let me try to come up with one statement for each of the blocks mentioned in the figure above:

  • Open Applications: Funding applications are mainly closed today. But why couldn't all applications be made publicly available? This would lead to better and more transparent processes, and the applications themselves could be seen as something others can build on. To prevent people from stealing ideas, such public applications would, of course, need to have tracking of applicant IDs, version-controlled IDs on the text, and universal time codes. That way, nobody else would be able to claim that they came up with the idea first. One example is how DIKU decided to make all applications and assessments for the call for Centres of Excellence in Education open.
  • Open Assessment: If the assessment of research applications were also open, this would increase the transparency of who gets funding, and why. The feedback from reviewers would also be openly available for everyone to see and learn from when developing better applications in the future.
  • Open Notebooks: Jumping to when the actual research starts, one could also argue for opening up the entire research process itself. This could involve the use of open notebooks explaining how the research develops. It would also be a way of tracking the steps taken to conduct the research, for example, getting ethics permissions. This could be done on web pages, blogs, or with more computational tools like Jupyter Notebook.
  • Open Methods: During review processes of publications, one of the trickiest parts is to understand how the research was conducted. It is therefore crucial that the methods are described clearly and openly. Solutions like the Open Science Framework try to provide a complete platform for making such material available.
  • Open Source: An increasing number of methods are computer-based. Sharing the source code of developed software is one way of opening up the methods used in research. The code is also of great value for other researchers to build on. Some of the most popular platforms are GitHub and GitLab.
  • Citizen Science: This is a big topic, but here I would say that it could be a way of opening the process to contributions from non-researchers. This could be anything from participating in research design to helping with data collection.
  • Open Data: Sharing the data is necessary so that reviewers can do their job in assessing whether the research results are sound. It is quite remarkable that most papers are still accepted without the reviewers having had access to the data and analysis methods that were used to reach the conclusions. Of course, open data are also of value for other researchers who want to re-analyze the data or perform new analyses on it. In my experience, data are under-analyzed in general. There are numerous platforms available, both commercial (Figshare) and non-profit (Zenodo).
  • Open Manuscripts: Many researchers have long been sharing manuscripts with colleagues to get feedback before submission. With today's tools, it is possible to do this at scale, asking for feedback from a much wider audience already at the manuscript stage. There are numerous new tools here, including Authorea and PubPub.
  • Open Peer Review: A traditional review process consists of feedback from 2-3 peers. With an open peer review system, many more peers (and others) could comment on the manuscript, thereby also improving the final quality of the paper. One interesting system here is OpenReview.
  • Open Access: Free access to the final publication is what most people have focused on. This is one crucial building block in the ecosystem, and much (positive) has happened in the last few years. However, we are still far from having universal open access. This is a significant bottleneck in the sharing of new knowledge. Fortunately, the political pressure from cOAlition S and others helps in making a change.
  • Open Educational Resources: Academic publications are not easy for everyone to digest. Therefore, it is also imperative that we create material that can be used by students. This is particularly important for supporting people's lifelong learning. The popularity of MOOCs on platforms such as EdX, FutureLearn, and Coursera has shown that there is a large market for this. Many of these are closed, however, which prevents full distribution.
  • Open Citations: Whether your work is cited by peers or not is often critical for people's careers. It has become a big business to create citation counts and various types of indexes (the h-index being the most common). The chase for citations has several problematic sides, including self-citations and reviewers dubiously pushing for citations of their own material. Therefore, we need to push for more openness, also when it comes to citations and citation counts.
  • Open Scientific Social Networks: The way people connect is vital in the world of research (as elsewhere). Opening the networks is crucial, particularly for minority researchers, to get access. Diversity generally leads to better and more balanced results.
  • Open Assessment: The last block takes us back to the first one. This relates to the assessment of research and researchers and is a topic I have written about before. I also helped organize the 2020 EUA Workshop on Academic Career Assessment in the Transition to Open Science, which has a lot of excellent material online.

Conclusion

As my quick run-through of the different building blocks has shown, it is possible to open the entire research process. Much experimentation is happening these days, and convergence is happening for some of the blocks. For example, the sharing of source code and data has come a long way in some communities. Some journals even refuse manuscripts without complete data sets and source code. Other parts have barely started. Open assessment has perhaps come the shortest way, but things are moving here as well.

My main argument for opening all parts of the process is that it "sharpens" the research process. You cannot be sloppy if you know that your work will be exposed. I often hear people argue that it takes a lot of time to make everything openly available. That is also my experience. On the other hand, why should research be so fast? It is better to focus on quality than quantity. Open research fosters quality research.

One of the most common objections to opening the research process is that other people will steal your ideas, data, code, and so on. However, if everything is tagged correctly, time-stamped, and given unique IDs, it is not possible to steal anything. Everything will be traceable. And plagiarism algorithms will quickly sort out any problems.

The biggest challenge we are facing is balancing between the "old" and the "new" ways of doing research. That is why policymakers and researchers need to work together with funders to help flip the model as quickly as possible.

How long is a NIME paper?

Several people have argued that we should change from having a page limit (2/4/6 pages) for NIME paper submissions to a word limit instead. It has also been argued that references should not be counted as part of the text. However, what should the word limits be?

It is always good to look at the history, so I decided to check how long previous NIME papers have been. I started by exporting the text from all of the PDF files with the pdftotext command-line utility:

for i in *.pdf; do name=`echo $i | cut -d'.' -f1`; pdftotext "$i" "${name}.txt"; done

Then I did a word count on these:

wc -w *.txt > wc.txt

And after a little bit of reformatting and sorting, this is what it looks like in a spreadsheet:
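The reformatting can also be done directly on the command line. Here is a minimal sketch that drops the final total line from wc.txt, sorts the papers by word count, and writes a comma-separated file (the name wordcounts.csv is just an example):

grep -v ' total$' wc.txt | sort -n | awk '{print $2 "," $1}' > wordcounts.csv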

And from this we can sort and make a graphical representation of the number of words:
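As a quick command-line alternative to plotting in a spreadsheet, something like this should work (a sketch, assuming the wordcounts.csv file from the snippet above and that gnuplot is installed):

gnuplot -e "set terminal png size 1000,400; set output 'wordcounts.png'; set datafile separator ','; set ylabel 'words'; plot 'wordcounts.csv' using 2 with lines title 'words per paper'"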

There are some outliers here. A couple of papers are (much) longer than the others, mainly because they contain long appendices. Some files have very low word counts because the PDF files are protected from editing, which prevents pdftotext from extracting the text. The majority of files, however, are in the range of 2500-5000 words.

The word count includes everything: headers/footers, titles, abstracts, acknowledgements, and references. These differ from paper to paper, but subtracting them, the main text of most papers could be said to be in the range of 2000-4500 words.

Improving the PDF files in the NIME archive

This blog post summarizes my experimentation with improving the quality of the PDF files in the proceedings of the annual International Conference on New Interfaces for Musical Expression (NIME).

Centralized archive

We have, over the last few years, worked hard on getting the NIME proceedings adequately archived. Previously, the files were scattered on each year's conference website. The first step was to create a central archive on nime.org. The list there is automagically generated from a collection of publicly available BibTeX files that serve as the master document of the proceedings archive. The fact that the metadata is openly available on GitHub makes it possible for people to fix errors in the database. Yes, there are errors here and there, because the files were made by "scraping" the PDF content. It is just not possible to do this manually for more than 1000 PDF files.

The archive points to all the PDF files, some media files (more are coming), and DOIs to archived PDFs in Zenodo. Together, this has turned out to be a stable and (we believe) future-proof solution.

PDF problems

However, as it has turned out, the PDF files in the archive have various issues. All of them work fine in regular PDF readers, but many of them have accessibility issues. There are (at least) three problems with this.

  1. Non-accessible PDFs do not work well for people using alternative readers. We need to strive for universal access at NIME, and this includes the archive.
  2. The files are not optimized for text mining, something that a growing number of people are interested in. Such an extensive collection of files is a great resource for understanding a community and how it has developed. This was something I tried myself in a NIME paper in which I analyzed the use of the word "gesture" in all NIME papers up until 2013.
  3. If machines have problems with the files, so do the Google crawlers and other robots looking at the content of the files. This, again, has implications for how the files can be read and indexed in various academic databases.

It is not strange that there are issues with the files. After all, there are a total of 1728 of them. They have been produced from 2001 until today, on a myriad of different OSes and with different software. During this time, the PDF standard itself has also evolved considerably. For that reason, we have found it necessary to do some optimization of the files.

Renaming

The first thing I did was to download the entire collection of PDFs. I quickly discovered that there were some inconsistencies in the file names. We did a large cleanup of the file names some years ago, so things were not entirely bad, but it was still necessary to settle on a single naming convention. I ended up renaming everything to a pattern like:

nime2001_paper001.pdf

This makes it possible to sort by year first, then by submission type (currently only paper and music, but there could be more), and finally by a three-digit unique number based on the submission number. Not all the numbers had leading zeros, so I added these for consistency. Since the conference year and ID are unique, it is easy to do a query-replace in the BibTeX database to correct the links there.
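The zero-padding itself can be scripted. Here is a minimal sketch, assuming files that already follow the nimeYYYY_typeN.pdf pattern but with unpadded numbers (the original file names varied from year to year, so this is only an illustration):

for i in nime*_*.pdf; do base=${i%.pdf}; num=${base##*[!0-9]}; stem=${base%"$num"}; new=$(printf '%s%03d.pdf' "$stem" "$((10#$num))"); [ "$i" = "$new" ] || mv -n "$i" "$new"; done

The trailing digits are extracted with parameter expansion and re-printed with printf to get the three-digit padding, while mv -n makes sure nothing is overwritten by accident.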

Acrobat testing

I usually don’t work much in Acrobat these days, but decided to start my testing there. I was able to get access to a copy of Acrobat XI on a university machine and started looking into different options. From the list of batch processes available, I found these to be particularly promising:

  • “Optimize scanned documents” (converting content into searchable text and reducing file size)
  • “Prepare for distribution” (removing hidden information and other oddities)
  • “Archive documents” (create PDF/A compliant documents)

I first tried to run a batch process using OCR. The aim here was to see if I could retrieve some text from files with images containing text. This did not work particularly well. It skipped most files and crashed on several. After the tenth crash, I gave up and moved on.

The “prepare for distribution” option worked better. It ran through the first 300 files or so with no problems and reduced the file sizes nicely. But then the problems started. For many of the files, it just crashed. And when I came to the 2009 files, they turned out to be protected from editing. So I gave up again.

Finally, I tried the archiving function. Here it popped up a dialogue box asking me to fill in title and authors for every single file. I agree that this would be nice to have, but I do not have time to do this manually for 1728 files.

All in all, my Acrobat exploration turned out to be quite unsuccessful. Therefore, I went back to my Ubuntu machine and decided to investigate what kind of command-line tools I could use to get the job done.

File integrity

After searching some forums about how to check whether PDF files are corrupted, I came across the useful qpdf application. Running it on the original NIME collection showed that the majority of the files had issues.

find . -type f -iname '*.pdf' \( -exec sh -c 'qpdf --check "{}" > /dev/null && echo "{}": OK' \; -o -exec echo "{}": FAILED \; \)
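To tally the results, the output of the check can be redirected to a file and counted (a small sketch, assuming the output is saved as check.txt):

grep -c ': OK$' check.txt
grep -c ': FAILED$' check.txt

The first command counts the files that pass the check, the second the files that fail it.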

The check showed that only 794 of the files were labeled as OK, while the others (934) failed. I looked at the failing files, trying to figure out what was wrong, but I have been unable to find any consistency among failing or passing files. Initially, I thought that there might be differences based on whether they were made in LaTeX or MS Word (or something else), the platform, and so on. But it turns out not to be that easy. This may also be because many of the files have been through several steps of updating along the way. For example, for many of the NIME editions, the paper chairs have added page numbers, watermarks, and so on.

Rather than trying to fix the myriad of different problems with the files, I hoped that a file compression step, combined with saving to a newer (and common) PDF version, could help fix the problem.

File compression

Several of the files were unnecessarily large. Some files were close to 100 MB, and too many were more than 2 MB. This should not be necessary for 4-6 page PDF files. Large files cause bandwidth issues on the server, which means extra cost for the organization and long download time for the user. Although we don’t think about it much, saving space also saves energy and helps reduce our carbon footprint on the planet.

To compress the PDF files, I turned to Ghostscript (gs) on the command line. I experimented with different settings, but found that "screen" and "ebook" rendered the images pixelated, even on screen. So I went for the "printer" setting, which, according to the Ghostscript manual, downsamples images to 300 DPI. This means that they should also print well. The script I used was this:

for i in *.pdf; do name=`echo $i | cut -d'.' -f1`; gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.6 -dPDFSETTINGS=/printer -dNOPAUSE -dQUIET -dBATCH -sOutputFile="${name}_printer.pdf" "$i"; done

The result was that the folder shrank from 3.8GB to 1.0GB, a quite lovely saving. The image quality also appears to be more or less preserved. However, this is only based on visual inspection of some of the files.

Re-running the file integrity check on all these new files showed that all 1728 of them now passed!

PDF/A

I have been working with PDF files for years but had not really read up on the details of the different versions. What turns out to be important for long-term preservation is that files comply with the PDF/A standard. The regular PDF format has different versions (1.4, 1.5, 1.6), but these are proprietary. PDF/A, on the other hand, is an ISO standard and appears to be what people use for archiving.

Unfortunately, it turns out that creating PDF/A files using Ghostscript is not entirely straightforward. So more exploration needs to be done there.
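For reference, the usual starting point seems to be Ghostscript's PDF/A support together with the PDFA_def.ps file that ships with it (and which needs to be edited to point to an ICC colour profile). A sketch for a single file, which still needs to be verified on the whole archive, would be something like:

gs -dPDFA=2 -dPDFACompatibilityPolicy=1 -sColorConversionStrategy=UseDeviceIndependentColor -sDEVICE=pdfwrite -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output_pdfa.pdf PDFA_def.ps input.pdf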

Metadata

Finally, one of the remaining challenges with the proceedings archive is getting it properly indexed by various search engines. For that, having PDF metadata is important. Again, I wish we had the capacity to do this properly for all 1728 files, but that is currently out of scope.

However, adding some general metadata is better than nothing, so I turned to ExifTool, which can be used to set the metadata of PDF files:

for i in *.pdf; do exiftool -Title="Proceedings of the International Conference on New Interfaces for Musical Expression" -Author="International Conference on New Interfaces for Musical Expression" -Subject="A Peer Reviewed article presented at the International Conference on New Interfaces for Musical Expression" "$i"; done
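Note that ExifTool, by default, keeps a backup of each modified file (with an _original suffix appended to the file name), so for a batch of this size it is worth either cleaning up afterwards or disabling the backups:

rm -- *.pdf_original

Alternatively, adding the -overwrite_original flag to the exiftool command above writes the metadata in place without creating backups.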

Conclusion

I still need to figure out the PDF/A issue (help wanted!), but the above recipe has helped in improving the quality of the PDF files considerably. It will save us bandwidth, improve accessibility, and, hopefully, also lead to better indexing of the files.