Below you will find pages that utilize the taxonomy term “jamoma”
November 22, 2016
From Basic Music Research to Medical Tool
The Research Council of Norway is evaluating research in the humanities these days, and all institutions were asked to submit cases documenting societal impact. Obviously, basic research is by definition not aiming at societal impact in the short run, and my research definitely falls into that category. Still, it is interesting to see that some of my basic research is, indeed, on the verge of making a societal impact in the sense that policy makers like to think about.
December 13, 2012
Performing with the Norwegian Noise Orchestra
Yesterday, I performed with the Norwegian Noise Orchestra at Betong in Oslo, at a concert organised by Dans for Voksne. The orchestra is an ad-hoc group of noisy improvisers, and I immediately felt at home. The performance lasted for 12 hours, from noon to midnight, and I performed for two hours in the afternoon.
For the performance I used my Soniperforma patch, based on the sonifyer technique and the Jamoma module I developed a couple of years ago (jmod.sonifyer~).
July 12, 2012
Paper #1 at SMC 2012: Evaluation of motiongrams
Today I presented the paper Evaluating how different video features influence the visual quality of resultant motiongrams at the Sound and Music Computing conference in Copenhagen.
Abstract
Motiongrams are visual representations of human motion, generated from regular video recordings. This paper evaluates how different video features may influence the generated motiongram: inversion, colour, filtering, background, lighting, clothing, video size and compression. It is argued that the proposed motiongram implementation is capable of visualising the main motion features even with quite drastic changes in all of the above mentioned variables.
June 25, 2012
Record videos of sonification
I got a question the other day about how it is possible to record a sonified video file based on my sonification module for Jamoma for Max. I wrote about my first experiments with the sonifyer module here, and also published a paper at this year’s ACHI conference about the technique.
It is quite straightforward to record a video file with the original video + audio using the jit.vcr object in Max.
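The patch itself lives in Max, but for readers outside Max the overall idea (writing the video frames and the sonified audio in parallel, at matching rates, and joining them afterwards) can be sketched in Python. This is only an illustrative sketch assuming OpenCV, numpy and soundfile; the file names and the silent placeholder audio are hypothetical, not part of the actual Jamoma setup.

```python
# Illustrative sketch only: the blog post does this inside Max with jit.vcr.
# Assumes opencv-python, numpy and soundfile are installed; file names are hypothetical.
import cv2
import numpy as np
import soundfile as sf

FPS, SR = 25, 44100
cap = cv2.VideoCapture("input.mov")              # source video (hypothetical file)
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("video_only.mp4", fourcc, FPS, (640, 480))

audio = []                                        # one chunk of sonified audio per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))
    out.write(frame)
    # placeholder for the per-frame sonification; here just silence of the right length
    audio.append(np.zeros(SR // FPS, dtype=np.float32))

cap.release()
out.release()
sf.write("audio_only.wav", np.concatenate(audio), SR)
# The two files can then be muxed into a single movie with an external tool.
```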
February 3, 2012
Sonification of motiongrams
A couple of days ago I presented the paper “Motion-sound Interaction Using Sonification based on Motiongrams” at the ACHI 2012 conference in Valencia, Spain. The paper is actually based on a Jamoma module that I developed more than a year ago, but due to other activities it took a while before I managed to write it up as a paper.
See below for the full paper and video examples.
The Paper
Download paper (PDF 2MB)
Abstract: The paper presents a method for sonification of human body motion based on motiongrams.
March 28, 2011
Concert: Victoria Johnson
Together with Victoria Johnson I have developed the piece Transformation, a piece where we are using video analysis to control sound selection and spatialisation. We have been developing the setup and piece during the last couple of years, and performed variations of the piece at MIC, the Opera house and at the music academy last year.
The piece will be performed again today, Monday 28 March 2011 at 19:00 at the Norwegian Academy of Music.
November 9, 2010
Sonification of motiongrams
I have made a new Jamoma module for sonification of motiongrams called jmod.sonifyer~. From a live video input, the program generates a motion image which is again transformed into a motiongram. This is then used as the source of the sound synthesis, and “read” as a spectrogram. The result is a sonification of the original motion, plus the visualisation in the motiongram.
See the demonstration video below:
The module is available from the Jamoma source repository, and will probably make it into an official release at some point.
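The module itself is implemented in Max/MSP/Jitter, but the core idea can be sketched in a few lines of Python/numpy: treat each column of a grayscale motiongram as a magnitude spectrum and resynthesise it with an inverse FFT. This is a minimal illustration of the principle, not the jmod.sonifyer~ implementation; the mapping of image rows to frequency bins and all parameter values are arbitrary choices for the sketch.

```python
# Minimal sketch of the idea behind jmod.sonifyer~, not the actual Max/Jitter implementation.
# A grayscale motiongram (rows = "frequency" bins, columns = time) is read as a spectrogram
# and resynthesised by inverse FFT with random phases.
import numpy as np

def sonify_motiongram(motiongram, sr=44100, frame_len=1024, hop=512):
    """motiongram: 2D array (height x width), values in [0, 1]."""
    n_bins = frame_len // 2 + 1
    height, width = motiongram.shape
    # Map image rows onto FFT bins (which end is "low frequency" is a design choice).
    bin_idx = np.linspace(0, height - 1, n_bins).astype(int)
    out = np.zeros(hop * width + frame_len)
    window = np.hanning(frame_len)
    for t in range(width):
        mag = motiongram[bin_idx, t]                       # one column = one spectral frame
        phase = np.random.uniform(0, 2 * np.pi, n_bins)    # an image carries no phase info
        frame = np.fft.irfft(mag * np.exp(1j * phase), n=frame_len)
        out[t * hop : t * hop + frame_len] += window * frame
    return out / (np.abs(out).max() + 1e-9)

# Example: sonify a synthetic 200x300 motiongram
audio = sonify_motiongram(np.random.rand(200, 300))
```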
July 3, 2010
GDIF recording and playback
Kristian Nymoen has updated the Jamoma modules for recording and playing back GDIF data in Max 5. The modules are based on the FTM library (beta 12; betas 13-15 do not work), and can be downloaded here.
We have also made available three use cases in the (soon to be expanded) fourMs database: simple mouse recording, sound saber and a short piano example. See the video below for a quick demonstration of how it works:
July 2, 2010
New motiongram features
Inspired by the work Static no. 12 by Daniel Crooks that I watched at the Sydney Biennale a couple of weeks ago, I have added the option of scanning a single column in the jmod.motiongram% module in Jamoma. Here is a video that shows how this works in practice:
About motiongrams: A motiongram is a way of displaying motion (e.g. human motion) in the time domain, somewhat similar to how we are used to working with time-representations of audio (e.
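For those who want to experiment outside Max, a minimal Python sketch of the general motiongram recipe (frame differencing, then collapsing each motion image to one column per video frame) could look like the following. This is only an illustration, not the jmod.motiongram% code: it glosses over filtering and colour handling, the file name is hypothetical, and whether the single-column variant scans the motion image or the raw frame is left as an option here.

```python
# Minimal motiongram sketch, assuming the common recipe: frame differencing,
# then collapsing each motion image to a single column (row-wise mean).
# Not the jmod.motiongram% implementation itself.
import cv2
import numpy as np

def motiongram(path, column=None):
    cap = cv2.VideoCapture(path)
    prev, columns = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            motion = np.abs(gray - prev)                 # motion image
            if column is None:
                columns.append(motion.mean(axis=1))      # one averaged column per frame
            else:
                columns.append(motion[:, column])        # "single column scanning" variant
        prev = gray
    cap.release()
    return np.stack(columns, axis=1)                     # height x n_frames image

mg = motiongram("dance.mov")                             # hypothetical file name
```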
November 27, 2009
Liquid Vapor
I performed in the open form piece Liquid Vapor by Else Olsen S. yesterday. The performance was special in many ways.
First, electronic music pioneer Pauline Oliveros was also performing in the piece. She played electric accordion and live electronics, a great combination.
Second, the performance took place in the magnificent foyer of the new Oslo opera house. Even if it is a large space, we managed to fill it up with all the different stations, equipment and instruments.
November 11, 2009
Jamoma 0.5 released
After extensive testing Jamoma 0.5 is finally released. Even though the version number is low, this release has been worked on for around 18 months, and is actively used in both teaching and performance.
What is Jamoma? A platform for interactive art-based research and performance. It consists of several parallel development efforts:

- Jamoma Modular - a structured approach to development and control of modules in the graphical media environment Max.
- Jamoma DSP - an object-oriented, reflective application programming interface for C++, with an emphasis on real-time signal processing.
October 7, 2009
Testing control of CataRT from video analysis
I am working with Victoria Johnson on a piece involving movement in physical and sonic space. Here is a screenshot of a patch where I use the analysis output from some of my Jamoma video modules to control the cursor navigating the 2D space in CataRT. The video camera is hanging from the ceiling, which makes it possible for Victoria to explore sounds “spread out” on the floor. For one, CataRT is an amazing tool (thanks to Diemo for sharing it!
June 26, 2009
STSM at KTH
I am currently in Stockholm carrying out a Short Term Scientific Mission (STSM) in the Speech, Music and Hearing group at KTH through the COST Action Sonic Interaction Design (SID). The main objective of the STSM is to work on preparations for some experiments on action-sound couplings that will be carried out in the SID project in the fall.
The first part of the SID experiments will involve studying how people move to sound, and the second part will look at how this knowledge can be used to create sound through movement.
April 27, 2009
Updated software
I was at the Musical Body conference at the University of London last week and presented my work on visualisation of music-related movements. For my PhD I developed the Musical Gestures Toolbox as a collection of components and modules for Max/MSP/Jitter, and most of this has been merged into Jamoma. However, lots of potential users are not familiar with Max, so over the last couple of years I have been developing standalone applications for some of the main tasks.
October 28, 2008
Three workshops in a row
The last few weeks have been quite busy here in Oslo. We opened the new lab just about a month ago, and since then I have organised several workshops, guest lectures and concerts both at UiO and at NMH. I was planning to post some longer descriptions of what has been going on, but decided to go for a summary instead.
First we had a workshop originally called the embedded systems workshop, which I have retroactively renamed the RaPMIC workshop (Rapid Prototyping of Music Instruments and Controllers).
October 23, 2008
Some thoughts on data signal processing in Max
We are having a Jamoma workshop at the fourMs lab this week. Most of the time is being spent on making Jamoma 0.5 stable, but we are also discussing some other issues. Throughout these discussions, particularly about how to handle multichannel audio in Max, I have realised that we should also start thinking about data signals as a type in their own right.
Jamoma is currently, as is Max, split into three different “types” of modules and processing: control, audio and video.
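One way to read this proposal is to treat control data as a regularly sampled signal, so that it can be filtered and processed with the same kind of tools we use for audio. The sketch below is only my illustration of that reading, not anything from Jamoma or Max itself; the event times, values and rates are made up.

```python
# Hedged illustration of treating control data as a "data signal":
# asynchronous (time, value) events are resampled to a fixed control rate
# so they can be filtered like any other signal. Not Jamoma's actual design.
import numpy as np

def to_data_signal(times, values, rate=100.0):
    """Resample irregular control events to a regular control-rate signal."""
    t = np.arange(times[0], times[-1], 1.0 / rate)
    return t, np.interp(t, times, values)

def smooth(signal, alpha=0.1):
    """One-pole lowpass, the kind of per-sample processing audio signals routinely get."""
    out = np.empty_like(signal)
    acc = signal[0]
    for i, x in enumerate(signal):
        acc += alpha * (x - acc)
        out[i] = acc
    return out

t, sig = to_data_signal([0.0, 0.3, 0.35, 1.2], [0.0, 1.0, 0.2, 0.8])
smoothed = smooth(sig)
```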
September 4, 2008
Papers at ICMC 2008
Last week I was in Belfast for the International Computer Music Conference (ICMC 2008). The conference was hosted by SARC, and it was great to finally be able to see (and hear!) the sonic lab which they have installed in their new building.
I was involved in two papers, the first one being a Jamoma-related paper called “Flexible Control of Composite Parameters in Max/MSP” (PDF) written by Tim Place, Trond Lossius, Nils Peters and myself.
June 16, 2008
NIME paper
A group of Jamoma-developers presented a paper suggesting an extension to OSC at this year’s NIME in Genova two weeks ago:
Reference:
Place, T., T. Lossius, A. R. Jensenius, N. Peters and P. Baltazar (2008): Proceedings of the 2008 International Conference on New Interfaces for Musical Expression, 5-7 June 2008, Genova.
Downloads:
- Full paper
- Poster

Abstract:
An approach for creating structured Open Sound Control (OSC) messages by separating the addressing of node values and node properties is suggested.
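The excerpt above does not reproduce the exact syntax proposed in the paper, so the colon-separated property addresses below are only an assumed illustration of the principle: the value of a node and the properties of the same node are addressed separately. The example uses the python-osc library; the module and property names are hypothetical.

```python
# Illustration of addressing a node's value and its properties separately.
# The exact separator used in the paper is not shown in the excerpt; the ":" form
# below is an assumption for illustration only. Requires python-osc.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)              # hypothetical host/port

# Setting the value of a node:
client.send_message("/degrade/bitdepth", 8)

# Setting properties of the same node, addressed separately from its value:
client.send_message("/degrade/bitdepth:ramp", 500)       # e.g. a ramp time in ms
client.send_message("/degrade/bitdepth:repetitions", 0)  # e.g. filtering of repeated values
```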
June 11, 2008
Motiongrams sync'ed to spectrograms
One of my reasons for developing motiongrams was to have a solution for visualising movement in a way that would be compatible with spectrograms. That way it would be possible to study how movement evolves over time, in relation to how the audio changes over time.
In my current implementation of motiongrams in Max/MSP/Jitter (and partially in EyesWeb), there has been no way to synchronise with a spectrogram. The problem was that the built-in spectrogram in Max/MSP was running much faster than the motiongram, and they were therefore out of sync from the start.
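One straightforward way to line the two up is to compute the spectrogram at the same rate as the motiongram, i.e. one analysis frame per video frame. The sketch below illustrates that idea in Python/numpy; it is not the Max/MSP/Jitter solution from the post, and the frame sizes and rates are arbitrary.

```python
# Sketch of one way to keep a motiongram and a spectrogram in sync:
# choose the STFT hop size so the spectrogram produces exactly one column
# per video frame. Not the actual Max/MSP/Jitter solution.
import numpy as np

def spectrogram_at_video_rate(audio, sr=44100, fps=25, frame_len=2048):
    hop = sr // fps                       # one analysis frame per video frame
    window = np.hanning(frame_len)
    n_frames = max(0, 1 + (len(audio) - frame_len) // hop)
    cols = []
    for i in range(n_frames):
        seg = audio[i * hop : i * hop + frame_len] * window
        cols.append(np.abs(np.fft.rfft(seg)))
    return np.stack(cols, axis=1)         # bins x n_frames, same column rate as the motiongram

spec = spectrogram_at_video_rate(np.random.randn(44100 * 5))   # 5 s of noise, 25 columns/s
```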
June 8, 2008
NIME Jamoma workshop
Some pictures from our Jamoma workshop after NIME:
Pascal showing the ramping and mapping magic in Jamoma.
Tim showing that Jamoma is soon to be working in Max 5.
February 12, 2008
Free Software
I am participating in the EyesWeb Week in Genoa this week. This morning Nicola Bernardini held a lecture about Free Software. I have heard him talk on this topic several times before, but as I now have some more experience participating in a Free Software project (i.e. Jamoma), I got more out of his ideas.
Some main points from the talk:
- Use Free Software!
- Freeware and shareware may have nothing to do with Free Software.
December 11, 2007
Coordinate systems
I am updating the GDIF messaging in the jmod.mouse module in Jamoma. Trond suggested using the OpenGL convention for ranges and coordinate systems, which should give something like this:
This means that values on the vertical axis would fall between [-1 1], while values on the horizontal axis would depend on the aspect ratio of the screen. For my screen (1280x800) this gives a range of [-1.6 1.6].
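In code, the convention could be expressed roughly like this (a minimal sketch; the function name and the default screen size are just for illustration):

```python
# Sketch of the OpenGL-style convention described above: the vertical axis maps to
# [-1, 1] and the horizontal axis is scaled by the aspect ratio of the screen.
def normalize_mouse(x_px, y_px, width=1280, height=800):
    aspect = width / height                    # 1280/800 = 1.6
    x = (2.0 * x_px / width - 1.0) * aspect    # horizontal range: [-aspect, aspect]
    y = 1.0 - 2.0 * y_px / height              # vertical range: [-1, 1], up is positive
    return x, y

print(normalize_mouse(0, 0))        # (-1.6, 1.0): top-left corner
print(normalize_mouse(1280, 800))   # (1.6, -1.0): bottom-right corner
```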
December 11, 2007
Mapping and conditioning
The concept of “mapping” is frequently used in the computer music community these days, and has also been used over the last couple of days during the Jamoma workshop. This reminded me of the distinction between mapping and conditioning, as frequently pointed out by Marcelo Wanderley:
- Conditioning: filtering, scaling and normalizing signals in a 1-to-1 mapping.
- Mapping: creating couplings between multidimensional data sets, e.g. M-to-N.

For clarity’s sake it is probably useful to distinguish between the two.
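A tiny sketch of the difference, with made-up weights and sensor ranges (this is only an illustration of the distinction, not code from any particular instrument):

```python
# Sketch of the conditioning/mapping distinction described above.
import numpy as np

def condition(x, in_min, in_max):
    """Conditioning: 1-to-1 scaling/normalizing of a single signal to [0, 1]."""
    return np.clip((x - in_min) / (in_max - in_min), 0.0, 1.0)

# Mapping: an M-to-N coupling between parameter sets, here 3 inputs -> 2 outputs
# through a weight matrix (the weights are arbitrary illustration values).
W = np.array([[0.7, 0.2, 0.1],
              [0.0, 0.5, 0.5]])

sensors = np.array([condition(312, 0, 1023),    # e.g. raw sensor readings
                    condition(0.4, -1, 1),
                    condition(77, 0, 127)])
synth_params = W @ sensors                      # 2 synthesis parameters
```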
December 10, 2007
Jamoma Workshop in Brussels
We have a Jamoma workshop in Brussels this week. Some of the major things we will be talking about, and working on, during these days are:

- FunctionLib: for handling various types of mathematical functions in a consistent manner.
- UnitLib: for converting between different types of units.
- Timing and structuring in modules and patches.

Most of the first day, though, has been spent on general troubleshooting and more informal discussions.
March 24, 2007
Jamoma Workshops
We rounded off the “Jamoma week” with a workshop for a small crowd of power Max users at Ars Longa in Paris today. This time I think we were more successful in explaining that Jamoma is not just a set of ready-made patches; it is really mostly about creating a systematic approach to Max patching, in addition to improving communication in and between Max and similar environments.
The week in Albi was to a great extent spent fixing bugs in the Jamoma 0.
March 21, 2007
GUI and control
The idea behind the current implementation of Jamoma is based on separating the GUI from the algorithm. Currently this is solved by having the algorithm in a separate file which is included in the module file containing the GUI. This is better than having everything in one patch, but I don’t think we can say that the GUI is sufficiently separated from the algorithm.
We have discussed this a bit over the last couple of days, and I have been trying to think about various ways of dealing with this problem.
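As a language-agnostic illustration of the principle (not of how Jamoma modules are actually built in Max), the algorithm can live in a component that knows nothing about its interface, with the GUI only reading and writing its parameters. All class and parameter names below are hypothetical.

```python
# Language-agnostic sketch of the principle discussed above: the algorithm lives
# in one component with no GUI knowledge, and interfaces only set and read its
# parameters. An illustration, not how Jamoma modules are built.
class GainAlgorithm:
    """The processing core: parameters in, samples processed, no GUI knowledge."""
    def __init__(self):
        self.gain = 1.0

    def process(self, samples):
        return [s * self.gain for s in samples]

class GainSlider:
    """One possible interface; others (OSC, scripts, presets) could drive the same core."""
    def __init__(self, algorithm):
        self.algorithm = algorithm

    def on_slider_moved(self, position):        # position in [0, 1]
        self.algorithm.gain = 2.0 * position    # map the widget range to the parameter range

core = GainAlgorithm()
GainSlider(core).on_slider_moved(0.25)
print(core.process([0.5, -0.5]))                # [0.25, -0.25]
```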
March 21, 2007
Technical Parameters
I have been thinking a lot about GUIs, namespaces and control parameters over the last couple of days. One of the big challenges we are facing is how to make technology more human-friendly. Often it seems that technology controls us more than we control the technology.
Creating a user interface of any kind is very similar to how we think about mapping in musical instruments. In essence, any type of control is one, or several, layers of mapping from one set of parameters to another.
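A rough sketch of what such layered mapping can look like, with purely hypothetical controls, weights and ranges: each layer is a function from one parameter set to another, and the interface as a whole is their composition.

```python
# Sketch of "control as layers of mapping": each layer maps one parameter set to
# another, and the user interface is their composition. All names and ranges are
# hypothetical illustrations.
def normalize(raw):                      # layer 1: device units -> [0, 1]
    return {k: v / 127.0 for k, v in raw.items()}

def couple(norm):                        # layer 2: controls -> perceptual parameters
    return {"brightness": 0.7 * norm["slider1"] + 0.3 * norm["slider2"],
            "loudness":   norm["slider2"]}

def to_synth(perc):                      # layer 3: perceptual -> synthesis parameters
    return {"cutoff_hz": 200 + 8000 * perc["brightness"],
            "amp":       perc["loudness"]}

controls = {"slider1": 96, "slider2": 64}
print(to_synth(couple(normalize(controls))))
```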