Prof. Dr. Godfried-Willem RAES
"The multitasker as an approach in musical composition"
The market for music composition software seems to be dominated by very traditional 'left to right' -in time- approaches: sequencers, music notation programs, sound editors. Such software mimics at most the most trivial part of musical composition: the writing of scores, be it in notes, in MIDI codes or in sound events, even though the most advanced of these packages allow for some forms of real-time interaction and/or programming. Now 'writing' scores is as a matter of fact merely a technical aspect of composition, and probably not even an essential one. It covers neither the 'conception' nor the flow of ideas, neither the architecture nor the syntax used. It does not cover the fundamental concept of rhetoric, so essential -in our opinion at least- to composition.
On the other hand, we know of many programs -mostly not on the 'market', i.e. not 'commercial', but rather part of the academic community- that enable composers to create works, mostly in the area of electroacoustic music, from the bottom up: starting with the creation of sound material and building further from there. (Csound may serve as a good example.) Although not in principle inherent to this atomistic concept, in musical practice it tends to lead to an 'evolutionary' concept of musical composition: music progressing through fast or slow gradual changes and metamorphoses of sound objects, in the sense of the old 'musique concrète'. As such, much of this music fails to have a sense of 'progression'. It is as if it could just about as well be played backwards. It lacks the characteristics of a 'narrative'. This music seems to continue the long tradition of the 'electronic music studios' that were once so characteristic of the more innovative music of the sixties and seventies. The tendency also extends well beyond 'academic' electronic music, as we often encounter it in popular 'noise' and 'industrial noise' music as well.
Rare on this scene are software programs that work radically the other way round: top to bottom, so to speak. Programming environments that let the material be dictated by the architecture of the concept. This has been an important aspect of the undertakings we have experimented with over the last fifteen years. Our <GMT> project may exemplify it.
Thus, in our compositions of these last years we have been developing the idea of describing and developing musical compositions in terms of interactive tasks and processes operating within a time-dependent and changing context. Simplifying matters quite a bit, one could see it as conceiving a piece as consisting of a number of organisms (with properties, norms, and adaptive possibilities) with a limited lifespan, living together within a 'world' which is governed by a basic set of properties and norms but which is at the same time shaped by these organisms and their mutual interactions and interdependencies. An organism can serve as a metaphor for what we -in traditional musical terminology- would call 'a voice' (rather than a part in a score, and not necessarily coinciding with what we call a voice in polyphony). The number of organisms in a world can change, as they can die as well as procreate. The notion 'world' here is in its turn of course also a metaphor. It has a past and a future, but as we go further in either direction in time, the amount of available and controllable information shrinks. The 'world' in this concept exists only within a timeframe with fuzzy borders. The size of this timeframe, then, is a function of the amount of attention the organisms populating this world pay to it. Attention is in this context a very operational notion: it can be measured from the amount of information an organism uses with regard to the world.
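To make the metaphor concrete: here is a minimal sketch in C of how such an 'organism' and its 'world' might be represented. All type and field names are illustrative assumptions, not the actual <GMT> code (which is written in PowerBASIC):

```c
/* Hedged sketch: one possible representation of an 'organism' (a voice)
   and the 'world' it inhabits. All names here are hypothetical.         */

#define MAX_ORGANISMS 64

typedef struct Organism {
    double birth;          /* moment the organism came into being (seconds) */
    double lifespan;       /* expected duration before it 'dies' (seconds)  */
    double attention;      /* 0.0..1.0: how much world information it uses  */
    double properties[8];  /* arbitrary musical properties and norms        */
    int    alive;          /* nonzero while the organism participates       */
} Organism;

typedef struct World {
    double   now;                      /* current world time (seconds)      */
    double   norms[8];                 /* global properties, shaped in turn
                                          by the organisms themselves       */
    Organism population[MAX_ORGANISMS];
    int      count;                    /* changes: organisms die as well
                                          as procreate                      */
} World;

/* The world's timeframe has fuzzy borders: how far an organism 'sees'
   into past and future depends on the attention it pays to the world.     */
static double visible_timeframe(const Organism *o, double max_window)
{
    return o->attention * max_window;  /* more attention => wider window    */
}
```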
The most trivial way to implement such a concept in a working system would of course be to have as many autonomous organisms, each with its own processing, as there are tasks to be performed. Parallel processing seemed to be the way to go. Of course, provisions have to be made for proper interaction: without interaction one could hardly have the organisms 'live' in something like a 'world'. The most obvious implementation to make this possible is networking of one kind or another. Quite a few of our larger-scale real-time performance pieces ('A Book of Moves' as well as the 'Songbook') made very literal use of this concept: tasks went to dedicated processing systems of a size just fit for the complexity and speed required of them. Communication took place through a variety of different 'busses': some just using parallel ports, others RS232 serial, and again others making use of the not-too-beloved MIDI ports and channels. We never used anything really more sophisticated, such as 10Base2 networking, TCP/IP or USB, largely for lack of low-level programming documentation.
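The text does not give the message format used on these busses; purely as an assumption for illustration, a common envelope that the various bus drivers could carry over parallel, RS232 or MIDI links might look like this:

```c
/* Hedged sketch of inter-task messaging over heterogeneous busses.
   The actual wire formats differed per bus; this common envelope is
   an assumption for illustration only.                                */

#include <stdint.h>

typedef struct Message {
    uint8_t source;    /* id of the sending task/processor             */
    uint8_t dest;      /* id of the addressed task/processor           */
    uint8_t token;     /* what the message means (note-on, tempo, ...) */
    uint8_t length;    /* number of valid bytes in data[]              */
    uint8_t data[16];  /* payload, e.g. pitch, velocity, timestamp     */
} Message;

/* Each bus driver only has to move this envelope; the tasks remain
   independent of whether it travelled over a parallel port, an
   RS232 line or a MIDI channel.                                       */
typedef int (*BusSend)(const Message *msg);
typedef int (*BusReceive)(Message *msg);
```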
Although an extended use of hardware interrupts seemed at first to be the most appropriate way to get it all working together, in practice -and in DOS times- we never got this to work decently and reliably. The intricacies of interrupt handling on a platform populated with a lot of different processing systems soon became so complex and technical that they were unworkable for an individual researcher and developer such as we happen to be. So -with pain in the heart- we gave up the idea of hardware interrupts and started working out our own software interrupt system where needed and, wherever we could avoid even this, much rather worked with a system of 'token' or 'message' passing. After some years we changed the overall architecture of this approach again. In the latest projects we have been working on, all tasks get a 'history' of their own: an area of common and shared memory in which a task leaves data ('traces') of its interaction with the 'world'. (Composition examples: 'Mach'96', 'Winter'97', 'Counting Down from minus 747', 'Boxing'.) This history-memory is timeframed. Taking inspiration from recent findings in scientific perception theory, we designed for each task -where needed for the performance of its functions at least- a maximum of three timeframed memories: a bottom one covering a timespan of some 30 ms, a medium one centering around 300 ms, and a large one with an upper limit of about 3 seconds. All data acquisition -after initial processing within the task- results in storage in these timeframed buffers. The medium buffer contains nothing but the integrated results from the lower level, just as the upper timeframe buffer contains only 'summaries' from the level just below it. The size of these timeframe buffers is made to be adaptive: it changes as a function of the variety of information they contain. If a task gets 'sleepy' because it isn't in high demand in its world at a given moment, its timeframe buffers shrink to a minimum size: not their 'time' window, but rather the amount of data their timeframe memory holds (expressed in bytes). In practice we rarely had to implement timeframe buffers containing more than about 256 bytes or words of data. Given today's memory sizes, this cannot be objectionable anymore. The most important reason for keeping these buffers as small as practical is the processing power required for the many operations to be performed on them in order to retrieve meaningful information: different filtering algorithms, Fourier transforms for tempo recognition, pitch discrimination, gesture recognition from human interfaces, etc.
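As an illustration of this three-level memory, a sketch in C: the ~30 ms, ~300 ms and ~3 s buffers each hold only summaries of the level below. The buffer layout, the plain-average 'summary' and all names are assumptions, not the actual <GMT> data structures:

```c
/* Hedged sketch of the three timeframed memories (~30 ms, ~300 ms, ~3 s).
   Each level stores only 'summaries' of the level below it; in practice
   sizes rarely exceeded about 256 bytes.                                  */

#include <string.h>

#define MAX_FRAME_BYTES 256

typedef struct TimeFrame {
    double        window;              /* timespan covered, in seconds     */
    unsigned char data[MAX_FRAME_BYTES];
    int           used;                /* adaptive size: shrinks (in bytes,
                                          not in time) when the task gets
                                          'sleepy'                         */
} TimeFrame;

typedef struct TaskMemory {
    TimeFrame low;   /* ~0.03 s : initially processed raw input            */
    TimeFrame mid;   /* ~0.3  s : integrated results of 'low' only         */
    TimeFrame high;  /* ~3    s : summaries of 'mid' only                  */
} TaskMemory;

/* A plain average stands in here for whatever integration the task
   really needs (an assumption for illustration).                          */
static unsigned char summarize(const TimeFrame *tf)
{
    long sum = 0;
    for (int i = 0; i < tf->used; i++) sum += tf->data[i];
    return (unsigned char)(tf->used ? sum / tf->used : 0);
}

static void push(TimeFrame *tf, unsigned char value)
{
    if (tf->used == MAX_FRAME_BYTES) {            /* drop the oldest entry */
        memmove(tf->data, tf->data + 1, MAX_FRAME_BYTES - 1);
        tf->used--;
    }
    tf->data[tf->used++] = value;
}

/* On every acquisition: store at the lowest level, then cascade
   summaries upward at the appropriate, slower rates.                      */
static void acquire(TaskMemory *m, unsigned char sample,
                    int tick_30ms, int tick_300ms)
{
    push(&m->low, sample);
    if (tick_30ms)  push(&m->mid,  summarize(&m->low));
    if (tick_300ms) push(&m->high, summarize(&m->mid));
}
```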
The software concept of choice along these lines became, for us, the adaptive multitasker: a program that distributes processing time over different tasks according to their own needs. The 'master' over the whole process is itself again conceived as a task, and there can be many 'masters'. The 'secret' to making it all work lies in the granularity of the code. Tasks should perform really fast and thus cannot contain too much time-demanding processing. Complexity in this type of software can only be achieved through the structure by which tasks delegate and divide complex calculations over as many other and smaller tasks as needed, until each of them demands only a 'grain' of processing time. Now the critical reader might object that we didn't do more than exactly the kind of thing programs such as Windows or some other operating systems are doing so wonderfully. That is actually quite correct, and in fact in the most recent versions of our software we make use of the complete Win32 API and its rich set of high-resolution multimedia timers. The point, however, is that it seems essential to conceive your own multitasking system nowadays if you want to get your machines back and be in control of them. There is no trivial way, at least in my own experience, to get precise control over task priorities in your computers if you are using plain Windows as an operating system, since your own multitasker would become merely a slave of Gates' graphics-oriented Windows, and all the priorities you carefully try to set up get sidetracked by the inflexible and slow priority system Windows has built in. Fortunately, by bypassing the standard Windows message loop and writing a new DLL library, it became possible to make our multitasker completely preemptive.
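The scheduler itself is not shown in the text; purely as an illustration, a minimal cooperative pass over granular tasks, with a 'master' that is itself just another task, might look like this in C (the priority scheme and all names are assumptions):

```c
/* Hedged sketch of an adaptive multitasker: processing time is handed
   out to small 'grains' according to each task's own need. The real
   <GMT> multitasker is written in PowerBASIC against the Win32 API;
   this C version is an illustration only.                              */

#define MAX_TASKS 32

typedef void (*TaskFunc)(void *state);

typedef struct Task {
    TaskFunc grain;     /* must return fast: one 'grain' of processing */
    void    *state;     /* private state, e.g. its timeframed memory   */
    int      priority;  /* adaptive: raised and lowered at run time    */
    int      alive;
} Task;

static Task tasks[MAX_TASKS];
static int  ntasks = 0;

/* The 'master' is itself just a task: here it merely re-balances
   priorities, but there can be many masters doing different things.   */
static void master_grain(void *state)
{
    (void)state;
    for (int i = 0; i < ntasks; i++)
        if (tasks[i].alive && tasks[i].priority < 1)
            tasks[i].priority = 1;
}

/* One scheduler pass: each live task is called as often as its
   priority demands. Complexity comes from tasks delegating work to
   smaller tasks, never from long-running grains.                      */
static void schedule_once(void)
{
    for (int i = 0; i < ntasks; i++)
        if (tasks[i].alive)
            for (int p = 0; p < tasks[i].priority; p++)
                tasks[i].grain(tasks[i].state);
}
```

Under Win32, such a pass would typically be driven from a high-resolution multimedia timer callback (for instance timeSetEvent) rather than from a free-running loop; whether <GMT> does exactly this is an assumption here.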
Algorithmic composition becomes, from this perspective, something quite different from what it has too often been in the past. For now it becomes possible to use dynamic gestural description of global form at the very same time as harmonic or melodic syntax, in combination with real-time data acquisition. In order to enable our own students (and others so inclined) to also work and experiment along these lines, we recently finished a very elementary software library (a sort of Basic-based music description language) containing many essential functions to handle harmony-related tasks within timeframes of 10 ms to 3 seconds. The algorithms used internally make use of fuzzy-logic theory and to some extent are based on psychoacoustic research performed by other scientists such as Leon Van Noorden and Marc Leman. The point where we give up, as yet -not as a matter of principle but rather dictated by technological as well as learning-time limitations- is that of sound synthesis: the sound material itself, requiring fast-changing timeframed tasks with considerable memory, in the order of 100 ms. With the proliferation of DSP systems and processing power on the PC platform, this might however change soon. So far, all sound material we use in this approach either stems from different kinds of MIDI gear or from hybrid hardware, such as player pianos and electroacoustic oscillator systems, or, on the other hand, makes use of probably the oldest (and hardest to control) sound sources: musicians and their instruments. In the latter case the result of the multitasking approach is either a score on paper, as for my string quintet piece 'Boxing', or a real-time score on one or more monitors, as was the case in my pieces 'Winter'97' and 'HydroCePhallus'. Only in the most recent versions of <GMT> did real-time sound processing and I/O become possible, provided we use at least a Pentium MMX processor.
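The library itself is not reproduced here; purely to illustrate the fuzzy-logic idea, a membership function grading intervals for 'consonance', together with a min-max rule for ranking candidate pitches, might look as follows. The grades and names are illustrative assumptions, not the actual library:

```c
/* Hedged sketch of a fuzzy-logic harmony helper: instead of a hard
   consonant/dissonant decision, each interval class gets a membership
   grade between 0.0 and 1.0. The grades below are assumptions.        */

static double consonance(int semitones)
{
    /* fuzzy membership of each interval class in 'consonant' */
    static const double grade[12] = {
        1.00, /* unison    */  0.10, /* minor 2nd */
        0.20, /* major 2nd */  0.65, /* minor 3rd */
        0.70, /* major 3rd */  0.80, /* fourth    */
        0.25, /* tritone   */  0.90, /* fifth     */
        0.70, /* minor 6th */  0.65, /* major 6th */
        0.30, /* minor 7th */  0.15  /* major 7th */
    };
    int ic = semitones % 12;
    if (ic < 0) ic += 12;
    return grade[ic];
}

/* A harmony task could rank a candidate pitch against the pitches
   already sounding: its fit is its weakest membership (a min-max
   rule, one common fuzzy aggregation).                                */
static double chord_fit(int candidate, const int *sounding, int n)
{
    double worst = 1.0;
    for (int i = 0; i < n; i++) {
        double m = consonance(candidate - sounding[i]);
        if (m < worst) worst = m;
    }
    return worst;
}
```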
We can only hope that the tendency in recent technology to provide the user with more and more processing power, to the detriment of the essential documentation needed to exploit it fully, will one day reverse...
Dr. Godfried-Willem RAES
Our general-purpose multitasker <GMT> (requires the PowerBASIC Windows compiler) is an open source software project. Feel free to join.
First published on the web July 25th, 1997. Last updated 2008-08-07 by Godfried-Willem Raes.