“But how do we get musicians more involved with systems like this?” asked an audience member at Sam Aaron’s talk on Overtone in Cambridge today. Sam had discussed several ongoing issues surrounding computer music, including the search for sufficiently abstract programming languages for sound synthesis, as well as concerns surrounding digital interface design. “After all, this kind of system is ultimately for making music. So, how would Overtone look to musicians?”
As a musician who has used Sam’s system, I would say: it looks off-putting. Lines of text on a black screen immediately scream ‘SCARY’ to me. Okay, it’s made a little more appealing by the presence of some friendly-looking coloured parentheses, but the very idea of using linguistic commands to control sounds is somewhat alien to me. However, I understand what the system is capable of doing, so I’m happy to at least try to learn how to use it. If we ultimately want to be able to plug in more embodied controllers to the system, we have to understand how it works; we need to know what parameters are there, and which ones we want to control, in order to envisage the kind of real-world ‘thing’ that we might want to manipulate to control sound in interesting ways.
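To give a flavour of what I mean by ‘knowing what parameters are there’: in Overtone, an instrument’s parameters are simply the named arguments of its definition, and you can reach in and change them while the sound is playing. Here is a minimal sketch, assuming a working Overtone/SuperCollider setup – the instrument ‘wobble’ and its parameter values are invented for illustration:

    (use 'overtone.live)

    ;; An instrument with two named, controllable parameters:
    ;; pitch ('freq') and vibrato depth ('depth').
    (definst wobble [freq 220 depth 10]
      (sin-osc (+ freq (* depth (sin-osc 4)))))

    ;; Start it, then nudge its parameters while it plays.
    (def node (wobble))
    (ctl node :freq 330)   ; raise the pitch on the fly
    (ctl node :depth 40)   ; deepen the vibrato
    (kill node)            ; and silence it again

Those keywords, :freq and :depth, are exactly the kind of handles that a real-world controller could eventually grab onto.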
I’m reminded of a conversation I had with the composer Tom Mays about the Karlax, a new digital controller that he’s currently using in his compositions. We discussed how, with the Karlax, anything is possible, so that the real virtuosity, if that’s even a useful concept in this context, consists not in manipulating the instrument as such but rather in designing the mappings between the interface and the sound-generation engine. In other words, the instrument – interface plus sound-generation engine – has to be ‘composed’, and that’s the hard bit. Pianos are given to us, ready to play. We don’t have to invent the piano every time we go to play ‘Twinkle Twinkle’. We simply don’t have that luxury with electronic music. But that’s the great challenge, too; that’s why it’s so exciting. Since nothing is pre-given, we have ultimate freedom to take advantage of the computational resources in whatever way we want – in ways that go far beyond what traditional instruments allow us to do.
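To make ‘composing the instrument’ a little more concrete, here is a rough sketch, continuing from the Overtone example above, of what a single mapping decision might look like: routing a MIDI mod-wheel onto a filter cutoff. This is purely illustrative – the event keys follow Overtone’s MIDI event maps as I understand them, the controller number and scaling are invented, and a Karlax would of course come with its own mapping environment.

    ;; One 'compositional' decision: the mod wheel (controller 1)
    ;; drives the filter cutoff of a running synth.
    (definst buzz [freq 110 cutoff 800]
      (lpf (saw freq) cutoff))

    (def node (buzz))

    ;; Scale the controller's 0-127 range onto roughly 200-5000 Hz.
    (on-event [:midi :control-change]
              (fn [{:keys [data1 data2]}]
                (when (= data1 1)
                  (ctl node :cutoff (+ 200 (* data2 37.8)))))
              ::mod-wheel->cutoff)

Point the same wheel at :freq instead, or change the scaling, and you have composed a subtly different instrument.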
But if we want to compose instruments, we need to understand the processes. We need to ‘get inside’ the system and work out what the parameters of the sound-generation engine are; we need to figure out which ones are interesting for our current purposes – having, of course, worked out what those purposes might be; we need to have some conception about how the overall architecture fits together. In other words, to make computer music, we can’t just think like ‘musicians’ in the traditional sense, who deal with pre-made instruments. We have to think like programmers. If we don’t, we’ll throw our hands up in existential despair; freedom turns into paralysis, and we either won’t make any music at all or we’ll give up electronic music as a bad business and retreat, grumbling, to our pianos.
When Sam responded thus to his interlocutor today – “I first thought that programmers need to learn to think more like musicians, but the more I’ve seen, the more I think that musicians need to think more like programmers” – a ripple of nervous laughter spread around the lecture theatre. It was as though everyone present – mostly computer scientists, but some musicians too – was uneasy with this notion. “How can we abase musicians thus? Musicians are the cultural heroes of our society, divinely inspired; to suggest that they need to become mere technicians is treachery! Seize him!” Okay, so nobody actually said that, but I suspect that sort of intuition is what underpinned the reaction. It is of course a wonderful thing that musicians are so valued in society, but that doesn’t mean that they should be regarded as untouchable. The converse – that computer programmers could do with learning more from musicians – was seen as uncontroversial; if that had been Sam’s conclusion, the assembled audience would have nodded sagely. Nobody would have laughed. It is still assumed by many that the sort of knowledge that musicians have is somehow more culturally valuable than that possessed by programmers. There is surely no warrant for this view any more. Musicians and programmers have much to learn from each other.
It’s not just musicians, though. I think we could all do with thinking more like programmers. After all, programmers are sophisticated problem-solvers. What do I want to do? What are the parameters I need to think about? How can I design a system involving those parameters that will achieve my ends? Those sound like reasonable questions to ask, whether you’re designing a word-processing programme, ‘composing’ a new digital instrument or maybe even writing a symphony.
Interesting musings, Jenny,
Western music has a bit of a tradition of disembodied abstract notations. The first thing I thought of when you referred to lines of text on a screen looking off-putting was common-practice notation -- something that I've always found rather off-putting and largely unrelated to my auditory experience of music.
With regard to the idea of virtuosic mappings for the Karlax: while I agree that there is something very musical and compositionally important about connecting the physical interface with the music/sound-generating algorithms in these "separable interface -> sound-generator" systems, I would question whether mapping design somehow supplants performance virtuosity. I think there is always scope for virtuosity in performance -- perhaps in spite of the need for "virtuosic" mappings. For many, Michel Waisvisz and The Hands are the quintessential example of virtuosic performance with a new computer music instrument. Michel put a lot into the instrument design, and also practised and learnt to play his instrument with virtuosity. It remains uncommon to see virtuosic performers of new electronic instruments. I don't think one can fully hide behind the "virtuosity of the mappings" argument.
Do we need to think like programmers? I'm not so sure. Music is music, not just sonified mathematics. As a programmer I would suggest that we need to think like instrument builders, and further, of instrument building as an integral part of the practice of composition (or performance/improvisation). Programming is just one tool among many to get that job done. Perhaps better to ask "where do I want to go?" and "is programming a good way to get me there?"
Musicians who find the idea of becoming technicians distasteful would do well to read Partch's "Life in the houses of technitution." One could easily see live coding as another step along the path of "Rationalizing Culture," as Georgina Born so nicely put it.
And as for the benefits of problem-solving, I love this quote from Chris Mann: "experimental music is not a problem-solving environment but a problem-seeking one."