Wednesday, 2 November 2011

Musicians and programmers: handbags at dawn?

“But how do we get musicians more involved with systems like this?” asked an audience member at Sam Aaron’s talk on Overtone in Cambridge today.  Sam had discussed several ongoing issues surrounding computer music, including the search for sufficiently abstract programming languages for sound synthesis as well as concerns surrounding digital interface design.  “After all, this kind of system is ultimately for making music.  So, how would Overtone look to musicians?”

As a musician who has used Sam’s system, I would say: it looks off-putting.  Lines of text on a black screen immediately scream ‘SCARY’ to me.  Okay, it’s made a little more appealing by the presence of some friendly-looking coloured parentheses, but the very idea of using linguistic commands to control sounds is somewhat alien to me.  However, I understand what the system is capable of doing, so I’m happy to at least try to learn how to use it.  If we ultimately want to be able to plug in more embodied controllers to the system, we have to understand how it works; we need to know what parameters are there, and which ones we want to control, in order to envisage the kind of real-world ‘thing’ that we might want to manipulate to control sound in interesting ways.

I’m reminded of a conversation I had with the composer Tom Mays about the Karlax, a new digital controller that he’s currently using in his compositions.  We discussed how, with the Karlax, anything is possible, so that the real virtuosity, if that’s even a useful concept in this context, consists not in manipulating the instrument as such but rather in designating the mappings between the interface and the sound-generation engine.  In other words, the instrument – interface plus sound-generation engine – has to be ‘composed’, and that’s the hard bit.  Pianos are given to us, ready to play.  We don’t have to invent the piano every time we go to play ‘Twinkle Twinkle’.  We simply don’t have that luxury with electronic music.  But that’s the great challenge, too; that’s why it’s so exciting.  Since nothing is pre-given, we have ultimate freedom to take advantage of the computational resources in whatever way we want – in ways that go far beyond what traditional instruments allow us to do.

But if we want to compose instruments, we need to understand the processes.  We need to ‘get inside’ the system and work out what the parameters of the sound-generation engine are; we need to figure out which ones are interesting for our current purposes – having, of course, worked out what those purposes might be; we need to have some conception about how the overall architecture fits together.  In other words, to make computer music, we can’t just think like ‘musicians’ in the traditional sense, who deal with pre-made instruments.  We have to think like programmers.  If we don’t, we’ll throw our hands up in existential despair; freedom turns into paralysis, and we either won’t make any music at all or we’ll give up electronic music as a bad business and retreat, grumbling, to our pianos.

When Sam responded thus to his interlocutor today – “I first thought that programmers need to learn to think more like musicians, but the more I’ve seen, the more I think that musicians need to think more like programmers” – a ripple of nervous laughter spread around the lecture theatre.  It was as though everyone present – mostly computer scientists, but some musicians too – was uneasy with this notion.  “How can we abase musicians thus?  Musicians are the cultural heroes of our society, divinely inspired; to suggest that they need to become mere technicians is treachery!  Seize him!”  Okay, so nobody actually said that, but I suspect that sort of intuition is what underpinned the reaction.  It is of course a wonderful thing that musicians are so valued in society, but that doesn’t mean that they should be regarded as untouchable.  The converse – that computer programmers could do with learning more from musicians – was seen as uncontroversial; if that had been Sam’s conclusion, the assembled audience would have nodded sagely.  Nobody would have laughed.  It is still assumed by many that the sort of knowledge that musicians have is somehow more culturally valuable than that possessed by programmers. There is surely no warrant for this view any more.  Musicians and programmers have much to learn from each other.

It’s not just musicians, though.  I think we could all do with thinking more like programmers.  After all, programmers are sophisticated problem-solvers.  What do I want to do?  What are the parameters I need to think about?  How can I design a system involving those parameters that will achieve my ends?  Those sound like reasonable questions to ask, whether you’re designing a word-processing program, ‘composing’ a new digital instrument or maybe even writing a symphony.

Wednesday, 26 October 2011

Laptop Orchestra: first jam

After a frenzied morning of coding last Thursday, Sam came up with a simple solution for programming beats on the fly.  Samples are loaded at the start of the session.  The tempo is also globally determined.  Each sample is associated with a line of text.  So, if we want a kick drum on every crotchet beat, we enter [[X]] into the line corresponding to that sample, creating a loop of one kick drum.  If we want a kick drum on only the first and third beats of the bar, say, we enter [[X _ X _]], which creates a longer loop, and so on.

Subdivisions of the bar can be easily created.  So for instance, [[X X] [_ X _]] yields two quaver hits on the first beat and a crotchet hit on the third.  [[X X X] [_ X _]] would yield triplet hits on the first beat, and so on.  I found this feature immediately appealing, having been accustomed to grappling with grid-based sequencers where changing the subdivisions of the bar required changes in the global properties of the visual interface.

Another really nice feature that Sam has built in is the facility to vary the volumes of the individual samples in an elegant way.  The samples automatically trigger at volume 9 when X is used as the input, but X can be replaced by any integer between 1 and 9 to vary the volume of the individual samples.
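To make the notation concrete, here is a sketch of how such a pattern string might be parsed into trigger times and volumes.  It is in Python rather than the Clojure that Overtone uses, and the semantics are my guess – each innermost bracketed group treated as one crotchet beat, subdivided evenly by its tokens – not necessarily how Sam’s implementation actually works:

```python
import re

def parse_bar(pattern):
    """Parse a pattern string such as "[[X X] [_ X _]]" into (time, volume)
    pairs, where time is measured in crotchets from the start of the loop.

    Guessed semantics: each innermost [..] group is one crotchet beat,
    subdivided evenly by its tokens; 'X' triggers at the default volume 9,
    a digit 1-9 triggers at that volume, and '_' is a rest.
    """
    hits = []
    for beat, group in enumerate(re.findall(r"\[([^\[\]]+)\]", pattern)):
        tokens = group.split()
        for i, tok in enumerate(tokens):
            if tok == "_":
                continue                      # rest: no trigger
            volume = 9 if tok == "X" else int(tok)
            hits.append((beat + i / len(tokens), volume))
    return hits

parse_bar("[[X]]")            # a single full-volume hit at the start of the beat
parse_bar("[[X X] [_ X _]]")  # quaver hits on beat one, one triplet hit in beat two
```

Multiplying each time by the crotchet duration implied by the globally determined tempo would then give absolute trigger times for a scheduler.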

Fuelled by Diet Coke and chocolate Hob Nobs, we decided to have a go.  Sam’s machine was generating the audio, with my machine sending network messages to his via OSC.  It worked with surprisingly little heartache, but we quickly identified a few areas for improvement.
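For the curious, an OSC message is just a small binary packet.  The sketch below hand-encodes the OSC 1.0 wire format in Python, with a hypothetical address and pattern argument invented for illustration; in practice one would more likely reach for an existing OSC library than roll one’s own:

```python
import struct

def osc_string(s):
    """Encode a string as OSC requires: UTF-8, null-terminated, padded to a
    multiple of 4 bytes."""
    b = s.encode("utf-8")
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *args):
    """Build a minimal OSC 1.0 message supporting int32 and string arguments.

    The address and argument used below are hypothetical, purely to
    illustrate the packet a pattern update might travel in.
    """
    typetags = ","
    payload = b""
    for arg in args:
        if isinstance(arg, int):
            typetags += "i"
            payload += struct.pack(">i", arg)    # big-endian int32
        elif isinstance(arg, str):
            typetags += "s"
            payload += osc_string(arg)
        else:
            raise TypeError("only int32 and string arguments in this sketch")
    return osc_string(address) + osc_string(typetags) + payload

# e.g. a (hypothetical) pattern update for the kick-drum line:
msg = osc_message("/pattern/kick", "[[X _ X _]]")
```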

Firstly, once things got more than a little complex, it was hard to tell who was in control of what.  If Sam modified the kick drum pattern, say, it wouldn’t update on my screen, so I would have no visual clues as to who had been the last to modify the patterns, or for that matter as to what pattern was currently in operation.  Some kind of feedback mechanism is, we felt, necessary, so that we can feel that we have some sort of agency over the sounds that we’re respectively producing.  This might involve some system whereby we can tell:

1.    What messages have been sent?
2.    What is the current state of play?
3.    What samples are actually in use and what patterns are associated with each?

We also thought that it might be useful to have a system whereby we can name patterns and recall them with a label so that they can be recycled and used by other players.
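That could start out as nothing more elaborate than a shared dictionary mapping labels to pattern strings; the label and pattern below are invented for illustration:

```python
# A minimal shared pattern registry: label -> pattern string.
# The label and pattern used here are hypothetical examples.
patterns = {}

def save_pattern(name, pattern):
    """Store a pattern under a memorable label."""
    patterns[name] = pattern

def recall_pattern(name):
    """Fetch a previously saved pattern, e.g. one saved by another player."""
    return patterns[name]

save_pattern("kick-1-3", "[[X _ X _]]")
recall_pattern("kick-1-3")   # -> "[[X _ X _]]"
```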

We discussed using the strengths of the computer itself in a more systematic way, through incorporating a randomness function into the pattern selection or in some elements thereof.  The whole point of a computer ensemble is to do something sonically that can’t be done in traditional acoustic environments, which involves taking advantage of the technology in more authentic ways, we felt. 
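One cheap way into that idea: a function that randomly mutates a flat pattern string, toggling each position between hit and rest with some probability.  This is a sketch of the concept, not a feature of the current system:

```python
import random

def vary(pattern, p=0.2):
    """Return a copy of a flat pattern string in which each token has
    probability p of being toggled between a hit ('X') and a rest ('_')."""
    tokens = pattern.strip("[] ").split()
    flipped = [("_" if tok != "_" else "X") if random.random() < p else tok
               for tok in tokens]
    return "[[" + " ".join(flipped) + "]]"

vary("[[X _ X _]]")   # a randomly perturbed copy, different on each call
```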

Our priorities for the next session are as follows:

1.    Incorporate a log history of what has transpired in the session – this might be handy for recording sessions and analysing them afterwards, which of course isn’t as immediately possible in the usual jam session;
2.    Think about having a window with the current status displayed, or some mechanism by which a user can query the current status;
3.    Incorporate a way of piping channels through effects in SuperCollider, as well as master controls for effects;
4.    Incorporate panning and volume control, both at local and global levels; and
5.    Think about incorporating the Monome as an input device.

Thursday, 13 October 2011

Aesthetics and laptop orchestras

While out for my daily (well, if I’m honest, it’s a bit less frequent than daily) run yesterday, in the afternoon sunshine, I forgot to bring a water bottle.  I began to flag a bit before the end; my legs got heavy and my body started to entertain mutinous thoughts of stopping.   So I took remedial action.  I took my iPod out of its armband and pressed Play on Kele Okereke’s solo album, The Boxer, figuring that the nasty synths and the punchy kick drums would get me over the finish line.  As, indeed, they did.

This got me thinking about aesthetics.  I’m currently supervising an undergraduate course on the aesthetics of music, so I’m grappling with a lot of the classic texts in the area, like Eduard Hanslick’s ‘On the Musically Beautiful’.  Hanslick, like a lot of theorists after him, thinks about musical beauty – what we perceive as appealing or valuable in a piece of music – in cerebral terms.  The real appreciation of music is not a matter of brute sensory appeal, goes the story, but an altogether more contemplative affair. 

That’s all well and good for art music, and maybe for some other world musics, but it struck me that it seems entirely inappropriate for discussions of electronic music.  For me at least, the thing that gets me going about electronic music is the production values.  The way the mix is put together, the ‘shape’ of the synth sounds, the landscape of the stereo field – these are all things I really seem to feel rather than confront analytically.  I’m not sure if the discipline of aesthetics has really thought about this kind of musical experience, possibly because the guys writing the books on aesthetics do so while listening to Mozart, contentedly puffing on their pipes. 

I wonder what kind of an aesthetic treatise might arise from more serious consideration of the diverse relationships that people have with music these days.  If anyone out there wants to commission me to have a go, my email is in the 'About Me' section.

In the meantime, before publishers start beating down my door, Sam Aaron and I are working on our laptop orchestra.  Well, Sam’s doing all the work, to be honest; my contribution so far has mostly involved the provision of chocolate biscuits and the odd ‘musical insight’.  The aim is to create a collaborative environment in which people can get together and make live electronic music, using the Overtone system.  The system will use SuperCollider, and all of its resources, but unlike SuperCollider it will have a much more user-friendly linguistic interface.  It will also be able to accommodate live input from multiple users on the same network. 

Technology now has a human face, thanks in part to Steve Jobs, a point made in probably thousands of columns all over the world in recent weeks.  The laptop orchestra is another example of this – yet another example of how we can use technology to share new, meaningful experiences, which is what music is all about, surely.  Anybody who thinks that computer-generated music is anonymous or alienating is obviously not a fan of the genre, but the same could be said about art music for those who don’t get it.  Perhaps the Cambridge University Laptop Orchestra (the catchy name is a work in progress) will change a few minds around here at least.

We’re going to keep a log of progress on the project here.  Watch this space for further updates. 

Saturday, 11 June 2011

Production values

Having been quite heavily involved in the classical music scene here in Cambridge since I arrived, I occasionally find myself in danger of unthinkingly adopting the institutional attitude towards Music (with a capital M): the privileging of the acoustic, the prizing of virtuosity, the hero-worship of composers.  While all of these aspects of music are certainly worthy of admiration, I think it's a mistake to think that all musical creativity must be seen in those terms. 

One particular area of musical creativity that doesn't fit this model is the rise of the 'virtuoso producer', to (possibly) coin a phrase.  The skill that these individuals show - people like Trent Reznor and Danger Mouse; bands like Massive Attack - is not to be underestimated.  It's orchestration on steroids.  Traditional orchestration is an admirable skill, but at least the acoustic symphony orchestra is a relatively stable space of possibilities with its own established set of heuristics.  For the studio producer, however, anything is possible.  Instruments are not chosen so much as sounds, which in most cases have to be constructed, either by analogue chains of signal-processing pedals (in the case of guitars, for example) or in their digital equivalents in the various software packages that exist for the purpose of production.  Of course, like orchestrators, producers have their heuristics - but producers must, in many cases, construct the 'instruments' in their ensemble as well as arranging the overall texture.  No mean feat.

Any account of musical creativity must bear in mind the ways in which the musical landscape is shifting.  And that's just within Western culture; we haven't even begun to think about the ways in which the creative activity in other cultures differs from ours.  Given the monolithic nature of Western classical music, it could even be that a university music department is exactly the wrong place to start a review of creative musical practice. 

Tuesday, 7 June 2011

The Laptop Orchestra

"You see," says the sceptic, stroking his beard, "one of the big problems I have with electronic music has to do with its insularity.  Not that that's the only problem I have with it, of course!"  He guffaws, compromising the integrity of the buttons on his tweed blazer.  "The music - if we are to call it such - is produced by an individual sitting at a computer, feverishly typing and blinking at a screen, without any possibility of interaction with others.  And surely, is not the interaction with other musicians one of the most compelling reasons we have to produce music in the first place?"

He's right, of course.  Music is, ultimately, a way of communicating.  And certainly, I can see how electronic music, thus construed, might be seen as inherently introspective, possibly ruling it out of being considered as music at all.  However, it doesn't have to be so.  Enter the Laptop Orchestra.  Electronic musicians all over the world are emerging from their dingy basements and coming together to improvise in jam sessions with fellow enthusiasts.  The only difference is that, instead of guitars and keyboards, their instrument is the computer.  Electronic music doesn't have to be insular after all.

Sam Aaron and I are currently talking about setting up a laptop orchestra in Cambridge.  This would be by no means limited to bespectacled, Star-Trek-merchandise-collecting ubernerds - the idea is that anybody who's capable of pressing buttons and executing simple commands can join in an impromptu, improvised electronic jam session.  More traditional musicians can even bring along their acoustic instruments and play along.  We use our computers for almost everything else: why not use them to make music together?  Watch this space.

Monday, 6 June 2011

NIME is on your side

Oslo is probably heaving a sigh of relief after last week's NIME madness.  Hordes of programmers, designers, musicians and techies of all stripes descended on the city, brandishing Apple products, to converse enthusiastically about their favourite force-sensitive resistors, among other topics.  For a musician/philosopher such as myself, the prospect of conversing with such tech-savvy individuals (who, I reckoned, probably thought in labelled circuit diagrams) was intimidating to say the least, but I needn't have worried.  A friendlier bunch I could not have met.  Between playing with the wacky new instruments being demoed, trying out the latest open-source audio processing software and even running around a classroom pretending to be a sine oscillator (I'm not even joking), I had a whale of a time. 

It was truly inspiring to meet so many people from such a wide variety of disciplines who had all come together because of a shared passion for music.  One of the main emergent themes of discussion was that everyone, not just the privileged few, should be encouraged to express themselves creatively through music.  If we can get more people to overcome their inhibitions and try their hand at making music, in whatever rudimentary way, whether through apps for the iPad (like the Magic Fiddle) or interactive museum installations, then the interaction will be its own reward.  This, I thought, is what music is really about: self-expression, communication and above all, fun.

Not only that, but new interfaces for musical expression (as per the conference series title) open up a whole new world of sonic possibilities to the creative performers.  We are no longer limited to the native sounds of objects: we can map our actions to whatever musical parameters we want.  We can make sounds that nobody has ever made before, in completely novel and bespoke ways.  Not only that, but the hardware we need to design such instruments is for the most part very cheap, and there is an ever-growing number of open-source software applications being developed by magnanimous folk all over the world.  Of course, this open-endedness brings its own challenges - the removal of all constraints will result in paralysis until we decide which ones need to be reintroduced - but there is no doubt that it's an exciting time to be a musician.  Roll on NIME 2012.