“Can a computer write Shakespeare?” Trevor Cox’s nice Radio 4 programme yesterday was inevitably able only to scratch the surface of that question (which, I should add, was being asked outside the boring probabilistic sense, explored with characteristic panache in Borges’ The Library of Babel). For my part, I hugely enjoyed discussing with Tom Service the works of the “computer composer” Iamus. Tom had been pretty dismissive when I wrote about that project in the Guardian last year. I was disappointed that he’d only heard the “early work” Hello World!, but it seems the later compositions have not shifted his views much. Yet crucially, this is no reactionary objection to the intrusion of soulless computers into music – on the contrary, Tom thinks that the Iamus team haven’t pushed the technology far enough, and that they are making the mistake of trying to make music that sounds as if humans composed it. That error, he feels, is only compounded by composing for traditional instruments, so that one gets the expressivity of the performer complicating the issue. Why not, he asked, generate entirely new sounds using electronics?
I have some sympathy with these suggestions – perhaps Iamus has been too constrained by a stylistic template to create any genuinely new soundscapes. But in a way I feel that is the whole point. I can’t help thinking that here Tom is indulging a prejudice that says if it is composed by a computer then it should sound somehow futuristic and far-out, like the Radiophonic Workshop at their craziest. But why shouldn’t computers be allowed to compose for piano/chamber ensemble/orchestra? We don’t demand that all contemporary human composers abandon these traditional sonic resources. And the Malaga team who devised Iamus specifically set out to see if a computer could be induced to compose music that couldn’t easily be distinguished from that of a human, without being merely a crude pastiche. How could we make the comparison if all we had was a vista of bleeps?
I also think Tom might be being a tad unfair to suggest that it’s entirely the programmers, not Iamus, who are “composing”. Gustavo Diaz-Jerez didn’t give away an awful lot about exactly how the evolutionary algorithm works, but it has become clear to me that the input from the human programmers is minimal: a musical seed that bears virtually no recognizable relation to the final product. Nor are they assessing or selecting anything along the way. Asserting that, by writing the software, they are the real composers seems a little like saying that the clever folks who developed Word are the real authors of my books. (I’m damned if they’re going to get any of my pitiful royalties.)
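To give a flavour of what “evolutionary” means here, a toy sketch of that general kind of algorithm might look like the following. To be clear, this is my own illustration, not the actual Iamus method, whose details were never disclosed: the pitch range, the parameters, and above all the crude “prefer stepwise motion” fitness rule are all invented for the example. What it does show is the point at issue – the humans supply only a random seed and a fixed automatic rule, and never assess or select anything themselves.

```python
import random

# Illustrative sketch only -- NOT the Iamus algorithm. All choices here
# (MIDI pitch range, population size, fitness rule) are invented.
PITCHES = list(range(60, 73))  # MIDI notes C4..C5


def random_seed_melody(length=16, rng=None):
    """The human-supplied 'seed': just random notes, bearing no
    recognizable relation to the final product."""
    rng = rng or random.Random(0)
    return [rng.choice(PITCHES) for _ in range(length)]


def fitness(melody):
    """A toy automatic 'musicality' rule: reward stepwise motion by
    penalizing the total size of melodic leaps. No human judges anything."""
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))


def mutate(melody, rng):
    """Randomly alter one note -- the source of all novelty."""
    m = list(melody)
    m[rng.randrange(len(m))] = rng.choice(PITCHES)
    return m


def evolve(generations=200, pop_size=30, rng=None):
    """Breed a population of melodies, keeping the fitter half each
    generation and refilling with mutated copies of the survivors."""
    rng = rng or random.Random(42)
    pop = [random_seed_melody(rng=rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors), rng) for _ in survivors]
    return max(pop, key=fitness)


best = evolve()
```

Run it and the random jumble of the seed smooths itself into something far more stepwise than anything the programmer wrote in – which is precisely why “the programmers are the real composers” rings hollow.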
The central issue, however, is something else. Tom seemed to feel that, by using human performers, Iamus was somehow cheating – of course it sounds passionate and committed, because the performers are injecting that into the notes! But wait – has there ever been any music composed for which this is not the case? (Well yes, of course, but such experiments – like Pierre Schaeffer’s musique concrète – are the exceptions.) And it is precisely here that we hit the irony. We hear Bach’s Cello Suites and think “What a great genius! What sensitivity! What emotion!” And we too easily forget that, true as this is (my God, how true!), we hardly have the same response when we hear Wendy Carlos playing Switched-On Bach on the Moog synthesizer. Even now we may overlook the essential role of the performer, without whom Bach is notation on paper. It only becomes great music when the genius of the composer (in this case) is given sympathetic expression by a skilled interpreter. Why do we give Bach that benefit but feel that all a computer should be allowed is Wendy Carlos?
And it doesn’t stop there. Even Pablo Casals could, in the end, only make acoustic signals. That sounds sacrilegious, I know, but what else is it but vibrations in air – until it falls on the sympathetic ear? It only moves us because we have the resources to be moved: logical, auditory and emotional. It is our minds that turn notes into music, and that is a tremendous skill, one we sometimes deny with dismaying insistence (“oh, I don’t know anything about music”). This is what I wanted to get at with my comments on the romanticization of genius. It is tempting to turn the performer into a mere conduit and ourselves into passive receivers, and to attribute the whole creative process to the composer or artist. At worst, this becomes a delusion that we are somehow “communing” with the artist’s mind – as Tom pointed out after the recording, even Beethoven didn’t believe that! Without wishing to deny the artist the primary role, creativity can only be a collaboration. Otherwise, wouldn’t Bach be like a pill that, once swallowed, has the same effect on everyone – the “pharmaceutical model” of music so masterfully dismissed by the music psychologist John Sloboda?
This is why experiments like Iamus are so interesting. Margaret Boden expressed it better than I did at the end of Trevor’s programme. By removing one “mind” from the equation, they allow us to take apart the pieces of that process and, one hopes, thereby to understand them better. For whatever else Iamus can do, its creators evidently don’t claim that it has a “mind” or any kind of autonomous intention. And so the issue becomes how we actively construct what we experience out of the materials we are given. That “we” may include the performer too, who is undoubtedly exercising creativity: OK, I have been given these notes – what can I do with them that has some meaning? The performer must find a form. The listener must find one too, and the two may or may not overlap – though I suspect that to a considerable degree they do, simply because performer and listener are likely to have built their musical minds from very similar stimuli.
Kandinsky attributed to the artist an almost magical ability to elicit specific emotions from the onlooker. As a synaesthete, he expressed this in musical terms, even though his medium was colour; he surely imagined that music itself could do the same thing. “Colour is the keyboard”, he wrote, “the eyes are hammers, the soul is the piano with many strings. The artist is the hand that plays, touching one key after another purposively, to cause vibrations of the soul.” But few other artists have such delusions of absolute control over the effects their compositions will have. Stravinsky more or less denied anything of the sort. They have at best only a crude set of knobs for dialling in the listener’s or viewer’s response, because every mind has been shaped differently. In a cumbersomely mechanistic picture, I imagine the artist making a kind of grid that, placed over the audience’s perceptions, depresses different levers depending on where each listener’s levers happen to lie. It’s in the meeting of grid and levers (and in music the performer reshapes the grid a little) that creativity is determined. As computers get better at making interesting and effective grids, we might learn something new about the levers: why certain grids have certain effects, say.
Of course, those levers are connected to the heart, the tear ducts, the limbic and motor systems and so on. That’s where it gets interesting: can a computer create a grid that will make me cry – not as bad, ersatz movie music does, but as Bach does? When, or if, that happens – well, that’s when I really have to start wondering if computers are creative.