Saturday, October 31, 2009

I’ve just heard a lovely programme on BBC Radio 4 about the late, great Ken Campbell. Catch it while you can. I saw Ken perform several times, and he always left me giggling throughout – as did this programme. The phrase ‘true original’ gets overused, but there was no one for whom it was more apt. Ken had a consistent fascination with science, which is understandable in someone whose constant demand was ‘astound me’. He developed some wonderful routines around such exploits as his conversations with David Deutsch about the multiverse, and there’s a great moment in the radio show where Ken’s precision with words collides with the glibness of scientists’ habitual speech patterns: he was delighted, on asking someone at CERN if they were worried that they might generate an entire new universe in their particle collisions, to be told that it was ‘unlikely’. Ken captured the essence of the speculative extremes of physical theory when he would talk of how it required one not exactly to believe in such things as parallel universes, but simply to suppose them. ‘Can I suppose it?’, I remember him saying, immense eyebrows gesticulating. ‘Well yes, I can suppose it.’
I was thrilled to get Ken involved in the series of talks I arranged with Harriet Coles at Nature at the V&A Museum as part of the ‘Creating Sparks’ BA Festival in 2000. He was doing his Newton/Fatio routine at the time, and it quickly became clear that we were never going to achieve much in the way of tailoring to the themes of the series – he was going to do just what he was going to do. We didn’t much care, knowing that whatever he did would supply a great night – which it did (despite a few grumbles from people who’d been expecting a ‘science’ talk). When we chatted to Ken over tea afterwards, my wife and I figured that he would be exhausting to live with – and from the accounts of his daughter Daisy, that seems to have been the case, although it is nice (and a little surprising) to hear that it was mostly fun too. Ken was interested in the Paracelsus theatre project I was putting together at the time, and I always had the suspicion that he would have made a great Paracelsus. Even then I saw it as the ideal role for either him or the clown performer Gerry Flanagan, and I’m still filled with glee that I got Gerry to do it nearly a decade later. And while working with Gerry was a joy, I suspect working with Ken would have been a kind of inspirational nightmare. But it was wonderful to have briefly come into his erratic orbit.
Tuesday, October 27, 2009
Fear of music?
My piece on the cognition of atonal music has appeared in Prospect. I’m happy to say that it’s part of the site’s free content, but here it is below in extended, pre-edited form. I’m glad to see that it is drawing some comments. Not everyone is going to like what it says, for sure. But Tali Makell gets the wrong end of the stick – I’m not attacking the music of Schoenberg and his school. I’m a fan of most of this music, and think Berg’s Lyric Suite is a masterpiece. And of course I never said that Schoenberg et al. used total serialism. It’s absolutely right that they used some of tonality’s organizing principles, and to great effect. But there are problems with that in itself, as Roger Scruton has pointed out, since some of these structural principles are a direct consequence of tonality itself, and lose their meaning when taken out of that context. Besides, my view is that Schoenberg’s serialism was here acting as a convenient compositional tool that made it a little easier for the composers to use atonality – basically it was a scheme that reduced the effort needed to avoid tonality, and not one that actually brought any profundity or musicality in itself. These composers put that back in by other means – with dynamics and so forth. There was nothing inevitable about Schoenberg’s method, and the justifications for it given by Adorno, and indeed by Schoenberg himself, are largely bogus. But that doesn’t make serialism intrinsically ‘bad’ as a way of composing. It’s when serialism becomes total that the problems really start (both musically and ideologically).
I hope all of this will be made clearer in my forthcoming book The Music Instinct. I am aware, however, that I have not addressed either here or there the defence of serialism made by Morag Josephine Grant in her book Serial Music, Serial Aesthetics (CUP). Here she attacks Fred Lerdahl’s critique of serialism (and other modern compositional methods), which is based on his linguistic approach to music cognition. Lerdahl believes that music needs to have an audible ‘grammar’: as he says, ‘The musical surface must be available for hierarchical structuring by the listening grammar’. In short, if we can’t construct hierarchies of pitch, rhythm and so forth, then the ‘musical surface’ is shallow: everything just sounds like everything else. This is the thrust of my Prospect piece, and I think it is true: for music to be more than a collection of mere sounds, there needs to be some audible way of organizing it. Grant complains that Lerdahl is forcing music into a straitjacket: she says his ‘argument that musical language, like spoken language, is generative in structure excludes the possibility of other, non-hierarchical methods of achieving musical coherence’.
But Grant is not, as it might seem, rejecting the need for music to acknowledge cognition. Rather, she asserts that this was precisely the concern of the serialists. They, however (she says), felt that our perception was culturally conditioned, and that we have ‘the ability to develop or uncover previously suppressed abilities’. One of those is the recognition of the tone row itself: ‘the use of the row is itself a constraint, not just on the composer, but in the aid of comprehensibility as well.’ Lerdahl is unaware of this, Grant says, while serialism imposes a system precisely to aid cognition.
And this is where Grant’s thesis utterly falls apart. For it has been shown quite clearly now that serialism’s ‘system’ is not one that can be cognized. It exists only on paper; listeners simply don’t hear it. And the reason for this is that it is simply not the kind of system that the mind intuits: we don’t listen to music by remembering arbitrary sequences of notes, but rather – as Lerdahl says – we organize those notes into hierarchical patterns: hierarchies of pitch, rhythm, melodic contour and so forth.
Can musical coherence be achieved by non-hierarchical methods? I’m not sure anyone knows, but certainly Grant provides no evidence of this – it is an act of faith. And what, precisely, are those methods, if we must exclude the tone row itself as one of them? She doesn’t say. She does, however, say that ‘the intense concentration on the tiniest of fluctuations’ is ‘central to the hearing of serial music’. I’m not sure what this means: fluctuations in what? The fluctuations in rhythm are either rather traditional, as in Berg, or are so extreme (as in Boulez) that rhythm has no meaning. Fluctuations in pitch, as in deviations from the tone row, would not be heard even if they were permitted. Grant also says that serialism ‘has the ability to create structures specific to each of its utterances’. Again, make of that what you will. If it means that each composition, each tone row, creates its own private language, then we know what Wittgenstein had to say about private languages. Why can’t Grant just explain how we are meant to find coherence in serial music, other than by its utilization of traditional techniques in parameters other than pitch?
I suppose one might interpret her remarks as saying that serialism encourages us to focus on each event as an entity in itself, not as something embedded in a hierarchical grammar. That’s possible. It might be interesting. And it is what I refer to below as sound art. If that’s the intention, it seems to me to be an explicit admission that ‘coherence’ isn’t the aim at all. And to my mind, coherence is the one characteristic that music should possess – I don’t care if you ditch tonality, or rhythm, or melody, or harmony, so long as what remains is in some manner coherent.
Incidentally, and in the hope of not sounding horribly patronizing, I have a lot of time for what Grant is up to more broadly. Defending Stockhausen is a noble cause in itself, even if I don’t buy it, and anyone who does so while listening to Scottish folk music and rap wins my vote.
Anyway, here’s the piece:
Writer Joe Queenan recently caused a minor rumpus in the austere world of contemporary classical music by complaining about how painful much of it is. He called Luciano Berio’s 1968 Sinfonia “35 minutes of non-stop torture”, likened Stockhausen’s Kontra-Punkte (1953) to “a cat running up and down the piano”, and dubbed Krzysztof Penderecki’s Threnody for the Victims of Hiroshima (1960) “belligerent bees buzzing in the basement”, and Harrison Birtwistle’s latest opera The Minotaur “funereal caterwauling”. A hundred years after Schoenberg, he said, “the public still doesn’t like anything after Transfigured Night, and even that is a stretch.”
Inevitably, Queenan was lambasted as a reactionary philistine. Performances of ‘modern’ works like these are well attended, his critics said. And while Queenan took pains to distance himself from the conservative concert-goers who demand a steady diet of Mozart and Brahms, his comments were denounced as the same old clichés. The problem is that, like most clichés, they become such by frequent use. Sure, these works will find audiences in London’s highbrow venues, but the fact remains that Stockhausen and Penderecki, whose works now are as old as ‘Rock Around the Clock’, have not been assimilated into the classical canon in the way that Ravel and Stravinsky have. When someone like Queenan has earnestly tried and failed to appreciate this ‘new’ music, it’s fair to ask what the problem is.
David Stubbs considers this important question in his new book Fear of Music (Zero, 2009) but doesn’t come close to answering it. His speculative suggestion – that music lacks an ‘original object’ that, in visual art, can become the subject of veneration or trade – clearly has little force, given that it must surely apply equally to Beethoven and Berio. Indeed, Stubbs’ analysis is part of the problem rather than part of the solution. Like economists trying to understand market crashes, he wants to place all the motive forces outside the system: his gaze never fixes on the music itself. To Stubbs, your responses to music are a function of your context and perspective, not of the music. His comparisons of visual and musical art assume an equivalence that allows no possibility of their being cognitively distinct processes.
He is in good company. Social theorist Theodor Adorno’s advocacy of Schoenberg’s atonal modernism was politically motivated: tonality was the bastion of bourgeois complacency. To the hardline modernists of the 1950s and 60s, any hint of tonality was a form of recidivism to be denounced with Maoist vigour; Pierre Boulez refused for a time even to speak to tonal composers. American composer Milton Babbitt’s provocative 1958 essay ‘Who Cares if You Listen?’ argued that it was time for ‘serious’ composers to withdraw from public engagement altogether, while offering nothing in the way of explanation for the public’s antipathy to ‘difficult’ music (his included) except a belief that they were too ill-informed to understand it. After giving a lecture on the music of Boulez and Elliott Carter, the eminent pianist and critic Charles Rosen responded thus to a question from the audience about whether composers have a responsibility to write music that the public can understand: “On such occasions I normally reply politely to all questions, no matter how foolish, but this time I answered that the question did not seem to me interesting but that the obvious resentment that inspired it was very significant indeed.”
No one can deny that audiences are conservative, whether they be Parisians rioting at the première of the Rite of Spring in 1913 or punks lobbing bottles at art-rockers Suicide on tour with the Clash. And since questions like this one are often a coded demand that composers start writing ‘real music’ like Mozart did, Rosen can be forgiven some impatience. Stubbs is justifiably indignant that even fans of conceptual art will parrot trite witticisms about the ‘cacophony’ of much experimental music.
But the understanding of the cognitive mechanisms of music that has emerged in the past several decades implies that it is not enough to tell ingrates bemused by Stockhausen to knuckle down and try harder. Many musicologists accept a definition of music as ‘organized sound’ (ironically, since this was the description that avant-garde electronic pioneer Edgard Varèse used for his own music, a paradigm of all that is seen as distressing about ‘modern’ music). Yet sound does not become organized merely because the composer has used a system to arrange it. Sound is structured into music not on paper, nor even in the mind of the composer, but in the mind of the listener. So music is sound in which the organization is audibly perceptible, not just that in which it is theoretically present.
Our brains use rules of thumb, both learnt and innate, to arrange an acoustic signal into a coherent entity: to pick out key, melody and harmony, to identify rhythm and metre, and to create a sense of structure and logic. The traditional music of just about every culture on earth builds in elements that assist this decoding process. When we encounter unfamiliar music, we may need to adjust our decoding rules, or learn new ones, before we can truly hear it at all.
Chief among these rules are the ‘Gestalt principles’ identified by a group of German-based psychologists in the early twentieth century. Initially identified in visual processing, these principles help us make good guesses at how to interpret complex sensory stimuli. We make assumptions about continuity, for example: the aeroplane that flies into a cloud is the same one that flies out the other side. We group together objects that look similar, or that are close together. Although the Gestalt principles are not foolproof, they make the world more comprehensible. Both in sound and in vision, the ability to interpret sensory data this way must have had clear evolutionary benefits.
In music, this means that melodies that move in small pitch steps tend to sound unified and ‘good’, while ones with large pitch jumps are liable to seem fragmented and harder to make out. Traditional melodies in diverse cultures do indeed proceed mostly in small rather than large pitch steps. Regular rhythms also contribute to coherence, while erratic ones are apt to confuse us.
The composer’s job is to manipulate the expectations that these principles produce – enough to avoid predictability and create a lively musical surface, but not so much as to lose coherence. Out of the interplay between expectation and reality comes much of music’s capacity to excite and move us. But what happens if these rules are seriously undermined? In Boulez’s Structures I or Stockhausen’s Klavierstück VII, say, there is no discernible rhythm, and the ‘melody line’, if one can call it that, is as jagged as the Dolomites. In this situation, we can develop no expectations about the music, and this absence of an audible relationship between one note and the next cuts off a key channel of musical affect. What remains may be a temporarily diverting sound, but the indulgent listener risks becoming like the sentimental audiences about whom nineteenth-century music theorist Eduard Hanslick complained, wallowing in the sonic surface while oblivious to the musical details.
And yet how can Structures lack structure? It is one of the most ‘structured’ pieces of music ever written! It was composed using the technique of ‘integral serialism’, in which musical parameters such as pitch, dynamics and rhythm are prescribed along the lines of Schoenberg’s ‘twelve-tone’ method, introduced in the 1920s. In Schoenberg’s original formulation, this approach dictated only the choice of pitches, and it was meant to eliminate all vestige of tonality – the anchoring of a piece of music to a tonic centre, which enables us to assign it a particular key. Schoenberg considered that tonal music – which meant all Western music until that point – had become tired and formulaic, and serialism was supposed to offer a systematic way of composing atonally.
The composer first chose a tone row: all twelve notes of the chromatic scale, lined up in a particular order. This was the composition’s basic musical gene: the piece was made up of repetitions of the tone row in strict order, sounded simultaneously or in succession. Individual notes could be immediately repeated, and could be used in any octave. And various mathematical permutations of the tone rows, such as reverse order, were also permitted.
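For the technically minded, these manipulations are mechanical enough to set down in a few lines of code. Here is a minimal sketch in Python – the example row and the function names are my own inventions for illustration, not drawn from any actual composition:

# A toy sketch of the permitted row operations. Pitch classes are
# numbered 0-11, with C = 0; the row itself is arbitrary.
EXAMPLE_ROW = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]

def retrograde(row):
    # The row in reverse order.
    return row[::-1]

def inversion(row):
    # Mirror every interval about the first note (mod 12).
    first = row[0]
    return [(2 * first - p) % 12 for p in row]

def transposition(row, semitones):
    # Shift the whole row up by a number of semitones (mod 12).
    return [(p + semitones) % 12 for p in row]

# Each permitted form remains a permutation of all twelve pitch classes.
for form in (EXAMPLE_ROW, retrograde(EXAMPLE_ROW),
             inversion(EXAMPLE_ROW), transposition(EXAMPLE_ROW, 5)):
    assert sorted(form) == list(range(12))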
The twelve-tone method ensures that no note is used more often than any other, so that none can acquire the status of a tonic merely by repetition. By the 1950s serialism had become, in many leading schools of classical composition, the only ‘respectable’ way to compose; anything hinting at tonality was considered passé and bourgeois. Yet Schoenberg not only failed to justify his horror of tonality – a composer like Béla Bartók displayed remarkable dissonant invention without abandoning it – but more importantly, he never truly came to terms with what its abandonment implied for both composer and listener. Since atonality has no tonal ‘home’, there is nowhere to depart from or return to, so that beginnings, endings and the entire matter of large-scale structure become problematic. As Roger Scruton says, ‘When the music goes everywhere, it also goes nowhere.’
Tonality is also one of the pillars of music comprehension. Far from being a decadent Western device, it is used in just about every musical tradition in the world (it does not rely on Western scales). Cognitive studies have shown how tonality provides a sense of location in pitch space and a way to organize the sequence of notes. It is the removal of this, far more than any considerations of harmony and dissonance, that many listeners find disconcerting in serialism.
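Those cognitive studies can be made concrete. Carol Krumhansl’s ‘probe tone’ experiments yielded profiles of how strongly listeners feel each note of the chromatic scale belongs to a given key, and her key-finding algorithm identifies the key of a passage by correlating its pitch-class counts against those profiles. Here is a rough Python sketch of the idea – the profile values are the published Krumhansl-Kessler major-key ratings, but the melody and the function name are simply invented for illustration:

from statistics import correlation  # available from Python 3.10

# Krumhansl & Kessler's major-key profile: listeners' ratings of how well
# each chromatic scale degree (tonic first) fits the key.
MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def best_major_key(pitch_classes):
    # Count how often each pitch class (0-11, C = 0) occurs, then correlate
    # the counts with the profile rotated to every candidate tonic.
    counts = [pitch_classes.count(pc) for pc in range(12)]
    scores = {}
    for tonic in range(12):
        rotated = [MAJOR_PROFILE[(pc - tonic) % 12] for pc in range(12)]
        scores[NOTE_NAMES[tonic]] = correlation(counts, rotated)
    return max(scores, key=scores.get)

# An invented melody that leans on C, E and G:
melody = [0, 4, 7, 7, 9, 7, 4, 0, 5, 4, 2, 0]
print(best_major_key(melody))  # -> 'C'

Feed such a procedure a strict twelve-tone row, in which every pitch class occurs equally often, and the counts come out flat: the correlation is undefined because there is no tonal hierarchy left to detect – which is rather the point.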
This is not to say that atonality in general, and serialism in particular, is doomed to sound aimless and incomprehensible. There are plenty of other parameters, such as rhythm, dynamics and timbre, that a composer can deploy to create coherent structures. Schoenberg often did so masterfully, and Alban Berg’s Lyric Suite (1925-6) is so beautifully wrought that one would hardly know it was a twelve-tone composition at all. But as integral serialism and other techniques progressively and systematically subverted other means of providing audible organization, so it was unsurprising that audiences found the music ever harder to ‘understand’. The serialist’s rules are not ones that can be heard – even specialists in this music can rarely hear tone rows as such. Boulez’s serial piece Le Marteau sans Maître was widely acclaimed when premiered in 1955, but it wasn’t until 22 years later that anyone else could figure out how it was serial: no one could deduce, let alone hear, the organizational ‘structure’. One can hardly blame audiences for suspecting that what is left is musically rather sparse.
This is not to imply that music must return to tonal composition, with its cadences and modulations (although that is to some degree happening anyway). But ‘experimental’ music can only qualify as such if, like any experiment, it includes the possibility of failure. If musical composition takes no account of cognition – if indeed it denies that cognition has any role to play, or determinedly frustrates it – then composers cannot complain when their music is unloved.
Sadly, although these difficulties afflict only one strand of modern classical music, the fact that it was once dominant means that all the rest tends to get tarred with the same brush. Its critics often fail to differentiate music lacking clear cognitive ‘coherence systems’ from that which has new ones. What Javanese gamelan experts Alton and Judith Becker say of non-Western music pertains also to much contemporary experimental music: “it has become increasingly clear that the coherence systems of other musics may have nothing to do with either tonality or thematic development… What is different in a different musical system may be perceived, then, as noise. Or it may not be perceived at all. Or it may be perceived as a ‘bad’ or ‘simple-minded’ variant of my own system.” Often the only thing that stands in the way of comprehension, even enjoyment, is a refusal to adapt, to realise that it is no good trying to hear all music the way we hear Mozart or Springsteen. We need, in the parlance of the field, to find other ‘listening strategies’. György Ligeti’s works, for instance, can be appreciated as some of the most thrilling and inventive of the twentieth century once we realise that they handle time differently. Musicologist Jonathan Kramer calls this ‘vertical’ rather than ‘horizontal’ time: musical events do not relate to one another in succession, like call and response, but are stacked up into sonic textures that slowly mutate and take on almost tangible forms.
It would arguably benefit all concerned if some experimental music, like much of Stockhausen’s or Boulez’s oeuvre and certainly the ambient noises of John Cage’s notoriously ‘silent’ 4’33”, were viewed instead as ‘sound art’, a term coined by Canadian composer Dan Lander and anticipated by the Italian futurist Luigi Russolo’s 1913 manifesto The Art of Noises. That way, one is not led to expect from these compositions what we expect of music. For if music is not acknowledged as a mental process, sound is all that remains.
Added note: The comments continue on the Prospect site, and make interesting reading. Of course, there are the inevitable blogosphere crazies. Will Orzo thinks he should tell me about this field called music cognition, in which people study other people’s responses to music. Thanks Will – hey, maybe I should use some of that work in my book on music cognition! Seriously, though, anyone who actually knows this field, as opposed to having looked it up on Wikipedia, would see straight away that this is precisely what I’m drawing on in my claims about how atonalism is perceived, especially the work of Fred Lerdahl, Carol Krumhansl and David Huron. If I am regurgitating anyone’s opinions, it is theirs. If you want references, look up my article in Nature last year (453, 160).
As for Joe Schmoe – anyone figure out who he’s ranting against? Sometimes it seems to be Stubbs, sometimes Adorno, sometimes Babbitt, sometimes me. An angry man. But incoherently so.
Tuesday, October 20, 2009
The bioethics of human cloning
It’s a funny business, bioethics. Until now I’d never really looked into what it is that people called bioethicists do – but the more I do so, the more it seems that their job is basically to offer personal opinions with a professionalized aura. There is nothing intrinsically wrong with that – practising scientists tend to spend too little time thinking hard about ethical issues (beyond the basic stuff of plagiarism, fabricating data and so forth), so it is good that someone does it. And when this kind of ‘op ed’ discourse is conducted with considered philosophical rigour, and/or informed by a humane and open-minded perspective, it seems potentially to have a lot to recommend it. But from what I’ve seen so far, it strikes me as a very mixed bag.
I’ve lately been grappling with the views of Laurie Zoloth on human cloning. Zoloth is no ringside commentator, but wields considerable clout as the Director of the Center for Bioethics, Science and Society at Northwestern University. And that is what surprises me.
Her position is laid out in ‘Born again: faith and yearning in the cloning controversy’, a chapter in Cloning and the Future of Human Embryo Research, ed. P. Lauritzen (OUP, 2001). This essay is also available online (in more or less verbatim form) here. The title goes a long way towards articulating her position. Human cloning, she believes, is all about a yearning to avoid death, and thus a narcissistic impulse to produce a copy of oneself.
Now, this is very odd. You can imagine this being the impression of someone outside the mainstream of the debate, particularly someone intuitively opposed to the idea. And there are good reasons to feel unease at the idea of reproductive cloning in humans, even in the face of the cogent arguments that Ronald Green puts forward, in the same book, in its favour. But Green makes it clear that the principal motivation for reproductive cloning is as another variant of assisted conception, alongside conventional IVF. It might be used, for example, in cases where couples wanted to have a genetically related child but either the man or the woman had no gametes at all. One can see the arguments: is it, for example, really any more potentially confusing for a child to know fully about their genetic heritage (and parentage) than to know only half of that equation in the case of anonymous gamete donation? And cloning would also offer female couples who want to conceive a child the same potential advantage over sperm donation. Of course, that opens up another can of worms in some eyes, and all the more so if one considers male homosexual couples using surrogacy for gestation. But I’m not going to argue (here) about whether such reproductive cloning is justified; the point is that the parental wish for a genetically related child, rather than for a ‘copy’ of oneself, is the motive force behind arguments supporting it. Sure, it’s possible to contend that the wish for any genetic relation to a child is itself narcissistic – but then it is a kind of narcissism displayed by the majority of the human race, and universally accepted as a ‘natural’ human desire.
Zoloth also takes on the issue of cloning performed to ‘replace’ a dead child. She movingly describes a situation in which she could appreciate this wish, but in which she could also see that the better response was to confront the anguish of the loss. Notwithstanding the fact that there is no law or intervention that prevents parents from conceiving another child ‘normally’ in response to such a tragedy, I think few would dispute that attempts to ‘replace’ a dead child are never a healthy thing. But both here and in the case of efforts to cheat one’s own mortality through cloning, the simple fact is that such actions are deluded in any event from a scientific point of view. This isn’t, as Zoloth implies, a case of science offering the temptation and bioethicists advising us to resist. Any scientist worthy of the description who knows the first thing about genetics will be the first to point out that genetically identical individuals are not in any meaningful sense ‘the same’. In her comments here, Zoloth tacitly endorses the myth of genetic determinism that scientists are always at pains to dismantle.
It seems extraordinary that a leading bioethicist would labour under these misconceptions. Naturally, if you’re temperamentally opposed to human cloning then it makes strategic sense to pick the worst possible reason for doing it in order to argue the case against. But one might question the ethics of doing so, if done intentionally. If done unintentionally, the issue is then one of competence.
What I’ve noticed in several of the critiques of the new reproductive technologies and of human embryo research is a shameful evasion of plain speaking. Time and again, one can see where the argument is inevitably heading, but the critic will not spell that out, for what one can only assume is fear of saying something unpopular. Instead, they take refuge in woolly, wise-sounding rhetoric that masks the real message. They present their criticisms – and some are, without doubt, well motivated – but decline to explain what their alternative would be. So then, for example, Zoloth says that ‘advanced reproductive technology’ relies on the notion of infertility as a disease, which must then of course be ‘cured’. This is a valid criticism: there are real dangers of setting up a situation in which people consider it their ‘right’ to have any medical treatment that will offer them the chance of conceiving a child – and in the process having their condition pathologized. But to simply say this and leave it at that is to dismiss the plight of infertility altogether – to imply that ‘you just have to learn to live with it’, or perhaps, ‘you’ll just have to adopt then’ (from a greatly diminished pool, for both social and medical reasons). Zoloth nowhere acknowledges that infertility has always been seen as a problem – not just today but in the times to which she looks for the ‘wisdom of ages’. How can it possibly be that someone who makes so much of her Jewish heritage seems oblivious to the ‘disgrace’ that Rachel felt when she could bear Jacob no children (until God relented, that is)? (Mind you, Zoloth’s theology seems to hold other surprises – for can it really be the case that, as she suggests, Noah is now the object of rabbinical criticism for thinking only of his wife and children and not arguing with God about the injustice of destroying the rest of his community? Sounds like a good point to me, but can it really be the case that Jewish theologians now think one should be prepared to pick arguments with God? Wild.)
So then, the unspoken text of Zoloth’s essay is that infertility is a bad roll of the dice that you have to learn to put up with. At least, I think that’s what it is. She doesn’t put it like that, of course. She puts it like this: we must look for a ‘refinement of imperfection, not the a priori obliteration of imperfection. In this, we could serve to remind [sic] of something else: of the blinding power of human love, which sees and knows, right through the brokenness’. Got that?
If I’m right in my interpretation, this doesn’t seem to offer a great deal of empathy for people who encounter infertility. Oh, but that’s mild – for elsewhere Zoloth says that ‘The hunger of the infertile is ravenous, desperate’. There are more offensive things you can say to infertile people, but not by very much.
Well then, sic indeed – for there are times when you have to wonder quite what has happened to her prose and grammar. I wondered at first whether there might be a first-language issue here – if so, all criticisms are retracted – but it doesn’t look that way. Rather, one has to suspect the old post-modernist problem of language becoming a casualty of a reluctance to be truly understood. Sometimes this tension creates an utter car-crash of metaphors. For example, in telling us how parents must reject the desire for a ‘close-as-can-be-replica’ (see above), she says they must ‘earn to have the stranger, not the copy, live by our side as though out of our side’. As though what? What else can ‘out of our side’ evoke if not, after all, a clone? And indeed, the first of all clones, Eve made from Adam’s rib! Why plunge us into that thicket? Does she really mean to? Please, what is going on? (Notice here that ‘copy’ = product of one parent’s genome alone; ‘stranger’ = product of both parents’ genomes. There is some odd asymptotic calculus here, quite aside from the fact that a ‘copy’ of one parent’s genome is surely then far more of a ‘stranger’ to the other parent.)
Then how about this: ‘We need to reflect on the meaning not only of the performance gesture of cloning but also the act of the imagination that surrounds the act in popular culture.’ Now, here’s a statement I do actually endorse; but what a tortured way to put it. Indeed, that is the very aim, in a sense, of the book for which I’m reading all this stuff. And it’s therefore with some gladness of heart that I see Zoloth giving me material to work on. ‘The whole point of “making babies”’, she says, ‘is not the production, it is the careful rearing of persons, the promise to have bonds of love that extend far beyond the initial ask and answer of the marketplace.’ How true. And how interesting, then, that for her the hypothetical cloned human (and, to pursue the logic, already the IVF baby) becomes not a person who can be born and reared with love but a mere product of the marketplace. That assumption, that prejudice, is just the thing that interests me.
I’ve currently been grappling with the views of Laurie Zoloth on human cloning. Zoloth is no ringside commentator, but wields considerable clout as the Director of the Center for Bioethics, Science and Society at Northwestern University. And this is what makes me kind of surprised.
Her position is laid out in ‘Born again: faith and yearning in the cloning controversy’, a chapter in Cloning and the Future of Human Embryo Research, ed. P. Lauritzen (OUP, 2001). This essay is also available online (in more or less verbatim form) here. The title goes a long way towards articulating her position. Human cloning, she believes, is all about a yearning to avoid death, and thus a narcissistic impulse to produce a copy of oneself.
Now, this is very odd. You can imagine this being the impression of someone outside the mainstream of the debate, particularly someone intuitively opposed to the idea. And there are good reasons to feel unease at the idea of reproductive cloning in humans, even in the face of the cogent arguments that Ronald Green puts forward, in the same book, in its favour. But Green makes it clear that the principal motivation for reproductive cloning is as another variant of assisted conception, alongside conventional IVF. It might be used, for example, in cases where couples wanted to have a genetically related child but either the man or the woman had no gametes at all. One can see the arguments: is it, for example, really any more potentially confusing for a child to know fully about their genetic heritage (and parentage) than to know only half of that equation in the case of anonymous gamete donation? And cloning would also offer female couples who want to conceive a child the same potential advantage over sperm donation. Of course, that opens up another can of worms in some eyes, and all the more so if one considers male homosexual couples using surrogacy for gestation. But I’m not going to argue (here) about whether such reproductive cloning is justified; the point is that the parental wish for a genetically related child, rather than for a ‘copy’ of oneself, is the motive force behind arguments supporting it. Sure, it’s possible to contend that the wish for any genetic relation to a child is itself narcissistic – and this then is a kind of narcissism displayed by the majority of the human race, and universally accepted as a ‘natural’ human desire.
Zoloth also takes on the issue of cloning performed to ‘replace’ a dead child. She movingly describes a situation in which she could appreciate this wish, but in which she could also see that the better response was to confront the anguish of the loss. Notwithstanding the fact that there is no law or intervention that prevents parents from conceiving another child ‘normally’ in response to such a tragedy, I think few would dispute that attempts to ‘replace’ a dead child are never a healthy thing. But both here and in the case of efforts to cheat one’s own mortality through cloning, the simple fact is that such actions are deluded in any event from a scientific point of view. This isn’t, as Zoloth implies, a case of science offering the temptation and bioethicists advising us to resist. Any scientist worthy of the description who knows the first thing about genetics will be the first to point out that genetically identical individuals are not in any meaningful sense ‘the same’. In her comments here, Zoloth tacitly endorses the myth of genetic determinism that scientists are always at pains to dismantle.
It seems extraordinary that a leading bioethicist would labour under these misconceptions. Naturally, if you’re temperamentally opposed to human cloning then it makes strategic sense to pick the worst possible reason for doing it in order to argue the case against. But one might question the ethics of doing so, if done intentionally. If done unintentionally, the issue is then one of competence.
What I’ve noticed in several of the critiques of the new reproductive technologies and of human embryo research is a shameful evasion of plain speaking. Time and again, one can see where the argument is inevitably heading, but the critic will not spell that out, for what one can only assume is fear of saying something unpopular. Instead, they take refuge in woolly, wise-sounding rhetoric that masks the real message. They present their criticisms – and some are, without doubt, well motivated – but decline to explain what their alternative would be. So then, for example, Zoloth says that ‘advanced reproductive technology’ relies on the notion of infertility as a disease, which must then of course be ‘cured’. This is a valid criticism: there are real dangers of setting up a situation in which people consider it their ‘right’ to have any medical treatment that will offer them the chance of conceiving a child – and in the process having their condition pathologized. But to simply say this and leave it at that is to dismiss the plight of infertility all together – to imply that ‘you just have to learn to live with it’, or perhaps, ‘you’ll just have to adopt then’ (from a greatly diminished pool, for both social and medical reasons). Zoloth nowhere acknowledges that infertility has always been seen as a problem – not just today but in the times to which she looks for the ‘wisdom of ages’. How can it possibly be that someone who makes so much of her Jewish heritage seems oblivious to the ‘disgrace’ that Rachel felt when she could bear Jacob no children (until God relented, that is). (Mind you, Zoloth’s theology seems to hold other surprises – for can it really be the case that, as she suggests, Noah is now the object of rabbinical criticism for thinking only of his wife and children and not arguing with God about the injustice of destroying the rest of his community? Sounds like a good point to me, but can it really be the case that Jewish theologists now think one should be prepared to pick arguments with God? Wild.)
So then, the unspoken text of Zoloth’s essay is that infertility is a bad roll of the dice that you have to learn to put up with. At least, I think that’s what it is. She doesn’t put it like that, of course. She puts it like this: we must look for a ‘refinement of imperfection, not the a priori obliteration of imperfection. In this, we could serve to remind [sic] of something else: of the blinding power of human love, which sees and knows, right through the brokenness’. Got that?
If I’m right in my interpretation, this doesn’t seem to offer a great deal of empathy for people who encounter infertility. Oh, but that’s mild – for elsewhere Zoloth says that ‘The hunger of the infertile is ravenous, desperate’. There are more offensive things you can say to infertile people, but not by very much.
Well then, sic indeed – for there are times when you have to wonder quite what has happened to her prose and grammar. I wondered at first whether there might be a first-language issue here – if so, all criticisms are retracted – but it doesn’t look that way. Rather, one has to suspect the old post-modernist problem of language becoming a casualty of a reluctance to be truly understood. Sometimes this tension creates an utter car-crash of metaphors. For example, in telling us how parents must reject the desire for a ‘close-as-can-be-replica’ (see above), she says they must ‘learn to have the stranger, not the copy, live by our side as though out of our side’. As though what? What else can ‘out of our side’ evoke if not, after all, a clone? And indeed, the first of all clones, Eve made from Adam’s rib! Why plunge us into that thicket? Does she really mean to? Please, what is going on? (Notice here that ‘copy’ = product of one parent’s genome alone; ‘stranger’ = product of both parents’ genomes. There is some odd asymmetric calculus here, quite aside from the fact that a ‘copy’ of one parent’s genome is surely then far more of a ‘stranger’ to the other parent.)
Then how about this: ‘We need to reflect on the meaning not only of the performance gesture of cloning but also the act of the imagination that surrounds the act in popular culture.’ Now, here’s a statement I do actually endorse; but what a tortured way to put it. Indeed, that is the very aim, in a sense, of the book for which I’m reading all this stuff. And it’s therefore with some gladness of heart that I see Zoloth giving me material to work on. ‘The whole point of ‘making babies’’, she says, ‘is not the production, it is the careful rearing of persons, the promise to have bonds of love that extend far beyond the initial ask and answer of the marketplace.’ How true. And how interesting, then, that for her the hypothetical cloned human (and, to pursue the logic, already the IVF baby) becomes not a person who can be born and reared with love but a mere product of the marketplace. That assumption, that prejudice, is just the thing that interests me.
Wednesday, October 14, 2009
Google Books suits me fine
It seems that Google Books is one of the talking points of the Frankfurt book fair this year. Angela Merkel has waded into the fray to condemn the enterprise, citing its (potential) violation of copyright. As an author, I ought to be right behind such denunciation of this fiendish ploy to make our words freely available to all.
Maybe I’m naïve, but so far I think that Google Books is a rather wonderful thing. For a start, I’m not aware that there is any way to actually download and print the stuff – and who on earth is going to want to read it in this format online? And it seems that none of the books is provided in its entirety – there are pages missing, which would be infuriating if you do plan to read the lot. But more to the point, so far Google Books has encouraged me to actually buy some books that I’d not have bought otherwise: in the course of my research, I can find titles that I’d never known existed, get a good idea of their contents and make the decision about purchase via the online secondhand sellers. My only other option, if I’d discovered the books at all, would have been to make a trip into the British Library, by which point I’d have probably ended up reading them there rather than bothering to buy them. (Besides, if a book is truly relevant and interesting, I want to own it – and thanks to the wonders of the internet, it’s generally possible to do that for little more than the [inflated] cost of postage. Hopefully bookshops are benefiting from this too.)
And in the course of completing the endnotes section for my latest book, I’ve found Google Books a godsend. Inevitably there are quotes in my text that either I’ve not annotated correctly in my notes or for which I’ve never quite tracked down the original citation in the first place. With a text search in Google Books, I can locate them instantly in the books I have at home, rather than having to flick endlessly through the pages trying to find where the damned things were. Or I can, say, go straight to the original old texts by the likes of Walter Pater and discover the quote in its original context rather than at several removes. None of this does anything to deprive writers of sales, and indeed many of the relevant books are (at least in my case) old and out of copyright (and print), the authors long dead. As a result, I completed the endnotes in a couple of days, when they dragged on forever with my previous book. It was a telling indication of the way the technology has advanced, for the better, in just a couple of years. Personally, I’m looking forward to the Library of Babel that is Google Books expanding indefinitely.
Tuesday, October 13, 2009
Shaking hands with robots
[This is my forthcoming Material Witness column for Nature Materials, which seemed sufficiently low-tech to warrant inclusion here.]
Should robots pretend to be human? The plots of many science fiction novels and movies – most famously, Philip K. Dick’s Do Androids Dream of Electric Sheep?, filmed by Ridley Scott as Blade Runner – hinge on the consequences of that deception. Blade Runner opens with a ‘replicant’ undergoing the ‘Voight-Kampff’ test, in which physiological functions betray human-like emotional responses to a series of questions. This is a version of the test proposed by Alan Turing in a seminal 1950 paper pondering the question of whether machines can think [1].
But human-like thought (or its appearances) is only one aspect of the issue of robotic deception. There would be no need to test Blade Runner’s replicants if they had been made of gleaming chrome, or exhibited the jerky motions of a puppet or the stilted diction of an old-fashioned voice synthesizer. To seem truly human, a robot has to perform accurate mimesis on many (perhaps too many) fronts [2].
Today we might insist on a conceptual distinction between such mimicry and the real thing. But this was precisely what Turing set out to challenge in the realm of mind: if you can’t make the distinction empirically, in what sense can you say it exists? And in former times, that applied also to the other characteristics of humanoid machines. In the Cartesian world of the eighteenth century, when many considered humans to be merely elaborate mechanisms, it was not clear that the intricate automata which entertained salon society by writing and playing music and games were rigidly demarcated from humanity. Descartes himself rejected any such boundary, implying that automata were in a limited sense alive. In his Discourse on Method (1637) he even proposed a primitive version of the Turing test, based on the ability to use language and adapt behaviour to circumstance.
One of the most famous automata of that age was a mechanical flute player made by the virtuoso French engineer Jacques de Vaucanson, who unveiled it to wide acclaim in 1738. Not only did it sound right but its breathing mimicked human mechanics, and its right arm was upholstered with real skin [3]. This feat is brought to mind by a preprint by John-John Cabibihan at the National University of Singapore and colleagues, in which the mechanical properties of candidate ‘robot skin’ polymers (silicone and polyurethane) are tested for their likeness to human skin [4]. Can we make a robot hand feel human, the researchers ask? Not yet, at least with these materials, they conclude – in the process showing what a delicate task that is (a part of the feel of human skin, for example, comes from its hysteretic response to touch).
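That hysteresis point is worth a moment’s unpacking, and it is easy to see in a toy model – my sketch, not the researchers’ analysis: a Kelvin–Voigt viscoelastic element, in which stress depends on strain rate as well as strain, so that a cyclic squeeze traces a loop in the stress–strain plane rather than a line. The loop’s area is energy lost to the material, the dissipative ‘give’ that a purely elastic rubber lacks. All parameter values here are illustrative.

```python
# A toy of the hysteresis point (my sketch, not from the paper): a
# Kelvin-Voigt viscoelastic solid, stress = E*strain + eta*strain_rate,
# driven through one sinusoidal strain cycle. The stress-strain curve
# encloses a loop whose area is the energy dissipated per cycle; a purely
# elastic material would trace a line of zero area instead.
import numpy as np

E, eta = 1.0e5, 2.0e3         # illustrative stiffness (Pa) and viscosity (Pa*s)
eps0, omega = 0.2, 2 * np.pi  # strain amplitude, drive frequency (rad/s)

t = np.linspace(0.0, 1.0, 2001)   # exactly one period (T = 2*pi/omega = 1 s)
strain = eps0 * np.sin(omega * t)
stress = E * strain + eta * eps0 * omega * np.cos(omega * t)

# Loop area by the trapezoid rule; analytically it equals pi*eta*omega*eps0**2.
area = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))
print(f'dissipated per cycle: {area:.0f} J/m^3 '
      f'(analytic: {np.pi * eta * omega * eps0**2:.0f})')
```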
Underlying the research is the notion that people will be socially more at ease interacting with robots that seem ‘believable’ – we will feel queasy shaking hands if the touch is wrong. That’s supported by experience [5], but also in itself raises challenging questions about the proper limits of such illusion [6]. Arguably there are times when we should maintain an evident boundary between robot and person.
References
1. Turing, A. Mind 59, 433-460 (1950).
2. Negrotti, M. Naturoids (World Scientific, Singapore, 2002).
3. Stafford, B. M. Artful Science pp. 191-195 (MIT Press, Cambridge, MA, 1994).
4. Cabibihan, J.-J., Pattofatto, S., Jomâa, M., Benallal, A. & Carrozza, M. C. Preprint http://www.arxiv.org/abs/0909.3559 (2009).
5. Fong, T., Nourbakhsh, I. & Dautenhahn, K. Robotics Autonomous Syst. 42, 143-166 (2003).
6. Sharkey, N. Science 322, 1800-1801 (2008).
Monday, October 05, 2009
What toys can tell us
[My latest Muse for Nature news…]
Sometimes all you need to do scientific research is string, sealing wax and a bit of imagination.
When Agnes Gardner King went visiting her uncle William one November day in 1887, she found him playing. He was, she wrote, ‘armed with a vessel of soap and glycerine prepared for blowing soap bubbles, and a tray with a number of mathematical figures made of wire.’ He’d dip these into the tray and see what shapes the soap films made as they adhered to the wire. ‘With some scientific end in view he is studying these films’, wrote Agnes [1].
Her uncle was William Thomson, better known as Lord Kelvin, one of the greatest scientists of the Victorian age. His ‘scientific end’ was to deduce the rules that govern soap-film intersections, so that he might figure out how to divide up three-dimensional space into cells of equal size and shape with the minimal wall area. It was the kind of problem that attracted Kelvin: simple to state, relevant to the world about him, and amenable to experiment using little more than ‘toys’.
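The geometry behind the soap films can be checked in a few lines – my calculation, not anything from Kelvin’s notebooks – by comparing the surface area per unit volume of three space-filling candidate cells; the truncated octahedron that Kelvin eventually proposed comes out best of these:

```python
# A quick geometric check (mine, not from the column): surface area per cell
# at unit volume, A/V^(2/3), for three space-filling cells. Walls are shared
# between neighbouring cells, so only the relative values matter.
from math import sqrt

def normalized_area(area_coeff, vol_coeff):
    """A/V^(2/3) for a cell with area A = area_coeff*a^2, volume V = vol_coeff*a^3."""
    return area_coeff / vol_coeff ** (2 / 3)

cells = {
    'cube':                 (6.0,               1.0),
    'rhombic dodecahedron': (8 * sqrt(2),       16 * sqrt(3) / 9),
    'truncated octahedron': (6 + 12 * sqrt(3),  8 * sqrt(2)),
}
for name, coeffs in sorted(cells.items(), key=lambda kv: normalized_area(*kv[1])):
    print(f'{name:22s} {normalized_area(*coeffs):.3f}')
# Kelvin's cell wins here (~5.31 against 5.34 and 6.00); only a century
# later did the Weaire-Phelan foam do better still, by a fraction of a percent.
```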
This kind of study is brought to mind by a paper in the Proceedings of the National Academy of Sciences USA by George Whitesides and colleagues at Harvard University. In an effort to understand how polymer molecules fold and flex, they have built strings of beads and shaken them in a tray [2]. There are three types of bead: large spherical or cylindrical beads of Teflon and nylon, and small ‘spacer’ beads of poly(methyl methacrylate).
They are designed to mimic real polymers in which different monomer groups interact via forces of attraction and repulsion. When agitated on a flat surface to mimic thermal molecular motion, the Teflon and nylon beads develop negative and positive electrostatic charges respectively, and so like beads repel while unlike beads attract.
This simple ‘beads-on-a-string’ model of polymers replicates, in toy form, a mathematical description of polymers used to understand their conformational behaviour [3], such as the way the polypeptide chains of proteins fold into their compact, catalytically active ‘native’ structure. With some modification – using cylindrical beads of various lengths, so that optimal pairing of oppositely charged beads happens when they have the same length – the model can be used to look at how RNA molecules fold up using the principles of complementary base-pairing between the bases that form the ‘sticky’ monomers.
The beauty of it is that the experiments are literally child’s play (the interpretation requires a little more sophistication). Even the simplest formulations of the mathematical theory are tricky to solve – but the beads, say Whitesides and colleagues, act as an ‘analog computer’ that generates solutions, allowing them rapidly to develop and test hypotheses about how folding depends on the monomer sequence.
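To make the philosophy concrete, here is a toy counterpart in code rather than in Teflon – my sketch, not the authors’ model: a short chain of positively and negatively ‘charged’ beads is folded every possible way on a square lattice, and the lowest-energy fold found by brute force. The charge sequence and energy values are illustrative.

```python
# A toy counterpart to the bead experiments (my sketch, not the authors'
# model): exhaustively fold a short chain of +/- 'charged' beads on a square
# lattice and find the minimum-energy conformation. Non-bonded beads on
# adjacent sites contribute q_i*q_j, so unlike charges attract (-1) and like
# charges repel (+1), echoing the nylon(+)/Teflon(-) beads.

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def folds(n, path=None):
    """Generate every self-avoiding walk of n beads starting at the origin."""
    path = path or [(0, 0)]
    if len(path) == n:
        yield list(path)
        return
    x, y = path[-1]
    for dx, dy in MOVES:
        step = (x + dx, y + dy)
        if step not in path:
            path.append(step)
            yield from folds(n, path)
            path.pop()

def energy(path, charges):
    """Sum q_i*q_j over non-bonded pairs sitting on adjacent lattice sites."""
    index = {site: i for i, site in enumerate(path)}
    e = 0
    for i, (x, y) in enumerate(path):
        for dx, dy in MOVES:
            j = index.get((x + dx, y + dy))
            if j is not None and j > i + 1:   # skip bonded neighbours; count once
                e += charges[i] * charges[j]
    return e

charges = [+1, -1, +1, -1, -1, +1, -1, +1]    # an illustrative sequence
best = min(folds(len(charges)), key=lambda p: energy(p, charges))
print('lowest energy:', energy(best, charges), 'fold:', best)
```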
Whitesides has used this philosophy before, making macroscopic objects with faces coated with thin films that confer different types of mutual interaction so as to explore processes of molecular-scale self-assembly driven by selective intermolecular forces [4]. This sort of collective behaviour of many interacting parts can give rise to complex, often unexpected structures and dynamics, and is difficult to describe with rigorous mathematical theories.
It’s really a reflection of the way chemists have thought about atoms and molecules ever since John Dalton used wooden balls to represent them around 1810: as hard little entities with a characteristic size and shape. Chemists still routinely use plastic models to intuit how molecules fit together. And these investigations have long gone beyond the pedagogical to become truly experimental. The great crystallographer Desmond Bernal studied the disorderly packing of atoms in liquids using ball-bearings, and, with chalk-dusted balls of Plasticine squeezed inside a football bladder, repeated the 1727 experiment by Stephen Hales on packing of polyhedral cells that was itself a precursor to Kelvin’s investigations. (Bernal called them, apologetically, ‘rather childish experiments’ [5]).
In more recent years, model systems of beads and grains have been used as analogues of the most unlikely and complex of phenomena, from earthquakes and exotic electronic behaviour [6] to the phyllotactic growth of flower-heads [7]. Aside from the obvious issue of how closely these ‘toys’ mimic the theory (let alone how well the theory mimics reality), these approaches risk offering phenomenology without true insight: an ‘analytical’ solution to the equations can make it easier to discern the key physics at play. But when applied judiciously, they show that creativity and imagination can trump mathematical prowess or number-crunching muscle. And they also help underline the universality of physical theory, in which, as Ralph Waldo Emerson said, ‘The sublime laws play indifferently through atoms and galaxies.’ [8]
References
1. King, A. G. Kelvin the Man p.192 (Hodder & Stoughton, London, 1925).
2. Reches, M., Snyder, P. W. & Whitesides, G. M. Proc. Natl Acad. Sci. USA advance online publication 10.1073/pnas.0905533106 (2009).
3. Lifshitz, I. M., Grosberg, A. Y. & Khokhlov, A. R. Rev. Mod. Phys. 50, 683-713 (1978).
4. Bowden, N., Terfort, A., Carbeck, J. & Whitesides, G. M. Science 276, 233-235 (1997).
5. Bernal, J. D. Proc. R. Inst. Great Britain 37, 355-393 (1959).
6. Bak, P. How Nature Works (Oxford University Press, Oxford, 1997).
7. Douady, S. & Couder, Y. Phys. Rev. Lett. 68, 2098-2101 (1992).
8. Emerson, R. W. The Conduct of Life, p.202 (J. M. Dent & Sons, London, 1908).
Tuesday, September 29, 2009
In Praise of Erasmus
I’m aware that I risk derision and worse by saying, with attempted insouciance, that I have just been reading Erasmus’ Praise of Folly, but never mind that; he has rather wonderful things to say about ‘those who court immortal fame by writing books’:
“But people who use their erudition to write for a learned minority and are anxious to have either Persius [the learned] or Laelius [the not-so-learned] pass judgement don’t seem to me favoured by fortune but rather to be pitied for their continuous self-torture. They add, change, remove, lay aside, take up, rephrase, show to their friends, keep for nine years, and are never satisfied. And their futile reward, a word of praise from a handful of people, they win at such cost – so many late nights, such loss of sleep, sweetest of all things, and so much sweat and anguish. Then their health deteriorates, their looks are destroyed, they suffer partial or total blindness, poverty, ill will, denial of pleasure, premature old age, and early death, and any other such disasters there may be. Yet the wise man believes he is compensated for everything if he wins the approval of one or other purblind scholar.”
I get a little comfort from knowing that the literary world was just as crabby and self-obsessed five hundred years ago as it is now. But even then they had their Dan Browns:
“The writer who belongs to me [Folly] is far happier in his crazy fashion. He never loses sleep as he sets down at once whatever takes his fancy and comes from his pen, even his dreams, and it costs him little beyond the price of the paper. He knows well enough that the more trivial the trifles he writes about the wider the audience which will appreciate them, made up as it is of all the ignoramuses and fools. What does it matter if three scholars can be found to damn his efforts, always supposing they’ve read them? How can the estimation of a mere handful of savants prevail against such a crowd of admirers?”
Punishment is different in China
[This is my latest Muse article for Nature News. I have the feeling that I’ve encountered guanxi several times in a personal context, and it certainly is hard for a Westerner to figure out what is really going on.]
Doing business abroad is not like doing it at home. A tough, no-nonsense approach will get results in New York but may be seen as rude and aggressive in Tokyo, where business negotiations are a mixture of ceremony and courtship designed to avoid direct confrontation. In Japan, an apparent ‘yes’ can mean ‘no’, and there are sixteen ways of avoiding having to say ‘no’ directly [1]. So negotiations there are long-winded, ambiguous and, to outsiders, plain baffling.
Traditional economics makes no concessions to such cultural differences. Its models tend to adopt a ‘one size fits all’ view of human interactions, in which choices and strategies are based on the cold logic of ‘utility maximization’: whatever gets you the best deal. But behavioural economics, which sets out to investigate how real people conduct their decision-making, is undermining that picture. Now a paper in the Proceedings of the National Academy of Sciences USA reports that Chinese people respond quite differently from Americans in transactions that contain the threat of punishment for uncooperative behaviour [2].
Yi Tao of the Chinese Academy of Sciences’ Institute of Zoology in Beijing and his coworkers have looked at how Chinese university students play a version of the Prisoner’s Dilemma. This is the classic model system in game theory for studying two-person interactions that include a temptation to cheat. The game awards points according to whether the two players decide to cooperate or ‘defect’. Two defections – the players both try to cheat each other – elicit a low payoff, while both players are better rewarded if they cooperate. However, if player 1 cooperates while player 2 defects, then player 1 is the sucker, getting the worst possible payoff, while player 2 does best of all.
The ‘logical’ way to play this game is always to defect, because that gives you the best payoff whatever your opponent does. Yet if the game is played repeatedly, players realise that they can both score more highly if they agree (tacitly) to cooperate. Thus cooperation can arise from self-interest.
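To see the dilemma laid bare, here is a toy calculation with the textbook payoff ordering (temptation > reward > mutual-defection payoff > sucker’s payoff); the numbers are illustrative, not those used in the study:

```python
# The one-shot dilemma in miniature, with the textbook ordering T > R > P > S
# (illustrative values, not the study's).
R, S, T, P = 3, 0, 5, 1   # reward, sucker's payoff, temptation, mutual defection
payoff = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}   # keys: (my move, their move)

for theirs in ('C', 'D'):
    if_cooperate = payoff[('C', theirs)][0]
    if_defect = payoff[('D', theirs)][0]
    print(f'opponent plays {theirs}: cooperate -> {if_cooperate}, defect -> {if_defect}')
# Defecting pays more against either move (5 > 3 and 1 > 0), so it dominates --
# and yet mutual cooperation (3 each) beats mutual defection (1 each).
```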
The Prisoner’s Dilemma (PD) – which was devised during the Cold War in a US think tank but was anticipated in the political philosophies of Thomas Hobbes and Jean-Jacques Rousseau – has been used to show how altruism can develop in animal communities and human society. This kind of game theory is also central to economic analysis of competitive markets.
It’s generally necessary for PD players to encounter one another repeatedly before the long-term benefits of mutual cooperation become apparent. That’s why, the theory suggests, we cultivate good relations with our neighbours and local shops. But in 2002, behavioural economists Ernst Fehr and Simon Gächter in Switzerland showed that cooperation may also emerge without repeated encounters if there are opportunities to punish defectors [3]. In those experiments, groups of players were given a sum of money to invest in a project, and the group was jointly rewarded according to the level of investment. Freeloaders can enjoy the group reward without investing. But if such selfish behaviour can be punished with fines, it is suppressed. Punishment is typically used insistently: players will punish even at a cost to themselves.
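The arithmetic of freeloading is easy to sketch; the multiplier and endowment below are assumptions in the spirit of such experiments, not the exact design of [3]:

```python
# A public-goods round in miniature (multiplier and endowment assumed):
# invested units are multiplied by r and shared equally among the group,
# so a freeloader collects the group return without paying in.
def earnings(investments, r=1.6, endowment=20):
    share = r * sum(investments) / len(investments)
    return [endowment - inv + share for inv in investments]

print(earnings([20, 20, 20, 0]))   # -> [24.0, 24.0, 24.0, 44.0]
# The freeloader comes out on top -- which is why the option to fine such
# behaviour, even at a cost to the fine-giver, changes the outcome.
```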
Gächter and his coworkers have investigated whether attitudes to punishment in this ‘public goods’ game differ across cultures [4]. They compared the amount of expenditure on punishment for players in 15 different countries, including the US, China, Turkey and Saudi Arabia. They found that the cooperation-enhancing effect of punishment, and cooperation overall, was strongest in democratic countries with a long tradition of market economies.
The behaviours in this case were similar in the US and China. But Tao and his coworkers show this doesn’t mean American and Chinese people regard or employ punishment in the same way. The participants in their study played the straightforward iterated PD, but with the added provision that each player could opt to punish rather than to cooperate or defect. Punishment deducts several points from the other player, but also costs the punisher. Ill-gotten gains from defection can be cancelled by being punished in the next round.
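In toy form, the book-keeping of this three-move game looks something like the sketch below; the unit costs are my own assumptions, loosely modelled on such designs, not the parameters of the Chinese study:

```python
# The three-move round in toy form (unit costs assumed, not the study's):
# each move has a cost to the mover and an effect on the other player.
EFFECTS = {
    'C': (-1, +3),   # cooperate: pay 1 so that the other gains 3
    'D': (+1, -1),   # defect: gain 1 at the other's expense
    'P': (-1, -4),   # punish: pay 1 to dock the other 4 points
}

def round_payoffs(a, b):
    """Payoffs (to a, to b) when the players choose moves a and b."""
    cost_a, hit_b = EFFECTS[a]
    cost_b, hit_a = EFFECTS[b]
    return cost_a + hit_a, cost_b + hit_b

print(round_payoffs('C', 'C'))   # (2, 2): steady cooperation pays 2 each
print(round_payoffs('D', 'C'))   # (4, -2): a defector profits this round...
print(round_payoffs('P', 'D'))   # (-2, -3): ...until the victim pays to punish
# Over two rounds the defector nets 4 - 3 = 1, against 2 + 2 = 4 for mutual
# cooperation: punishment can cancel ill-gotten gains, at a cost to the punisher.
```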
This might be expected to promote cooperation – and when other researchers tested the game on US students last year, that’s what they found [5]. But in China, punishment made virtually no difference to the amount of cooperation: it was, if anything, slightly lower than in control games without that option.
Tao’s team says this probably reflects the fact that, in China, individuals conduct transactions by cultivating so-called guanxi (literally ‘closed system’) networks of two-person relationships based on empathy and mutual understanding. That might sound chummy, but in fact guanxi is a delicate dance in which feelings of friendship, obligation and guilt are patiently probed and manipulated to reach the desired goal. Since the rules of the PD-with-punishment allow for nothing of this sort, there’s nothing to link cooperation to punishment.
All the same, reputations are important to guanxi networks, so in the public-goods game [4] where reputation (for reciprocity, say) may come into play, punishment is restored as an operative force – making the US and Chinese behaviours more similar, although for different reasons.
One implication is that it’s dangerous to extrapolate from lab tests of behavioural economics to evolutionary questions – for example, to ‘explain’ the adaptive role of punishment in human society. Economic behaviour too can evidently depend on the ‘culture’ in which it happens. For example, economists are divided on the issue of how incentives affect productivity, because real-world experience sometimes seems at odds with what behavioural tests imply. But that’s no longer surprising if behaviour varies according to the system of norms within which it is enacted. It’s a reminder that a search for economic and behavioural ‘first principles’ may be doomed to fail.
References
1. Hendon, D. W., Hendon, R. A. & Herbig, P. Cross-Cultural Business Negotiations (Quorum, Westport, CT, 1996).
2. Wu, J.-J. et al., Proc. Natl Acad. Sci. USA 10.1073/pnas.0905918106 (2009).
3. Fehr, E. & Gächter, S. Nature 415, 137-140 (2002).
4. Herrmann, B., Thöni, C. & Gächter, S. Science 319, 1362-1367 (2008).
5. Dreber, A., Rand, D. G., Fudenberg, D. & Nowak, M. A. Nature 452, 348-351 (2008).
Friday, September 25, 2009
Darwin on screen
For a limited time only, I can be heard on BBC Radio 3’s Nightwaves talking about the new Darwin biopic Creation. The film is worth watching, but don’t expect any revelations. New Scientist got a bit hot under the collar about the use of ‘supernatural’ imagery – Darwin’s dead daughter Annie, it says, returns as a ghost. Well, not really: there’s never any doubt that this is Darwin’s mind talking to itself. It’s fanciful, perhaps even sentimental in the end, but hardly an affront to reason. It’s a fairer complaint, though, that this ‘makes for a cartoon account of the writing of On the Origin of Species’, especially given that the book was published 8 years after Annie died.
Wednesday, September 16, 2009
Truly wonderful
It’s official, then: the Royal Society Science Book Prize goes to The Age of Wonder by Richard Holmes. The decision was announced yesterday, and the Guardian is apparently going to put online a podcast of the press conference at which my fellow judges and some of the shortlisted authors discussed science books and science writing in general. I almost felt sorry for the other candidates being up against a book as good as Richard’s: it was a wonderful shortlist, but The Age of Wonder left us all awestruck by its erudition, imagination and – despite what you might imagine from the subject and the size of the book – its readability. It is a glorious read, with something to enjoy on every page (and don’t skip the footnotes!). I was slightly nervous that the other judges might deem it too ‘historical’ to count as a science book, but happily they had no such qualms. The book looks at several strands of science that emerged in the early nineteenth century, all of which allied themselves with the Romantic spirit of wonder, awe and the sublime: for example, the geographical journeys of discovery by Joseph Banks, Mungo Park and others, William Herschel’s telescopic observations, Humphry Davy’s chemical inventiveness, and the origins of hot-air ballooning. One of the very special attributes of this book is that it makes science sound breathtakingly exciting – not only then, but still.
Both Ben Goldacre’s Bad Science and Jo Marchant’s Decoding the Heavens have received well-deserved praise elsewhere (here and here) (as the other two Brits, they were the other shortlisted authors present at the ceremony). But I’d like to put in a shout for the three other candidates too, and especially for Neil Shubin’s Your Inner Fish, which we all agreed was exquisite, making a tricky subject (evo-devo) very accessible and engaging. The memory of being confronted with three boxes stuffed full of books is now fading, and I’m left feeling lucky to have been involved in this process.
Friday, September 11, 2009
The soul and the embryo
Perhaps you noticed this article in the Guardian last year by David Albert Jones, a bioethicist at St Mary’s College, part of the University of Surrey. I didn’t, but plenty did, judging from the feedback.
But it came to my attention after having just read Jones’ 2004 book The Soul of the Embryo, as research for my next book. It’s a very interesting survey of how this issue – whether embryos have souls, when they get them, where they come from, what happens to them – has been debated in Christian theology throughout the ages.
It is also an unintentionally revealing portrait of the mixture of bad logic and warped, misrepresentative argumentation that seems to pervade the Catholic rejection of embryo research. In the language of the Telegraph reader: I am appalled.
Oh, there are the little deceits, such as his insinuation without evidence that IVF clinics routinely seek to overproduce embryos so that they have plenty for research purposes. (For one thing, no embryos can be used in research without the permission of the donors.) Or the sudden switch, when we reach the modern discussion of abortion, from the almost universal use of ‘embryo’ and ‘fetus’ earlier in the book to ‘unborn child’ – or even just ‘child’.
But far worse is the perverse logic and the unprincipled cherry-picking of facts and arguments. Now, I can fully understand how someone might reach the conclusion that, from the moment of fertilization, an embryo is a human being and is entitled to the basic right of being allowed to live – and that abortion is therefore murder, as is discarding of human embryos in any form. It’s a point of view, so to speak. If you believe that the embryo has the same status as a newborn child, and that killing is always wrong, then that’s a consistent position.
It means, as a corollary, that you are an unconditional pacifist, and must consider the papacy wrong and unethical for not sharing this position. That, somehow, does not seem to be Jones’ view.
Ah, so perhaps you feel that, no, one can’t simply say one should never in any circumstances kill, but that one should not in any circumstances kill something as small and weak and vulnerable as an embryo. So OK, we can now get into an issue of relative merits: there are circumstances in which it is alright to kill, but this isn’t one of them. Then you can no longer take refuge in absolute prohibitions, and must instead be ready to place things on the balance. Why is it wrong to prevent the further development of a ball of cells that might or (more probably, as IVF statistics make clear) might not implant in a womb (even though it is never going to have the opportunity to do so anyway) in order to find a way of alleviating some extreme human suffering? What is the ethical calculus that leads you to this position? Not only does Jones fail to explain it, he fails even to consider it. The only mention he makes of the medical potential of embryo and stem-cell research is to say that it has been exaggerated by its supporters. This man is said to be a bioethicist, remember.
But OK, he is opposed to the destruction of any embryo. It must follow that he opposes the attempted production in IVF of any more embryos than would be implanted – which means two, in the UK. This would, at a stroke, cause IVF success rates – already pretty low, at 20-30 percent – to plummet. But that would of course be a pretty unpopular course, and so Jones keeps quiet about it. After all, it would seem a bit harsh, wouldn’t it – and he is very keen not to appear harsh or unfeeling.
Now, you might have spotted that all of this takes no account of the soul. Indeed, for there is absolutely no reason why it should. But for Christian readers, all the arguments seem likely to carry so much more weight if we can get the soul in there as soon as possible. Give the embryo a soul, Jones seems to imply, and it is ethically unassailable. And so he marshals arguments for why Christian theology supports the notion of ‘ensoulment at conception’.
The problem is that the Bible is all but silent on the issue of when, how or to what degree ensoulment happens during embryogenesis. So the story must be constructed by indirect inference. This is where the façade of logic really starts to unravel. I can’t go through all the reasons for that – they are too plentiful – but here’s a taste.
One of the key arguments rests on the example of Christ as an embryo. To some theologians (Jones downplays this), the image of Christ in the womb was unsettling, particularly if it meant we were forced to regard him as a developing embryo (even in ancient Greece it was clear that embryos acquire their form gradually). So some considered Christ to have been created fully formed – as a kind of homunculus – at the moment of the Immaculate Conception, and just to have grown bigger. Jones, on the other hand, is prepared to contemplate an embryonic Christ, but stresses that the Bible emphasizes his humanity and rules out the notion that Jesus could ever have been a less-than-fully-human embryo. But souls are always possessed by humans, and so if, as the Bible says, Jesus was as fully a human as any of us, then we too must be fully human, and thus in possession of a soul, from conception. This insistence on Jesus’s humanity rules out the possibility that there could have been something exceptional about his embryonic status – indeed, this whole argument hinges on being able to make the case against this exceptionalism, so that what is true of the embryonic Jesus is necessarily true of us. (I know what you’re thinking – is this a modern theologian, and not some obsessive medieval scholastic debating angels on pinheads? Bear with me.) But the problem with that is that it overlooks a few other things that do seem to rather set the embryonic Christ apart from your run-of-the-mill embryo:
1. He is the son of God
2. He is in a virgin’s womb
3. He was created without human sperm
Oh yes, he was different like that, but otherwise, you know, just like us.
Oh, souls. They’re amazing, once you start to think about them. One common objection to the ‘ensoulment at conception’ notion, says Jones, is that some embryos become twins. Where does the other soul come from? He admits this is a bit tricky: does the old soul die and two new ones get added to the two embryos (but then where is the ‘dead’ being from which the old soul fled)? Or does one embryo get to keep the old soul, and the other get given a new one? Or maybe God knew that the embryo would split, and so added two souls in the first place? ‘The problem with twinning seems less our inability to tell a ‘soul story’ and more the inability to judge between these stories’, says Jones. ‘Until more is known empirically, it is difficult to know what sort of story to tell.’ No, that is not the problem. The problem is that it is transparently obvious that you will indeed inevitably end up doing just that – telling a story. In other words, once the ‘facts’ are known, Jones is confident that a story can be tailored to fit them. I’ve no doubt that is true – and equally, no doubt that any theological justification for it will be a masterpiece of post hoc inference. That’s to say, any such ‘story’ will be not only scientifically untestable (we’re talking about souls here, after all), but also theologically unverifiable. In the process, incidentally, it seems likely to turn God’s supposedly mysterious creation of beings into high farce: ‘Oops, looks like we need another one of those souls over here.’
(Note also that, while the twinning issue seems like a minor wrinkle to the debate, it played an important role in swaying the opinion of some people involved in the debate around embryo research in the British Parliament between the Warnock Report in 1984 and the HFE Bill in 1990. They were not so ready as Jones to sweep it under the carpet.)
It was with growing dismay that I realised, as I read on, that Jones was not merely exploring changing ideas about what the soul is and where it comes from; he felt he was telling us some literal truths about this, just as if explaining the origin and evolution of species. What this means is that he pursues some arguments with the fine-toothed rigour of the philosopher, but quickly moves on or changes the subject as soon as some gaping flaw in the logic presents itself. Some theologians, says Jones, have worried about the idea that, if all embryos have souls, then most of the souls in heaven are those of embryos that never developed further. What are they like? And what form do their bodies take in the Resurrection? ‘This is not an easy question’, he admits, and leaves it there. But is it a question at all?
Jones suggests that the soul of an individual being is in some ways synonymous with that individual’s life. That makes it easy enough to argue that it’s in the embryo from conception, which is arguably the moment that a ‘potential new life’ appears. But it also forces him to slip out the fact that plants too must have a soul – though not the rational soul of humans, naturally. Presumably God puts them there too. So when does a cutting acquire its new soul? Given that it is a clone (literally), does it need a new one? Hm, no answer.
Or take this, from an article by Jones in Thinking Faith, the online journal of British Jesuits on why the latest Human Fertilization and Embryology Bill (now an Act, of course) is such a terrible thing:
“It is more difficult to know what to think about ‘true hybrids’ which are the most extreme kind of human-animal embryo permitted by the Bill. True hybrids are made by mixing sperm and egg from different species and would be 50% human and 50% of some other species. This raises the issue about whether there is something wrong with crossing the species barrier. This is easiest to see if we ask what would be wrong with bringing a half-human half-chimpanzee to birth. The primary issue here is not how much protection to give to a ‘humanzee’, it is whether we should allow scientists to create humanzees in the first place (this was actually attempted by soviet and other scientists in the 1920s but happily none succeeded). The act of creating true hybrids seems to be inhuman. It fails to respect our humanity.”
I agree with Jones that making a fully developed ‘humanzee’ (if it were possible) would not do a lot for our dignity, and I can think of no reason why it would be desirable. But I wonder what kind of soul Jones thinks God would give it. Or how God would decide the issue. Or how we might decide how God would decide the issue. Better to evade the matter by saying that scientists shouldn’t do it in the first place.
I don’t object to the fact that, in bringing a Christian perspective to these bioethical issues, one has to accept that the issue of souls might arise. That goes with the territory, just as one would have to accept that in other contexts Christians may want to invoke notions of heaven, resurrection and so forth. But if, say, the latter were to happen, I think we might reasonable expect now that we would not be forced to contemplate such questions as how big heaven is and where it might be found. I think we can hope that theologians have moved beyond this sort of medieval literalism. But Jones’ view of souls has not – they are still entities (albeit immaterial) that have to be injected into living beings at some point in time, and accounted for in quantitative terms. To many people, the idea of a soul does seem to carry some valuable meaning, perhaps bound up with notions of individuality and human dignity. I can live with that. But once you start trying to make ‘soul’ a precise concept, it seems you’ll inevitably end up with this sort of absurdity.
This matters partly because people like Jones are the sort who will get their voices heard in these bioethics debates. But it rankles me most of all because he ends up cloaking his position in the seductive mantle of Christian compassion for ‘the least of these little ones’, while – as so often in these cases – refusing to think compassionately about the other side of the coin: the compassion in treatments for disease and infertility (not to mention any consideration of the role and experience of women). In other words, ideological principles before people.
Perhaps you noticed this article in the Guardian last year by David Albert Jones, a bioethicist at St Mary’s College, part of the University of Surrey. I didn’t, but plenty did, judging from the feedback.
But it came to my attention after having just read Jones’ 2004 book The Soul of the Embryo, as research for my next book. It’s a very interesting survey of how this issue – whether embryos have souls, when they get them, where they come from, what happens to them – has been debated in Christian theology throughout the ages.
It is also an unintentionally revealing portrait of the mixture of bad logic and warped, misrepresentative argumentation that seems to pervade the Catholic rejection of embryo research. In the language of the Telegraph reader: I am appalled.
Oh, there are the little deceits, such as his insinuation without evidence that IVF clinics routinely seek to overproduce embryos so that they have plenty for research purposes. (For one thing, no embryos can be used in research without the permission of the donors.) Or the sudden switch, when we reach the modern discussion of abortion, from the almost universal use of ‘embryo’ and ‘fetus’ earlier in the book to ‘unborn child’ – or even just ‘child’.
But far worse is the perverse logic and the unprincipled cherry-picking of facts and arguments. Now, I can fully understand how someone might reach the conclusion that, from the moment of fertilization, an embryo is a human being and is entitled to the basic right of being allowed to live – and that abortion is therefore murder, as is discarding of human embryos in any form. It’s a point of view, so to speak. If you believe that the embryo has the same status as a newborn child, and that killing is always wrong, then that’s a consistent position.
It means, as a corollary, that you are an unconditional pacifist, and must consider the papacy wrong and unethical for not sharing this position. That, somehow, does not seem to be Jones’ view.
Ah, so perhaps you feel that, no, one can’t simply say one should never in any circumstances kill, but that one should not in any circumstances kill something as small and weak and vulnerable as an embryo. So OK, we can now get into an issue of relative merits: there are circumstances in which it is alright to kill, but this isn’t one of them. Then you can no longer take refuge in absolute prohibitions, and must instead be ready to place things on the balance. Why is it wrong to prevent the further development of a ball of cells that might or (more probably, as IVF statistics make clear) might not implant in a womb (even though it is never going to have the opportunity to do so anyway) in order to find a way of alleviating some extreme human suffering? What is the ethical calculus that leads you to this position? Not only does Jones fail to explain it, he fails even to consider it. The only mention he makes of the medical potential of embryo and stem-cell research is to say that it has been exaggerated by its supporters. This man is said to be a bioethicist, remember.
But OK, he is opposed to the destruction of any embryo. It must follow that he opposes the attempted production in IVF of any more embryos than would be implanted – which means two, in the UK. This would, at a stroke, cause IVF success rates – already pretty low, at 20-30 percent – to plummet. But that would of course be a pretty unpopular course, and so Jones keeps quiet about it. After all, it would seem a bit harsh, wouldn’t it – and he is very keen not to appear harsh or unfeeling.
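To see how sharply a two-embryo limit would bite, here is a minimal toy calculation, sketched in Python. All the probabilities are invented for illustration – they are assumptions, not clinical data – but the shape of the arithmetic is the point: if you may only create as many embryos as you transfer, many cycles end with fewer than two usable embryos at all, and success rates fall accordingly.

```python
# Toy model of the two-embryo arithmetic. All probabilities below are
# illustrative assumptions, not clinical figures.
from math import comb

def pregnancy_prob(eggs, p_fert, p_implant, max_transfer=2):
    """P(at least one implantation) when at most `max_transfer` of the
    usable embryos from `eggs` fertilized eggs can be transferred."""
    total = 0.0
    for k in range(eggs + 1):  # k = number of usable embryos obtained
        p_k = comb(eggs, k) * p_fert**k * (1 - p_fert)**(eggs - k)
        t = min(k, max_transfer)                 # embryos transferred
        total += p_k * (1 - (1 - p_implant)**t)  # at least one implants
    return total

p_fert, p_implant = 0.6, 0.15  # assumed per-egg and per-embryo odds

print(f"Fertilize 10 eggs, transfer two: {pregnancy_prob(10, p_fert, p_implant):.0%}")
print(f"Fertilize only 2 eggs:           {pregnancy_prob(2, p_fert, p_implant):.0%}")
```

On these made-up numbers the per-cycle success rate drops from roughly 28 to 17 per cent, and the real effect would likely be larger, since the model ignores the benefit of selecting the most viable-looking embryos from a bigger pool.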
Now, you might have spotted that all of this takes no account of the soul. Indeed, there is absolutely no reason why it should. But for Christian readers, all the arguments seem likely to carry so much more weight if we can get the soul in there as soon as possible. Give the embryo a soul, Jones seems to imply, and it is ethically unassailable. And so he marshals arguments for why Christian theology supports the notion of ‘ensoulment at conception’.
The problem is that the Bible is all but silent on the issue of when, how or to what degree ensoulment happens during embryogenesis. So the story must be constructed by indirect inference. This is where the façade of logic really starts to unravel. I can’t go through all the reasons for that – they are too plentiful – but here’s a taste.
One of the key arguments rests on the example of Christ as an embryo. To some theologians (Jones downplays this), the image of Christ in the womb was unsettling, particularly if it meant we were forced to regard him as a developing embryo (even in ancient Greece it was clear that embryos acquire their form gradually). So some considered Christ to have been created fully formed – as a kind of homunculus – at the moment of the Immaculate Conception, and just to have grown bigger. Jones, on the other hand, is prepared to contemplate an embryonic Christ, but stresses that the Bible emphasizes his humanity and rules out the notion that Jesus could ever have been a less-than-fully-human embryo. But souls are always possessed by humans, and so if, as the Bible says, Jesus was as fully a human as any of us, then we too must be fully human, and thus in possession of a soul, from conception. This insistence on Jesus’s humanity rules out the possibility that there could have been something exceptional about his embryonic status – indeed, this whole argument hinges on being able to make the case against this exceptionalism, so that what is true of the embryonic Jesus is necessarily true of us. (I know what you’re thinking – is this a modern theologian, and not some obsessive medieval scholastic debating angels on pinheads? Bear with me.) But the problem with that is that it overlooks a few other things that do seem to rather set the embryonic Christ apart from your run-of-the-mill embryo:
1. He is the son of God
2. He is in a virgin’s womb
3. He was created without human sperm
Oh yes, he was different like that, but otherwise, you know, just like us.
Oh, souls. They’re amazing, once you start to think about them. One common objection to the ‘ensoulment at conception’ notion, says Jones, is that some embryos become twins. Where does the other soul come from? He admits this is a bit tricky: does the old soul die and two new ones get added to the two embryos (but then where is the ‘dead’ being from which the old soul fled)? Or does one embryo get to keep the old soul, and the other get given a new one? Or maybe God knew that the embryo would split, and so added two souls in the first place? ‘The problem with twinning seems less our inability to tell a ‘soul story’ and more the inability to judge between these stories’, says Jones. ‘Until more is known empirically, it is difficult to know what sort of story to tell.’ No, that is not the problem. The problem is that it is transparently obvious that you will indeed inevitably end up doing just that – telling a story. In other words, once the ‘facts’ are known, Jones is confident that a story can be tailored to fit them. I’ve no doubt that is true – and equally, no doubt that any theological justification for it will be a masterpiece of post hoc inference. That’s to say, any such ‘story’ will be not only scientifically untestable (we’re talking about souls here, after all), but also theologically unverifiable. In the process, incidentally, it seems likely to turn God’s supposedly mysterious creation of beings into high farce: ‘Oops, looks like we need another one of those souls over here.’
(Note also that, while the twinning issue seems like a minor wrinkle to the debate, it played an important role in swaying the opinion of some people involved in the debate around embryo research in the British Parliament between the Warnock Report in 1984 and the HFE Bill in 1990. They were not so ready as Jones to sweep it under the carpet.)
It was with growing dismay that I realised, as I read on, that Jones was not merely exploring changing ideas about what the soul is and where it comes from; he felt he was telling us some literal truths about this, just as if he were explaining the origin and evolution of species. What this means is that he pursues some arguments with the fine-grained, rigorous logic of the philosopher, but quickly moves on or changes the subject as soon as some gaping flaw in the logic presents itself. Some theologians, says Jones, have worried about the idea that, if all embryos have souls, then most of the souls in heaven are those of embryos that never developed further. What are they like? And what form do their bodies take in the Resurrection? ‘This is not an easy question’, he admits, and leaves it there. But is it a question at all?
Jones suggests that the soul of an individual being is in some ways synonymous with that individual’s life. That makes it easy enough to argue that it’s in the embryo from conception, which is arguably the moment that a ‘potential new life’ appears. But it also forces him to let slip the fact that plants too must have a soul – though not the rational soul of humans, naturally. Presumably God puts them there too. So when does a cutting acquire its new soul? Given that it is a clone (literally), does it need a new one? Hm, no answer.
Or take this, from an article by Jones in Thinking Faith, the online journal of the British Jesuits, on why the latest Human Fertilization and Embryology Bill (now an Act, of course) is such a terrible thing:
“It is more difficult to know what to think about ‘true hybrids’ which are the most extreme kind of human-animal embryo permitted by the Bill. True hybrids are made by mixing sperm and egg from different species and would be 50% human and 50% of some other species. This raises the issue about whether there is something wrong with crossing the species barrier. This is easiest to see if we ask what would be wrong with bringing a half-human half-chimpanzee to birth. The primary issue here is not how much protection to give to a ‘humanzee’, it is whether we should allow scientists to create humanzees in the first place (this was actually attempted by soviet and other scientists in the 1920s but happily none succeeded). The act of creating true hybrids seems to be inhuman. It fails to respect our humanity.”
I agree with Jones that making a fully developed ‘humanzee’ (if it were possible) would not do a lot for our dignity, and I can think of no reason why it would be desirable. But I wonder what kind of soul Jones thinks God would give it. Or how God would decide the issue. Or how we might decide how God would decide the issue. Better to evade the matter by saying that scientists shouldn’t do it in the first place.
I don’t object to the fact that, in bringing a Christian perspective to these bioethical issues, one has to accept that the issue of souls might arise. That goes with the territory, just as one would have to accept that in other contexts Christians may want to invoke notions of heaven, resurrection and so forth. But if, say, the latter were to happen, I think we might reasonably expect now that we would not be forced to contemplate such questions as how big heaven is and where it might be found. I think we can hope that theologians have moved beyond this sort of medieval literalism. But Jones’ view of souls has not – they are still entities (albeit immaterial) that have to be injected into living beings at some point in time, and accounted for in quantitative terms. To many people, the idea of a soul does seem to carry some valuable meaning, perhaps bound up with notions of individuality and human dignity. I can live with that. But once you start trying to make ‘soul’ a precise concept, it seems you’ll inevitably end up with this sort of absurdity.
This matters partly because people like Jones are the sort who will get their voices heard in these bioethics debates. But it rankles me most of all because he ends up cloaking his position in the seductive mantle of Christian compassion for ‘the least of these little ones’, while – as so often in these cases – refusing to think compassionately about the other side of the coin: the compassion in treatments for disease and infertility (not to mention any consideration of the role and experience of women). In other words, ideological principles before people.
Friday, August 28, 2009
Francis Collins and his God – but no, not more of that New Atheist stuff…
[Well, not really. This is the pre-edited version of my latest column for Prospect.]
The unanimous praise for President Obama’s scientific appointments is faltering. When in January he offered the position of surgeon general to the ‘media doctor’ Sanjay Gupta of CNN, many considered it a lightweight choice. In the event Gupta declined, and Obama’s new nominee, Alabama community physician Regina Benjamin, has raised no eyebrows.
But the nomination of Francis Collins to head the National Institutes of Health, the US biomedical research organization, is more controversial. At face value, Collins looks an obvious choice: former leader of the Human Genome Project, he has a proven track record of large-scale management, and commands respect from peers by remaining scientifically active rather than becoming a pen-pushing administrator. Geneticist Eric Lander has called him ‘a superb choice for an NIH director’, while others praise him as a ‘scientist’s scientist.’
So what’s the problem? In a nutshell, Collins’ 2006 book The Language of God. He is outspoken, even evangelical, about his Christian faith. Even that might not have been a problem if Collins had not appeared to equivocate about ‘old-time religion’ issues such as the interpretation of the Fall and the possibility of divine intervention in evolution. Some scientists are troubled by what one can find on such issues on the website of the BioLogos Foundation, established by Collins to reconcile science and religion.
Collins will step down from BioLogos before taking up his new role, and some of his colleagues offer reassurances that they have never seen his scientific judgement clouded by his religious beliefs. But with the crippling religious opposition to stem-cell science in researchers’ minds, this may not be enough to dispel concern. Collins has become a figure of almost obsessive loathing among the ‘New Atheist’ scientists seeking to combat the religiosity of American life.
Biologist P.Z. Myers of the University of Minnesota, whose Pharyngula blog is a flagship of New Atheism, calls Collins’ BioLogos ‘an embarrassment of poor reasoning and silly Christian apologetics’ and worries that ‘he will use his position to act as a propagandist for Christianity.’ Harvard psychologist Steven Pinker calls Collins ‘an advocate of profoundly anti-science beliefs.’
But Myers offers what might be in the end a more compelling reason to question Collins’ appointment: ‘he represents a very narrow, gene-jockey style of research, which… often exhibits a worrisome lack of understanding of the big picture of biology.’ He’s not alone in fearing that Collins’ enthusiasm for ‘big science’ – especially genomic stamp-collecting – will leach funding from smaller but more intellectually guided areas, such as environmental and systems biology. Collins will initially have plenty of cash to spread around – the NIH was granted a one-off sum of $10.4 bn as an economic stimulus until September 2010 – but things will get leaner, and it will take boldness and vision to find space for innovation rather than more safe but dull genome-crunching.
*****
It has disconcerted astronomers, almost to the point of embarrassment, that a scar the size of the Earth has turned up unexpectedly on Jupiter. The dark ‘bruise’ in the giant planet’s dense atmosphere is evidence of some gigantic impact, presumably an asteroid or comet. The last time this happened, in 1994, it was widely anticipated and supplied a cosmic fireworks display both exhilarating and sobering: the fragmented comet Shoemaker-Levy 9 ploughed into the planet, leaving a trail of scars each of Armageddon proportions. But this wasn’t exactly a case of ‘there but for the grace of God’, so much as a reminder of Jupiter’s role as our guardian angel. The strong gravitational tug of the gas giant is thought to suck up many lumps of wandering debris that would otherwise pose a threat to Earth. Some researchers even think that the existence of a big brother to mop up impactors could be a condition of habitability for Earth-like planets around other stars.
All the same, we’d like to see such events coming. But no one foresaw the dark smudge in Jupiter’s south polar region until it was spotted by an amateur astronomer in Australia on 19 July. Word spread almost at once, and within less than a day two large infrared telescopes in Hawaii had seen the same spot, glowing brightly with sunlight reflected by the material thrown up through the jovian atmosphere.
We still don’t know what caused it, however. It could have been a faint icy comet, or a rocky asteroid. Jupiter also acquires blotches from storms, but none tends to look like this. It’s going to be tough now to figure out how big the impacting body was, or how much energy was released, especially as Jupiter’s winds will soon wipe away the traces. The similar ‘holes’ left by Shoemaker-Levy 9 were probably made by fragments several hundred metres wide: on Earth, that wouldn’t wipe us out, but it would make an almighty bang.
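For a sense of the numbers, here is a back-of-envelope kinetic-energy estimate, sketched in Python. The size, density and speed are my own illustrative assumptions – a few-hundred-metre icy fragment arriving at roughly Jupiter’s escape velocity – not figures from any analysis of this particular impact.

```python
# Back-of-envelope impact energy for a Shoemaker-Levy-9-like fragment.
# All inputs are illustrative assumptions, not figures from the post.
import math

diameter_m = 500.0   # assumed fragment diameter (a few hundred metres)
density = 500.0      # kg/m^3, loosely packed cometary ice (assumed)
speed = 60_000.0     # m/s, roughly Jupiter's escape velocity (assumed)

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius**3   # mass of a sphere
energy_joules = 0.5 * mass * speed**2            # kinetic energy
megatons_tnt = energy_joules / 4.184e15          # 1 Mt TNT = 4.184e15 J

print(f"mass ≈ {mass:.2e} kg")
print(f"energy ≈ {energy_joules:.2e} J ≈ {megatons_tnt:,.0f} Mt TNT")
```

Even a loosely packed half-kilometre snowball arrives with the energy of thousands of megatons of TNT – which is why ‘an almighty bang’ is, if anything, an understatement.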
*****
Has anyone visited the Meta Institute for Computational Astrophysics recently? It’s well worth it: led by some heavyweight astrophysicists, it hosts seminars for specialists every Friday, as well as regular public talks and outreach events open to all. But you won’t get there by air, road or rail, because MICA exists nowhere on Earth. It is the first professional research organization to be based exclusively in virtual reality, in Second Life. The potential of virtual worlds to bring together scientists for meetings and conferences without leaving their desks has been much heralded. But MICA takes that more seriously than most. Its seminars happen in a pleasant, wooded outdoor amphitheatre looking conspicuously like the Californian coast. It has to be said that the audience is rather better looking than it tends to be in reality too.
Wednesday, August 12, 2009
The weather forecast
[Here’s the pre-edited version of my review of Giles Foden’s new book Turbulence, which appears in the latest issue of Nature.]
Turbulence
Giles Foden
Faber & Faber, 2009
353 pages, £16.99
ISBN 978-0-571-20522-6
It’s a rare enough thing to encounter a novel based around one of your favourite obscure scientists; but when two of them appear in the same book, you feel Christmas must have come early. Add to this a plot that hinges on one of my pet nerdy topics – fluid dynamics – and I couldn’t help suspecting that Giles Foden had written Turbulence especially for me.
The result is compelling. Whether it fully works as fiction is another matter, to which I’ll come back. But Foden’s book is one of the most attractive additions to the micro-genre of science-in-fiction for a long time.
Fluid dynamics features here in the context of weather prediction. That may seem like deeply unpromising material for a gripping story, but Foden has dramatized what has been called the most important weather forecast ever made: that for the D-Day landings, the invasion of continental Europe at Normandy by the Allied forces towards the end of the Second World War. General Eisenhower, in overall command of the operation, had to be sure that the crossing of the English Channel would not be disrupted by bad weather. And he needed that information about five days in advance – a length of time that stretches today’s forecasting techniques to their limit, and which was in all honesty beyond the capability of the primitive, pre-computer prediction methods of meteorologists in 1944. Add to that the need for a low tide to evade the German sea defences, and the task confronting the Allies’ weather experts was all but impossible.
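It is worth being clear about why five days is such a stretch: tiny errors in the measured state of the atmosphere grow roughly exponentially. A minimal sketch of the effect – using the classic Lorenz-63 toy system, my illustration rather than anything from the book – shows two simulations that begin a millionth apart parting company within a few model-time units.

```python
# Minimal illustration of why forecast errors grow: two runs of the
# Lorenz-63 toy system that start almost identically soon diverge.
# Parameters and step size are the standard textbook choices; the
# integrator is plain forward Euler, adequate for demonstration only.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)   # 'observation error' of one part in a million

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        err = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}  separation = {err:.3e}")
```

The same sensitivity, in a system vastly more complicated than three coupled equations, is what made a five-day forecast in 1944 – without computers – so nearly impossible.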
Foden tells this story through the eyes of Henry Meadows, a (fictional) young academic attached to the forecasting team led by British meteorologist James Stagg. The process by which Stagg and his fractious colleagues, including the brash American entrepreneur Irving Krick and the arrogant but astute Norwegian Sverre Pettersen, made their decision occupies the final third of the book. Stagg and Pettersen both published their own accounts in the 1970s.
Before that, Meadows is sent to rural Scotland to glean some vital clues about forecasting from the leading authority of the day, the difficult genius Wallace Ryman. Ryman is a fictionalized version of Lewis Fry Richardson, whom Foden rightly calls ‘one of the unsung heroes of British science’ (he is perhaps best known for his work on fractal coastlines). Like Richardson, Ryman is a Quaker whose experiences in the Friends’ Ambulance Unit in the First World War have convinced him that war must be avoided at any cost. He therefore shuns collaboration with the military, and Meadows has to pursue his mission by stealth – an attempt that he mostly bungles in spectacular style.
In Scotland Meadows also runs into the second wayward genius in the book, this time without a pseudonymous disguise: Geoffrey Pyke, the man behind the Habbakuk project to build gigantic aircraft carriers out of ice reinforced with wood pulp. This so-called Pykrete is extraordinarily resistant to impacts and melting. There is also a fleeting appearance by ‘Julius Brecher’, a doppelgänger for Max Perutz, who assisted Pyke during the war. This part of the plot may strike readers as far-fetched if they don’t know that it is quite true.
The more serious problem, however, is that Habbakuk feels like something Foden couldn’t resist cobbling on simply because it is such a striking tale. It’s certainly entertaining, and the portrayal of Pyke rings true, but there’s no real need for any of it in the plot, despite the framing device that has Meadows recounting his wartime exploits on board an ice ship built in 1980 for an Arab sheikh. When Meadows joins Pyke in London only to see the project terminated a week later, it feels like a cul-de-sac.
One could carp at a few other points of creaky plotting or narrative – Foden sometimes seems over-concerned to ensure that the reader gets the point, telling us twice why ‘Habbakuk’ is misspelled and revealing the purpose of a subplot about blood analysis in three successive encounters on the same day. But these are quibbles in a book that does a splendid job of animating a buried story of scientific endeavour and triumph. It is no mean feat to make meteorology sound both heroic and intellectually profound.
In any book like this, one has to ask whether the author succeeds in creating scientists who are fully fleshed individuals. In some ways Foden complicates his task by making Meadows explicitly withdrawn (the result of a childhood trauma in Africa) and awkward. One might argue that Meadows’ constant recourse to the turbulence metaphor and his narrow frame of reference skirt the caricature of a dry scientific life. Brecher similarly refracts everything through the prism of his own research topic (blood), while Ryman is the crabby boffin and Pyke the dotty one. But there’s motive in all this. Through Meadows we sense the dour, buttoned-up character of wartime Britain. And when he talks about turbulence and hydrodynamics, there is none of the breezy ‘beginner’s guide’ flavour that is the usual hallmark of undigested authorial research. Foden had the immense benefit of advice from his father-in-law Julian Hunt, one of the world’s leading experts on turbulence and meteorology and, fittingly, a recipient of the Lewis Fry Richardson medal for nonlinear geophysics. Skilfully balancing fact and fiction, Turbulence is a tale that is dramatic, intelligent and convincing.
Monday, August 10, 2009
Don’t call me sir
The Financial Times has a little article on the Royal Society Science Book Prize, on the back of an interview with Jared Diamond. In the process, they have given me a knighthood. Kind, but more than a little unlikely. I suspect I have been granted it on loan from Tim Hunt, who is chairing the judging panel. He can have it back now.
Friday, July 31, 2009
Artificial babies are so last century
[Here’s the pre-edited version of my latest Muse for Nature’s online news.]
The best way to understand the recent fuss about 'artificial sperm' and the 'end of men' is to consider old versions of the same debate.
Few science stories seem as guaranteed to make headlines as those that can be distorted to reinforce lazy clichés about gender. These range from the folksy – women are better multitaskers – to the ugly – ‘women who dress provocatively are more likely to be raped’.
So no one should be terribly surprised that recent reports of ‘artificial sperm’ made in the laboratory focused on the question of whether the advance makes men obsolete (see here and here). It hardly seems worth blustering about tabloid stories that claim ‘Women have always known that men are a bit of a waste of space’. And weary resignation seems the best response as even the ‘more respectable’ press plod in bovine array down the same false trail (see here and here).
But while one could have predicted that some commentators would line up to express shock and horror (or pretend to do so), and others would tell them not to be so silly, it’s far more instructive to take the long view. For we’ve been through all this before. Fears that men would become surplus to requirements for perpetuating the race were voiced in the 1920s, and on similarly fatuous grounds. Then, as now, the debate revealed much more about the society that spawned it than about the future of humankind.
First, to the latest news. Contrary to what was widely claimed, Karim Nayernia at the University of Newcastle in England and his colleagues have not made artificial human sperm. They have found a way to turn embryonic stem cells into cells with some of the attributes of sperm [1]. That, however, certainly seems a big step along the way, and Nayernia’s group has already achieved live births of mice from eggs fertilized with sperm made by this technique. That the mouse pups did not live long suggests there are some serious remaining problems. (Nayernia’s paper has just been retracted, but not because of any concerns about the results – it seems that the introductory material foolishly plagiarized, essentially verbatim, two paragraphs from a review article by different authors.)
Now, let’s not go into the wrongheaded objections about destroying ‘perfectly healthy human embryos’ (such God-like omniscience!) to make these pseudo-sperm. And the concern of one critic that the method might be used to create children who do not know who their father is seems bizarrely to suppose that no such children already exist.
But the main worries seem to be about ‘babies being born entirely through artificial means’, or of sperm being created from the genetic material of men long dead, including perhaps some we’d rather remain that way. And (shudder) they might not even have to be men…
This research undoubtedly raises important ethical questions. But the alleged horror at such imaginary scenarios is disingenuous. We do not shy away from ‘monstrosities’ of this sort, but instead return to them compulsively. They are among our most persistent cultural myths: we have been contemplating artificial babies in ‘test tubes’ since at least the Middle Ages. We needn’t be embarrassed by this fascination, but neither should we parade it with fresh indignation (and amnesia) each time it surfaces. We should instead simply consider what it tells us about ourselves.
The modern vision of the homunculus was conjured up in 1923 by the British biologist J. B. S. Haldane in his book Daedalus; or Science and the Future, one of the influential ‘To-day and To-morrow’ series of short books by leading thinkers published by Kegan Paul. Here Haldane prophesied ‘ectogenetic children’ conceived and gestated in artificial wombs entirely outside the body. Haldane and others saw this as having two main benefits. First, it would allow eugenic selection of the progeny; second, it would liberate women from the burden of childbearing. Those views were echoed by Dora Russell, Bertrand Russell’s wife, and other campaigners for women’s freedom such as the feminist Vera Brittain and the sexologist Norman Haire, all three of whom contributed to the To-day and To-morrow series [2].
This emancipating role of ‘artificial babies’ was precisely what terrified the conservative philosopher Anthony Ludovici, who claimed in Lysistrata, or Women’s Future and Future Women (1924) that ectogenesis would relegate men to mere sources of ‘fertilizer’, perhaps with one man considered sufficient as a sperm machine for every 200 women. Mark my words, Ludovici warned in his ludicrous diatribe, ‘in a very short while it will be a mere matter of routine to proceed to an annual slaughter of males who have either outlived their prime or else have failed to fulfil the promise of their youth in meekness, general emasculateness, and stupidity.’ It makes the current tabloid hysteria (a singularly inappropriate word here) seem mild.
All this was set within the context of the decimation of Europe’s menfolk by the Great War, and the dystopian vision of Aldous Huxley’s Brave New World. These fears were connected to the encroaching industrial mechanization of other all-too-human tasks. Concerns for the role of men resurfaced when, in 1934, American biologist Gregory Pincus announced the ‘in vitro fertilization’ of rabbit eggs (actually a form of parthenogenesis in which the eggs were stimulated to grow without fertilization by sperm). This early precursor to human IVF was reported by some as an assault on the male: ‘No father to guide them’ ran the title of an article in Collier’s Magazine in 1937.
Today, it seems, the ‘end of men’ is cast in terms of bathetic solipsism – who will take the spiders out of the bath? – mixed with the frisson of The Boys from Brazil via Jurassic Park (it being de rigueur for modern myths to find a role for Hitler). While we can laugh or scoff now at the dreams and nightmares of the 1920s, we should feel confident that our grandchildren will do the same at ours.
References
1. Lee, J. H. et al. Stem Cells Dev. doi:10.1089/scd.2009.0063 (2009). Paper here.
2. Ferreira, A. Interdiscipl. Sci. Rev. 34, 32-55 (2009). Paper here.
Tuesday, July 21, 2009
Finally on the Reason Project
Now that the dust has settled somewhat, I’m in a position to see what people made of my debate with Sam Harris. Most of the discussion seems to have happened on what, it seems, we are now meant to call New Atheist sites, such as richarddawkins.net and Pharyngula, so I guess there is, let’s say, a certain angle to it. But I have the impression that some people who follow this sort of thing have already decided which boxes exist, and it’s just a matter of determining which ones to put us in. Thus my position is characterised as that of an accommodationist who figures that religion is here to stay so we might as well make our peace with it. OK then, once more: I haven’t the faintest idea if religion is an inevitable aspect of human culture. Neither have you. Neither has Sam. So let’s, please, not bother ourselves about taking positions on that. What I say is that, thus far in history, religion or movements like it (Maoism, Stalinism, Nazism, to name a few of the ones that might make us glad of a cup of tea with the vicar) have tended to occur pretty much everywhere. I humbly suggest there might be something worth learning from that, and that this something perhaps amounts to a little more than that people are suckers for idols to worship (though that probably plays a role). I suggest that it might also derive from rather more than that people have just been given bad information. So what else is there to it? I’d hoped we might talk about that.
But I guess that if you shout in a crowded marketplace, you can’t expect much nuance to survive.
A lot of folks feared that I am out of touch with what religious people think, by which they seem to mean that I’m out of touch with what the religious people they know think. Religious people think an awful lot of different things. But one key question is whether, to make an analogy, we judge communism by Marx or by Stalin. Frankly, I’m undecided about that. Or another way: do we judge our music by Stravinsky, or by Andrew Lloyd Webber? Or do you start to get the feeling that this is the wrong question? (However, I can’t help feeling we get closer to the core of music by considering Stravinsky.)
Inevitably, there is a lot one might say about Sam’s final response, appended to the end of our debate. From most, I will try hard to restrain myself. But I want to comment on one issue because it seems an interesting and revealing one. Sam says:
Let’s look more closely at Ball’s notion of life’s daunting complexity (from his last post):
You say ‘You do not seem to see what an astonishing number of the world’s conflicts and missed opportunities arise from people’s false knowledge about God’. Which are you going to cite – Northern Ireland? Iraq? The Crusades? If only it wasn’t for that pesky God and his offspring, all these places would have lived in blissful peace! The Taliban? – why, they’d be lovely folks if they weren’t Muslim extremists! How wonderful, how simple and easy, to be able to blame all these things on a false belief in gods! Gosh, this counterfactual history is easier than I ever imagined!
Sorry, facetiousness is no help. I am afraid that I fall into it here as a substitute for real anger, because I find it maddening to see the suggestion that sectarian violence in Belfast, tribal conflict in Iraq, Hindu-Muslim violence in India, and goodness knows how much else suffering in the world could be solved if we could just persuade people to give up their ridiculous faiths. I fully accept that it is no good either to simply say, as I know some do, ‘Oh, it’s only human nature, and religion is just the excuse.’ No, the truth is, sadly, much more complicated. And that is why I think the answers are too. But I have been left from our exchange with the feeling that ‘complicated’ is for you just a cop-out. I guess maybe that is where we fundamentally disagree. You seem to feel that any attempt to introduce into the debate considerations about culture, history, society and politics is an unwelcome and even wilfully deceitful diversion from the main business of demolishing religions for believing in things for which there is no evidence. That seems to be your ‘point’ – I’m afraid I simply can’t accept it.
This is the sort of stuff that could make a person angry all over again… Ball is trying have things both ways (as he was throughout our debate): on the one hand, the fundamental problem is NOT religion (and I’m a simpleton for thinking that it is); on the other, OF COURSE religion is sometimes involved, so he’s well aware of the problem of religion (and it’s very bad form for me not to acknowledge how clear he has been in his opposition to the bad effects of religious “extremism”). [PB: Is the notion that ‘religion is not the fundamental problem but that religious extremism often is a problem’ really ‘trying to have things both ways’, or just making a rather straightforward claim?] Okay… Let’s try to map this onto the world. Take the Taliban for starters: Who does Ball imagine the Taliban would be if they weren’t “Muslim extremists”? They are, after all, Homo sapiens like the rest of us. Let’s change them by one increment: wave a magic wand and make them all Muslim moderates… Now how does the world look? Do members of the Taliban still kill people for adultery? Do they still throw acid in the faces of little girls for attempting to go to school? No. The specific character of their religious ideology—and its direct and unambiguous link to their behavior—is the most salient thing about the Taliban. In fact, it is the most salient thing about them from their own point of view. All they talk about is their religion and what it obliges them to do…
Would there be conflict over land and other resources without religion? Yes. Are there other forms of tribalism and in-group/out-group thinking that have nothing to do with religion? Of course. But what seems to me to be undeniable, is that there are countless instances of terrible things done (and noble things left undone) because of specific religious beliefs. Some of the conflicts Ball cites would not have occurred (or would have been vastly ameliorated) without the influence of religion. A million people died during the partitioning of India and Pakistan. Would a million people have died if there had been Hindus on both sides? Very likely not. In fact, it is doubtful that the subcontinent would have been partitioned in the first place. Would the violence in Iraq be the same if it were all Sunni or all Shiite? Of course not. (The country may even be more coherently united against its western occupiers, but that is another matter, and one that is also energized by religious difference).
The first thing to notice is that here Sam seems to imply the same view as I hold, namely that what is objectionable about a group like the Taliban is not that they are religious but that they are religious zealots who believe their religion compels them to throw acid in little girls’ faces. If that problem can be solved by waving a wand and turning them into moderates (we’ll come back to that…), isn’t that what we’d really want? Do we really then need to worry too that they are then Muslim moderates and not atheists?
This highlights a persistent problem I’ve felt in our debate. It seems that Sam objects both to the fact that such people are dangerous religious fanatics and that they are religious. I object only to the first (so long as neither group tries to foist their belief on others, and I fully recognize that some do try). This position seems guaranteed to make Sam consider that I am being selective and wanting it both ways – ‘oh, of course I object to that, but not to this’. Anyone who sits between two poles is bound to be accused of looking both ways. But I don’t see why this position need be so problematic, nor inconsistent. Sam seems in this example to hint that it is a tenable one, at least insofar as it addresses the issue that perhaps concerns both him and me most of all: the use of religion for oppressive and violent ends.
But Sam’s example and solution here reveal the crux of my argument about culture and society. Clearly he believes it is possible to be a Muslim without feeling the need to throw acid in people’s faces. In other words, the chosen religion of the Taliban does not compel them to believe as they do. It is their particular (mis)interpretation of that religion which does that. What this suggests to me is that the problem is not (in this case) Islam but that certain groups elect to adopt extreme and oppressive interpretations of it. The same is true, of course, for other religions: some Christians feel that their belief compels them to be pacifists, others that it compels them to shoot abortion doctors. My point is then that surely what is important is to understand why some cultural groups adopt one interpretation and some another. In the case of the Taliban, one clear aspect of their belief is that it commends the oppression of women. This is very obviously not a uniquely religious imperative, and so I suspect the real problem here is why a particular set of cultural circumstances have led this group to take a misogynist attitude while other circumstances allow another group, reading from the same book, to act otherwise.
Look at it another way. Sam suggests a thought experiment (that magic wand) in which we alter nothing about the Taliban but their inclination to interpret Islam in violent and oppressive ways. My view is that this has no real meaning. Of course it would solve the problem if we could take a bunch of zealots and simply pluck out their zealotry. But is it likely that one could do so? Isn’t the source of that zealotry likely to be found in a broader range of social, historical and cultural factors, given that it is evidently not an inevitable aspect of their religious book? And purely from a pragmatic view, isn’t it more likely that we might find ways of encouraging the spread of religious moderates than that we can hope to stamp out the religion entirely while needing to make no other social or cultural changes?
The comment about Sunni and Shiite sects in Iraq is particularly revealing. So Sam thinks this is basically an argument not between different tribal factions but about people who think that they need to kill one another because of a disagreement over who was Mohammed’s true successor? And presumably Protestants kill Catholics in Northern Ireland because the Catholics fail to heed Martin Luther?
There’s plenty more, but I think an awful lot resides in Sam’s assertion that it is “the people who spend all their time reading the Qur’an and the hadith, seeking fatwas for the their every action, and long to die as martyrs in the jihad because they are certain that every word of scripture is true” who are the ‘deeply religious’ ones. To Sam, being ‘deeply religious’ is apparently about how strongly you feel and how well you can recite your Holy Book, not about how well you understand it or how wisely you use it. It is, to return to my earlier metaphor, the people who weep at Lloyd Webber musicals who are the most ‘deeply musical’. From that starting point, I guess it’s inevitable that we wouldn’t find much convergence.
This, however, leads Sam to raise some interesting questions while apparently thinking them to be rhetorical and not questions at all. “Is the Pope a sufficient representative of Catholicism—or is he too “superficial”? Does he not “know his theology”?” Well Sam, what do you think? Does he? I’m not sure you have the faintest idea. The question doesn’t answer itself simply because he is the Pope. Whose theology, in any event? “Did Aquinas or Augustine know theirs?”, Sam asks. But Aquinas didn’t always agree with Augustine. Does Rowan Williams agree with the Pope on the interpretation of Augustine’s notion of original sin? I’d be surprised if he did. Does Augustine’s ‘original sin’ agree with what is said in the Bible? Some theologists think not. I am well aware of the attitude of “who gives a damn anyway, because they’re all wrong”, and I can appreciate why someone might say that. But I find it hard to see how one can truly argue against ‘religious belief’ without some notion of the range of what that belief is, and the merits of each.
Sadly, (honestly, it pains me) one other thing has to be mentioned. Sam addresses the complaint “Why didn’t you admit that you misinterpreted—and, therefore, unfairly attacked—Ball’s original article?”, by saying “I didn’t admit this because I don’t believe it to be true—as evidenced by virtually everything Ball has written subsequently.”
I was happy to let this go, really I was – but Sam wants to return to it again (like the compulsion to return to the scene of the crime?). This is really so simple. Sam’s letter to Nature claimed “Mr. Ball assures us that … there is no deeper contradiction to be found between scientific rationality and religious faith. “ I pointed out that I made no such statement, and that I didn’t think it was true. So: no misinterpretation? Sam went on: “As evidence of this underlying harmony, we are asked to contemplate the existence of The BioLogos Foundation” I pointed out that I didn’t offer any endorsement of the BioLogos Foundation, and didn’t wish to do so. I then called them ‘irenic’; Sam didn’t seem to know what this meant, but when I explained that, he moved swiftly on...
Certainly, the debate we had subsequently showed that we disagree over many things, and that Sam feels I am far too tolerant of religion. Fine. But on the issue of whether my article was misinterpreted in Sam’s letter to Nature, there really is no question. It is all there in black and white. Ah, but you see, Sam says “What I was hoping to avoid, and what Ball continually tried to provoke, was a tit-for-tat style of debate—you said I said X, but what I really said (or meant) was Y. Such exchanges are deadly boring.” Oh, too true. But when you get things wrong, you may be called upon to say so.
Finally, “All of Ball’s specific complaints about my misinterpreting his original article struck me as spurious.” But he will not say why. I think the reason is now pretty plain.
There is another way I could respond to this, which is to point out that Sam wants all scientists who are religious believers to be sacked from their departments and stripped of their qualifications. Of course Sam will respond by saying that he has never said anything of the sort, but I will simply say that the truth of my assertion is “evidenced by virtually everything Harris has written” and that to discuss the details would be too boring.
The fact is that this is all indeed a minor matter, and could have been easily dealt with by a brief acknowledgement that would allow us to move on. But Sam seems to have a real fear of making any concession whatsoever – a sign of a brittle position? – which regrettably turns this into an issue of intellectual honesty.
However, however. The truly sad thing about this exchange is that it has turned into adversaries two people who are unambiguously atheist, deplore the encroachment of creationism and fundamentalism, and are deeply opposed to the oppressive and anti-intellectual practices of some religious groups. I entered into this debate believing that we would find some way of agreeing to disagree. I leave it feeling that the kind of hardline atheism Sam espouses is, in its unyielding purism, potentially undermining of the very aims it claims to have.
Now that the dust has settled somewhat, I’m in a position to see what people made of my debate with Sam Harris. Most of the discussion seems to have happened on what we are now meant to call New Atheist sites, such as richarddawkins.net and Pharyngula, so I guess there is, let’s say, a certain angle to it. But I have the impression that some people who follow this sort of thing have already decided which boxes exist, and that it’s just a matter of determining which ones to put us in. Thus my position is characterised as that of an accommodationist who figures that religion is here to stay, so we might as well make our peace with it. OK then, once more: I haven’t the faintest idea whether religion is an inevitable aspect of human culture. Neither have you. Neither has Sam. So let’s, please, not bother ourselves with taking positions on that. What I say is that, thus far in history, religion or movements like it (Maoism, Stalinism, Nazism, to name a few of the ones that might make us glad of a cup of tea with the vicar) have tended to occur pretty much everywhere. I humbly suggest there might be something worth learning from that, and that this something perhaps amounts to a little more than that people are suckers for idols to worship (though that probably plays a role). I suggest that it might also derive from rather more than that people have simply been given bad information. So what else is there to it? I’d hoped we might talk about that.
But I guess that if you shout in a crowded marketplace, you can’t expect much nuance to survive.
A lot of folks feared that I am out of touch with what religious people think, by which they seem to mean that I’m out of touch with what the religious people they know think. Religious people think an awful lot of different things. But one key question is whether, to make an analogy, we judge communism by Marx or by Stalin. Frankly, I’m undecided about that. Or another way: do we judge music by Stravinsky or by Andrew Lloyd Webber? Or do you start to get the feeling that this is the wrong question? (However, I can’t help feeling we get closer to the core of music by considering Stravinsky.)
Inevitably, there is a lot one might say about Sam’s final response, appended to the end of our debate. From most of it I will try hard to restrain myself. But I want to comment on one issue, because it seems an interesting and revealing one. Sam says:
Let’s look more closely at Ball’s notion of life’s daunting complexity (from his last post):
You say ‘You do not seem to see what an astonishing number of the world’s conflicts and missed opportunities arise from people’s false knowledge about God’. Which are you going to cite – Northern Ireland? Iraq? The Crusades? If only it wasn’t for that pesky God and his offspring, all these places would have lived in blissful peace! The Taliban? – why, they’d be lovely folks if they weren’t Muslim extremists! How wonderful, how simple and easy, to be able to blame all these things on a false belief in gods! Gosh, this counterfactual history is easier than I ever imagined!
Sorry, facetiousness is no help. I am afraid that I fall into it here as a substitute for real anger, because I find it maddening to see the suggestion that sectarian violence in Belfast, tribal conflict in Iraq, Hindu-Muslim violence in India, and goodness knows how much other suffering in the world could be solved if we could just persuade people to give up their ridiculous faiths. I fully accept that it is no good either to simply say, as I know some do, ‘Oh, it’s only human nature, and religion is just the excuse.’ No, the truth is, sadly, much more complicated. And that is why I think the answers are too. But I have been left by our exchange with the feeling that ‘complicated’ is for you just a cop-out. I guess maybe that is where we fundamentally disagree. You seem to feel that any attempt to introduce into the debate considerations about culture, history, society and politics is an unwelcome and even willfully deceitful diversion from the main business of demolishing religions for believing in things for which there is no evidence. That seems to be your ‘point’ – I’m afraid I simply can’t accept it.
This is the sort of stuff that could make a person angry all over again… Ball is trying to have things both ways (as he was throughout our debate): on the one hand, the fundamental problem is NOT religion (and I’m a simpleton for thinking that it is); on the other, OF COURSE religion is sometimes involved, so he’s well aware of the problem of religion (and it’s very bad form for me not to acknowledge how clear he has been in his opposition to the bad effects of religious “extremism”). [PB: Is the notion that ‘religion is not the fundamental problem but that religious extremism often is a problem’ really ‘trying to have things both ways’, or just making a rather straightforward claim?] Okay… Let’s try to map this onto the world. Take the Taliban for starters: Who does Ball imagine the Taliban would be if they weren’t “Muslim extremists”? They are, after all, Homo sapiens like the rest of us. Let’s change them by one increment: wave a magic wand and make them all Muslim moderates… Now how does the world look? Do members of the Taliban still kill people for adultery? Do they still throw acid in the faces of little girls for attempting to go to school? No. The specific character of their religious ideology—and its direct and unambiguous link to their behavior—is the most salient thing about the Taliban. In fact, it is the most salient thing about them from their own point of view. All they talk about is their religion and what it obliges them to do…
Would there be conflict over land and other resources without religion? Yes. Are there other forms of tribalism and in-group/out-group thinking that have nothing to do with religion? Of course. But what seems to me to be undeniable is that there are countless instances of terrible things done (and noble things left undone) because of specific religious beliefs. Some of the conflicts Ball cites would not have occurred (or would have been vastly ameliorated) without the influence of religion. A million people died during the partitioning of India and Pakistan. Would a million people have died if there had been Hindus on both sides? Very likely not. In fact, it is doubtful that the subcontinent would have been partitioned in the first place. Would the violence in Iraq be the same if it were all Sunni or all Shiite? Of course not. (The country may even be more coherently united against its western occupiers, but that is another matter, and one that is also energized by religious difference.)
The first thing to notice is that here Sam seems to imply the same view as I hold: namely, that what is objectionable about a group like the Taliban is not that they are religious but that they are religious zealots who believe their religion compels them to throw acid in little girls’ faces. If that problem can be solved by waving a wand and turning them into moderates (we’ll come back to that…), isn’t that what we’d really want? Do we really need to worry, too, that they are then Muslim moderates and not atheists?
This highlights a persistent problem I’ve felt in our debate. It seems that Sam objects both to the fact that such people are dangerous religious fanatics and to the fact that they are religious. I object only to the first (so long as neither group tries to foist its beliefs on others, and I fully recognize that some do try). This position seems guaranteed to make Sam consider that I am being selective and wanting it both ways – ‘oh, of course I object to that, but not to this’. Anyone who sits between two poles is bound to be accused of looking both ways. But I don’t see why this position need be problematic, or inconsistent. Sam seems in this example to hint that it is a tenable one, at least insofar as it addresses the issue that perhaps concerns both him and me most of all: the use of religion for oppressive and violent ends.
But Sam’s example and solution here reveal the crux of my argument about culture and society. Clearly he believes it is possible to be a Muslim without feeling the need to throw acid in people’s faces. In other words, the chosen religion of the Taliban does not compel them to believe as they do. It is their particular (mis)interpretation of that religion which does that. What this suggests to me is that the problem is not (in this case) Islam but that certain groups elect to adopt extreme and oppressive interpretations of it. The same is true, of course, for other religions: some Christians feel that their belief compels them to be pacifists, others that it compels them to shoot abortion doctors. My point is then that surely what is important is to understand why some cultural groups adopt one interpretation and some another. In the case of the Taliban, one clear aspect of their belief is that it commends the oppression of women. This is very obviously not a uniquely religious imperative, and so I suspect the real problem here is why a particular set of cultural circumstances has led this group to take a misogynist attitude while other circumstances allow another group, reading from the same book, to act otherwise.
Look at it another way. Sam suggests a thought experiment (that magic wand) in which we alter nothing about the Taliban but their inclination to interpret Islam in violent and oppressive ways. My view is that this has no real meaning. Of course it would solve the problem if we could take a bunch of zealots and simply pluck out their zealotry. But is it likely that one could do so? Isn’t the source of that zealotry likely to be found in a broader range of social, historical and cultural factors, given that it is evidently not an inevitable aspect of their religious book? And purely from a pragmatic view, isn’t it more likely that we might find ways of encouraging the spread of religious moderates than that we can hope to stamp out the religion entirely while needing to make no other social or cultural changes?
The comment about Sunni and Shiite sects in Iraq is particularly revealing. So Sam thinks this is basically an argument not between different tribal factions but about people who think that they need to kill one another because of a disagreement over who was Mohammed’s true successor? And presumably Protestants kill Catholics in Northern Ireland because the Catholics fail to heed Martin Luther?
There’s plenty more, but I think an awful lot resides in Sam’s assertion that it is “the people who spend all their time reading the Qur’an and the hadith, seeking fatwas for their every action, and long to die as martyrs in the jihad because they are certain that every word of scripture is true” who are the ‘deeply religious’ ones. To Sam, being ‘deeply religious’ is apparently about how strongly you feel and how well you can recite your Holy Book, not about how well you understand it or how wisely you use it. It is, to return to my earlier metaphor, the people who weep at Lloyd Webber musicals who are the most ‘deeply musical’. From that starting point, I guess it’s inevitable that we wouldn’t find much convergence.
This, however, leads Sam to raise some interesting questions while apparently thinking them to be rhetorical and not questions at all. “Is the Pope a sufficient representative of Catholicism—or is he too ‘superficial’? Does he not ‘know his theology’?” Well, Sam, what do you think? Does he? I’m not sure you have the faintest idea. The question doesn’t answer itself simply because he is the Pope. Whose theology, in any event? “Did Aquinas or Augustine know theirs?”, Sam asks. But Aquinas didn’t always agree with Augustine. Does Rowan Williams agree with the Pope on the interpretation of Augustine’s notion of original sin? I’d be surprised if he did. Does Augustine’s ‘original sin’ agree with what is said in the Bible? Some theologians think not. I am well aware of the attitude of “who gives a damn anyway, because they’re all wrong”, and I can appreciate why someone might say that. But I find it hard to see how one can truly argue against ‘religious belief’ without some notion of the range of what that belief is, and the merits of each version of it.
Sadly (and honestly, it pains me), one other thing has to be mentioned. Sam addresses the complaint “Why didn’t you admit that you misinterpreted—and, therefore, unfairly attacked—Ball’s original article?” by saying “I didn’t admit this because I don’t believe it to be true—as evidenced by virtually everything Ball has written subsequently.”
I was happy to let this go, really I was – but Sam wants to return to it again (like the compulsion to return to the scene of the crime?). This is really so simple. Sam’s letter to Nature claimed “Mr. Ball assures us that … there is no deeper contradiction to be found between scientific rationality and religious faith.” I pointed out that I made no such statement, and that I didn’t think it was true. So: no misinterpretation? Sam went on: “As evidence of this underlying harmony, we are asked to contemplate the existence of The BioLogos Foundation”. I pointed out that I didn’t offer any endorsement of the BioLogos Foundation, and didn’t wish to do so. I had called them ‘irenic’; Sam didn’t seem to know what this meant, but when I explained it, he moved swiftly on...
Certainly, the debate we had subsequently showed that we disagree over many things, and that Sam feels I am far too tolerant of religion. Fine. But on the issue of whether my article was misinterpreted in Sam’s letter to Nature, there really is no question. It is all there in black and white. Ah, but you see, Sam says “What I was hoping to avoid, and what Ball continually tried to provoke, was a tit-for-tat style of debate—you said I said X, but what I really said (or meant) was Y. Such exchanges are deadly boring.” Oh, too true. But when you get things wrong, you may be called upon to say so.
Finally, “All of Ball’s specific complaints about my misinterpreting his original article struck me as spurious.” But he will not say why. I think the reason is now pretty plain.
There is another way I could respond to this, which is to point out that Sam wants all scientists who are religious believers to be sacked from their departments and stripped of their qualifications. Of course Sam will respond by saying that he has never said anything of the sort, but I will simply say that the truth of my assertion is “evidenced by virtually everything Harris has written” and that to discuss the details would be too boring.
The fact is that this is all indeed a minor matter, and could easily have been dealt with by a brief acknowledgement that would have allowed us to move on. But Sam seems to have a real fear of making any concession whatsoever – a sign of a brittle position? – which regrettably turns this into an issue of intellectual honesty.
However, however. The truly sad thing about this exchange is that it has turned into adversaries two people who are unambiguously atheist, who deplore the encroachment of creationism and fundamentalism, and who are deeply opposed to the oppressive and anti-intellectual practices of some religious groups. I entered into this debate believing that we would find some way of agreeing to disagree. I leave it feeling that the kind of hardline atheism Sam espouses is, in its unyielding purism, liable to undermine the very aims it claims to have.