Monday, June 21, 2010

Wet dreams


This morning I found myself sitting outside a café in upper Regent Street watching passers-by sample three types of water and offer their opinions on them. ‘Three types of water’ of course begs the question, and I suspect there was nothing but one type of water involved, with trivial variations in the usual trace solutes. This was a vox-pop test for the Radio 4 consumer and lifestyle programme ‘You And Yours’, which in this item was investigating the claims being made for so-called ‘ionized water’, equipment for the production of which is being installed in health-food cafés at vast expense. When the BBC folks contacted me last Friday to ask my opinion on ionized water, I think they were a little surprised when I responded ‘what’s that?’ They’d got it confused with the deionized water available in all good labs, not to mention garages that sell it for your car battery. But as I said to them, ‘ionized’ water made no sense to me. I’m pleased to say that it was quite proper that it did not. A quick search reveals that ionized water is just the latest of the ‘altered water treatments’ being advocated for turning ordinary water into a wondrous health-giving reagent. Like all the others, it is a sham. Basically it seems to involve an electrolytic process that allegedly produces alkaline water at one electrode – not entirely implausible in itself, if there is an electrolyte present, but the claims made for the health benefits of drinking ‘alkaline water’ are nonsense, and the waffle about reactive oxygen species and cancer just the usual junk. Fortunately, Stephen Lower of Simon Fraser University has prepared an excellent web site debunking this stuff, which saves me the effort.

Those who want the full nonsense can get it here. Yes, complete with special ‘water clusters’. If you want to buy a water ionizer, feel free to do so here. And I’m amused to see that Ray Kurzweil, who wants to live long enough to reach the age of immortality that is just around the corner, has bought into this stuff. Ray swears that ionized water is alkaline, because like a ‘responsible scientist’ he measured the pH. His scientific curiosity did not, however, extend to investigating, if this was so, what the counterions to the hydroxyls are – in other words, which salts had been added to the water to make alkalinity possible. We are apparently supposed to believe that it is the water itself that is alkaline, which of course is chemically impossible. Keep drinking, Ray.
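The counterion point is just charge-balance arithmetic, and for the curious it can be sketched in a few lines of Python (the pH value of 9 is a hypothetical reading for illustration, not one Ray has reported): any water with an excess of hydroxide ions must, by electroneutrality, contain an equal excess of positive counterions, which can only come from dissolved salts.

```python
# Electroneutrality check for 'alkaline water'. pH = 9 is a hypothetical
# alkaline reading used for illustration; the constants are standard.
# In any solution the charges must balance:
#     [H+] + [cations] = [OH-] + [anions]
Kw = 1e-14           # ion product of water at 25 C (mol^2 L^-2)
pH = 9.0
H = 10 ** -pH        # [H+] = 1e-9 mol/L
OH = Kw / H          # [OH-] = 1e-5 mol/L
excess_cations = OH - H   # ~1e-5 mol/L of Na+, K+, Ca2+ etc. must be present
assert excess_cations > 0  # alkalinity is impossible without added cations
```

In other words, even a mildly alkaline glassful demands around ten micromoles of dissolved cations per litre: the alkalinity belongs to the salts, not to the water.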

In any case, I was required on You And Yours to offer scientific comment on this affair. You can judge the results for yourselves here.

Thursday, June 10, 2010

Bursting out


I have a review in Nature of Albert-László Barabási’s new book Bursts. The book is nice, but the review was necessarily truncated, and here is how I really wanted to put it.

Bursts: The Hidden Pattern Behind Everything We Do

Albert-László Barabási

Dutton, New York, 2010
310 pages
$26.95

Is human behaviour deterministic or random? Psychoanalysts, economists and behavioural geneticists, however unlikely as bedfellows, all tend to assume cause and effect: we do what we do for a discernible reason, whether obeying the dictates of the unconscious, rational self-interest or our genetic predisposition. But those assumptions have not produced anything like a predictive model of human actions, and we are daily presented with reason to suspect that our actions owe more to sheer caprice than to any formula. Given the disparity of individual decisions, perhaps our behaviour shows no more pattern than coin-tossing: maybe collectively it is dominated by the randomness encoded by the gaussian distribution, the familiar bell-curve statistics of a series of independent events whose outcomes are a matter of chance.

Albert-László Barabási’s Bursts explains how this notion of randomness has been undermined by recent research, much of it conducted by him and his collaborators, that has revealed a hitherto unexpected pattern in human activities ranging from the sending of emails (and before that, postal letters) to our movements through the world. We conduct our affairs in bursts, for example sending out several emails in a short space of time and then none for hours. Even our everyday wrist movements, when monitored with accelerometers, show this bursty behaviour, with spells of motion interspersed with periods of repose. Because the distribution of bursts differs for people who are clinically depressed, these seemingly irrelevant statistics might offer a simple diagnostic tool.

Burstiness could seem so intuitively obvious as to be trivial. That we find a moment to catch up with email responses, rather than attending to them one by one at random intervals, is scarcely puzzling or surprising. But such rationalizing narratives don’t fully account for everything: why, then, does it take us a few minutes to respond to some messages but weeks to get to others? Barabási and his coworkers explained that on the assumption that we prioritize, adding new priorities to our ‘must-do’ lists each time others are cleared.
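That priority-queue explanation is easy to simulate. The following is a minimal Python sketch of the kind of queueing model Barabási describes (the parameter values and function names are my own illustrative choices): keep a short task list, almost always execute the highest-priority item, replace each executed task with a fresh one, and record how long every task waited.

```python
import random

def priority_queue_waits(steps=200_000, n_tasks=2, p=0.9999, seed=1):
    """Simulate a priority queue: at each step, with probability p execute
    the highest-priority pending task, otherwise a random one; each
    executed task is replaced by a new task with a random priority."""
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(n_tasks)]  # (priority, arrival time)
    waits = []
    for t in range(1, steps + 1):
        if rng.random() < p:
            i = max(range(n_tasks), key=lambda j: tasks[j][0])
        else:
            i = rng.randrange(n_tasks)
        waits.append(t - tasks[i][1])   # how long the executed task sat in the queue
        tasks[i] = (rng.random(), t)    # replace it with a fresh task
    return waits

waits = priority_queue_waits()
# Most tasks are dispatched immediately, but a few low-priority ones
# languish for a very long time: a heavy-tailed waiting-time distribution,
# quite unlike the short exponential waits of a purely random process.
```

With p close to 1 the waiting times follow an approximate power law (the bursty regime); dropping p towards chance selection recovers the non-bursty, exponential case.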

Barabási renders observations like this, which could seem dry or frivolous, both engaging and illuminating through human stories: Einstein unwittingly stalling the career of Theodor Kaluza by taking two years to reply to a letter, or the hapless artist Hasan Elahi being taken for questioning by US Homeland Security because of his ‘suspicious movement’. Barabási shows that Elahi’s globetrotting really was anomalous – whereas the algorithm he developed in his lab to predict people’s whereabouts based on their personal bursty signature forecast everyone else’s movements with more than 80 percent accuracy, Elahi foiled the program with his genuine randomness.

Burstiness is not confined to human activity, and so is not somehow a by-product of cognition. It is seen in the foraging patterns of several animals (though not, as once claimed in this journal, albatrosses). It even fits the transcriptional activity of genes and evolutionary speciation. But Barabási cannot yet say whether this ubiquity stems from the same basic cause, or whether burstiness happens to be a statistical signature that many mechanisms can generate. The same question has been raised of the power-law statistics found for many natural and social phenomena (in fact bursts also produce power laws), and of fractal structures. To put it another way, is burstiness a discriminating and informative character, or just a common epiphenomenon of several distinct processes? We don’t yet know.

Moreover, the burstiness of human behaviour doesn’t obviously warrant the air of determinism that hangs over the book. ‘Prediction at the individual level is growing increasingly feasible’, Barabási asserts. But bursts per se don’t obviously help with the sort of detailed, moment-by-moment prediction he is discussing here – like the avalanches of self-organized criticality, they remain unpredictable as individual events, differing from gaussian randomness only because they are correlated. They simply help us get the overall statistics right.

While popular science books written by researchers presenting new ideas typically have an ex cathedra quality, Bursts shows the influence of the journalistic approach of professional writers, exemplified by James Gleick and Malcolm Gladwell, narrative-driven and replete with personality sketches. Barabási is rather good at these story-telling tricks, and his opening paragraph is a masterful example of the genre, drawing us in with a puzzle we know will be resolved only much later.

Whether his daring device of punctuating the exposition with the tale of how his Transylvanian compatriot György Székely led a peasant revolt in Hungary in 1514 works is less clear. Barabási implies that this tale illustrates some of the conclusions about burstiness and unpredictability, but that’s far from obvious. Because I am apparently Barabási’s personal Hasan Elahi, a vanishingly rare outlier who happens to have an interest both in Székely Transylvania and the peasant uprisings of the early sixteenth century, I was happy to indulge him. I suspect not everyone will do so. But they should try, because Bursts reveals Barabási to be not just an inventive and profoundly interdisciplinary scientist but an unusually talented communicator.


Tuesday, June 08, 2010

Still got music on the brain


I have a piece in the FT about the forthcoming events on ‘music and the brain’ at the Aldeburgh Festival. The piece is so unadulterated that I won’t even bother pasting the ‘pre-edited’ version here (apart, that is, from the conversion of Eckart Altenmüller from a neuroscientist to a ‘euro-scientist’, a typo that has the distinction of both being mildly amusing and remaining true). More on this to follow.

Saturday, June 05, 2010

Mind over matter?


There’s a piece in today’s Guardian Review by the American novelist Marilynne Robinson, who bravely challenges the materialistic interpretations of the brain offered by the likes of Steven Pinker and E. O. Wilson. It is an extract from her book Absence of Mind. I say ‘brave’ rather than ‘persuasive’. I’ve got some sympathy for her criticisms of the way the pop neuro- and cognitive scientists try to explain the brain by ruling out of bounds those things that seem too intangible or difficult. And although Pinker makes a valid point by confessing that we have no reason to suppose the human brain is capable of understanding the resolution to some of the hard philosophical questions, Robinson is right to suggest that this, even if it is true, is no reason to stop asking them. (The likes of Pinker will probably be pulling their elegantly coiffeured hair out at the way Robinson casually makes Freud a part of mainstream science, but let’s put that aside.)

My main complaint is that the article is encrusted with what seems to be the characteristically clotted style of American academics of letters, which strives always to be artful at the expense of plain speaking. For example, in response to E. O. Wilson’s comment that ‘The brain and its satellite glands have now been probed to the point where no particular site remains that can reasonably be supposed to harbour a nonphysical mind’, Robinson replies: ‘To prove a negative, or to treat it as having been proved, is, oddly enough, an old and essential strategy of positivism. So I do feel obliged to point out that if such a site could be found in the brain, then the mind would be physical in the same sense that anything else with a locus in the brain is physical. To define the mind as nonphysical in the first place clearly prejudices his conclusion.’ The same point might have been made with less fuss had she simply said ‘But how can a nonphysical mind have a physical location?’

Here at least, however, her meaning is clear. But how about this: ‘What grounds can there be for doubting that a sufficient biological account of the brain would yield the complex phenomenon we know and experience as the mind? It is only the pertinacity of the mind/body dichotomy that sustains the notion that a sufficient biological account of the brain would be reductionist in the negative sense. Such thinking is starkly at odds with our awareness of the utter brilliance of the physical body.’ I have read this several times, and still doubt that I really understand any of it. Would a statement like this be permitted by an editor in a commissioned piece? I’d like to think not.

And isn’t it odd, after stating ‘What Descartes actually intended by the words "soul" and "mind" seems to me an open question for Descartes himself’, to simply sign the question off with ‘No doubt there are volumes to be consulted on this subject.’ Indeed there are – why not consult them? Better still, why not tell us what Descartes actually said? (For what it is worth, I think she is trying to complicate the matter too much. The soul, for Descartes, seems to me to be simply what motivates the body-machine: what puts its hydraulics and cogs and levers into particular motions. No big deal; except that it enabled Descartes to defend himself against charges of atheism.)

The standfirst of the piece (obviously not by the author) asks ‘What is meant by the idea of a soul?’ Robinson suggests that Pinker identifies the soul with the mind, which seems fair enough on the strength of the passage she quotes. Aristotle did likewise, at least as far as humans are concerned, for he said we are distinguished from other beings by possessing a rational soul. But then, Aristotle’s soul was always a thoroughly secular, quasi-scientific notion. All I can find as Robinson’s alternative is that the soul is ‘an aspect of deep experience’. I can see that this may be developed into some kind of meaning. She might also have usefully pointed out that this apparently deviates from the traditional Catholic notion of a soul as a non-physical badge of humanness that is slotted into the organism at conception.

But the least convincing aspect of the piece is the classic ‘just-as’ reasoning of the scientific dilettante. Robinson knows about quantum entanglement (sort of). And her point there seems to be ‘if we don’t really understand that, how can we think we can understand the brain/mind?’ But the hard thing about entanglement is not ‘understanding’ it (though we can’t yet claim to do so completely), but that it defies intuition. And please, no more allusions to the ‘quantum brain’.

Similarly, just as we don’t see a bird as a modified dinosaur (ah, do we not?), she argues that ‘there is no reason to assume our species resembles in any essential way the ancient primates whose genes we carry.’ Hmm… you might want to have another attempt at that sentence. Even if we allow that Robinson perhaps means it to apply only to aspects of brain, this is more a desperate plea to liberate us from our evolutionary past than a claim with any kind of reasoned support. ‘Might not the human brain have undergone a qualitative change’ [when the first artifact appeared], she asks? Well yes, it might, and some have called that change ‘hominization’. But this does not mean we lost all our former instincts and drives. It would doubtless have been catastrophic if we had. Even I, a sceptic of evolutionary-psychological Just So stories, can see this as an attempt to resurrect the specialness of humankind that some religious people still struggle to relinquish.

Pinker et al. will have little difficulty with this rather otiose assault.

Thursday, June 03, 2010

What's the big idea?


I’m still not sure whether I did right to join the panel for the online debate being launched by Icon Books on ‘The World’s Greatest Idea’. Well, the title says it all, no? I’m dubious about any view of history as a succession of ‘great ideas’, and the notion of ranking them – abolition of slavery vs the aerofoil vs arable farming – could seem worse than meaningless. Besides, does one rate them according to how intellectually dramatic an ‘idea’ is, or how important it has been to world civilization, or how well it has served humankind, or…? But I acceded in the end because I figured it does not do to be too po-faced about an exercise that after all is just a springboard for a potential discussion about how society produces and is changed by innovation. And there is something grandly absurd about pitching sewerage against romance against simplified Chinese characters. I’m also reassured to see that someone as discerning as Patricia Fara has also taken part. Go on, place a vote – there’s no harm in it.

Friday, May 28, 2010

Not all contemporary art is rubbish


I’m thrilled to see my friend, photographic and video artist Lindsay Seers, being given some respect in Ben Lewis’s excellent piece for Prospect on why modern art is in a decadent phase. Like Ben, I think Lindsay is doing serious and interesting stuff, and I say that not just because (perhaps even despite the fact that?) I’ve been involved in some of it. I wrote a piece for Lindsay’s book Human Camera (Article Press, 2007), which I’m now inspired to put up on my web site.

Monday, May 24, 2010

Creation myths


Artificial life? Don’t ask me guv, I was too busy last week building sandcastles in Lyme Regis. However, now making up for lost time… I have a Muse on Nature’s news site (the pre-edited text of which is below – they always remove the historical quotes), and a piece on the Prospect blog. The Venter work may, if it survives the editor’s shears, also be briefly discussed on an episode of Radio 4’s Moments of Genius that I’ve also just recorded with Patricia Fara, due to be broadcast this Sunday (30th May).

*********************************************************************
Claims of ‘synthetic life’ have been made throughout history. And each time, they are best regarded as mirroring what we think life is.

The recent ‘chemical synthesis of a living organism’ by Craig Venter and his colleagues at the J. Craig Venter Institute [1] sits within a very long tradition. Claims of this sort have been made throughout history. That’s not to cast aspersions on the new results: while one can challenge the notion that this new bacterium, whose genome is closely modelled on that of Mycoplasma mycoides, stands apart from Darwinian evolution, the work is nonetheless an unprecedented triumph of biotechnological ingenuity. But when set in historical context, the work reflects our changing conception of what life is and how it might be made. What has been done here is arguably not so much a ‘synthesis of life’ as a (semi-)synthetic recreation of what we currently deem life to be. And as with previous efforts, it should leave us questioning the adequacy of that view.

To see that the new results reiterate a perennial theme, consider the headline of the Boston Herald in 1899: ‘Creation of Life. Lower Animals Produced by Chemical Means.’ The article described how the German biologist Jacques Loeb had caused an unfertilized sea-urchin egg to divide by treating it with salts. It was a kind of artificial parthenogenesis, and needless to say, very far from a chemical synthesis of life from scratch.

But Loeb himself was then talking in earnest about ‘the artificial production of living matter’, and he was not alone in blending his discovery with speculations about the de novo creation of life. In 1912 the physiologist Edward Albert Schäfer alluded to Loeb’s results in his presidential address to the British Association, under the rubric ‘the possibility of the synthesis of living matter’ [2]. Schäfer was optimistic: ‘The [cell] nucleus – which may be said indeed to represent the quintessence of cell-life – possesses a chemical constitution of no very great complexity; so that we may even hope some day to see the material which composes it prepared synthetically.’

Such claims are commonly seen to imply that artificial human life is next on the agenda. It was a sign of the times that the New York Times credulously reported in 1910 that ‘Prof. Herrera, a Mexican scientist, has succeeded in forming a human embryo by chemical combination.’ It is surely no coincidence that many media reports have compared Venter to Frankenstein, or that the British Observer newspaper mistakenly suggested he has ‘succeeded in “creating” human life for the first time’.
  
What is life?

Beliefs about the feasibility of making artificial organisms have been governed by the prevailing view of what life is. While the universe was seen as an intrinsically fecund matrix, permitting bees and vermin to emerge from rotten flesh by spontaneous generation, it seemed natural to imagine that sentient beings might body forth from insensate matter. The mechanical models of biology developed in the seventeenth century by René Descartes and others fostered the notion that a ‘spark of life’ – after the discovery of electricity, literally that – might animate a suitably arranged assembly of organic parts. The blossoming of chemistry and evolutionary theory spurred a conviction that it was all about getting the recipe right, so that nature’s diverse grandeur sprang from primordial colloidal jelly, called protoplasm, which Thomas Henry Huxley regarded as the ‘physical basis of life’.

Yet each apparent leap forward in this endeavour more or less coincided with a realization that the problem is not so simple. Protoplasm appeared as organic chemists were beginning on the one hand to erode the concept of vitalism and on the other to appreciate the full and baffling complexity of organic matter. The claims of Loeb and Schäfer came just before tools for visualizing the sub-cellular world, such as X-ray crystallography and the electron microscope, began to show life’s microstructure in all its complication. As H. G. Wells, his son George, and Julian Huxley explained in The Science of Life (1929-30), ‘To be impatient with the biochemists because they are not producing artificial microbes is to reveal no small ignorance of the problems involved.’

The next big splash in ‘making life’ came in 1953 when Harold Urey and Stanley Miller announced their celebrated ‘prebiotic soup’ experiment, conjuring amino acids from simple inorganic raw materials [3]. This too was obviously a very far cry from a synthesis of life, but some press reports were little troubled by the distinction: the result was regarded as a new genesis in principle if not in practice. ‘If their apparatus had been as big as the ocean, and if it had worked for a million years, instead of one week’, said Time, ‘it might have created something like the first living molecule.’ Yet that same year saw the discovery of life’s informational basis – the source of much of the ‘organization’ of organic matter that had so puzzled earlier generations – in the work of Crick and Watson. Now life was not so much about molecules at all, but about cracking, and perhaps then rewriting, the code.

Burning the book

Which brings us to Venter et al. Now that the field of genomics has fostered the belief that in sequencing genomes we are reading a ‘book of life’, whose algorithmic instructions need only be rejigged to produce new organisms, it’s easy to see why the creation of a wholly synthetic genome and its ‘booting up’ in a unicellular host should be popularly deemed a synthesis of life itself. Here the membranes, the cytoplasm, everything in fact except the genes, are mere peripherals to the hard drive of life. (The shift to a new realm of metaphor tells its own story.)

But what this latest work really implies is that it is time to lay aside the very concepts of an ‘artificial organism’ and a ‘synthesis of life’. Life is not a thing one makes, nor is it even a process that arises or is set in motion. It is a property we may choose to bestow, more or less colloquially, on certain organizations of matter. ‘Life’ in biology, rather like ‘force’ in physics, is a term carried over from a time when scientists thought quite differently, where it served as a makeshift bridge over the inexplicable.

More important than such semantics, the achievement by Venter et al. is a timely reminder that anything laying claim to the function we might call life resides not in a string of genes but in the interactions between them. Efforts to make de novo organisms of any complexity – for example, ones that can manufacture new pharmaceuticals and biofuels under demanding environmental constraints – seem likely to highlight how sketchily we understand how those interactions operate and, most importantly, what their generic principles are. The euphoria engendered by rapid whole-genome sequencing techniques is already giving way to humility (even humiliation) about the difficulty of squaring genotype with phenotype. Yet again, our ideas of where the real business of life resides are shifting: away from a linear ‘code’ and towards something altogether more abstract, emergent and entangled. In this regard at least, the latest ‘synthesis of life’ does indeed seem likely to repeat the historical template.

References
1. D. G. Gibson et al., Science doi:10.1126/science.1190719 (2010).
2. E. A. Schäfer, Nature 90, 7-19 (1912).
3. S. Miller, Science 117, 528 (1953).

Tuesday, May 11, 2010

Debunking is hard to do


In his excellent article on ‘denialism’ in this month’s New Humanist, Keith Kahn-Harris mentions that one of the problems debunkers face is that they have to engage in ‘a minute and careful examination of the sources… [which is] a time-consuming task that requires considerable skill and fortitude.’ This was precisely what I found myself up against when I reviewed Christopher Booker’s climate-change-denial tract The Real Global Warming Disaster for the Observer. I examined in detail just a very few of the claims Booker made (that is, ones that were not transparently false or misleading), and in each case found considerable distortion. I put the results of that trawling on this blog, but even then there was too much information for me to find the time to get it into an easily digested and streamlined shape. The real problem is that the denialists seem to have endless time on their hands. Happily, Booker’s book doesn’t seem to have had a huge impact, but less happily that is perhaps because there is now just so much climate denialism around, thanks largely to the silliness at UEA.

This issue of New Humanist is as full of good stuff as ever, but I particularly liked A. C. Grayling’s skewering of Terry Eagleton’s book On Evil: ‘Eagleton has been too long among the theorists to risk a straightforward statement… as we are dealing with Eagleton here, note that this is of course not a mish-mash of inconsistencies, as it appears to be; this is subtlety and nuance. It is, you might say, nuance-sense.’ For one reason or another, I have recently found myself having to read various texts issuing from the cultural-studies stable, and I can regretfully say that I know just what he means.

Sunday, May 09, 2010

Private Passions


I was the guest today on Radio 3’s Private Passions, where I get to choose half an hour of music and talk about it with Michael Berkeley. It can be heard here for the next seven days, I believe, but after that it vanishes into the BBC’s vaults. As ever with radio interviews, only afterwards do I realise what eloquent things I could have said in place of ‘um, you know…’. But I enjoyed it.

Wednesday, May 05, 2010

What a shoddy piece of work is man


It seems kind of cheap to win the ‘most commented’ slot on Nature News simply by writing an article about science and religion. You just know that will happen; there is nothing like it for provoking readers to offer their tuppence’ worth, and in particular for drawing reams of comment from the fundamentalist fringe. My latest Muse (pre-edited version below) is no exception. I am, however, entertained by the thoughtful remark of Bjørn Brembs, who says:

“As usual, your article is very reasoned, thoughtful and balanced. Reading some of the comments here, however, I fear you are making a common mistake, so accurately described by PZ Myers: "Where scientists are often handicapped is that they don't recognize the depth of the denial on the other side, and that their opponents really are happily butting their heads against the rock hard foundation of the science. We tend to assume the creationists can't really be that stupid, and figure they must have some legitimate complaint about some aspect of evolution with which we can sympathize. They don't. They really are that nuts."
Does it make sense to try and reason thoughtfully with someone who prefers "magic man did it" over "I don't know" as an answer to scientific questions? Couldn't it be that this peculiar and revealing preference alone constitutes evidence enough that this person may not be amenable to reason at all?”

Bjørn is probably right in most cases, but I should say that I’d be a sad fool indeed if I wrote pieces like this under any belief that they would convert creationists. No, I do it because I think the issues are interesting, namely: how well has evolution done in designing our genome? (Not very.) To what extent does evolution optimize anything at all? (Not much.) And how come we work pretty well despite all this mess? (That’s the really big question.)

****************************************************************
Our genome won't win any design awards and doesn't speak well of the intelligence of its 'designer'.

Helena: They do say that man was created by God.
Domin: So much the worse for them.

This exchange in Karel Capek’s 1921 play R.U.R., which coined the word ‘robot’, is abundantly vindicated by our burgeoning understanding of human biology. Harry Domin, director general of the robot-making company R.U.R., jeers that ‘God had no idea about modern technology’, implying that the design of human-like bodies is now something we can do better ourselves.

Like most tales of making artificial people, R.U.R. contains a Faustian moral about hubris. But whether or not we could do better, it’s true that the human body is hardly a masterpiece of intelligent planning. Most famously, the eye’s retina is wired back to front so that the wiring has to pass back through the screen of light receptors, imposing a blind spot.

Now John Avise, an evolutionary geneticist at the University of California at Irvine, has catalogued the array of clumsy flaws and inefficiencies at the fundamental level of the genome. His paper, published in the Proceedings of the National Academy of Sciences USA [1], throws down the gauntlet to advocates of intelligent design, the pseudo-scientific face of religious creationism. What Intelligent Designer, Avise asks, would make such a botch?

Occasional botches are, meanwhile, precisely what we would expect from Darwinian evolution, which is blind to the big picture but merely tinkers short-sightedly to wring incremental adaptive advantage from the materials at hand. Just as in technology (and for analogous reasons), this produces ‘lock-in’ effects in which strategies that are sub-optimal from a global perspective persist because it is impractical to go back and improve them.

Intelligent design (ID) does not have to deny that evolution occurs, but it invokes an interventionist God who steps in to guide the process, constructing biological devices allegedly too ‘irreducibly complex’ to have been assembled by blind random mutation and natural selection, such as (ironically) the eye or the flagellar motor of bacteria [2].

As Avise points out, ID is problematic in purely theological terms. Were I inclined to believe in an omnipotent God, I should be far more impressed by one who had intuited that a world in which natural selection operates autonomously will lead to beings that function as well as humans (for all our flaws) than by one who was constantly having to step in and make adjustments. I’m not alone in that: Robert Boyle felt that it demeaned God to suppose he needed constantly to intervene in nature: ‘all things’, he said, ‘proceed, according to the artificer’s first design, and… do not require the peculiar interposing of the artificer, or any intelligent agent employed by him’ [3].

But ID must also confront the issue of theodicy: the evident fact that our world is imperfect. Human free will allegedly absolves God of responsibility for our ‘evil acts’ – but what about the innocent deaths caused by disease, natural disasters and so forth? Infelicities in the course of nature were already sufficiently evident in the eighteenth century for philosopher David Hume to imply that God might be considered a ‘stupid mechanic’. And in the early twentieth century, the physician Archibald Garrod pointed out how many human ailments are the result not of God’s wrath or the malice of demons but of ‘inborn errors’ in our biochemistry [4,5].

Many of these ‘errors’ can now be pinpointed to genetic mutations: at a recent count, there are around 75,000 disease-linked mutations [6]. But the ‘unintelligent design’ of our genomes, Avise says, goes well beyond such flaws, which might otherwise be dismissed as glitches in a mostly excellent contrivance.

The ubiquity of introns – sequences that must be expensively excised from transcribed genes before translation to proteins – seems to be a potentially harmful encumbrance. And numerous regulatory mechanisms are needed to patch up problems in gene activity, for example by silencing or destroying imperfectly transcribed mRNA (the templates for protein synthesis). Regulatory breakdowns may cause disease.

Why design a genome so poorly that it needs all this surveillance? Why are there so many wasteful repetitions of genes and gene-fragments, all of which have to be redundantly replicated in cell division? And why are we plagued by chromosome-hopping ‘mobile elements’ in our DNA that seem only to pose health risks?

These design flaws, Avise says, ‘extend the age-old theodicy challenge, traditionally motivated by obvious imperfections at the levels of human morphology and behavior, into the innermost molecular sanctum of our physical being.’

Avise wisely avers that this catalogue of errors should deter attempts to use religion to explain the minutiae of the natural world, and return it to its proper sphere as (one) source of counsel about how to live.

But his paper is equally valuable in demolishing the current secular tendency to reify and idealize nature through the notion that evolution is a non-teleological means of producing ‘perfect’ design. The Panglossian view that nature is refined by natural selection to some ‘optimal’ state exerts a dangerous tug in the field of biomimetics. We should be surprised that some enzymes do indeed seem to exhibit the maximum theoretical catalytic efficiency [7], rather than imagining that this is nature’s default state. On the whole there are too many (dynamic) variables in evolutionary biology for ‘optimal’ to be a meaningful concept.

However – although heaven forbid that this should seem to let ID off the hook – it is worth pointing out that some of the genomic inefficiencies Avise lists are still imperfectly understood. We might be wise to hold back from writing them off as ‘flaws’, lest we make the same mistake evident in the labelling as ‘junk DNA’ genomic material that seems increasingly to play a biological role. There seems little prospect that the genome will ever emerge as a paragon of good engineering, but we shouldn’t too quickly derogate that which we do not yet understand.

References
1. Avise, J. C. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.0914609107.
2. Behe, M. J. Darwin’s Black Box: The Biochemical Challenge to Evolution (Free Press, New York, 1996).
3. Boyle, R. ‘Free inquiry’, in The Works of the Honourable Robert Boyle Vol. 5, ed. T. Birch, p.163 (Georg Olms, Hildesheim, 1965-6).
4. Garrod, A. Inborn Errors of Metabolism (Oxford University Press, London, 1909).
5. Garrod, A. The Inborn Factors of Inherited Disease (Clarendon Press, Oxford, 1931).
6. Stenson, P. D. et al., Hum. Mutat. 21, 577-581 (2003).
7. Albery, W. J. & Knowles, J. R. Biochemistry 15, 5631-5640 (1976).

Friday, April 30, 2010

A supercomputing crystal ball

Here's a little piece I've just written for Nature's news blog The Great Beyond.


The good news is that your future can be predicted. The bad news is that it’ll cost a billion euros. That, at least, is what a team of scientists led by Dirk Helbing of the ETH in Switzerland believes. And as they point out, a billion euros is small fare compared with the bill for the current financial crisis – which might conceivably have been anticipated with the massive social-science simulations they want to establish.

This might seem the least auspicious moment to start placing faith in economic modelling, but Helbing’s team proposes to transform the way it is done. They will abandon the discredited and doctrinaire old models in favour of ones built from the bottom up, which harness the latest understanding of how people behave and act collectively rather than reducing the economic world to caricature for the sake of mathematical convenience.

And it is not just about the economy, stupid. The FuturICT ‘knowledge accelerator’, the proposal for which has just been submitted to the European Commission’s Flagship Initiatives scheme which seeks to fund visionary research, would address a wide range of environmental, technological and social issues using supercomputer simulations developed by an interdisciplinary team. The overarching aim is to provide systematic, rational and evidence-based guidance to governmental and international policy-making, free from the ideological biases and wishful thinking typical of current strategies.

Helbing’s confidence in such an approach has been bolstered by his and others’ success in modelling social phenomena ranging from traffic flow in cities to the dynamics of industrial production. Modern computer power makes it possible to simulate such systems using ‘agent-based models’ that look for large-scale patterns and regularities emerging from the interaction of large numbers of individual agents.
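To give a flavour of what ‘agent-based’ means, here is a minimal sketch (my own toy illustration, not FuturICT code) of the Nagel–Schreckenberg traffic model, a classic of the genre Helbing has worked in: each car follows a few local rules about accelerating, braking and dawdling, and stop-and-go jams emerge with no central cause.

```python
import random

def nagel_schreckenberg(road_length=100, n_cars=30, v_max=5,
                        p_slow=0.3, steps=100, seed=42):
    """Minimal Nagel-Schreckenberg cellular-automaton traffic model.

    Each car is an 'agent' with a position (cell on a ring road) and
    a speed (cells per step). Jams emerge from the local rules alone.
    """
    rng = random.Random(seed)
    positions = sorted(rng.sample(range(road_length), n_cars))
    speeds = [0] * n_cars
    for _ in range(steps):
        # Update all speeds from the current (pre-move) positions.
        for i in range(n_cars):
            gap = (positions[(i + 1) % n_cars] - positions[i]) % road_length
            speeds[i] = min(speeds[i] + 1, v_max)        # accelerate
            speeds[i] = min(speeds[i], gap - 1)          # brake: don't hit car ahead
            if speeds[i] > 0 and rng.random() < p_slow:  # random slowdown
                speeds[i] -= 1
        # Move all cars in parallel, wrapping around the ring.
        positions = [(x + v) % road_length for x, v in zip(positions, speeds)]
        order = sorted(range(n_cars), key=lambda i: positions[i])
        positions = [positions[i] for i in order]
        speeds = [speeds[i] for i in order]
    return positions, speeds
```

Nothing in the rules mentions a traffic jam, yet running the model and plotting positions over time shows waves of congestion drifting backwards along the road – exactly the kind of emergent regularity the FuturICT simulations would look for, at vastly greater scale.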

The FuturICT proposal includes the establishment of ‘Crisis Observatories’ that might identify impending problems such as financial crashes, wars and social unrest, disease epidemics, and environmental crises. It would draw on expertise in fields ranging from engineering, law, anthropology and geosciences to physics and mathematics. Crisis Observatories could be operational by 2016, the FuturICT team says, and by 2022 the programme would incorporate a Living Earth Simulator that couples human social and political activity to the dynamics of the natural planet.

Sceptics may dismiss the idea as a hubristic folly that exaggerates our ability to understand the world we have created. But when we compare the price tag to the money we devote to getting a few humans outside our atmosphere, it could be a far greater folly not to give the idea a chance.

Monday, April 26, 2010

Big quantum


Here’s a little piece I wrote for Prospect, who deemed in the end that it was too hard for their readers. But I am sure it is not, dear blogspotter, too hard for you.


If you think quantum physics is hard to understand, you’re probably confusing understanding with intuition. Don’t assume, as you fret over the notion that a quantum object can be in two places at once, that you’re simply too dumb to get your mind around it. Nobody can, not even the biggest brains in physics. The difference between quantum physicists and the rest of us is that they’ve elected to just accept the weirdness and get on with the maths – as physicist David Mermin puts it, to ‘shut up and calculate.’

But this pragmatic view is losing its appeal. Physicists are no longer content with the supreme ability of quantum theory to predict how stuff behaves at very small scales, and are following the lead of its original architects, such as Bohr, Heisenberg and Einstein, in demanding to know what it means. As Lucien Hardy and Robert Spekkens of the high-powered Perimeter Institute in Canada wrote recently, ‘quantum theory is very mysterious and counterintuitive and surprising and it seems to defy us to understand it. And so we take up the challenge.’

This is something of an act of faith, because it isn’t obvious that our minds, having evolved in a world of classical physics where objects have well-defined positions and velocities, can ever truly conceptualize the quantum world where, apparently, they do not. That difference, however, is part of the problem. If the microscopic world is quantum, why doesn’t everything behave that way? Where, once we reach the human scale, has the weirdness gone?

Physicists talk blithely about this happening in a ‘quantum-to-classical transition’, which they generally locate somewhere between the size of large molecules and of living cells – between perhaps a billionth and a millionth of a metre (a nanometre and a micrometre). We can observe subatomic particles obeying quantum rules – that was first done in 1927, when electrons were seen acting like interfering waves – but we can’t detect quantumness in objects big enough to see with the naked eye.

Erwin Schrödinger tried to force this issue by placing the microcosm and the macrocosm in direct contact. In his famous thought experiment, the fate of a hypothetical cat depended on the decay of a radioactive atom, dictated by quantum theory. Because quantum objects can be in a ‘superposition’ of two different states at once, this seemed to imply that the cat could be both alive and dead. Or at least, it could until we looked, for the ‘Copenhagen’ interpretation of quantum theory proposed by Bohr and Heisenberg insists that superpositions are too delicate to survive observation: when we look, they collapse into one state or the other.

The consensus is now that the cross-over from quantum to classical rules involves a process called decoherence, in which delicate quantum states get blurred by interacting with their teeming, noisy environment. An act of measurement using human-scale instruments therefore induces decoherence. According to one view, decoherence imprints a restricted amount of information about the state of the quantum object on its environment, such as the dials of our measuring instruments; the rest is lost forever. Physicist Wojciech Zurek thinks that the properties we measure this way are just those that can most reliably imprint ‘copies’ of the relevant information about the system under inspection. What we measure, then, are the ‘fittest’ states – which is why Zurek calls the idea quantum Darwinism. It has the rather remarkable corollary that the imprinted copies can be ‘used up’, so that repeated measurements will eventually stop giving the same result: measurement changes the outcome.

These are more than just esoteric speculations. Impending practical applications of quantum superpositions, for example in quantum cryptography for encoding optical data securely, or super-fast quantum computers that perform vast numbers of calculations in parallel, depend on preserving superpositions by avoiding decoherence. That’s one reason for the current excitement about experiments that probe the contested ‘middle ground’ between the unambiguously quantum and classical worlds, at scales of tens of nanometres.

Andrew Cleland and coworkers at the University of California have now achieved a long-sought goal in this arena: to place a manufactured mechanical device, big enough to see sharply in the electron microscope, in a quantum superposition of states. They made a ‘nanomechanical resonator’ – a strip of metal and ceramic almost a micrometre thick and about 30 micrometres long, fixed at one end like the reed of a harmonica – and cooled it down to within 25 thousandths of a degree of absolute zero. The strip is small enough that its vibrations follow quantum rules when cold enough, which means that they can only have particular frequencies and energies (heat will wash out this discreteness). The researchers used a superconducting electrical circuit to induce vibrations, and they report in Nature that they could put the strip into a superposition of two states – in effect, as if it is both vibrating and not vibrating at the same time.

Sadly, these vibrations are too small for us to truly ‘see’ what an object looks like that is both moving and not moving. But even more dramatic incursions of quantum oddness might soon be in store. Last year a team of European scientists outlined a proposal to create a real Schrödinger’s cat, substituting an organism small enough to stand on the verge of the quantum world: a virus. They suggested that a single virus suspended by laser beams could be put into a superposition of moving and stationary states. Conceivably, they said, this could even be done with tiny, legged animals called tardigrades or ‘water bears’, a few tenths of a millimetre long. If some way could be devised to link the organism’s motion to its biological behaviour, what then would it do while simultaneously moving and still? Nobody really knows.

Wednesday, April 21, 2010

Peter's patterns

I have a little piece on the BBC Focus site about the work of sculptor Peter Randall-Page, with whom I had the pleasure of discussing pattern formation and much else at Yorkshire Sculpture Park last month. I will put an extended version of this piece on my web site shortly (under ‘Patterns’) in which there are lots more stunning pictures of Peter’s work and natural patterns.

Friday, April 09, 2010

The right formula


Message to a heedless world: Please remember that the O in the formula H2O is a capital O meaning oxygen, not a zero meaning zero. Water is composed of hydrogen and oxygen, not hydrogen and nothing.

Heedless world replies: Get a life, man.

Heedless world continues (after some thought): How do you know the difference anyway?

Me: Zeros are narrower.

Heedless world: This is truly sad.

Tuesday, April 06, 2010

An uncertainty principle for economists?


Here’s the pre-edited version of my latest Muse for Nature News. The paper I discuss here is very long but also very ambitious, and well worth a read.
**********************************************************************
Bad risk management contributed to the current financial crisis. Two economists believe the situation could be improved by gaining a deeper understanding of what is not known.

Donald Rumsfeld is an unlikely prophet of risk analysis, but that may be how posterity will anoint him. His remark about ‘unknown unknowns’ was derided at the time as a piece of meaningless obfuscation, but more careful reflection suggests he had a point. It is one thing to recognize the gaps and uncertainties in our knowledge of a situation, another to acknowledge that entirely unforeseen circumstances might utterly change the picture. (Whether you subscribe to Rumsfeld’s view that the challenges in managing post-invasion Iraq were unforeseeable is another matter.)

Contemporary economics can’t handle the unknown unknowns – or more precisely, it confuses them with known unknowns. Financial speculation is risky by definition, yet the danger is not that the risks exist, but that the highly developed calculus of risk in economic theory – some of which has won Nobel prizes – gives the impression that they are under control.

The reasons for the current financial crisis have been picked over endlessly, but one common view is that it involved a failure in risk management. It is the models for handling risk that Nobel laureate economist Joseph Stiglitz seemed to have in mind when he remarked in 2008 that ‘Many of the problems our economy faces are the result of the use of misguided models. Unfortunately, too many [economic policy-makers] took the overly simplistic models of courses in the principles of economics (which typically assume perfect information) and assumed they could use them as a basis for economic policy’ [1].

Facing up to these failures could prompt the bleak conclusion that we know nothing. That’s the position taken by Nassim Nicholas Taleb in his influential book The Black Swan [2], which argues that big disruptions in the economy can never be foreseen, and yet are not anything like as rare as conventional theory would have us believe.

But in a preprint on arXiv, Andrew Lo and Mark Mueller of MIT’s Sloan School of Management offer another view [3]. They say that what we need is a proper taxonomy of risk – not unlike, as it turns out, Rumsfeld’s infamous classification. In this way, they say, we can unite risk assessment in economics with the way uncertainties are handled in the natural sciences.

The current approach to uncertainty in economics, say Lo and Mueller, suffers from physics envy. ‘The quantitative aspirations of economists and financial analysts have for many years been based on the belief that it should be possible to build models of economic systems – and financial markets in particular – that are as predictive as those in physics,’ they point out.

Much of the foundational work in modern economics took its lead explicitly from physics. One of its principal architects, Paul Samuelson, has admitted that his seminal book Foundations of Economic Analysis [4] was inspired by the work of mathematical physicist Edwin Bidwell Wilson, a protégé of the pioneer of statistical physics Willard Gibbs.

Physicists were by then used to handling the uncertainties of thermal noise and Brownian motion, which create a gaussian or normal distribution of fluctuations. The theory of Brownian random walks was in fact first developed by physicist Louis Bachelier in 1900 to describe fluctuations in economic prices.

Economists have known since the 1960s that these fluctuations don’t in fact fit a gaussian distribution at all, but are ‘fat-tailed’, with a greater proportion of large-amplitude excursions. But many standard theories have failed to accommodate this, most notably the celebrated Black-Scholes formula used to calculate options pricing, which is actually equivalent to the ‘heat equation’ in physics.
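To give a numerical feel for what ‘fat-tailed’ means, here is a toy sketch (my own illustration, not any economist’s model): compare how often a standard normal distribution and a variance-matched Student-t distribution with three degrees of freedom throw up moves beyond four standard deviations.

```python
import math
import random

def tail_counts(n=200_000, df=3, threshold=4.0, seed=1):
    """Count |x| > threshold events for standard-normal draws versus
    Student-t draws rescaled to unit variance.

    A t-variate with df degrees of freedom is Z / sqrt(V / df), where
    V is chi-squared with df degrees of freedom (a sum of df squared
    standard normals). Dividing by the t-distribution's standard
    deviation, sqrt(df / (df - 2)), means the comparison isolates
    tail shape rather than overall scale.
    """
    rng = random.Random(seed)
    scale = math.sqrt(df / (df - 2))  # std dev of a t-variate, valid for df > 2
    normal_hits = t_hits = 0
    for _ in range(n):
        if abs(rng.gauss(0.0, 1.0)) > threshold:
            normal_hits += 1
        v = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
        t = rng.gauss(0.0, 1.0) / math.sqrt(v / df)
        if abs(t / scale) > threshold:
            t_hits += 1
    return normal_hits, t_hits
```

For the gaussian, a four-sigma event has a probability of only about 6 in 100,000; the fat-tailed distribution produces such ‘extreme’ moves roughly a hundred times more often. Real price fluctuations behave more like the latter, which is why gaussian-based risk formulas systematically understate the chance of a crash.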

But incorrect statistical handling of economic fluctuations is a minor issue compared with the failure of practitioners to distinguish fluctuations that are in principle modellable from those that are more qualitative – to distinguish, as Lo and Mueller put it, trading decisions (which need maths) from business decisions (which need experience and intuition).

The conventional view of economic fluctuations – that they are due to ‘external’ shocks to the market, delivered for example by political events and decisions – has some truth in it. And these external factors can’t be meaningfully factored into the equations as yet. As the authors say, from July to October 2008, in the face of increasingly negative prospects for the financial industry, the US Securities and Exchange Commission intervened to impose restrictions on certain companies in the financial services sector. ‘This unanticipated reaction by the government’, say Lo and Mueller, ‘is an example of irreducible uncertainty that cannot be modeled quantitatively, yet has substantial impact on the risks and rewards of quantitative strategies.’

They propose a five-tiered categorization of uncertainty, from the complete certainty of Newtonian mechanics, through noisy systems and those that we are forced to describe statistically because of incomplete knowledge about deterministic processes (as in coin tossing), to ‘irreducible uncertainty’, which they describe as ‘a state of total ignorance that cannot be remedied by collecting more data, using more sophisticated methods of statistical inference or more powerful computers, or thinking harder and smarter.’

The authors think that this is more than just an enumeration of categories, because it provides a framework for how to think about uncertainties. ‘It is possible to “believe” a model at one level of the hierarchy but not at another’, they say. And they sketch out ideas for handling some of the more challenging unknowns, as for example when qualitatively different models may apply to the data at different times.

‘By acknowledging that financial challenges cannot always be resolved with more sophisticated mathematics, and incorporating fear and greed into models and risk-management protocols explicitly rather than assuming them away’, Lo and Mueller say, ‘we believe that the financial models of the future will be considerably more successful, even if less mathematically elegant and tractable.’

They call for more support of post-graduate economic training to create a cadre of better informed practitioners, more alert to the limitations of the models. That would help; but if we want to eliminate the ruinous false confidence engendered by the clever, physics-aping maths of economic theory, why not make it standard practice to teach everyone who studies economics at any level that these models of risk and uncertainty apply only to specific and highly restricted varieties of it?

References
1. Stiglitz, J. New Statesman, 16 October 2008.
2. Taleb, N. N. The Black Swan (Allen Lane, London, 2007).
3. Lo, A. W. & Mueller, M. T. http://www.arxiv.org/abs/1003.2688.
4. Samuelson, P. A. Foundations of Economic Analysis (Harvard University Press, Cambridge, 1947).

Thursday, April 01, 2010

Bursting the genomics bubble


Here’s the pre-edited version of a Muse that’s just gone up on Nature News. There’s a bunch of interesting Human Genome Project-related stuff on the Nature site to mark the 10th anniversary of the first draft of the genome (see here and here and here, as well as comments from Francis Collins and Craig Venter). Some is celebratory, some more thoughtful. Collins considers his predictions to have been vindicated – with the exception that ‘The consequences for clinical medicine have thus far been modest’. Now, did you get the sense at the time that it was precisely the potential for advancing clinical medicine that was the HGP’s main selling point? Venter is more realistic, saying ‘Phenotypes — the next hurdle — present a much greater challenge than genotypes because of the complexity of human biological and clinical information. The experiments that will change medicine, revealing the relationship between human genetic variation and biological outcomes such as physiology and disease, will require the complete genomes of tens of thousands of humans together with comprehensive digitized phenotype data.’ Hmm… not quite what the message was at the time, although in fairness Craig was not really one of those responsible for it.

*********************************************************************
The Human Genome Project attracted investment beyond what a rational analysis would have predicted. There are pros and cons to that.

If you were a venture capitalist who had invested in the sequencing of the human genome, what would you now have to show for it? For scientists, the database of the Human Genome Project (HGP) may eventually serve as the foundation of tomorrow’s medicine, in which drugs will be tailored personally to your own genomic constitution. But for a return to the bucks you invested in this grand scheme, you want medical innovations here and now, not decades down the line. Ten years after the project’s formal completion, there’s not much sign of them.

A team of researchers in Switzerland now argue in a new preprint [1] that the HGP was an example of a ‘social bubble’, analogous to the notorious economic bubbles in which investment far outstrips any rational cost-benefit analysis of the likely returns. Monika Gisler, Didier Sornette and Ryan Woodard of ETH in Zürich say that ‘enthusiastic supporters of the HGP weaved a network of reinforcing feedbacks that led to a widespread endorsement and extraordinary commitment by those involved in the project.’

Some scientists have already suggested that the benefits of the HGP were over-hyped [2]. Even advocates now admit that the benefits for medicine may be a long time coming, and will require further advances in understanding, not just the patience to sort through all the data.

This stands in contrast to some of the claims made while the HGP was underway between 1990 and 2003. In 1999 the International Human Genome Sequencing Consortium (IHGSC) leader Francis Collins claimed that the understanding gained by the sequencing effort would ‘eventually allow clinicians to subclassify diseases and adapt therapies to the individual patient’ [3]. That might happen one day, but we’re still missing fundamental understanding of how even diseases with a known heritable risk are related to the makeup of our genomes [4]. Collins’ portrait of a patient who, in 2010, is prescribed ‘a prophylactic drug regimen based on the knowledge of [his] personal genetic data’ is not yet on the horizon. And going from knowledge of the gene to a viable therapy has proved immensely challenging even for a single-gene disease as thoroughly characterized as cystic fibrosis [5]. Collins’ claim, shortly after the unveiling of the first draft of the human genome in June 2000, that ‘new gene-based ‘designer drugs’ will be introduced to the market for diabetes mellitus, hypertension, mental illness and many other conditions’ [6] no longer seems a foregone conclusion, let alone a straightforward extension of the knowledge of all 25,000 or so genes in the human genome.

This does not, in the analysis of Gisler and colleagues, mean that the HGP was money poorly spent. Some of the benefits are already tangible, such as much faster and cheaper sequencing techniques; others may follow eventually. The researchers are more interested in the issue of how, if the HGP was such a long-term investment, it came to be funded at all. Their answer invokes the notion of bubbles borrowed from the economic literature, which Sornette has previously suggested [7] as a driver of other technical innovations such as the mid-nineteenth-century railway boom and the explosive growth of information technology at the end of the twentieth century. In economics, bubbles seem to be an expression of what John Maynard Keynes called ‘animal spirits’, whereby the instability stems from ‘the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than mathematical expectations’ [8]. In economics such bubbles can end in disastrous speculation and financial ruin, but in technology they can be useful, creating long-lasting innovations and infrastructures that would have been deemed too risky a venture under the cold glare of reason’s spotlight.

For this reason, Gisler and colleagues say, it is well worth understanding how such bubbles occur, for this might show governments how to catalyse long-term thinking that is typically (and increasingly) absent from their own investment strategies and those of the private sector. In the case of the HGP, the researchers argue, the controversial competition between the public IHGSC project and the private enterprise conducted by the biotech firm Celera Genomics worked to the advantage of both, creating a sense of anticipation and hope that expanded the ‘social bubble’ as well as in the end reducing the cost of the research by engaging market mechanisms.

To that extent, the ‘exuberant innovation’ that social bubbles can engender seems a good thing. But it’s possible that the HGP will never really deliver economically or medically on such massive investment. Worse, the hype might have incubated a harmful rash of genetic determinism. As Gisler and colleagues point out, other ‘omics’ programmes are underway, including an expensively funded NIH initiative to develop high-throughput techniques for solving protein structures. Before animal spirits transform this into the next ‘revolution in medicine’, it might be wise to ask whether the HGP has something to tell us about the wisdom of collecting huge quantities of stamps before we know anything about them.

References
1. Gisler, M., Sornette, D. & Woodard, R. Preprint http://www.arxiv.org/abs/1003.2882.
2. Roberts, L. et al., Science 291, 1195-1200 (2001).
3. Collins, F. S. New England J. Med. 341, 28-37 (1999).
4. Dermitzakis, E. T. & Clark, A. G. Science 326, 239-240 (2009).
5. Pearson, H. Nature 460, 164-169 (2009).
6. Collins, F. S. & McKusick, V. A. J. Am. Med. Assoc. 285, 540-544 (2001).
7. Sornette, D. Socio-econ. Rev. 6, 27-38 (2008).
8. Keynes, J. M., The General Theory of Employment, Interest and Money (Macmillan, London, 1936).

The Times does The Music Instinct


There are some extracts from The Music Instinct in the Eureka science supplement of the Times today, although oddly they don’t seem yet to have put it online. It’s amongst a real mash-up of stuff about the ‘science of music’, which is all kind of fun but slightly weird to find my words crash-landed there. The editors did a pretty good job, however, of plucking out bits of text and getting them into a fairly self-contained form, when they were generally part of a much longer exposition.

I notice in Eureka that Brian May, bless him, doesn’t believe in global warming. “Most of my most knowledgeable scientist friends don’t believe that global warming exists”, he says. Come on Brian, name them. Have you been chatting to the wrong Patrick Moore? (Actually, I’m not too sure if chatting to the other one would help very much.)

Tuesday, March 30, 2010

Magnets mess with the mind's morality

Here's a little snippet I wrote for Nature's news blog. The authors seem to take it as read that magnets can alter brain functioning in this manner, but I find that remarkable.


Talk about messing with your mind. A new study [www.pnas.org/cgi/doi/10.1073/pnas.0914826107] by neuroscientist Liane Young and colleagues at Harvard University does exactly that: the researchers used magnetic signals applied to subjects’ craniums to alter their judgements of moral culpability. The magnetic stimulus made people less likely to condemn others for attempting but failing to inflict harm.

Most people make moral judgements of others’ actions based not just on their consequences but also on some view of what the intentions were. That makes us prepared to attribute diminished responsibility to children or people with severe mental illness who commit serious offences: it’s not just a matter of what they did, but how much they understood what they were doing.

Neuroimaging studies have shown that the attribution of beliefs to other people seems to involve a part of the brain called the right temporoparietal junction (RTPJ). So Young and colleagues figured that, if they disrupted how well the RTPJ functions, this might alter moral judgements of someone’s action that rely on assumptions about their intention. To do that, they applied an oscillating magnetic signal at 1 Hz to the part of the skull close to the RTPJ for 25 minutes in test subjects, and then asked them to read and respond to an account of an attempted misdemeanour. They also conducted tests while delivering the signal in regular short bursts. In one scenario, ‘Grace’ intentionally puts a white powder from a jar marked ‘toxic’ into her friend’s coffee, but the powder is in fact just sugar and the friend is fine. Was Grace acting rightly or wrongly?

Obvious? You might think differently with a magnetic oscillator fixed to your head. With the stimulation applied, subjects were more likely to judge the morality based on the outcome, as young children do (the friend was fine, so it’s OK), than on the intention (Grace believed the stuff was toxic).

That’s scary. The researchers present this as evidence of the role of the RTPJ in moral reasoning, with implications for how children do it (there is some evidence that the RTPJ is late in maturing) and for conditions such as autism that seem to involve a lack of ability to identify motives in other people. Fair enough. But to most of us it is news – and alarming news – that morality-related brain functions can be disrupted or suspended with a simple electromagnetic coil. If ever a piece of research were destined to incite paranoid fantasies about dictators inserting chips in our heads to alter and control our behaviour, this is it.

Thursday, March 25, 2010

Solar eclipse


This is more or less how my review of Ian McEwan’s new novel Solar in Prospect started out (the final paras got a little garbled in the edit). I’m amused to see that my suggestion here that his modest intentions might head off extreme reactions has been proved wrong. Lorna Bradbury in the Telegraph calls the book McEwan’s best yet, and thinks it should win the Booker (no way). And some found the comic elements ‘extremely funny’. Others think it is a stinker: one reviewer calls it ‘an odd, desultory production, by turns pompous and feebly comic’, and Leo Robson in the New Statesman says McEwan has lost his ear and that ‘With Solar, McEwan has finally committed the folly that we might not have expected from him.’ Really, they are all getting too worked up. Although I wouldn’t go as far as the dismissive comment in the Economist that this is ‘A novel to chuckle over, and chuck away’, it is simply a fairly light, intelligent piece of entertainment. Not, I imagine, that McEwan will be too bothered about any of this.

***********************************************************************

After Saturday, which several reviewers considered (unfairly) to be an insufferably smug depiction of Blair’s Britain in the approach to the invasion of Iraq, it looked as though a place was being prepared for Ian McEwan alongside Martin Amis on the pillory. Our two most celebrated novelists, the story went, were getting above themselves, pronouncing on the state of the nation from what seemed an increasingly conservative position.

Amis seems now to be in some curious quantum superposition of states, defended in a backlash to the backlash while demonized as the misogynistic wicked godfather. His latest novel The Pregnant Widow has been both praised as a return to form and derided as a farrago of caricature and solipsism. But Solar may extricate McEwan from such controversies and reinvest him with the humble status of a storyteller. For the book is a modest entertainment, dare one even say a romp, and essentially a work of genre fiction: lab lit. This genre, a second cousin of the campus novel, draws its plots from the exploits of scientists and the scientific community, and includes such titles as Allegra Goodman’s Intuition and Jonathan Lethem’s As She Climbed Across the Table.

McEwan’s interest in science is well established. The protagonist of Enduring Love is a science journalist, and the plot of Saturday hinged on the technical expertise of its central character, the neuroscientist Henry Perowne. McEwan has spoken about the uses of science in fiction, and has written passionately about the need to tackle climate change.

And that is where Solar comes in. When McEwan mentioned at the Hay Festival in 2008 that his next book had a ‘climate change’ theme, people anticipated some eco-fable set in the melting Arctic. He quickly denied any intention to proselytize; climate change would ‘just be the background hum of the book.’

So it is. Michael Beard, a Nobel laureate physicist resting on the laurels of his seminal work in quantum physics decades ago, is balding, overweight, addictively philandering, and coming to the end of his fifth marriage. Like many Nobel winners he has long ceased any productive science and is now riding the superficial circuit of plenary lectures, honorary degrees, Royal Commissions and advisory boards. Becoming the figurehead of the National Centre for Renewable Energy, marooned near Reading, seemed a good idea at the time, but the centre’s research has become mired in Beard’s ill-advised notion of making a wind turbine. Beard is privately indifferent to the global-warming threat, but when a chance arrives to give his career fresh lustre with a new kind of solar power, he grasps it greedily. With Beard running more on bluster and past glory than on scientific insight, and with his domestic life on autodestruct, we know it will all end badly. The question is simply how long Beard can stay ahead of the game. As the climate-change debate moves from the denialism of the Bush years to Obama and Copenhagen, he is increasingly a desperate, steadily inflating cork borne on the tide.

As ever, McEwan has done his homework. Mercifully, he knows much more than Lethem about how physicists think and work. And he is more successful in concealing his research than he was with the neuroscience shoehorned into Saturday. But not always. Beard’s speech to a group of climate-sceptic corporate leaders reads more like a lecture than a description of one: “Fifty years ago we were putting thirteen billion metric tons of carbon dioxide into the atmosphere every year. That figure has almost doubled.” And when Beard debunks his business partner’s doubts about global warming after the cool years of the late noughties, he gets full marks for science but risks becoming his author’s mouthpiece. “The UN estimates that already a third of a million people a year are dying from climate change” is not the kind of thing anyone says to their friend.

In case you care, the solution to the energy crisis on offer here – the process of ‘artificial photosynthesis’ to split water into hydrogen and oxygen using photocatalysis – is entirely respectable scientifically, albeit hardly the revolutionary breakthrough it is made out to be. Much the same idea was used by Stephen Poliakoff in his 1996 lablit play Blinded By the Sun; McEwan’s clever trick here is to involve quantum-mechanical effects (based on Beard’s Nobel-winning theory) to improve the efficiency, which left the nerd in me wondering if McEwan was aware of recent theories invoking such effects in real photosynthesis. I’m not sure whether to be more impressed if he is or if he isn’t.

McEwan nods toward recent episodes in which science has collided with the world outside the lab. Beard’s off-the-cuff remarks about women in science replay the debacle that engulfed former Harvard president Larry Summers in 2005, and Beard stands in for Steven Pinker in an ensuing debate on gender differences (although Pinker’s opponent Elizabeth Spelke did a far better demolition job than Beard’s does).

He also makes wry use of personal experience. At Hay he read a draft of the episode in which Beard, on a train, eats a fellow traveller’s crisps, thinking they are his own and suppressing fury when the young man ironically helps himself; someone in the audience pointed out that a similar case of false accusation of an innocent stranger appears in The Hitchhiker’s Guide to the Galaxy. Some newspapers turned this into a weak jibe about plagiarism. When Beard recounts the tale in a speech, a lecturer in ‘urban studies and folklore’ accuses him of appropriating a well-known urban myth, making Beard feel that his life has been rendered inauthentic – and an allusion to Douglas Adams is now inserted in the story.

One of the pleasures for a science watcher is identifying the academics from whom Beard has been assembled – I counted at least five. He is a difficult character to place centre-stage, not just selfish, unfaithful and vain but also physically repulsive – McEwan is particularly good at evoking queasiness at Beard’s gluttony and bodily decrepitude. But he has said that he wanted to leave Beard just enough possibility of goodness to engender some sympathy, and he succeeds by a whisker. When the final collapse of Beard’s crumbling schemes arrives (you can see it coming all along), there is room for compassion, even dismay.

Solar is, then, a satisfying and scientifically literate slice of genre literature, marred only slightly by McEwan’s curious addiction to the kind of implausible plot hinge that compromised Enduring Love, Atonement and, most seriously, Saturday. Come the event that places opportunity in Beard’s hands, all the strings and signposts are glaringly evident – I think I even murmured to myself “No, not the corner of the coffee table”. And like the thug Baxter in Saturday, Beard’s wife’s uncouth former lover Tarpin ends up doing things that just don’t ring true – a failure not of ‘character motivation’ (McEwan is too good a writer to belabour that old chestnut) but of sheer plausibility.

In the end, this is McEwan-lite, a confection of contemporary preoccupations that, while lacking the emotional punch of Atonement, the political ambition of Saturday or the honed delicacy of On Chesil Beach, is more fun than any of them. And if it dissuades us from turning McEwan, like Amis, into a cultural icon to be venerated or toppled, so much the better for him and for us.

Monday, March 15, 2010

What went on in February


Here’s my little round-up for the April issue of Prospect, before it is edited to probably a third of this size. I don’t want to sound churlish, in the last item, about what is clearly a useful trial – but it did seem a good example of the kind of thing Colin Macilwain at Nature nailed recently in an excellent article about science and the media.
I’ve also reviewed Ian McEwan’s new book Solar in this forthcoming issue of Prospect – will post that review shortly. In short: it’s fun.
************************************************************************

As the global warming debate intensifies, expect to hear more about methane, carbon dioxide’s partner in crime as a greenhouse gas. Since it doesn’t come belching from our cars and power stations, methane bulks small in our conscience, but agriculture, gas production, landfills and biomass burning have doubled methane levels in the atmosphere since pre-industrial times, and it is a more potent greenhouse gas than CO2. There are immense natural reservoirs of methane, and one doomsday scenario has some of these releasing the gas as a result of warming. A frozen form of methane and water, called methane hydrate, sits at the seafloor in many locations worldwide, but the methane could bubble out if sea temperatures rise. A team has now discovered this happening on the Arctic continental shelf off northeastern Siberia, where the sea water has vastly more dissolved methane than expected. Some think a massive methane burp from hydrate melting 250 million years ago caused environmental changes that wiped out 70-96% of all species on the planet. There’s no reason to panic yet, but I’m just letting you know.

A few scientists and an army of bloggers still insist that global warming has nothing to do with any of this stuff, but is caused by changes in the activity of the sun. If you like that idea (or indeed if you hate it), don’t expect much enlightenment from NASA’s Solar Dynamics Observatory (SDO), launched in February to study the inner workings of the sun. We already know enough about variations in the sun’s activity to make the solar-warming hypotheses look flaky. But we don’t really understand what causes them. The 11-year sunspot cycle is thought to be the result of changes in the churning patterns of this volatile ball of hot plasma. It causes a small periodic rise and fall in the sun’s energy output, along with the recurrent appearance of sunspots at the height of the cycle and increases in solar flares that spew streams of charged particles across millions of miles of space, disrupting telecommunications and power grids on Earth – a very practical reason for needing to know more about how our star works. SDO, built at a cost of $856 million, will take images of the sun and detect convective flows of material beneath its surface over the coming solar cycle, which is due to peak around 2013.

A new study from researchers in Newcastle and Ulm of why our cells age does not, as some reports suggest, reveal the ‘secrets of ageing’, but rather debunks the notion of a ‘secret’ at all. Ageing, like embryo growth or cancer, is not a single biochemical process but the net result of a complex network of processes. The new study shows how cells can become locked into a steady decline once they accumulate too much damage to their DNA, so that they stop dividing rather than run the inherent risk of initiating cancer. Although this process is triggered by the gradual erosion of the protective ‘caps’ at the ends of our chromosomes, called telomeres, the study suggests that the story is far more complex than the simplistic picture in which we age because our chromosomes go bald. And it makes a magic bullet for reversing ageing seem even more of a pipe dream.

A cure for peanut allergy could be only three years away, recent headlines said. It’s a cheering prospect for this nasty condition, a source of anxiety for many parents and on very rare occasions a genuinely life-threatening problem. The reports were based on a presentation given by Andrew Clark of Addenbrooke’s Hospital in Cambridge at the meeting of the American Association for the Advancement of Science, an annual jamboree of science news. Clark and his colleagues are about to begin a major clinical trial, following earlier success in desensitizing children to the allergy by ‘training’ the immune system to tolerate initially tiny but steadily increasing doses of peanut. The news is welcome, but also an indication of the rather formulaic nature of much science and health reporting, where everyone seizes on the same story irrespective of whether it is really news. This is, after all, just the announcement of a forthcoming trial, not of its results. And besides, the desensitizing strategy is well established in principle: similar successes were reported recently by two groups at a meeting of the American Academy of Allergy, Asthma and Immunology in New Orleans.