Friday, May 28, 2010

Not all contemporary art is rubbish


I’m thrilled to see my friend, photographic and video artist Lindsay Seers, being given some respect in Ben Lewis’s excellent piece for Prospect on why modern art is in a decadent phase. Like Ben, I think Lindsay is doing serious and interesting stuff, and I say that not just because (perhaps even despite the fact that?) I’ve been involved in some of it. I wrote a piece for Lindsay’s book Human Camera (Article Press, 2007), which I’m now inspired to put up on my web site.

Monday, May 24, 2010

Creation myths


Artificial life? Don’t ask me guv, I was too busy last week building sandcastles in Lyme Regis. However, now making up for lost time… I have a Muse on Nature’s news site (the pre-edited text of which is below – they always remove the historical quotes), and a piece on the Prospect blog. The Venter work may, if it survives the editor’s shears, also be briefly discussed on an episode of Radio 4’s Moments of Genius that I’ve just recorded with Patricia Fara, due to be broadcast this Sunday (30th May).

*********************************************************************
Claims of ‘synthetic life’ have been made throughout history. And each time, they are best regarded as mirroring what we think life is.

The recent ‘chemical synthesis of a living organism’ by Craig Venter and his colleagues at the J. Craig Venter Institute [1] sits within a very long tradition. Claims of this sort have been made throughout history. That’s not to cast aspersions on the new results: while one can challenge the notion that this new bacterium, whose genome is closely modelled on that of Mycoplasma mycoides, stands apart from Darwinian evolution, the work is nonetheless an unprecedented triumph of biotechnological ingenuity. But when set in historical context, the work reflects our changing conception of what life is and how it might be made. What has been done here is arguably not so much a ‘synthesis of life’ as a (semi-)synthetic recreation of what we currently deem life to be. And as with previous efforts, it should leave us questioning the adequacy of that view.

To see that the new results reiterate a perennial theme, consider the headline of the Boston Herald in 1899: ‘Creation of Life. Lower Animals Produced by Chemical Means.’ The article described how the German biologist Jacques Loeb had caused an unfertilized sea-urchin egg to divide by treating it with salts. It was a kind of artificial parthenogenesis, and needless to say, very far from a chemical synthesis of life from scratch.

But Loeb himself was then talking in earnest about ‘the artificial production of living matter’, and he was not alone in blending his discovery with speculations about the de novo creation of life. In 1912 the physiologist Edward Albert Schäfer alluded to Loeb’s results in his presidential address to the British Association, under the rubric ‘the possibility of the synthesis of living matter’ [2]. Schäfer was optimistic: ‘The [cell] nucleus – which may be said indeed to represent the quintessence of cell-life – possesses a chemical constitution of no very great complexity; so that we may even hope some day to see the material which composes it prepared synthetically.’

Such claims are commonly seen to imply that artificial human life is next on the agenda. It was a sign of the times that the New York Times credulously reported in 1910 that ‘Prof. Herrera, a Mexican scientist, has succeeded in forming a human embryo by chemical combination.’ It is surely no coincidence that many media reports have compared Venter to Frankenstein, or that the British Observer newspaper mistakenly suggested he has ‘succeeded in ‘creating’ human life for the first time’.
  
What is life?

Beliefs about the feasibility of making artificial organisms have been governed by the prevailing view of what life is. While the universe was seen as an intrinsically fecund matrix, permitting bees and vermin to emerge from rotten flesh by spontaneous generation, it seemed natural to imagine that sentient beings might body forth from insensate matter. The mechanical models of biology developed in the seventeenth century by René Descartes and others fostered the notion that a ‘spark of life’ – after the discovery of electricity, literally that – might animate a suitably arranged assembly of organic parts. The blossoming of chemistry and evolutionary theory spurred a conviction that it was all about getting the recipe right, so that nature’s diverse grandeur sprang from primordial colloidal jelly, called protoplasm, which Thomas Henry Huxley regarded as the ‘physical basis of life’.

Yet each apparent leap forward in this endeavour more or less coincided with a realization that the problem is not so simple. Protoplasm appeared as organic chemists were beginning on the one hand to erode the concept of vitalism and on the other to appreciate the full and baffling complexity of organic matter. The claims of Loeb and Schäfer came just before tools for visualizing the sub-cellular world, such as X-ray crystallography and the electron microscope, began to show life’s microstructure in all its complication. As H. G. Wells, his son George, and Julian Huxley explained in The Science of Life (1929-30), ‘To be impatient with the biochemists because they are not producing artificial microbes is to reveal no small ignorance of the problems involved.’

The next big splash in ‘making life’ came in 1953 when Harold Urey and Stanley Miller announced their celebrated ‘prebiotic soup’ experiment, conjuring amino acids from simple inorganic raw materials [3]. This too was obviously a very far cry from a synthesis of life, but some press reports were little troubled by the distinction: the result was regarded as a new genesis in principle if not in practice. ‘If their apparatus had been as big as the ocean, and if it had worked for a million years, instead of one week’, said Time, ‘it might have created something like the first living molecule.’ Yet that same year saw the discovery of life’s informational basis – the source of much of the ‘organization’ of organic matter that had so puzzled earlier generations – in the work of Crick and Watson. Now life was not so much about molecules at all, but about cracking, and perhaps then rewriting, the code.

Burning the book

Which brings us to Venter et al. Now that the field of genomics has fostered the belief that in sequencing genomes we are reading a ‘book of life’, whose algorithmic instructions need only be rejigged to produce new organisms, it’s easy to see why the creation of a wholly synthetic genome and its ‘booting up’ in a unicellular host should be popularly deemed a synthesis of life itself. Here the membranes, the cytoplasm, everything in fact except the genes, are mere peripherals to the hard drive of life. (The shift to a new realm of metaphor tells its own story.)

But what this latest work really implies is that it is time to lay aside the very concepts of an ‘artificial organism’ and a ‘synthesis of life’. Life is not a thing one makes, nor is it even a process that arises or is set in motion. It is a property we may choose to bestow, more or less colloquially, on certain organizations of matter. ‘Life’ in biology, rather like ‘force’ in physics, is a term carried over from a time when scientists thought quite differently, where it served as a makeshift bridge over the inexplicable.

More important than such semantics, the achievement by Venter et al. is a timely reminder that anything laying claim to the function we might call life resides not in a string of genes but in the interactions between them. Efforts to make de novo organisms of any complexity – for example, ones that can manufacture new pharmaceuticals and biofuels under demanding environmental constraints – seem likely to highlight how sketchily we understand how those interactions operate and, most importantly, what their generic principles are. The euphoria engendered by rapid whole-genome sequencing techniques is already giving way to humility (even humiliation) about the difficulty of squaring genotype with phenotype. Once again, our ideas of where the real business of life resides are shifting: away from a linear ‘code’ and towards something altogether more abstract, emergent and entangled. In this regard at least, the latest ‘synthesis of life’ does indeed seem likely to repeat the historical template.

References
1. Gibson, D. G. et al. Science doi:10.1126/science.1190719 (2010).
2. Schäfer, E. A. Nature 90, 7-19 (1912).
3. Miller, S. Science 117, 528 (1953).

Tuesday, May 11, 2010

Debunking is hard to do


In his excellent article on ‘denialism’ in this month’s New Humanist, Keith Kahn-Harris mentions that one of the problems debunkers face is that they have to engage in ‘a minute and careful examination of the sources… [which is] a time-consuming task that requires considerable skill and fortitude.’ This was precisely what I found myself up against when I reviewed Christopher Booker’s climate-change-denial tract The Real Global Warming Disaster for the Observer. I examined in detail just a very few of the claims Booker made (that is, ones that were not transparently false or misleading), and in each case found considerable distortion. I put the results of that trawling on this blog, but even then there was too much information for me to find the time to get it into an easily digested and streamlined shape. The real problem is that the denialists seem to have endless time on their hands. Happily, Booker’s book doesn’t seem to have had a huge impact, but less happily that is perhaps because there is now just so much climate denialism around, thanks largely to the silliness at UEA.

This issue of New Humanist is as full of good stuff as ever, but I particularly liked A. C. Grayling’s skewering of Terry Eagleton’s book On Evil: ‘Eagleton has been too long among the theorists to risk a straightforward statement… as we are dealing with Eagleton here, note that this is of course not a mish-mash of inconsistencies, as it appears to be; this is subtlety and nuance. It is, you might say, nuance-sense.’ For one reason or another, I have recently found myself having to read various texts issuing from the cultural-studies stable, and I can regretfully say that I know just what he means.

Sunday, May 09, 2010

Private Passions


I was the guest today on Radio 3’s Private Passions, where I get to choose half an hour of music and talk about it with Michael Berkeley. It can be heard here for the next seven days, I believe, but after that it vanishes into the BBC’s vaults. As ever with radio interviews, only afterwards do I realise what eloquent things I could have said in place of ‘um, you know…’. But I enjoyed it.

Wednesday, May 05, 2010

What a shoddy piece of work is man


It seems kind of cheap to win the ‘most commented’ slot on Nature News simply by writing an article about science and religion. You just know that will happen; there is nothing like it for provoking readers to offer their tuppence-worth, and in particular for drawing reams of comment from the fundamentalist fringe. My latest Muse (pre-edited version below) is no exception. I am, however, entertained by the thoughtful remark of Bjørn Brembs, who says:

“As usual, your article is very reasoned, thoughtful and balanced. Reading some of the comments here, however, I fear you are making a common mistake, so accurately described by PZ Myers: "Where scientists are often handicapped is that they don't recognize the depth of the denial on the other side, and that their opponents really are happily butting their heads against the rock hard foundation of the science. We tend to assume the creationists can't really be that stupid, and figure they must have some legitimate complaint about some aspect of evolution with which we can sympathize. They don't. They really are that nuts."
Does it make sense to try and reason thoughtfully with someone who prefers "magic man did it" over "I don't know" as an answer to scientific questions? Couldn't it be that this peculiar and revealing preference alone constitutes evidence enough that this person may not be amenable to reason at all?”

Bjørn is probably right in most cases, but I should say that I’d be a sad fool indeed if I wrote pieces like this under any belief that they would convert creationists. No, I do it because I think the issues are interesting, namely: how well has evolution done in designing our genome? (Not very.) To what extent does evolution optimize anything at all? (Not much.) And how come we work pretty well despite all this mess? (That’s the really big question.)

****************************************************************
Our genome won't win any design awards and doesn't speak well of the intelligence of its 'designer'.

Helena: They do say that man was created by God.
Domin: So much the worse for them.

This exchange in Karel Capek’s 1921 play R.U.R., which coined the word ‘robot’, is abundantly vindicated by our burgeoning understanding of human biology. Harry Domin, director general of the robot-making company R.U.R., jeers that ‘God had no idea about modern technology’, implying that the design of human-like bodies is now something we can do better ourselves.

Like most tales of making artificial people, R.U.R. contains a Faustian moral about hubris. But whether or not we could do better, it’s true that the human body is hardly a masterpiece of intelligent planning. Most famously, the eye’s retina is wired back to front so that the wiring has to pass back through the screen of light receptors, imposing a blind spot.

Now John Avise, an evolutionary geneticist at the University of California at Irvine, has catalogued the array of clumsy flaws and inefficiencies at the fundamental level of the genome. His paper, published in the Proceedings of the National Academy of Sciences USA [1], throws down the gauntlet to advocates of intelligent design, the pseudo-scientific face of religious creationism. What Intelligent Designer, Avise asks, would make such a botch?

Occasional botches are, meanwhile, precisely what we would expect from Darwinian evolution, which is blind to the big picture but merely tinkers short-sightedly to wring incremental adaptive advantage from the materials at hand. Just as in technology (and for analogous reasons), this produces ‘lock-in’ effects in which strategies that are sub-optimal from a global perspective persist because it is impractical to go back and improve them.

Intelligent design (ID) does not have to deny that evolution occurs, but it invokes an interventionist God who steps in to guide the process, constructing biological devices allegedly too ‘irreducibly complex’ to have been assembled by blind random mutation and natural selection, such as (ironically) the eye or the flagellar motor of bacteria [2].

As Avise points out, ID is problematic in purely theological terms. Were I inclined to believe in an omnipotent God, I should be far more impressed by one who had intuited that a world in which natural selection operates autonomously will lead to beings that function as well as humans (for all our flaws) than by one who was constantly having to step in and make adjustments. I’m not alone in that: Robert Boyle felt that it demeaned God to suppose he needed constantly to intervene in nature: ‘all things’, he said, ‘proceed, according to the artificer’s first design, and… do not require the peculiar interposing of the artificer, or any intelligent agent employed by him’ [3].

But ID must also confront the issue of theodicy: the evident fact that our world is imperfect. Human free will allegedly absolves God of responsibility for our ‘evil acts’ – but what about the innocent deaths caused by disease, natural disasters and so forth? Infelicities in the course of nature were already sufficiently evident in the eighteenth century for philosopher David Hume to imply that God might be considered a ‘stupid mechanic’. And in the early twentieth century, the physician Archibald Garrod pointed out how many human ailments are the result not of God’s wrath or the malice of demons but of ‘inborn errors’ in our biochemistry [4,5].

Many of these ‘errors’ can now be pinpointed to genetic mutations: at a recent count, there are around 75,000 disease-linked mutations [6]. But the ‘unintelligent design’ of our genomes, Avise says, goes well beyond such flaws, which might otherwise be dismissed as glitches in a mostly excellent contrivance.

The ubiquity of introns – sequences that must be expensively excised from transcribed genes before translation to proteins – seems to be a potentially harmful encumbrance. And numerous regulatory mechanisms are needed to patch up problems in gene activity, for example by silencing or destroying imperfectly transcribed mRNA (the templates for protein synthesis). Regulatory breakdowns may cause disease.

Why design a genome so poorly that it needs all this surveillance? Why are there so many wasteful repetitions of genes and gene-fragments, all of which have to be redundantly replicated in cell division? And why are we plagued by chromosome-hopping ‘mobile elements’ in our DNA that seem only to pose health risks?

These design flaws, Avise says, ‘extend the age-old theodicy challenge, traditionally motivated by obvious imperfections at the levels of human morphology and behavior, into the innermost molecular sanctum of our physical being.’

Avise wisely avers that this catalogue of errors should deter attempts to use religion to explain the minutiae of the natural world, and should return religion to its proper sphere as (one) source of counsel about how to live.

But his paper is equally valuable in demolishing the current secular tendency to reify and idealize nature through the notion that evolution is a non-teleological means of producing ‘perfect’ design. The Panglossian view that nature is refined by natural selection to some ‘optimal’ state exerts a dangerous tug in the field of biomimetics. We should be surprised that some enzymes do seem to exhibit the maximum theoretical catalytic efficiency [7], rather than imagining that this is nature’s default state. On the whole there are too many (dynamic) variables in evolutionary biology for ‘optimal’ to be a meaningful concept.

However – although heaven forbid that this should seem to let ID off the hook – it is worth pointing out that some of the genomic inefficiencies Avise lists are still imperfectly understood. We might be wise to hold back from writing them off as ‘flaws’, lest we make the same mistake evident in the labelling as ‘junk DNA’ of genomic material that seems increasingly to play a biological role. There seems little prospect that the genome will ever emerge as a paragon of good engineering, but we shouldn’t too quickly derogate that which we do not yet understand.

References
1. Avise, J. C. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.0914609107.
2. Behe, M. J. Darwin’s Black Box: The Biochemical Challenge to Evolution (Free Press, New York, 1996).
3. Boyle, R. ‘Free inquiry’, in The Works of the Honourable Robert Boyle Vol. 5, ed. T. Birch, p.163 (Georg Olms, Hildesheim, 1965-6).
4. Garrod, A. Inborn Errors of Metabolism (Oxford University Press, London, 1909).
5. Garrod, A. The Inborn Factors of Inherited Disease (Clarendon Press, Oxford, 1931).
6. Stenson, P. D. et al., Hum. Mutat. 21, 577-581 (2003).
7. Albery, W. J. & Knowles, J. R. Biochemistry 15, 5631-5640 (1976).

Friday, April 30, 2010

A supercomputing crystal ball

Here's a little piece I've just written for Nature's news blog The Great Beyond.


The good news is that your future can be predicted. The bad news is that it’ll cost a billion euros. That, at least, is what a team of scientists led by Dirk Helbing of the ETH in Switzerland believes. And as they point out, a billion euros is small fare compared with the bill for the current financial crisis – which might conceivably have been anticipated with the massive social-science simulations they want to establish.

This might seem the least auspicious moment to start placing faith in economic modelling, but Helbing’s team proposes to transform the way it is done. They will abandon the discredited and doctrinaire old models in favour of ones built from the bottom up, which harness the latest understanding of how people behave and act collectively rather than reducing the economic world to caricature for the sake of mathematical convenience.

And it is not just about the economy, stupid. The FuturIcT ‘knowledge accelerator’, the proposal for which has just been submitted to the European Commission’s Flagship Initiatives scheme for funding visionary research, would address a wide range of environmental, technological and social issues using supercomputer simulations developed by an interdisciplinary team. The overarching aim is to provide systematic, rational and evidence-based guidance to governmental and international policy-making, free from the ideological biases and wishful thinking typical of current strategies.

Helbing’s confidence in such an approach has been bolstered by his and others’ success in modelling social phenomena ranging from traffic flow in cities to the dynamics of industrial production. Modern computer power makes it possible to simulate such systems using ‘agent-based models’ that look for large-scale patterns and regularities emerging from the interaction of large numbers of individual agents.
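
To make the agent-based idea concrete, here is a minimal sketch (not drawn from the FuturIcT proposal itself, just an illustration) of the classic Nagel-Schreckenberg traffic model: each car follows a handful of purely local rules, yet stop-and-go jams emerge across the whole road without ever being programmed in.

```python
# Minimal agent-based sketch: Nagel-Schreckenberg traffic on a circular road.
# Each 'agent' (car) obeys four local rules; jams emerge collectively.
# Illustrative only -- not the FuturIcT models themselves.
import random

ROAD_LENGTH = 100   # number of road cells
N_CARS = 30         # number of car agents
V_MAX = 5           # speed limit (cells per time step)
P_SLOW = 0.3        # probability of random slowing ('driver noise')

positions = sorted(random.sample(range(ROAD_LENGTH), N_CARS))
speeds = [0] * N_CARS

def step(positions, speeds):
    """Advance every car by one time step using the four local rules."""
    new_speeds = []
    for i, (x, v) in enumerate(zip(positions, speeds)):
        gap = (positions[(i + 1) % N_CARS] - x - 1) % ROAD_LENGTH
        v = min(v + 1, V_MAX)        # 1. accelerate towards the speed limit
        v = min(v, gap)              # 2. brake to avoid the car in front
        if v > 0 and random.random() < P_SLOW:
            v -= 1                   # 3. slow down at random
        new_speeds.append(v)
    new_positions = [(x + v) % ROAD_LENGTH
                     for x, v in zip(positions, new_speeds)]  # 4. move
    order = sorted(range(N_CARS), key=lambda i: new_positions[i])
    return [new_positions[i] for i in order], [new_speeds[i] for i in order]

for t in range(200):
    positions, speeds = step(positions, speeds)

print("mean speed after 200 steps:", sum(speeds) / N_CARS)
```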

The FuturIcT proposal includes the establishment of ‘Crisis Observatories’ that might identify impending problems such as financial crashes, wars and social unrest, disease epidemics, and environmental crises. It would draw on expertise in fields ranging from engineering, law, anthropology and geosciences to physics and mathematics. Crisis Observatories could be operational by 2016, the FuturIcT team says, and by 2022 the programme would incorporate a Living Earth Simulator that couples human social and political activity to the dynamics of the natural planet.

Sceptics may dismiss the idea as a hubristic folly that exaggerates our ability to understand the world we have created. But when we compare the price tag to the money we devote to getting a few humans outside our atmosphere, it could be a far greater folly not to give the idea a chance.

Monday, April 26, 2010

Big quantum


Here’s a little piece I wrote for Prospect, who deemed in the end that it was too hard for their readers. But I am sure it is not, dear blogspotter, too hard for you.


If you think quantum physics is hard to understand, you’re probably confusing understanding with intuition. Don’t assume, as you fret over the notion that a quantum object can be in two places at once, that you’re simply too dumb to get your mind around it. Nobody can, not even the biggest brains in physics. The difference between quantum physicists and the rest of us is that they’ve elected to just accept the weirdness and get on with the maths – as physicist David Mermin puts it, to ‘shut up and calculate.’

But this pragmatic view is losing its appeal. Physicists are no longer satisfied with quantum theory’s supreme ability to predict how stuff behaves at very small scales; following the lead of its original architects, such as Bohr, Heisenberg and Einstein, they are demanding to know what it means. As Lucien Hardy and Robert Spekkens of the high-powered Perimeter Institute in Canada wrote recently, ‘quantum theory is very mysterious and counterintuitive and surprising and it seems to defy us to understand it. And so we take up the challenge.’

This is something of an act of faith, because it isn’t obvious that our minds, having evolved in a world of classical physics where objects have well-defined positions and velocities, can ever truly conceptualize the quantum world where, apparently, they do not. That difference, however, is part of the problem. If the microscopic world is quantum, why doesn’t everything behave that way? Where, once we reach the human scale, has the weirdness gone?

Physicists talk blithely about this happening in a ‘quantum-to-classical transition’, which they generally locate somewhere between the size of large molecules and of living cells – between perhaps a billionth and a millionth of a metre (a nanometre and a micrometre). We can observe subatomic particles obeying quantum rules – that was first done in 1927, when electrons were seen acting like interfering waves – but we can’t detect quantumness in objects big enough to see with the naked eye.

Erwin Schrödinger tried to force this issue by placing the microcosm and the macrocosm in direct contact. In his famous thought experiment, the fate of a hypothetical cat depended on the decay of a radioactive atom, dictated by quantum theory. Because quantum objects can be in a ‘superposition’ of two different states at once, this seemed to imply that the cat could be both alive and dead. Or at least, it could until we looked, for the ‘Copenhagen’ interpretation of quantum theory proposed by Bohr and Heisenberg insists that superpositions are too delicate to survive observation: when we look, they collapse into one state or the other.

The consensus is now that the cross-over from quantum to classical rules involves a process called decoherence, in which delicate quantum states get blurred by interacting with their teeming, noisy environment. An act of measurement using human-scale instruments therefore induces decoherence. According to one view, decoherence imprints a restricted amount of information about the state of the quantum object on its environment, such as the dials of our measuring instruments; the rest is lost forever. Physicist Wojciech Zurek thinks that the properties we measure this way are just those that can most reliably imprint ‘copies’ of the relevant information about the system under inspection. What we measure, then, are the ‘fittest’ states – which is why Zurek calls the idea quantum Darwinism. It has the rather remarkable corollary that the imprinted copies can be ‘used up’, so that repeated measurements will eventually stop giving the same result: measurement changes the outcome.
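
As a crude illustration of what decoherence does (a toy numerical sketch, not a model of any particular experiment or of Zurek’s theory), consider a two-state system prepared in an equal superposition: coupling to the environment washes out the off-diagonal ‘coherence’ terms of its density matrix, leaving only ordinary classical probabilities behind.

```python
# Toy sketch of decoherence: the off-diagonal (coherence) terms of a
# qubit's density matrix decay exponentially through environmental
# coupling, while the diagonal terms (classical probabilities) survive.
import numpy as np

# density matrix of the superposition (|0> + |1>)/sqrt(2)
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)

decoherence_rate = 1.0   # arbitrary units, set by system-environment coupling

for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    damping = np.exp(-decoherence_rate * t)
    rho_t = rho.copy()
    rho_t[0, 1] *= damping   # coherences decay...
    rho_t[1, 0] *= damping
    # ...while the populations on the diagonal are untouched
    print(f"t = {t:3.1f}  coherence = {abs(rho_t[0, 1]):.3f}  "
          f"populations = {rho_t[0, 0].real:.2f}, {rho_t[1, 1].real:.2f}")
```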

These are more than just esoteric speculations. Impending practical applications of quantum superpositions, for example in quantum cryptography for encoding optical data securely, or super-fast quantum computers that perform vast numbers of calculations in parallel, depend on preserving superpositions by avoiding decoherence. That’s one reason for the current excitement about experiments that probe the contested ‘middle ground’ between the unambiguously quantum and classical worlds, at scales of tens of nanometres.

Andrew Cleland and coworkers at the University of California have now achieved a long-sought goal in this arena: to place a manufactured mechanical device, big enough to see sharply in the electron microscope, in a quantum superposition of states. They made a ‘nanomechanical resonator’ – a strip of metal and ceramic almost a micrometre thick and about 30 micrometres long, fixed at one end like the reed of a harmonica – and cooled it down to within 25 thousandths of a degree of absolute zero. The strip is small enough that, when cold, its vibrations follow quantum rules, which means they can have only particular frequencies and energies (heat would wash out this discreteness). The researchers used a superconducting electrical circuit to induce vibrations, and they report in Nature that they could put the strip into a superposition of two states – in effect, as if it is both vibrating and not vibrating at the same time.
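
A rough back-of-envelope check shows why such extreme cooling is needed (the resonance frequency below is an assumption of order a few gigahertz, typical for a strip like this, and is not quoted in the piece): the energy quantum of the vibration must exceed the thermal energy, or heat smears out the discrete quantum levels.

```python
# Back-of-envelope check: quantum behaviour survives only if the energy
# quantum h*f of the vibration exceeds the thermal energy k_B*T.
# The frequency f is an assumed, illustrative value.
h = 6.626e-34      # Planck constant, J s
k_B = 1.381e-23    # Boltzmann constant, J/K

f = 6.0e9          # assumed resonance frequency, ~6 GHz
T = 0.025          # 25 millikelvin, as quoted in the piece

print(f"energy quantum expressed as a temperature: {h * f / k_B * 1000:.0f} mK")
print(f"actual temperature: {T * 1000:.0f} mK")
print("quantum energy spacing exceeds thermal energy:", h * f > k_B * T)
```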

Sadly, these vibrations are too small for us to truly ‘see’ what an object looks like that is both moving and not moving. But even more dramatic incursions of quantum oddness might be soon in store. Last year a team of European scientists outlined a proposal to create a real Schrödinger’s cat, substituting an organism small enough to stand on the verge of the quantum world: a virus. They suggested that a single virus suspended by laser beams could be put into a superposition of moving and stationary states. Conceivably, they said, this could even be done with tiny, legged animals called tardigrades or ‘water bears’, a few tenths of a millimetre long. If some way could be devised to link the organism’s motion to its biological behaviour, what then would it do while simultaneously moving and still? Nobody really knows.

Wednesday, April 21, 2010

Peter's patterns

I have a little piece on the BBC Focus site about the work of sculptor Peter Randall-Page, with whom I had the pleasure of discussing pattern formation and much else at Yorkshire Sculpture Park last month. I will put an extended version of this piece on my web site shortly (under ‘Patterns’) in which there are lots more stunning pictures of Peter’s work and natural patterns.

Friday, April 09, 2010

The right formula


Message to a heedless world: Please remember that the O in the formula H2O is a capital O meaning oxygen, not a zero meaning zero. Water is composed of hydrogen and oxygen, not hydrogen and nothing.

Heedless world replies: Get a life, man.

Heedless world continues (after some thought): How do you know the difference anyway?

Me: Zeros are narrower.

Heedless world: This is truly sad.

Tuesday, April 06, 2010

An uncertainty principle for economists?


Here’s the pre-edited version of my latest Muse for Nature News. The paper I discuss here is very long but also very ambitious, and well worth a read.
**********************************************************************
Bad risk management contributed to the current financial crisis. Two economists believe the situation could be improved by gaining a deeper understanding of what is not known.

Donald Rumsfeld is an unlikely prophet of risk analysis, but that may be how posterity will anoint him. His remark about ‘unknown unknowns’ was derided at the time as a piece of meaningless obfuscation, but more careful reflection suggests he had a point. It is one thing to recognize the gaps and uncertainties in our knowledge of a situation, another to acknowledge that entirely unforeseen circumstances might utterly change the picture. (Whether you subscribe to Rumsfeld’s view that the challenges in managing post-invasion Iraq were unforeseeable is another matter.)

Contemporary economics can’t handle the unknown unknowns – or more precisely, it confuses them with known unknowns. Financial speculation is risky by definition, yet the danger is not that the risks exist, but that the highly developed calculus of risk in economic theory – some of which has won Nobel prizes – gives the impression that they are under control.

The reasons for the current financial crisis have been picked over endlessly, but one common view is that it involved a failure in risk management. It is the models for handling risk that Nobel laureate economist Joseph Stiglitz seemed to have in mind when he remarked in 2008 that ‘Many of the problems our economy faces are the result of the use of misguided models. Unfortunately, too many [economic policy-makers] took the overly simplistic models of courses in the principles of economics (which typically assume perfect information) and assumed they could use them as a basis for economic policy’ [1].

Facing up to these failures could prompt the bleak conclusion that we know nothing. That’s the position taken by Nassim Nicholas Taleb in his influential book The Black Swan [2], which argues that big disruptions in the economy can never be foreseen, and yet are not anything like as rare as conventional theory would have us believe.

But in a preprint on arXiv, Andrew Lo and Mark Mueller of MIT’s Sloan School of Management offer another view [3]. They say that what we need is a proper taxonomy of risk – not unlike, as it turns out, Rumsfeld’s infamous classification. In this way, they say, we can unite risk assessment in economics with the way uncertainties are handled in the natural sciences.

The current approach to uncertainty in economics, say Lo and Mueller, suffers from physics envy. ‘The quantitative aspirations of economists and financial analysts have for many years been based on the belief that it should be possible to build models of economic systems – and financial markets in particular – that are as predictive as those in physics,’ they point out.

Much of the foundational work in modern economics took its lead explicitly from physics. One of its principal architects, Paul Samuelson, has admitted that his seminal book Foundations of Economic Analysis [4] was inspired by the work of mathematical physicist Edwin Bidwell Wilson, a protégé of the pioneer of statistical physics Willard Gibbs.

Physicists were by then used to handling the uncertainties of thermal noise and Brownian motion, which create a gaussian or normal distribution of fluctuations. The theory of Brownian random walks was in fact first developed by the mathematician Louis Bachelier in 1900 to describe fluctuations in economic prices.

Economists have known since the 1960s that these fluctuations don’t in fact fit a gaussian distribution at all, but are ‘fat-tailed’, with a greater proportion of large-amplitude excursions. But many standard theories have failed to accommodate this, most notably the celebrated Black-Scholes formula used to calculate options pricing, which is actually equivalent to the ‘heat equation’ in physics.
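
To get a feel for what ‘fat-tailed’ means, here is a small illustrative calculation (not taken from Lo and Mueller’s paper) comparing how often a gaussian model and a heavy-tailed Student-t model of the same variance predict a five-standard-deviation daily move.

```python
# Illustrative sketch: how badly a gaussian model underestimates extreme
# market moves compared with a fat-tailed alternative -- here a Student-t
# distribution with 3 degrees of freedom, rescaled to unit variance.
from scipy import stats

threshold = 5.0   # a 'five-sigma' daily move
df = 3            # low degrees of freedom gives heavy tails

# probability of a move beyond the threshold, in either direction
p_gauss = 2 * stats.norm.sf(threshold)
# rescale the t variable so that it, too, has unit variance
p_fat = 2 * stats.t.sf(threshold * (df / (df - 2)) ** 0.5, df)

print(f"gaussian model:        P(|move| > 5 sigma) = {p_gauss:.1e}")
print(f"fat-tailed (t, df=3):  P(|move| > 5 sigma) = {p_fat:.1e}")
print(f"the fat-tailed model makes such moves ~{p_fat / p_gauss:,.0f} times more likely")
```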

But incorrect statistical handling of economic fluctuations is a minor issue compared with the failure of practitioners to distinguish fluctuations that are in principle modellable from those that are more qualitative – to distinguish, as Lo and Mueller put it, trading decisions (which need maths) from business decisions (which need experience and intuition).

The conventional view of economic fluctuations – that they are due to ‘external’ shocks to the market, delivered for example by political events and decisions – has some truth in it. And these external factors can’t be meaningfully factored into the equations as yet. As the authors say, from July to October 2008, in the face of increasingly negative prospects for the financial industry, the US Securities and Exchange Commission intervened to impose restrictions on certain companies in the financial services sector. ‘This unanticipated reaction by the government’, say Lo and Mueller, ‘is an example of irreducible uncertainty that cannot be modeled quantitatively, yet has substantial impact on the risks and rewards of quantitative strategies.’

They propose a five-tiered categorization of uncertainty, from the complete certainty of Newtonian mechanics, through noisy systems and those that we are forced to describe statistically because of incomplete knowledge about deterministic processes (as in coin tossing), to ‘irreducible uncertainty’, which they describe as ‘a state of total ignorance that cannot be remedied by collecting more data, using more sophisticated methods of statistical inference or more powerful computers, or thinking harder and smarter.’

The authors think that this is more than just an enumeration of categories, because it provides a framework for how to think about uncertainties. ‘It is possible to “believe” a model at one level of the hierarchy but not at another’, they say. And they sketch out ideas for handling some of the more challenging unknowns, as for example when qualitatively different models may apply to the data at different times.

‘By acknowledging that financial challenges cannot always be resolved with more sophisticated mathematics, and incorporating fear and greed into models and risk-management protocols explicitly rather than assuming them away’, Lo and Mueller say, ‘we believe that the financial models of the future will be considerably more successful, even if less mathematically elegant and tractable.’

They call for more support of post-graduate economic training to create a cadre of better informed practitioners, more alert to the limitations of the models. That would help; but if we want to eliminate the ruinous false confidence engendered by the clever, physics-aping maths of economic theory, why not make it standard practice to teach everyone who studies economics at any level that these models of risk and uncertainty apply only to specific and highly restricted varieties of it?

References
1. Stiglitz, J. New Statesman, 16 October 2008.
2. Taleb, N. N. The Black Swan (Allen Lane, London, 2007).
3. Lo, A. W. & Mueller, M. T. http://www.arxiv.org/abs/1003.2688.
4. Samuelson, P. A. Foundations of Economic Analysis (Harvard University Press, Cambridge, 1947).

Thursday, April 01, 2010

Bursting the genomics bubble


Here’s the pre-edited version of a Muse that’s just gone up on Nature News. There’s a bunch of interesting Human Genome Project-related stuff on the Nature site to mark the 10th anniversary of the first draft of the genome (see here and here and here, as well as comments from Francis Collins and Craig Venter). Some is celebratory, some more thoughtful. Collins considers his predictions to have been vindicated – with the exception that ‘The consequences for clinical medicine have thus far been modest’. Now, did you get the sense at the time that it was precisely the potential for advancing clinical medicine that was the HGP’s main selling point? Venter is more realistic, saying ‘Phenotypes — the next hurdle — present a much greater challenge than genotypes because of the complexity of human biological and clinical information. The experiments that will change medicine, revealing the relationship between human genetic variation and biological outcomes such as physiology and disease, will require the complete genomes of tens of thousands of humans together with comprehensive digitized phenotype data.’ Hmm… not quite what the message was at the time, although in fairness Craig was not really one of those responsible for it.

*********************************************************************
The Human Genome Project attracted investment beyond what a rational analysis would have predicted. There are pros and cons to that.

If you were a venture capitalist who had invested in the sequencing of the human genome, what would you now have to show for it? For scientists, the database of the Human Genome Project (HGP) may eventually serve as the foundation of tomorrow’s medicine, in which drugs will be tailored personally to your own genomic constitution. But for a return on the bucks you invested in this grand scheme, you want medical innovations here and now, not decades down the line. Ten years after the project’s formal completion, there’s not much sign of them.

A team of researchers in Switzerland now argue in a new preprint [1] that the HGP was an example of a ‘social bubble’, analogous to the notorious economic bubbles in which investment far outstrips any rational cost-benefit analysis of the likely returns. Monika Gisler, Didier Sornette and Ryan Woodard of ETH in Zürich say that ‘enthusiastic supporters of the HGP weaved a network of reinforcing feedbacks that led to a widespread endorsement and extraordinary commitment by those involved in the project.’

Some scientists have already suggested that the benefits of the HGP were over-hyped [2]. Even advocates now admit that the benefits for medicine may be a long time coming, and will require further advances in understanding, not just the patience to sort through all the data.

This stands in contrast to some of the claims made while the HGP was underway between 1990 and 2003. In 1999 the International Human Genome Sequencing Consortium (IHGSC) leader Francis Collins claimed that the understanding gained by the sequencing effort would ‘eventually allow clinicians to subclassify diseases and adapt therapies to the individual patient’ [3]. That might happen one day, but we’re still missing fundamental understanding of how even diseases with a known heritable risk are related to the makeup of our genomes [4]. Collins’ portrait of a patient who, in 2010, is prescribed ‘a prophylactic drug regimen based on the knowledge of [his] personal genetic data’ is not yet on the horizon. And going from knowledge of the gene to a viable therapy has proved immensely challenging even for a single-gene disease as thoroughly characterized as cystic fibrosis [5]. Collins’ claim, shortly after the unveiling of the first draft of the human genome in June 2000, that ‘new gene-based ‘designer drugs’ will be introduced to the market for diabetes mellitus, hypertension, mental illness and many other conditions’ [6] no longer seems a foregone conclusion, let alone a straightforward extension of the knowledge of all 25,000 or so genes in the human genome.

This does not, in the analysis of Gisler and colleagues, mean that the HGP was money poorly spent. Some of the benefits are already tangible, such as much faster and cheaper sequencing techniques; others may follow eventually. The researchers are more interested in the issue of how, if the HGP was such a long-term investment, it came to be funded at all. Their answer invokes the notion of bubbles borrowed from the economic literature, which Sornette has previously suggested [7] as a driver of other technical innovations such as the mid-nineteenth-century railway boom and the explosive growth of information technology at the end of the twentieth century. In economics, bubbles seem to be an expression of what John Maynard Keynes called ‘animal spirits’, whereby the instability stems from ‘the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than mathematical expectations’ [8]. In economics such bubbles can end in disastrous speculation and financial ruin, but in technology they can be useful, creating long-lasting innovations and infrastructures that would have been deemed too risky a venture under the cold glare of reason’s spotlight.

For this reason, Gisler and colleagues say, it is well worth understanding how such bubbles occur, for this might show governments how to catalyse long-term thinking that is typically (and increasingly) absent from their own investment strategies and those of the private sector. In the case of the HGP, the researchers argue, the controversial competition between the public IHGSC project and the private enterprise conducted by the biotech firm Celera Genomics worked to the advantage of both, creating a sense of anticipation and hope that expanded the ‘social bubble’ as well as, in the end, reducing the cost of the research by engaging market mechanisms.

To that extent, the ‘exuberant innovation’ that social bubbles can engender seems a good thing. But it’s possible that the HGP will never really deliver economically or medically on such massive investment. Worse, the hype might have incubated a harmful rash of genetic determinism. As Gisler and colleagues point out, other ‘omics’ programmes are underway, including an expensively funded NIH initiative to develop high-throughput techniques for solving protein structures. Before animal spirits transform this into the next ‘revolution in medicine’, it might be wise to ask whether the HGP has something to tell us about the wisdom of collecting huge quantities of stamps before we know anything about them.

References
1. Gisler, M., Sornette, D. & Woodard, R. Preprint http://www.arxiv.org/abs/1003.2882.
2. Roberts, L. et al., Science 291, 1195-1200 (2001).
3. Collins, F. S. New England J. Med. 28, 28-37 (1999).
4. Dermitzakis, E. T. & Clark, A. G. Science 326, 239-240 (2009).
5. Pearson, H. Nature 460, 164-169 (2009).
6. Collins, F. S. & McKusick, V. A. J. Am. Med. Soc. 285, 540-544 (2001).
7. Sornette, D. Socio-econ. Rev. 6, 27-38 (2008).
8. Keynes, J. M., The General Theory of Employment, Interest and Money (Macmillan, London, 1936).

The Times does The Music Instinct


There are some extracts from The Music Instinct in the Eureka science supplement of the Times today, although oddly they don’t seem yet to have put it online. It’s amongst a real mash-up of stuff about the ‘science of music’, which is all kind of fun, though it’s slightly weird to find my words crash-landed there. The editors did a pretty good job, however, of plucking out bits of text and getting them into a fairly self-contained form, when they were generally part of a much longer exposition.

I notice in Eureka that Brian May, bless him, doesn’t believe in global warming. “Most of my most knowledgeable scientist friends don’t believe that global warming exists”, he says. Come on Brian, name them. Have you been chatting to the wrong Patrick Moore? (Actually, I’m not too sure if chatting to the other one would help very much.)

Tuesday, March 30, 2010

Magnets mess with the mind's morality

Here's a little snippet I wrote for Nature's news blog. The authors seem to take it as read that magnets can alter brain functioning in this manner, but I find that remarkable.


Talk about messing with your mind. A new study [www.pnas.org/cgi/doi/10.1073/pnas.0914826107] by neuroscientist Liane Young and colleagues at Harvard University does exactly that: the researchers used magnetic signals applied to subjects’ craniums to alter their judgements of moral culpability. The magnetic stimulus made people less likely to condemn others for attempting but failing to inflict harm.

Most people make moral judgements of others’ actions based not just on their consequences but also on some view of what the intentions were. That makes us prepared to attribute diminished responsibility to children or people with severe mental illness who commit serious offences: it’s not just a matter of what they did, but how much they understood what they were doing.

Neuroimaging studies have shown that the attribution of beliefs to other people seems to involve a part of the brain called the right temporoparietal junction (RTPJ). So Young and colleagues figured that, if they disrupted how well the RTPJ functions, this might alter moral judgements of someone’s action that rely on assumptions about their intention. To do that, they applied an oscillating magnetic signal at 1 Hz to the part of the skull close to the RTPJ for 25 minutes in test subjects, and then asked them to read and respond to an account of an attempted misdemeanour. They also conducted tests while delivering the signal in regular short bursts. In one scenario, ‘Grace’ intentionally puts a white powder from a jar marked ‘toxic’ into her friend’s coffee, but the powder is in fact just sugar and the friend is fine. Was Grace acting rightly or wrongly?

Obvious? You might think differently with a magnetic oscillator fixed to your head. With the stimulation applied, subjects were more likely to judge the morality based on the outcome, as young children do (the friend was fine, so it’s OK), than on the intention (Grace believed the stuff was toxic).

That’s scary. The researchers present this as evidence of the role of the RTPJ in moral reasoning, with implications for how children do it (there is some evidence that the RTPJ is late in maturing) and for conditions such as autism that seem to involve a lack of ability to identify motives in other people. Fair enough. But to most of us it is news – and alarming news – that morality-related brain functions can be disrupted or suspended with a simple electromagnetic coil. If ever a piece of research were destined to incite paranoid fantasies about dictators inserting chips in our heads to alter and control our behaviour, this is it.

Thursday, March 25, 2010

Solar eclipse


This is more or less how my review of Ian McEwan’s new novel Solar in Prospect started out (the final paras got a little garbled in the edit). I’m amused to see that my suggestion here that his modest intentions might head off extreme reactions has been proved wrong. Lorna Bradbury in the Telegraph calls the book McEwan’s best yet, and thinks it should win the Booker (no way). And some found the comic elements ‘extremely funny’. Others think it is a stinker: one reviewer calls it ‘an odd, desultory production, by turns pompous and feebly comic’, and Leo Robson in the New Statesman says McEwan has lost his ear and that ‘With Solar, McEwan has finally committed the folly that we might not have expected from him.’ Really, they are all getting too worked up. Although I wouldn’t go as far as the dismissive comment in the Economist that this is ‘A novel to chuckle over, and chuck away’, it is simply a fairly light, intelligent piece of entertainment. Not, I imagine, that McEwan will be too bothered about any of this.

***********************************************************************

After Saturday, which several reviewers considered (unfairly) to be an insufferably smug depiction of Blair’s Britain in the approach to the invasion of Iraq, it looked as though a place was being prepared for Ian McEwan alongside Martin Amis in the pillory. Our two most celebrated novelists, the story went, were getting above themselves, pronouncing on the state of the nation from what seemed an increasingly conservative position.

Amis seems now to be in some curious quantum superposition of states, defended in a backlash to the backlash while demonized as the misogynistic wicked godfather. His latest novel The Pregnant Widow has been both praised as a return to form and derided as a farrago of caricature and solipsism. But Solar may extricate McEwan from such controversies and reinvest him with the humble status of a storyteller. For the book is a modest entertainment, dare one even say a romp, and essentially a work of genre fiction: lab lit. This genre, a second cousin of the campus novel, draws its plots from the exploits of scientists and the scientific community, and includes such titles as Allegra Goodman’s Intuition and Jonathan Lethem’s As She Climbed Across the Table.

McEwan’s interest in science is well established. The protagonist of Enduring Love is a science journalist, and the plot of Saturday hinged on the technical expertise of its central character, the neuroscientist Henry Perowne. McEwan has spoken about the uses of science in fiction, and has written passionately about the need to tackle climate change.

And that is where Solar comes in. When McEwan mentioned at the Hay Festival in 2008 that his next book had a ‘climate change’ theme, people anticipated some eco-fable set in the melting Arctic. He quickly denied any intention to proselytize; climate change would ‘just be the background hum of the book.’

So it is. Michael Beard, a Nobel laureate physicist resting on the laurels of his seminal work in quantum physics decades ago, is balding, overweight, addictively philandering, and coming to the end of his fifth marriage. Like many Nobel winners he has long ceased any productive science and is now riding the superficial circuit of plenary lectures, honorary degrees, Royal Commissions and advisory boards. Becoming the figurehead of the National Centre for Renewable Energy, marooned near Reading, seemed a good idea at the time, but the centre’s research has become mired in Beard’s ill-advised notion of making a wind turbine. Beard is privately indifferent to the global-warming threat, but when a chance arrives to give his career fresh lustre with a new kind of solar power, he grasps it greedily. With Beard running more on bluster and past glory than on scientific insight, and with his domestic life on autodestruct, we know it will all end badly. The question is simply how long Beard can stay ahead of the game. As the climate-change debate moves from the denialism of the Bush years to Obama and Copenhagen, he is increasingly a desperate, steadily inflating cork borne on the tide.

As ever, McEwan has done his homework. Mercifully, he knows much more than Lethem about how physicists think and work. And he is more successful in concealing his research than he was with the neuroscience shoehorned into Saturday. But not always. Beard’s speech to a group of climate-sceptic corporate leaders reads more like a lecture than a description of one: “Fifty years ago we were putting thirteen billion metric tons of carbon dioxide into the atmosphere every year. That figure has almost doubled.” And when Beard debunks his business partner’s doubts about global warming after the cool years of the late noughties, he gets full marks for science but risks becoming his author’s mouthpiece. “The UN estimates that already a third of a million people a year are dying from climate change” is not the kind of thing anyone says to their friend.

In case you care, the solution to the energy crisis on offer here – the process of ‘artificial photosynthesis’ to split water into hydrogen and oxygen using photocatalysis – is entirely respectable scientifically, albeit hardly the revolutionary breakthrough it is made out to be. Much the same idea was used by Stephen Poliakoff in his 1996 lablit play Blinded By the Sun; McEwan’s clever trick here is to involve quantum-mechanical effects (based on Beard’s Nobel-winning theory) to improve the efficiency, which left the nerd in me wondering if McEwan was aware of recent theories invoking such effects in real photosynthesis. I’m not sure whether to be more impressed if he is or if he isn’t.

McEwan nods toward recent episodes in which science has collided with the world outside the lab. Beard’s off-the-cuff remarks about women in science replay the debacle that engulfed former Harvard president Larry Summers in 2005, and Beard stands in for Steven Pinker in an ensuing debate on gender differences (although Pinker’s opponent Elizabeth Spelke did a far better demolition job than does Beard’s).

He also makes wry use of personal experience. When he read at Hay a draft of the episode in which Beard eats the crisps of a fellow traveller on a train, thinking they are his own and suppressing fury when the young man ironically helps himself, someone in the audience pointed out that a similar case of false accusation of an innocent stranger appeared in The Hitchhiker’s Guide to the Galaxy. Some newspapers made a weak jibe at plagiarism. When Beard recounts the tale in a speech, a lecturer in ‘urban studies and folklore’ accuses him of appropriating a well-known urban myth, making Beard feel that his life has been rendered inauthentic – and the allusion to Douglas Adams is now inserted in the story.

One of the pleasures for a science watcher is identifying the academics from whom Beard has been assembled – I counted at least five. He is a difficult character to place centre-stage, not just selfish, unfaithful and vain but also physically repulsive – McEwan is particularly good at evoking queasiness at Beard’s gluttony and bodily decrepitude. But he has said that he wanted to leave Beard just enough possibility of goodness to engender some sympathy, and he succeeds by a whisker. When the final collapse of Beard’s crumbling schemes arrives (you can see it coming all along), there is room for compassion, even dismay.

Solar is, then, a satisfying and scientifically literate slice of genre literature, marred only slightly by McEwan’s curious addiction to the kind of implausible plot hinge that compromised Enduring Love, Atonement and, most seriously, Saturday. Come the event that places opportunity in Beard’s hands, all the strings and signposts are glaringly evident – I think I even murmured to myself “No, not the corner of the coffee table”. And like the thug Baxter in Saturday, Beard’s wife’s uncouth former lover Tarpin ends up doing things that just don’t ring true – a failure not of ‘character motivation’ (McEwan is too good a writer to belabour that old chestnut) but of sheer plausibility.

In the end, this is McEwan-lite, a confection of contemporary preoccupations that, while lacking the emotional punch of Atonement, the political ambition of Saturday or the honed delicacy of On Chesil Beach, is more fun than any of them. And if it dissuades us from turning McEwan, like Amis, into a cultural icon to be venerated or toppled, so much the better for him and for us.

Monday, March 15, 2010

What went on in February


Here’s my little round-up for the April issue of Prospect, before it is edited to probably a third of this size. I don’t want to sound churlish, in the last item, about what is clearly a useful trial – but it did seem a good example of the kind of thing Colin Macilwain at Nature nailed recently in an excellent article about science and the media.
     I’ve also reviewed Ian McEwan’s new book Solar in this forthcoming issue of Prospect – will post that review shortly. In short: it’s fun.
************************************************************************

As the global warming debate intensifies, expect to hear more about methane, carbon dioxide’s partner in crime as a greenhouse gas. Since it doesn’t come belching from our cars and power stations, methane bulks small in our conscience, but agriculture, gas production, landfills and biomass burning have doubled methane levels in the atmosphere since pre-industrial times, and molecule for molecule it is a more potent greenhouse gas than CO2. There are immense natural resources of methane, and one doomsday scenario has some of these releasing the gas as a result of warming. A frozen form of methane and water, called methane hydrate, sits on the seafloor in many locations worldwide, but the methane could bubble out if sea temperatures rise. A team has now discovered this happening on the Arctic continental shelf off northeastern Siberia, where the sea water has vastly more dissolved methane than expected. Some think a massive methane burp from hydrate melting 250 million years ago caused environmental changes that wiped out 70-96% of all species on the planet. There’s no reason to panic yet, but I’m just letting you know.

A few scientists and an army of bloggers still insist that global warming has nothing to do with any of this stuff, but is caused by changes in the activity of the sun. If you like that idea (or indeed if you hate it), don’t expect much enlightenment from NASA’s Solar Dynamics Observatory (SDO), launched in February to study the inner workings of the sun. We already know enough about variations in the sun to make the solar-warming hypotheses look flaky, but we don’t really understand what causes those variations. The 11-year sunspot cycle is thought to be the result of changes in the churning patterns of this volatile ball of hot plasma. It causes a small periodic rise and fall in the sun’s energy output, along with the recurrent appearance of sunspots at the height of the cycle and increases in solar flares that spew streams of charged particles across millions of miles of space, disrupting telecommunications and power grids on Earth and supplying a very practical reason for needing to know more about how our star works. SDO, which cost NASA $856 million, will take images of the sun and detect convective flows of material beneath its surface over the coming solar cycle, due to peak around 2013.

A new study from researchers in Newcastle and Ulm of why our cells age does not, as some reports suggest, reveal the ‘secrets of ageing’, but rather debunks the notion of a ‘secret’ at all. Ageing, like embryo growth or cancer, is not a single biochemical process but the net result of a complex network of them. The new study shows how cells can become locked into a steady decline once they accumulate too much damage to their DNA, so that they don’t go on dividing and thereby risk initiating cancer. Although this decline is triggered by the gradual erosion of the protective ‘caps’ at the ends of our chromosomes, called telomeres, the study suggests that the story is far more complex than the simplistic picture in which we age because our chromosomes go bald. And it makes a magic bullet for reversing ageing seem even more of a pipe dream.

A cure for peanut allergy could be only three years away, recent headlines said. It’s a cheering prospect for this nasty condition, a source of anxiety for many parents and on very rare occasions a genuinely life-threatening problem. The reports were based on a presentation given by Andrew Clark of Addenbrooke’s Hospital in Cambridge at the meeting of the American Association for the Advancement of Science, an annual jamboree of science news. Clark and his colleagues are about to begin a major clinical trial, following earlier success in desensitizing children to the allergy by ‘training’ the immune system to tolerate initially tiny but steadily increasing doses of peanut. The news is welcome, but also an indication of the rather formulaic nature of much science and health reporting, where everyone seizes on the same story irrespective of whether it is really news. This is, after all, just the announcement of a forthcoming trial, not of its results. And besides, the desensitizing strategy is well established in principle: similar successes were reported recently by two groups at a meeting of the American Academy of Allergy, Asthma and Immunology in New Orleans. 

Friday, February 26, 2010

How bugs build

I have a feature in New Scientist on insect architecture and what we can learn from it, pegged to a very interesting conference that took place in Venice last September. My feature started its life at nigh on twice the length (as many sadly do), and looked at some of the algorithmic architecture discussed at the workshop. I’m going to put a pdf of this long version on my website shortly (it’ll be under the ‘Patterns’ papers).

There's a book in the pipeline from the conference participants (and others), probably to be called Collective Architecture. This lovely image, by the way – a plaster cast of the labyrinth inside a termite nest – was taken by Rupert Soar, mentioned in the article.

Tuesday, February 23, 2010

Told by an idiot

[I have a Muse on Nature News about the perils and benefits of recommender systems. Here’s the pre-edited version.]

Automated recommender systems need to put some jokers in the pack, if we’re not going to end up with narrow-minded tastes.

Medieval monarchy might not have much to recommend it compared to liberal democracy, but here’s one thing in its favour: monarchs kept Fools, while today our rulers have none. Even if the tradition was honoured more in literature – Shakespeare’s King Lear – than in reality, how often now will a national leader employ someone to laugh at their folly and remind them of bitter truths? More often, cabinets and advisers seem picked for their readiness to confirm their leader’s judgements.

Some people fear that the information age encourages this tendency to spread to the rest of us. The Internet, they say, is a series of echo chambers: people join chat groups to hear others repeat their own opinions. Climate sceptics talk only to other climate sceptics (and accuse climate scientists of doing likewise, perhaps with some justification). DailyMe.com will supply you with only the news you ask to hear, realising the vision of personalized news championed by Nicholas Negroponte of MIT’s Media Lab. The ‘Daily Me’ is now often used in a pejorative sense to decry the insularity this inculcates.

Now it seems you can’t make an online purchase without being recommended other ‘similar’ items. Music browsers such as Search Inside the Music, developed at Sun Labs, find you songs that ‘sound similar’ to ones you like already. But who’s to say you wouldn’t be more interested in stuff unlike what you like already?

That’s the dilemma addressed in a paper in the Proceedings of the National Academy of Sciences by Yi-Cheng Zhang, a physicist at the University of Fribourg in Switzerland, and his coworkers [1]. They point out that most data-mining ‘recommender’ systems such as those used by Amazon.com focus on accuracy, measured by testing whether they can reproduce known user preferences. This emphasizes the similarity of recommendations to previous choices, and can lead to self-reinforcing cycles fixated on blockbuster items [2].

But, say the researchers, the most useful recommendations may not be the most similar, but ones that offer the unexpected by introducing diversity. Like Lear’s Fool, they challenge what you thought you knew. Zhang and colleagues show that a judicious blend of algorithms optimized for accuracy and for diversity can actually offer more diversity and accuracy than any of the component algorithms on their own.
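
To make the flavour of that concrete – this is a toy sketch of my own, with invented purchase histories, not the algorithm in the paper – imagine scoring each candidate item partly on its similarity to what a user already owns and partly on its novelty, with a single weight controlling the blend:

from collections import Counter

# Hypothetical purchase histories (invented for illustration): user -> items owned
histories = {
    "alice": {"A", "B", "C"},
    "bob":   {"A", "B", "D"},
    "carol": {"C", "E"},
    "dave":  {"D", "E", "F"},
}

popularity = Counter(item for items in histories.values() for item in items)
all_items = set(popularity)

def accuracy_score(item, user):
    # 'More of the same': how much the item's other owners overlap with this user
    owners = [u for u, items in histories.items() if item in items and u != user]
    if not owners:
        return 0.0
    overlap = sum(len(histories[u] & histories[user]) for u in owners)
    return overlap / (len(owners) * len(histories[user]))

def novelty_score(item):
    # The Fool's contribution: less popular items score higher
    return 1.0 / popularity[item]

def recommend(user, weight=0.5, n=3):
    candidates = all_items - histories[user]
    scored = {item: (1 - weight) * accuracy_score(item, user)
                    + weight * novelty_score(item)
              for item in candidates}
    return sorted(scored, key=scored.get, reverse=True)[:n]

print(recommend("alice", weight=0.0))  # pure similarity: the blockbuster effect
print(recommend("alice", weight=0.5))  # the blend: an obscure item surfaces

With the weight set to zero, Alice is simply offered what people like her already buy; turn it up and the obscure title F climbs the list. The systems studied in the paper are of course far more sophisticated, but the trade-off being tuned is the same.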

The researchers compare this effect with the value of ‘weak ties’ in our friendship networks. While we tend to seek advice from close friends – typically people sharing similar views and preferences – it is often comments from people with whom we have a more limited connection that are the most helpful, because they offer a perspective outside our regular experience.

The same is true in scientific research: scientists from disciplines outside your own can spark new trains of thought, while your fellow specialists trudge along the same track. Without cross-fertilization from outsiders, disciplines risk stagnating. (One recent study implies that astronomy could be in danger of that [3].)

But it seems we instinctively gravitate towards the echo chamber. Networks expert Mark Newman at the University of Michigan has uncovered the stark division in purchases of books on US politics through Amazon [4]. He studied a network of 105 recent books, linked if Amazon indicated that one book was often bought by those who purchased the other. Newman found a pretty clean split into communities containing only ‘liberal’ books and only ‘conservative’ ones, with just two small bridging groups that contained a mixture. There was a similar split in links between political blogs. This clear division, Newman says, ‘is perhaps testament not only to the widely noted polarization of the current political landscape in the United States but also to the cohesion of the two factions.’ Recommender systems that offer ‘more of the same’ can only encourage this Balkanization of the ever-growing universe of information, opinion and choice.
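
For a sense of how such a split shows up in the numbers – again a toy with invented titles, not Newman’s data or his exact analysis – one can feed a small co-purchase graph to a standard modularity-based community finder:

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# An edge means 'often bought together'; the titles are made up for illustration
G.add_edges_from([
    ("Lib1", "Lib2"), ("Lib1", "Lib3"), ("Lib2", "Lib3"),   # one tight cluster
    ("Con1", "Con2"), ("Con1", "Con3"), ("Con2", "Con3"),   # another tight cluster
    ("Lib3", "Bridge"), ("Bridge", "Con3"),                 # a lone bridging title
])

for group in greedy_modularity_communities(G):
    print(sorted(group))

On Newman’s real network of 105 books, this kind of analysis yields two large, almost entirely separate camps.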

Not everyone agrees there’s a problem. In an essay on Salon.com, David Weinberger disputed the notion of the Internet as an echo chamber [5]. He argues that some unspoken common assumptions – among liberals at that time, that George W. Bush was a bad president – allow online conversations to move on to more constructive matters, rather than becoming, say, a tedious litany of Bush-baiting. ‘If you want to see a real echo chamber’, said Weinberger, ‘open up your daily newspaper or turn on your TV.’

If people truly want more of the same, it’ll always be hard to make them hear the Fool’s wisdom. But most recommender systems do want to find what people will like, not just what they think they like. Throwing diversity into the mix is a good start, but the bigger challenge is to figure out how preferences are formed. What are the coordinates of ‘preference space’ and how do we negotiate them? There might, say, be something about the melodic contours or timbres in Beethoven’s music that a fan will find not in other early nineteenth-century composers but in twentieth-century modernists. Some music recommender systems are examining how we classify music according to non-traditional criteria, and using these as the compass directions for navigating music space. Understanding more about such preference-forming structures will not only improve the choices we’re offered but might also tell us something new about how the human brain partitions experience. And we could be in for some delicious surprises – just as when we used to browse through record stores.

References


1. Zhou, T. et al. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1000488107.
2. Fleder, D. & Hosanagar, K. Manag. Sci. 55, 697-712 (2009).
3. Guimerà, R., Uzzi, B., Spiro, J. & Amaral, L. A. N. Science 308, 697-702 (2005).
4. Newman, M. E. J. Proc. Natl Acad. Sci. USA 103, 8577-8582 (2006).
5. http://mobile.salon.com/tech/feature/2004/02/20/echo_chamber/index.html

Monday, February 22, 2010

So what did Darwin get wrong?

I have written a review for the Sunday Times of Jerry Fodor and Massimo Piattelli-Palmarini’s new book What Darwin Got Wrong. There was an awful lot to talk about here, and it was a devil of a job fitting it into the space available and getting it down to the appropriate level. Here’s how the review started (more or less). There’s considerably more to be said, but I’ve got too much else on the go at the moment. Suffice to say, the book is well worth a read, though it is not always easy going.

What Darwin Got Wrong

Jerry Fodor and Massimo Piattelli-Palmarini
Profile, 2010
ISBN 978 1 84668 219 3
Hardback, 262 pages
£20.00

Around 1.6 million years ago, our hairy ancestors began roaming further afield in search of food, and all that trekking got them hot and bothered. So they shed most of their hair and evolved into us, the naked ape.

Thus runs one of countless stories of how evolution is driven by genetic adaptation to the environment: the conventional narrative of Neodarwinism. But according to cognitive scientists Jerry Fodor and Massimo Piattelli-Palmarini, they are all mistaken.

Despite their book’s unobjectionable title – of course there were things Darwin, who knew nothing of genes and DNA, got wrong – Fodor and Piattelli-Palmarini don’t simply think he missed a few details. Although they agree, indeed insist, that all of today’s flora and fauna evolved from earlier species, they don’t think that Darwin’s natural selection from a pool of random mutations explains it.

The arguments warrant serious consideration, but let’s first be clear about one thing. An honest reading of this book offers not a shred of comfort to creationists, intelligent designers and other anti-evolutionary fantasists. That, as the authors must know, won’t prevent the book being misappropriated, nor will it save them from the opprobrium of their peers (Fodor has already had a spat with arch-Darwinist Daniel Dennett).

In Neodarwinian theory, genes mutate at random across generations, and those that bestow an advantageous physiological or behavioural trait (phenotype) spread through a population because they boost reproductive success. But there’s often no simple connection between genes and phenotype. A single gene may have several roles, for example, and genes tend to work in networks so tightly knit that evolution can’t necessarily tinker with them independently of one another.

Naïve accounts of natural selection tend to award it quasi-mystical omnipotence, whereby it can effect just about any change, and every change is interpreted as an adaptation. The Scottish zoologist D’Arcy Thompson rubbished this habit almost a century ago, but it hasn’t gone away. The palette of biology is surely constrained by other factors: perhaps, say, the reason we don’t have three arms or eyes is not that they are non-adaptive but that they are not within the repertoire of fundamental body-forming gene networks.

Fodor and Piattelli-Palmarini also point out how ‘evidence’ for Darwinism is often conflated with evidence for evolution: ‘just look at the fossil record’. And post hoc adaptationist accounts of evolutionary change (such as the one I began with) risk being merely that: plausible but unscientific Just So stories. To the authors, that’s all they can ever be, because Darwinism is a tautology: organisms are ‘adapted’ to their environment because that’s where they live. How well adapted birds are to the air, and fish to the sea!

All of this is good stuff, and convincingly calls time on simplistic Neodarwinism. But as Fodor and Piattelli-Palmarini admit, many biologists today will say ‘Oh, I’m not that kind of Darwinist’: they know (even if they rarely say it publicly) that evolution is much more complicated. They agree that there is more to life than Darwin.

But Fodor and Piattelli-Palmarini seem to want to banish him entirely, claiming that natural selection is logically flawed because it can’t possibly identify what exactly is selected for. Their argument is opaque, however. Are frogs selected to eat flies, or to eat buzzing black things which just happen invariably to be flies? The authors don’t explain why the simple answer – find out in an experiment with frogs and faux-flies – won’t do. Their objection seems to be that evolution can’t do the experiment, because it is non-intentional and can’t know what it is looking for (they say Darwin’s reliance on stock- and pigeon-breeding therefore involved a false analogy for evolution). And they worry that we can’t distinguish adaptations from genetic changes that ‘free-ride’ on them.

But blind natural selection does work in principle, as computer models unambiguously show. These models are highly, perhaps excessively simplified. But if the same thing doesn’t happen as a rule in real populations, vague logical arguments won’t tell us why not. And if we struggle to work out precisely what trait has ‘adapted’, surely that’s our problem, not nature’s.
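
For anyone who wants to see what is meant, here is the merest toy – a sketch of my own, not any published simulation – in which nothing ‘knows’ what is being selected for, yet mean fitness climbs all the same:

import random

# Toy 'blind selection': random mutation plus reproduction biased towards
# fitter individuals, with fitness defined arbitrarily as the number of 1s.
GENOME_LENGTH = 50
POP_SIZE = 100
MUTATION_RATE = 0.01

def fitness(genome):
    return sum(genome)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POP_SIZE)]

print("mean fitness at the start:", sum(map(fitness, population)) / POP_SIZE)

for generation in range(200):
    weights = [fitness(g) + 1 for g in population]        # fitter genomes reproduce more
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [mutate(p) for p in parents]

print("mean fitness after selection:", sum(map(fitness, population)) / POP_SIZE)

No intention anywhere, no experimenter deciding what counts as a fly rather than a buzzing black thing – just differential reproduction doing its statistical work.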

In any event, the authors admit that at least some of the many ‘textbook paradigms of adaptationist explanation’ might be perfectly correct. Some certainly are: superbugs have acquired antibiotic-busting genes, which is about as direct an adaptation as you can get. The authors don’t wholly exclude natural selection, then, but say it may simply fine-tune other mechanisms of evolutionary change (whatever they are). Specific adaptations, they say, are historical contingencies, not examples of a general law. In the same way, there may be good specific explanations for why your bus was late this morning, and also last Thursday, but they don’t in themselves amount to a natural law that buses are late. Fair enough, but then to say whether adaptation is the exception or the default we need statistics. The authors are silent on this.

So they don’t quite achieve a coherent story, nor are they able (or perhaps willing) to convey it at a non-specialist level. Even so, they make a persuasive case that the role of natural selection in evolution is ripe for reassessment. To say so should not be seen as scientific heresy or capitulation to the forces of unreason – it’s a brave and welcome challenge.

Monday, February 15, 2010

In which I become a Rock Legend

… or in which my past comes back to amuse me. In the course of a little research to prepare for my talk on The Music Instinct, I discover that buried within the Classic Rock Sequence played by BBC6 last Saturday is yours truly on keyboards. Now there’s a thing. Some day I might show you the photos. (No, that’s not me posing next to Dave Brock, but you know, it almost could have been.)

Sunday, February 14, 2010

The Music Instinct - the story so far

There are some reviews of The Music Instinct in the Sunday Times, the Independent, the Guardian, the Economist and Metro. Most are nice, but Steven Poole in the Guardian, while sending out some good vibes, has some big reservations too. When I first read his review, it struck me as basically friendly, with some intelligent criticisms with which I mostly disagreed. That interpretation just about survives a second reading, but there are some very odd things here.

Most of all, as someone who has long deplored the scientism-ist (you know what I mean) approach to art that denounces anything which doesn’t meet ‘scientific’ criteria (I’ve gently derided that kind of thing in print before), I was disappointed that Poole seemed so determined to impose this reading on the book. I hope anyone who reads it will recognize that the suggestion that I go through music’s repertoire dishing out gold stars or finger-wagging according to whether composers have obeyed or contravened the ‘laws of music cognition’ is a misrepresentation bordering on the grotesque.

He seems uncomfortable with anything that strays beyond the bounds of the physiology and acoustic physics of sound – that’s to say, with ideas about how we interpret music as a coherent sonic entity, why it moves us, what roles factors such as tonality play in our perception – in short, with most of the field of music psychology. Which is naturally a bit of a problem. Of course, some will prefer to leave all that stuff to the realm of the ineffable, but it’s abundantly clear that this would involve a denial of the evidence.

I agree that it’s crucial to maintain a distinction between understanding how the brain processes music and using that to define ‘scientific’ criteria of what is ‘good’ in music. So I’m frankly baffled as to why Poole thinks I am ‘judging’ music. On the contrary, one of my aims is to suggest ways that might make all kinds of music more accessible. The only instance where I might be considered to be using cognitive principles as a tool for criticism is in the case of total serialism (not simply all serialism – I took great pains to make the distinction). I do point out that Schoenberg was wrong to consider tonality as merely an obsolete convention – it is an aid to music cognition. But as I clearly say, being able to make sense of music doesn’t by any means stand or fall on the issue of whether the pitches as a whole have audible hierarchical organization, and so eliminating tonality doesn’t mean one is doomed to write incoherent music. I don’t even criticise total serialism as such, but only those proponents of it who suggest that audiences’ difficulty with it is simply due to their lack of musical education, thereby failing to understand that this technique tends systematically to undermine our natural modes of organizing sound. Their condescension is misplaced.

Speaking of condescension, Poole seems to detect it in the way I illustrate how cognitive principles can be discerned in the way many composers have organized their music. If one wanted to insist that anyone was being condescended to here (and I can’t for the life of me see why that’s necessary), it would more obviously have to be the music psychologists, given a pat on the back for finally figuring out, 300 years later, the aids to cognition that Baroque musicians had been codifying and using in their rules for polyphonic composition.

Mozart and Berg reduced to a series of arithmetical tricks: huh? Says who? Compare Bee Wilson in the Sunday Times: ‘Ball never presumes that music can be reduced to some kind of scientific formula’. Well, you can decide for yourself. In any case, what has arithmetic to do with it?

Now, one could certainly read some of the music psychology literature and come away with the impression that indeed all there is to Mozart is a graph of tension and release. But I criticise that view, and point out that not only is it problematic in its own terms but it clearly leaves out something important about music’s affective power that no one has even begun to quantify. Marek Kohn’s comment that I insist on taking the science no further than is warranted directly contradicts Poole’s accusation of scientism.

On performance: I can think of few less controversial statements about music than that performance technique can bring a piece to life or kill it stone dead. To interpret this as saying that the performer does all the work and the composer has next to nothing to do with the way a piece of music is perceived – as subscribing to what Poole calls ‘superstitions about the supremacy of performance and improvisation’ – seems wilfully perverse (not to mention being contradicted by just about everything else I say in the book). But this reflects the dismayingly adversarial way in which Poole seems to have read the whole book. It is science vs art, logic vs intuition, tonal vs atonal, composer vs performer, notated vs non-notated music. And he seems to feel that to praise one side of such dualisms is to condemn the other. I find such dichotomies pointless and unhelpful.

On ‘originality’ of melodies: I don’t ‘praise’ composers for scoring well in this measure, but on the contrary say explicitly that ‘originality’ in this sense bears no relation to musical quality.

On notation: Having played in a big band, I know very well that some jazz forms use and even depend on scored music. Poole is right to point out that my wording seems to suggest otherwise (especially to someone with absolutist tendencies). Must put that right. When I said that notated music can’t evolve (or more accurately, it can only do so within very narrow parameters), I didn’t mean to imply that all music should evolve. I meant only that some forms (such as ‘traditional’, or what tends to be called folk) are best served by reserving that freedom, and therefore by using only very sketchy forms of notation as aides-memoire where it is needed at all. (If my statement here struck Poole as ludicrous, didn’t it occur to him that he might have misconstrued it? Still, I’ll spell this out in the paperback edition too.) As for notation in pop music, I mean ‘pop music’ in the sense in which it is generally used: the popular music coeval with and dependent on the democratization of recording technology and radio, starting roughly in the 1950s, and not ‘popular music’ of the prewar era.

Blimey, all this sounds a bit aggrieved. I’ve no desire to start an argument, especially with someone whose reviews I always read avidly, and especially especially with someone who so recently had kind words for another of my books. But I’m genuinely puzzled about what is going on in this review, and simply want to make my position plain. It is no surprise that some people will recoil at the idea of ‘analysing’ music with scientific methods, but Poole is extremely technically savvy and not in the slightest a scientophobe. I wonder if there is some over-compensation going on here from a technophile (something I sometimes suspect in myself). And if you saw a double entendre in that, you’re right: Poole’s suggestion that techno is a good place to explore for examples of rhythmic violations and the significance of timbre is an excellent one – wish I’d thought of it.

Postscript: I've now had a constructive exchange with Steven. While we don't agree on everything, we're not so divergent in our views either, and I now have a better appreciation of the points of misunderstanding.