Friday, April 30, 2010

A supercomputing crystal ball

Here's a little piece I've just written for Nature's news blog The Great Beyond.


The good news is that your future can be predicted. The bad news is that it’ll cost a billion euros. That, at least, is what a team of scientists led by Dirk Helbing of the ETH in Switzerland believes. And as they point out, a billion euros is small fare compared with the bill for the current financial crisis – which might conceivably have been anticipated with the massive social-science simulations they want to establish.

This might seem the least auspicious moment to start placing faith in economic modelling, but Helbing’s team proposes to transform the way it is done. They will abandon the discredited and doctrinaire old models in favour of ones built from the bottom up, which harness the latest understanding of how people behave and act collectively rather than reducing the economic world to caricature for the sake of mathematical convenience.

And it is not just about the economy, stupid. The FuturICT ‘knowledge accelerator’, the proposal for which has just been submitted to the European Commission’s Flagship Initiatives scheme for funding visionary research, would address a wide range of environmental, technological and social issues using supercomputer simulations developed by an interdisciplinary team. The overarching aim is to provide systematic, rational and evidence-based guidance to governmental and international policy-making, free from the ideological biases and wishful thinking typical of current strategies.

Helbing’s confidence in such an approach has been bolstered by his and others’ success in modelling social phenomena ranging from traffic flow in cities to the dynamics of industrial production. Modern computer power makes it possible to simulate such systems using ‘agent-based models’ that look for large-scale patterns and regularities emerging from the interaction of large numbers of individual agents.
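To give a flavour of what an agent-based model looks like, here is a toy version of the classic Nagel-Schreckenberg traffic model in Python (my own sketch, nothing to do with any code from Helbing's group; all the parameters are illustrative). Each car follows simple local rules, yet jams emerge collectively, with no agent or rule dictating them from above:

```python
import random

def nagel_schreckenberg(n_cells=100, n_cars=20, vmax=5, p_slow=0.3,
                        steps=200, seed=1):
    """Toy agent-based traffic model on a circular road.
    Each car is an agent with a position and a velocity; congestion
    emerges from local interactions, not from any global rule."""
    rng = random.Random(seed)
    pos = sorted(rng.sample(range(n_cells), n_cars))
    vel = [0] * n_cars
    for _ in range(steps):
        for i in range(n_cars):
            # distance to the car ahead (the road is a ring)
            gap = (pos[(i + 1) % n_cars] - pos[i]) % n_cells - 1
            vel[i] = min(vel[i] + 1, vmax, gap)        # accelerate, then brake
            if vel[i] > 0 and rng.random() < p_slow:   # random dawdling
                vel[i] -= 1
        pos = [(x + v) % n_cells for x, v in zip(pos, vel)]
        order = sorted(range(n_cars), key=pos.__getitem__)
        pos, vel = [pos[i] for i in order], [vel[i] for i in order]
    return sum(vel) / n_cars   # mean speed over the final step

# A lightly loaded road flows freely; a crowded one jams spontaneously:
free_flow = nagel_schreckenberg(n_cars=10)
congested = nagel_schreckenberg(n_cars=60)
```

Runs of such models over many densities reproduce the characteristic 'fundamental diagram' of real traffic, which is the kind of large-scale regularity the FuturICT team hopes to extract for social systems generally.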

The FuturICT proposal includes the establishment of ‘Crisis Observatories’ that might identify impending problems such as financial crashes, wars and social unrest, disease epidemics, and environmental crises. It would draw on expertise in fields ranging from engineering, law, anthropology and geosciences to physics and mathematics. Crisis Observatories could be operational by 2016, the FuturICT team says, and by 2022 the programme would incorporate a Living Earth Simulator that couples human social and political activity to the dynamics of the natural planet.

Sceptics may dismiss the idea as a hubristic folly that exaggerates our ability to understand the world we have created. But when we compare the price tag to the money we devote to getting a few humans outside our atmosphere, it could be a far greater folly not to give the idea a chance.

Monday, April 26, 2010

Big quantum


Here’s a little piece I wrote for Prospect, who deemed in the end that it was too hard for their readers. But I am sure it is not, dear blogspotter, too hard for you.


If you think quantum physics is hard to understand, you’re probably confusing understanding with intuition. Don’t assume, as you fret over the notion that a quantum object can be in two places at once, that you’re simply too dumb to get your mind around it. Nobody can, not even the biggest brains in physics. The difference between quantum physicists and the rest of us is that they’ve elected to just accept the weirdness and get on with the maths – as physicist David Mermin puts it, to ‘shut up and calculate.’

But this pragmatic view is losing its appeal. Physicists are no longer satisfied merely with the supreme ability of quantum theory to predict how stuff behaves at very small scales, and are following the lead of its original architects, such as Bohr, Heisenberg and Einstein, in demanding to know what it means. As Lucien Hardy and Robert Spekkens of the high-powered Perimeter Institute in Canada wrote recently, ‘quantum theory is very mysterious and counterintuitive and surprising and it seems to defy us to understand it. And so we take up the challenge.’

This is something of an act of faith, because it isn’t obvious that our minds, having evolved in a world of classical physics where objects have well-defined positions and velocities, can ever truly conceptualize the quantum world where, apparently, they do not. That difference, however, is part of the problem. If the microscopic world is quantum, why doesn’t everything behave that way? Where, once we reach the human scale, has the weirdness gone?

Physicists talk blithely about this happening in a ‘quantum-to-classical transition’, which they generally locate somewhere between the size of large molecules and of living cells – between perhaps a billionth and a millionth of a metre (a nanometre and a micrometre). We can observe subatomic particles obeying quantum rules – that was first done in 1927, when electrons were seen acting like interfering waves – but we can’t detect quantumness in objects big enough to see with the naked eye.

Erwin Schrödinger tried to force this issue by placing the microcosm and the macrocosm in direct contact. In his famous thought experiment, the fate of a hypothetical cat depended on the decay of a radioactive atom, dictated by quantum theory. Because quantum objects can be in a ‘superposition’ of two different states at once, this seemed to imply that the cat could be both alive and dead. Or at least, it could until we looked, for the ‘Copenhagen’ interpretation of quantum theory proposed by Bohr and Heisenberg insists that superpositions are too delicate to survive observation: when we look, they collapse into one state or the other.

The consensus is now that the cross-over from quantum to classical rules involves a process called decoherence, in which delicate quantum states get blurred by interacting with their teeming, noisy environment. An act of measurement using human-scale instruments therefore induces decoherence. According to one view, decoherence imprints a restricted amount of information about the state of the quantum object on its environment, such as the dials of our measuring instruments; the rest is lost forever. Physicist Wojciech Zurek thinks that the properties we measure this way are just those that can most reliably imprint ‘copies’ of the relevant information about the system under inspection. What we measure, then, are the ‘fittest’ states – which is why Zurek calls the idea quantum Darwinism. It has the rather remarkable corollary that the imprinted copies can be ‘used up’, so that repeated measurements will eventually stop giving the same result: measurement changes the outcome.

These are more than just esoteric speculations. Impending practical applications of quantum superpositions, for example in quantum cryptography for encoding optical data securely, or super-fast quantum computers that perform vast numbers of calculations in parallel, depend on preserving superpositions by avoiding decoherence. That’s one reason for the current excitement about experiments that probe the contested ‘middle ground’ between the unambiguously quantum and classical worlds, at scales of tens of nanometres.

Andrew Cleland and coworkers at the University of California have now achieved a long-sought goal in this arena: to place a manufactured mechanical device, big enough to see sharply in the electron microscope, in a quantum superposition of states. They made a ‘nanomechanical resonator’ – a strip of metal and ceramic almost a micrometre thick and about 30 micrometres long, fixed at one end like the reed of a harmonica – and cooled it down to within 25 thousandths of a degree of absolute zero. The strip is small enough that its vibrations follow quantum rules when cold enough, which means that they can only have particular frequencies and energies (heat will wash out this discreteness). The researchers used a superconducting electrical circuit to induce vibrations, and they report in Nature that they could put the strip into a superposition of two states – in effect, as if it is both vibrating and not vibrating at the same time.
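A back-of-the-envelope calculation shows why such heroic refrigeration is needed (my own sketch; I have assumed a resonator frequency of about 6 GHz, which is the right ballpark for such microwave-frequency devices, not a figure taken from the paper). The mean number of thermal vibrational quanta in the strip must fall well below one before its quantum ground state can show through:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K

def mean_phonons(freq_hz, temp_k):
    """Bose-Einstein mean occupation of a vibrational mode:
    n = 1 / (exp(h_bar * omega / kT) - 1)."""
    return 1.0 / math.expm1(hbar * 2 * math.pi * freq_hz / (kB * temp_k))

f = 6e9                          # assumed resonator frequency, Hz
cold = mean_phonons(f, 0.025)    # 25 mK, as in the experiment
warm = mean_phonons(f, 4.0)      # liquid-helium temperature, for contrast
```

At 25 millikelvin the mode holds essentially zero thermal quanta and is almost certainly in its quantum ground state; even at liquid-helium temperature it would hold a dozen or so, and the discreteness would be washed out.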

Sadly, these vibrations are too small for us to truly ‘see’ what an object looks like that is both moving and not moving. But even more dramatic incursions of quantum oddness might soon be in store. Last year a team of European scientists outlined a proposal to create a real Schrödinger’s cat, substituting an organism small enough to stand on the verge of the quantum world: a virus. They suggested that a single virus suspended by laser beams could be put into a superposition of moving and stationary states. Conceivably, they said, this could even be done with tiny, legged animals called tardigrades or ‘water bears’, a few tenths of a millimetre long. If some way could be devised to link the organism’s motion to its biological behaviour, what then would it do while simultaneously moving and still? Nobody really knows.

Wednesday, April 21, 2010

Peter's patterns

I have a little piece on the BBC Focus site about the work of sculptor Peter Randall-Page, with whom I had the pleasure of discussing pattern formation and much else at Yorkshire Sculpture Park last month. I will put an extended version of this piece on my web site shortly (under ‘Patterns’) in which there are lots more stunning pictures of Peter’s work and natural patterns.

Friday, April 09, 2010

The right formula


Message to a heedless world: Please remember that the O in the formula H2O is a capital O meaning oxygen, not a zero meaning zero. Water is composed of hydrogen and oxygen, not hydrogen and nothing.

Heedless world replies: Get a life, man.

Heedless world continues (after some thought): How do you know the difference anyway?

Me: Zeros are narrower.

Heedless world: This is truly sad.

Tuesday, April 06, 2010

An uncertainty principle for economists?


Here’s the pre-edited version of my latest Muse for Nature News. The paper I discuss here is very long but also very ambitious, and well worth a read.
**********************************************************************
Bad risk management contributed to the current financial crisis. Two economists believe the situation could be improved by gaining a deeper understanding of what is not known.

Donald Rumsfeld is an unlikely prophet of risk analysis, but that may be how posterity will anoint him. His remark about ‘unknown unknowns’ was derided at the time as a piece of meaningless obfuscation, but more careful reflection suggests he had a point. It is one thing to recognize the gaps and uncertainties in our knowledge of a situation, another to acknowledge that entirely unforeseen circumstances might utterly change the picture. (Whether you subscribe to Rumsfeld’s view that the challenges in managing post-invasion Iraq were unforeseeable is another matter.)

Contemporary economics can’t handle the unknown unknowns – or more precisely, it confuses them with known unknowns. Financial speculation is risky by definition, yet the danger is not that the risks exist, but that the highly developed calculus of risk in economic theory – some of which has won Nobel prizes – gives the impression that they are under control.

The reasons for the current financial crisis have been picked over endlessly, but one common view is that it involved a failure in risk management. It is the models for handling risk that Nobel laureate economist Joseph Stiglitz seemed to have in mind when he remarked in 2008 that ‘Many of the problems our economy faces are the result of the use of misguided models. Unfortunately, too many [economic policy-makers] took the overly simplistic models of courses in the principles of economics (which typically assume perfect information) and assumed they could use them as a basis for economic policy’ [1].

Facing up to these failures could prompt the bleak conclusion that we know nothing. That’s the position taken by Nassim Nicholas Taleb in his influential book The Black Swan [2], which argues that big disruptions in the economy can never be foreseen, and yet are not anything like as rare as conventional theory would have us believe.

But in a preprint on arXiv, Andrew Lo and Mark Mueller of MIT’s Sloan School of Management offer another view [3]. They say that what we need is a proper taxonomy of risk – not unlike, as it turns out, Rumsfeld’s infamous classification. In this way, they say, we can unite risk assessment in economics with the way uncertainties are handled in the natural sciences.

The current approach to uncertainty in economics, say Lo and Mueller, suffers from physics envy. ‘The quantitative aspirations of economists and financial analysts have for many years been based on the belief that it should be possible to build models of economic systems – and financial markets in particular – that are as predictive as those in physics,’ they point out.

Much of the foundational work in modern economics took its lead explicitly from physics. One of its principal architects, Paul Samuelson, has admitted that his seminal book Foundations of Economic Analysis [4] was inspired by the work of mathematical physicist Edwin Bidwell Wilson, a protégé of the pioneer of statistical physics Willard Gibbs.

Physicists were by then used to handling the uncertainties of thermal noise and Brownian motion, which create a Gaussian or normal distribution of fluctuations. The theory of Brownian random walks was in fact first developed by the mathematician Louis Bachelier in 1900 to describe fluctuations in economic prices.

Economists have known since the 1960s that these fluctuations don’t in fact fit a Gaussian distribution at all, but are ‘fat-tailed’, with a greater proportion of large-amplitude excursions. But many standard theories have failed to accommodate this, most notably the celebrated Black-Scholes formula used to calculate options pricing, which is actually equivalent to the ‘heat equation’ in physics.
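The practical difference between Gaussian and fat-tailed fluctuations is easy to demonstrate numerically (a hypothetical illustration of my own, not any model from the economics literature). Count how often 'five-sigma' events turn up in samples from a normal distribution and from a fat-tailed Student-t distribution with the same number of draws:

```python
import random
import statistics

def extreme_fraction(samples, threshold=5.0):
    """Fraction of samples lying more than `threshold` standard
    deviations from zero."""
    sd = statistics.pstdev(samples)
    return sum(abs(x) > threshold * sd for x in samples) / len(samples)

rng = random.Random(0)
n = 200_000

gauss = [rng.gauss(0, 1) for _ in range(n)]

# A crude fat-tailed sample: Student-t with 3 degrees of freedom,
# built as a normal variate divided by the root of a scaled chi-squared.
def student_t3():
    z = rng.gauss(0, 1)
    chi2 = sum(rng.gauss(0, 1) ** 2 for _ in range(3)) / 3
    return z / chi2 ** 0.5

fat = [student_t3() for _ in range(n)]
```

Under the Gaussian a five-sigma event has probability of roughly six in ten million, so 200,000 draws will almost never contain one; in the fat-tailed sample such events turn up by the hundreds. A risk model that assumes the first distribution while the world obeys the second will be blindsided on a regular basis.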

But incorrect statistical handling of economic fluctuations is a minor issue compared with the failure of practitioners to distinguish fluctuations that are in principle modellable from those that are more qualitative – to distinguish, as Lo and Mueller put it, trading decisions (which need maths) from business decisions (which need experience and intuition).

The conventional view of economic fluctuations – that they are due to ‘external’ shocks to the market, delivered for example by political events and decisions – has some truth in it. And these external factors can’t be meaningfully factored into the equations as yet. As the authors say, from July to October 2008, in the face of increasingly negative prospects for the financial industry, the US Securities and Exchange Commission intervened to impose restrictions on certain companies in the financial services sector. ‘This unanticipated reaction by the government’, say Lo and Mueller, ‘is an example of irreducible uncertainty that cannot be modeled quantitatively, yet has substantial impact on the risks and rewards of quantitative strategies.’

They propose a five-tiered categorization of uncertainty, from the complete certainty of Newtonian mechanics, through noisy systems and those that we are forced to describe statistically because of incomplete knowledge about deterministic processes (as in coin tossing), to ‘irreducible uncertainty’, which they describe as ‘a state of total ignorance that cannot be remedied by collecting more data, using more sophisticated methods of statistical inference or more powerful computers, or thinking harder and smarter.’

The authors think that this is more than just an enumeration of categories, because it provides a framework for how to think about uncertainties. ‘It is possible to “believe” a model at one level of the hierarchy but not at another’, they say. And they sketch out ideas for handling some of the more challenging unknowns, as for example when qualitatively different models may apply to the data at different times.

‘By acknowledging that financial challenges cannot always be resolved with more sophisticated mathematics, and incorporating fear and greed into models and risk-management protocols explicitly rather than assuming them away’, Lo and Mueller say, ‘we believe that the financial models of the future will be considerably more successful, even if less mathematically elegant and tractable.’

They call for more support of post-graduate economic training to create a cadre of better informed practitioners, more alert to the limitations of the models. That would help; but if we want to eliminate the ruinous false confidence engendered by the clever, physics-aping maths of economic theory, why not make it standard practice to teach everyone who studies economics at any level that these models of risk and uncertainty apply only to specific and highly restricted varieties of it?

References
1. Stiglitz, J. New Statesman, 16 October 2008.
2. Taleb, N. N. The Black Swan (Allen Lane, London, 2007).
3. Lo, A. W. & Mueller, M. T. Preprint http://www.arxiv.org/abs/1003.2688.
4. Samuelson, P. A. Foundations of Economic Analysis (Harvard University Press, Cambridge, 1947).

Thursday, April 01, 2010

Bursting the genomics bubble


Here’s the pre-edited version of a Muse that’s just gone up on Nature News. There’s a bunch of interesting Human Genome Project-related stuff on the Nature site to mark the 10th anniversary of the first draft of the genome (see here and here and here, as well as comments from Francis Collins and Craig Venter). Some is celebratory, some more thoughtful. Collins considers his predictions to have been vindicated – with the exception that ‘The consequences for clinical medicine have thus far been modest’. Now, did you get the sense at the time that it was precisely the potential for advancing clinical medicine that was the HGP’s main selling point? Venter is more realistic, saying ‘Phenotypes — the next hurdle — present a much greater challenge than genotypes because of the complexity of human biological and clinical information. The experiments that will change medicine, revealing the relationship between human genetic variation and biological outcomes such as physiology and disease, will require the complete genomes of tens of thousands of humans together with comprehensive digitized phenotype data.’ Hmm… not quite what the message was at the time, although in fairness Craig was not really one of those responsible for it.

*********************************************************************
The Human Genome Project attracted investment beyond what a rational analysis would have predicted. There are pros and cons to that.

If you were a venture capitalist who had invested in the sequencing of the human genome, what would you now have to show for it? For scientists, the database of the Human Genome Project (HGP) may eventually serve as the foundation of tomorrow’s medicine, in which drugs will be tailored personally to your own genomic constitution. But for a return to the bucks you invested in this grand scheme, you want medical innovations here and now, not decades down the line. Ten years after the project’s formal completion, there’s not much sign of them.

A team of researchers in Switzerland now argue in a new preprint [1] that the HGP was an example of a ‘social bubble’, analogous to the notorious economic bubbles in which investment far outstrips any rational cost-benefit analysis of the likely returns. Monika Gisler, Didier Sornette and Ryan Woodard of ETH in Zürich say that ‘enthusiastic supporters of the HGP weaved a network of reinforcing feedbacks that led to a widespread endorsement and extraordinary commitment by those involved in the project.’

Some scientists have already suggested that the benefits of the HGP were over-hyped [2]. Even advocates now admit that the benefits for medicine may be a long time coming, and will require further advances in understanding, not just the patience to sort through all the data.

This stands in contrast to some of the claims made while the HGP was underway between 1990 and 2003. In 1999 the International Human Genome Sequencing Consortium (IHGSC) leader Francis Collins claimed that the understanding gained by the sequencing effort would ‘eventually allow clinicians to subclassify diseases and adapt therapies to the individual patient’ [3]. That might happen one day, but we’re still missing fundamental understanding of how even diseases with a known heritable risk are related to the makeup of our genomes [4]. Collins’ portrait of a patient who, in 2010, is prescribed ‘a prophylactic drug regimen based on the knowledge of [his] personal genetic data’ is not yet on the horizon. And going from knowledge of the gene to a viable therapy has proved immensely challenging even for a single-gene disease as thoroughly characterized as cystic fibrosis [5]. Collins’ claim, shortly after the unveiling of the first draft of the human genome in June 2000, that ‘new gene-based “designer drugs” will be introduced to the market for diabetes mellitus, hypertension, mental illness and many other conditions’ [6] no longer seems a foregone conclusion, let alone a straightforward extension of the knowledge of all 25,000 or so genes in the human genome.

This does not, in the analysis of Gisler and colleagues, mean that the HGP was money poorly spent. Some of the benefits are already tangible, such as much faster and cheaper sequencing techniques; others may follow eventually. The researchers are more interested in the issue of how, if the HGP was such a long-term investment, it came to be funded at all. Their answer invokes the notion of bubbles borrowed from the economic literature, which Sornette has previously suggested [7] as a driver of other technical innovations such as the mid-nineteenth-century railway boom and the explosive growth of information technology at the end of the twentieth century. In economics, bubbles seem to be an expression of what John Maynard Keynes called ‘animal spirits’, whereby the instability stems from ‘the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than mathematical expectations’ [8]. In economics such bubbles can end in disastrous speculation and financial ruin, but in technology they can be useful, creating long-lasting innovations and infrastructures that would have been deemed too risky a venture under the cold glare of reason’s spotlight.

For this reason, Gisler and colleagues say, it is well worth understanding how such bubbles occur, for this might show governments how to catalyse long-term thinking that is typically (and increasingly) absent from their own investment strategies and those of the private sector. In the case of the HGP, the researchers argue, the controversial competition between the public IHGSC project and the private enterprise conducted by the biotech firm Celera Genomics worked to the advantage of both, creating a sense of anticipation and hope that expanded the ‘social bubble’ as well as in the end reducing the cost of the research by engaging market mechanisms.

To that extent, the ‘exuberant innovation’ that social bubbles can engender seems a good thing. But it’s possible that the HGP will never really deliver economically or medically on such massive investment. Worse, the hype might have incubated a harmful rash of genetic determinism. As Gisler and colleagues point out, other ‘omics’ programmes are underway, including an expensively funded NIH initiative to develop high-throughput techniques for solving protein structures. Before animal spirits transform this into the next ‘revolution in medicine’, it might be wise to ask whether the HGP has something to tell us about the wisdom of collecting huge quantities of stamps before we know anything about them.

References
1. Gisler, M., Sornette, D. & Woodard, R. Preprint http://www.arxiv.org/abs/1003.2882.
2. Roberts, L. et al., Science 291, 1195-1200 (2001).
3. Collins, F. S. New England J. Med. 341, 28-37 (1999).
4. Dermitzakis, E. T. & Clark, A. G. Science 326, 239-240 (2009).
5. Pearson, H. Nature 460, 164-169 (2009).
6. Collins, F. S. & McKusick, V. A. J. Am. Med. Assoc. 285, 540-544 (2001).
7. Sornette, D. Socio-econ. Rev. 6, 27-38 (2008).
8. Keynes, J. M., The General Theory of Employment, Interest and Money (Macmillan, London, 1936).

The Times does The Music Instinct


There are some extracts from The Music Instinct in the Eureka science supplement of the Times today, although oddly they don’t seem yet to have put it online. It’s amongst a real mash-up of stuff about the ‘science of music’, which is all kind of fun but slightly weird to find my words crash-landed there. The editors did a pretty good job, however, of plucking out bits of text and getting them into a fairly self-contained form, when they were generally part of a much longer exposition.

I notice in Eureka that Brian May, bless him, doesn’t believe in global warming. “Most of my most knowledgeable scientist friends don’t believe that global warming exists”, he says. Come on Brian, name them. Have you been chatting to the wrong Patrick Moore? (Actually, I’m not too sure if chatting to the other one would help very much.)

Tuesday, March 30, 2010

Magnets mess with the mind's morality

Here's a little snippet I wrote for Nature's news blog. The authors seem to take it as read that magnets can alter brain functioning in this manner, but I find that remarkable.


Talk about messing with your mind. A new study [www.pnas.org/cgi/doi/10.1073/pnas.0914826107] by neuroscientist Liane Young and colleagues at Harvard University does exactly that: the researchers used magnetic signals applied to subjects’ craniums to alter their judgements of moral culpability. The magnetic stimulus made people less likely to condemn others for attempting but failing to inflict harm.

Most people make moral judgements of others’ actions based not just on their consequences but also on some view of what the intentions were. That makes us prepared to attribute diminished responsibility to children or people with severe mental illness who commit serious offences: it’s not just a matter of what they did, but how much they understood what they were doing.

Neuroimaging studies have shown that the attribution of beliefs to other people seems to involve a part of the brain called the right temporoparietal junction (RTPJ). So Young and colleagues figured that, if they disrupted how well the RTPJ functions, this might alter moral judgements of someone’s action that rely on assumptions about their intention. To do that, they applied an oscillating magnetic signal at 1 Hz to the part of the skull close to the RTPJ for 25 minutes in test subjects, and then asked them to read and respond to an account of an attempted misdemeanour. They also conducted tests while delivering the signal in regular short bursts. In one scenario, ‘Grace’ intentionally puts a white powder from a jar marked ‘toxic’ into her friend’s coffee, but the powder is in fact just sugar and the friend is fine. Was Grace acting rightly or wrongly?

Obvious? You might think differently with a magnetic oscillator fixed to your head. With the stimulation applied, subjects were more likely to judge the morality based on the outcome, as young children do (the friend was fine, so it’s OK), than on the intention (Grace believed the stuff was toxic).

That’s scary. The researchers present this as evidence of the role of the RTPJ in moral reasoning, with implications for how children do it (there is some evidence that the RTPJ is late in maturing) and for conditions such as autism that seem to involve a lack of ability to identify motives in other people. Fair enough. But to most of us it is news – and alarming news – that morality-related brain functions can be disrupted or suspended with a simple electromagnetic coil. If ever a piece of research were destined to incite paranoid fantasies about dictators inserting chips in our heads to alter and control our behaviour, this is it.

Thursday, March 25, 2010

Solar eclipse


This is more or less how my review of Ian McEwan’s new novel Solar in Prospect started out (the final paras got a little garbled in the edit). I’m amused to see that my suggestion here that his modest intentions might head off extreme reactions has been proved wrong. Lorna Bradbury in the Telegraph calls the book McEwan’s best yet, and thinks it should win the Booker (no way). And some found the comic elements ‘extremely funny’. Others think it is a stinker: one reviewer calls it ‘an odd, desultory production, by turns pompous and feebly comic’, and Leo Robson in the New Statesman says McEwan has lost his ear and that ‘With Solar, McEwan has finally committed the folly that we might not have expected from him.’ Really, they are all getting too worked up. Although I wouldn’t go as far as the dismissive comment in the Economist that this is ‘A novel to chuckle over, and chuck away’, it is simply a fairly light, intelligent piece of entertainment. Not, I imagine, that McEwan will be too bothered about any of this.

***********************************************************************

After Saturday, which several reviewers considered (unfairly) to be an insufferably smug depiction of Blair’s Britain in the approach to the invasion of Iraq, it looked as though a place was being prepared for Ian McEwan alongside Martin Amis on the pillory. Our two most celebrated novelists, the story went, were getting above themselves, pronouncing on the state of the nation from what seemed an increasingly conservative position.

Amis seems now to be in some curious quantum superposition of states, defended in a backlash to the backlash while demonized as the misogynistic wicked godfather. His latest novel The Pregnant Widow has been both praised as a return to form and derided as a farrago of caricature and solipsism. But Solar may extricate McEwan from such controversies and reinvest him with the humble status of a storyteller. For the book is a modest entertainment, dare one even say a romp, and essentially a work of genre fiction: lab lit. This genre, a second cousin of the campus novel, draws its plots from the exploits of scientists and the scientific community, and includes such titles as Allegra Goodman’s Intuition and Jonathan Lethem’s As She Climbed Across the Table.

McEwan’s interest in science is well established. The protagonist of Enduring Love is a science journalist, and the plot of Saturday hinged on the technical expertise of its central character, the neuroscientist Henry Perowne. McEwan has spoken about the uses of science in fiction, and has written passionately about the need to tackle climate change.

And that is where Solar comes in. When McEwan mentioned at the Hay Festival in 2008 that his next book had a ‘climate change’ theme, people anticipated some eco-fable set in the melting Arctic. He quickly denied any intention to proselytize; climate change would ‘just be the background hum of the book.’

So it is. Michael Beard, a Nobel laureate physicist resting on the laurels of his seminal work in quantum physics decades ago, is balding, overweight, addictively philandering, and coming to the end of his fifth marriage. Like many Nobel winners he has long ceased any productive science and is now riding the superficial circuit of plenary lectures, honorary degrees, Royal Commissions and advisory boards. Becoming the figurehead of the National Centre for Renewable Energy, marooned near Reading, seemed a good idea at the time, but the centre’s research has become mired in Beard’s ill-advised notion of making a wind turbine. Beard is privately indifferent to the global-warming threat, but when a chance arrives to give his career fresh lustre with a new kind of solar power, he grasps it greedily. With Beard running more on bluster and past glory than on scientific insight, and with his domestic life on autodestruct, we know it will all end badly. The question is simply how long Beard can stay ahead of the game. As the climate-change debate moves from the denialism of the Bush years to Obama and Copenhagen, he is increasingly a desperate, steadily inflating cork borne on the tide.

As ever, McEwan has done his homework. Mercifully, he knows much more than Lethem about how physicists think and work. And he is more successful in concealing his research than he was with the neuroscience shoehorned into Saturday. But not always. Beard’s speech to a group of climate-sceptic corporate leaders reads more like a lecture than a description of one: “Fifty years ago we were putting thirteen billion metric tons of carbon dioxide into the atmosphere every year. That figure has almost doubled.” And when Beard debunks his business partner’s doubts about global warming after the cool years of the late noughties, he gets full marks for science but risks becoming his author’s mouthpiece. “The UN estimates that already a third of a million people a year are dying from climate change” is not the kind of thing anyone says to their friend.

In case you care, the solution to the energy crisis on offer here – the process of ‘artificial photosynthesis’ to split water into hydrogen and oxygen using photocatalysis – is entirely respectable scientifically, albeit hardly the revolutionary breakthrough it is made out to be. Much the same idea was used by Stephen Poliakoff in his 1996 lablit play Blinded by the Sun; McEwan’s clever trick here is to involve quantum-mechanical effects (based on Beard’s Nobel-winning theory) to improve the efficiency, which left the nerd in me wondering if McEwan was aware of recent theories invoking such effects in real photosynthesis. I’m not sure whether to be more impressed if he is or if he isn’t.

McEwan nods toward recent episodes in which science has collided with the world outside the lab. Beard’s off-the-cuff remarks about women in science replay the debacle that engulfed the then Harvard president Larry Summers in 2005, and Beard stands in for Steven Pinker in an ensuing debate on gender differences (although Pinker’s opponent Elizabeth Spelke did a far better demolition job than does Beard’s).

He also makes wry use of personal experience. When he read at Hay a draft of the episode in which Beard eats the crisps of a fellow traveller on a train, thinking they are his own and suppressing fury when the young man ironically helps himself, someone in the audience pointed out that a similar tale of an innocent stranger falsely accused appears in The Hitchhiker’s Guide to the Galaxy. Some newspapers made a weak jibe about plagiarism. In the novel, when Beard recounts the tale in a speech, a lecturer in ‘urban studies and folklore’ accuses him of appropriating a well-known urban myth, making Beard feel that his life has been rendered inauthentic – and the allusion to Douglas Adams is now inserted in the story.

One of the pleasures for a science watcher is identifying the academics from whom Beard has been assembled – I counted at least five. He is a difficult character to place centre-stage, not just selfish, unfaithful and vain but also physically repulsive – McEwan is particularly good at evoking queasiness at Beard’s gluttony and bodily decrepitude. But he has said that he wanted to leave Beard just enough possibility of goodness to engender some sympathy, and he succeeds by a whisker. When the final collapse of Beard’s crumbling schemes arrives (you can see it coming all along), there is room for compassion, even dismay.

Solar is, then, a satisfying and scientifically literate slice of genre literature, marred only slightly by McEwan’s curious addiction to the kind of implausible plot hinge that compromised Enduring Love, Atonement and, most seriously, Saturday. Come the event that places opportunity in Beard’s hands, all the strings and signposts are glaringly evident – I think I even murmured to myself “No, not the corner of the coffee table”. And like the thug Baxter in Saturday, Beard’s wife’s uncouth former lover Tarpin ends up doing things that just don’t ring true – a failure not of ‘character motivation’ (McEwan is too good a writer to belabour that old chestnut) but of sheer plausibility.

In the end, this is McEwan-lite, a confection of contemporary preoccupations that, while lacking the emotional punch of Atonement, the political ambition of Saturday or the honed delicacy of On Chesil Beach, is more fun than any of them. And if it dissuades us from turning McEwan, like Amis, into a cultural icon to be venerated or toppled, so much the better for him and for us.

Monday, March 15, 2010

What went on in February


Here’s my little round-up for the April issue of Prospect, before it is edited to probably a third of this size. I don’t want to sound churlish, in the last item, about what is clearly a useful trial – but it did seem a good example of the kind of thing Colin Macilwain at Nature nailed recently in an excellent article about science and the media.
     I’ve also reviewed Ian McEwan’s new book Solar in this forthcoming issue of Prospect – will post that review shortly. In short: it’s fun.
************************************************************************

As the global warming debate intensifies, expect to hear more about methane, carbon dioxide’s partner in crime as a greenhouse gas. Since it doesn’t come belching from our cars and power stations, methane bulks small in our conscience, but agriculture, gas production, landfills and biomass burning have doubled methane levels in the atmosphere since pre-industrial times and it is a more potent greenhouse gas than CO2. There are immense natural resources of methane, and one doomsday scenario has some of these releasing the gas as a result of warming. A frozen form of methane and water, called methane hydrate, sits at the seafloor in many locations worldwide, but the methane could bubble out if sea temperatures rise. A team has now discovered this happening on the Arctic continental shelf off northeastern Siberia, where the sea water has vastly more dissolved methane than expected. Some think a massive methane burp from hydrate melting 250 million years ago caused environmental changes that wiped out 70-96% of all species on the planet. There’s no reason to panic yet, but I’m just letting you know.

A few scientists and an army of bloggers still insist that global warming has nothing to do with any of this stuff, but is caused by changes in the activity of the sun. If you like that idea (or indeed if you hate it), don’t expect much enlightenment from NASA’s Solar Dynamics Observatory (SDO), launched in February to study the inner workings of the sun. We already know enough about variations in the sun to make the solar-warming hypothesis look flaky. But we don’t really understand what causes those variations. The 11-year sunspot cycle is thought to be the result of changes in the churning patterns of this volatile ball of hot plasma. It causes a small periodic rise and fall in the sun’s energy output, along with the recurrent appearance of sunspots at the height of the cycle, and increases in solar flares that spew streams of charged particles across millions of miles of space, disrupting telecommunications and power grids on Earth and supplying a very practical reason for needing to know more about how our star works. SDO, built and launched at a cost of $856 million, will take images of the sun and detect convective flows of material beneath its surface over the coming solar cycle, which is due to peak around 2013.

A new study from researchers in Newcastle and Ulm of why our cells age does not, as some reports suggest, reveal the ‘secrets of ageing’, but rather debunks the notion of a ‘secret’ at all. Ageing, like embryo growth or cancer, is not a single biochemical process but the net result of a complex network of processes. The new study shows how cells can become locked into a steady decline once they accumulate too much damage to their DNA, so that they don’t go on dividing and thereby risk initiating cancer. Although this process is triggered by the gradual erosion of the protective ‘caps’ at the ends of our chromosomes, called telomeres, it suggests that the story is far more complex than the simplistic picture in which we age because our chromosomes go bald. And it makes a magic bullet for reversing ageing seem even more of a pipe dream.

A cure for peanut allergy could be only three years away, recent headlines said. It’s a cheering prospect for this nasty condition, a source of anxiety for many parents and on very rare occasions a genuinely life-threatening problem. The reports were based on a presentation given by Andrew Clark of Addenbrooke’s Hospital in Cambridge at the meeting of the American Association for the Advancement of Science, an annual jamboree of science news. Clark and his colleagues are about to begin a major clinical trial, following earlier success in desensitizing children to the allergy by ‘training’ the immune system to tolerate initially tiny but steadily increasing doses of peanut. The news is welcome, but also an indication of the rather formulaic nature of much science and health reporting, where everyone seizes on the same story irrespective of whether it is really news. This is, after all, just the announcement of a forthcoming trial, not of its results. And besides, the desensitizing strategy is well established in principle: similar successes were reported recently by two groups at a meeting of the American Academy of Allergy, Asthma and Immunology in New Orleans. 

Friday, February 26, 2010

How bugs build

I have a feature in New Scientist on insect architecture and what we can learn from it, pegged to a very interesting conference that took place in Venice last September. My feature started its life at nigh on twice the length (as many sadly do), and looked at some of the algorithmic architecture discussed at the workshop. I’m going to put a pdf of this long version on my website shortly (it’ll be under the ‘Patterns’ papers).

There's a book in the pipeline from the conference participants (and others), probably to be called Collective Architecture. This lovely image, by the way – a plaster cast of the labyrinth inside a termite nest – was taken by Rupert Soar, mentioned in the article.

Tuesday, February 23, 2010

Told by an idiot

[I have a Muse on Nature News about the perils and benefits of recommender systems. Here’s the pre-edited version.]

Automated recommender systems need to put some jokers in the pack, if we’re not going to end up with narrow-minded tastes.

Medieval monarchy might not have much to recommend it compared to liberal democracy, but here’s one thing in its favour: today our rulers have no Fools. Even if the tradition was honoured more in literature – Shakespeare’s King Lear – than in reality, how often now will a national leader employ someone to laugh at their folly and remind them of bitter truths? More often, cabinets and advisers seem picked for their readiness to confirm their leader’s judgements.

Some people fear that the information age encourages this tendency to spread to the rest of us. The Internet, they say, is a series of echo chambers: people join chat groups to hear others repeat their own opinions. Climate sceptics talk only to other climate sceptics (and accuse climate scientists of doing likewise, perhaps with some justification). DailyMe.com will supply you with only the news you ask to hear, realising the vision of personalized news championed by Nicholas Negroponte of MIT’s Media Lab. The ‘Daily Me’ is now often used in a pejorative sense to decry the insularity this inculcates.

Now it seems you can’t make an online purchase without being recommended other ‘similar’ items. Music browsers such as Search Inside the Music, developed at Sun Labs, find you songs that ‘sound similar’ to ones you like already. But who’s to say you wouldn’t be more interested in stuff unlike what you like already?

That’s the dilemma addressed in a paper in the Proceedings of the National Academy of Sciences by Yi-Cheng Zhang, a physicist at the University of Fribourg in Switzerland, and his coworkers [1]. They point out that most data-mining ‘recommender’ systems such as those used by Amazon.com focus on accuracy, measured by testing whether they can reproduce known user preferences. This emphasizes the similarity of recommendations to previous choices, and can lead to self-reinforcing cycles fixated on blockbuster items [2].

But, say the researchers, the most useful recommendations may not be the most similar, but ones that offer the unexpected by introducing diversity. Like Lear’s Fool, they challenge what you thought you knew. Zhang and colleagues show that a judicious blend of algorithms optimized for accuracy and for diversity can actually offer more diversity and accuracy than any of the component algorithms on their own.
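To make the idea concrete, here is a toy sketch of such a blend. To be clear, this is purely illustrative – the item names, the scores and the simple linear weighting are my own assumptions, not the actual hybrid diffusion algorithm in the Zhou et al. paper:

```python
# Toy sketch: blend an accuracy-oriented score (items similar to past
# choices) with a diversity-oriented one (novel, less popular items)
# using a tunable weight. All names and numbers are invented.

def blend_recommendations(accuracy_scores, diversity_scores, weight=0.5):
    """Rank items by a weighted mix of two scores:
    weight=0 is pure accuracy, weight=1 is pure diversity."""
    items = set(accuracy_scores) | set(diversity_scores)
    combined = {
        item: (1 - weight) * accuracy_scores.get(item, 0.0)
              + weight * diversity_scores.get(item, 0.0)
        for item in items
    }
    return sorted(combined, key=combined.get, reverse=True)

# 'Accuracy' favours safe bets; 'diversity' favours the unexpected.
accuracy = {"bestseller": 0.9, "sequel": 0.8, "obscure_gem": 0.2}
diversity = {"bestseller": 0.1, "sequel": 0.2, "obscure_gem": 0.9}

print(blend_recommendations(accuracy, diversity, weight=0.0))
print(blend_recommendations(accuracy, diversity, weight=0.7))
```

With a purely accuracy-driven ranking the bestseller comes top; nudge the weight towards diversity and the obscure item surfaces – the joker in the pack.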

The researchers compare this effect with the value of ‘weak ties’ in our friendship networks. While we tend to seek advice from close friends – typically people sharing similar views and preferences – it is often comments from people with whom we have a more limited connection that are the most helpful, because they offer a perspective outside our regular experience.

The same is true in scientific research: scientists from disciplines outside your own can spark new trains of thought, while your fellow specialists trudge along the same track. Without fertilization from outsiders, disciplines risk stultifying. (One recent study implies that astronomy could be in danger of that [3].)

But it seems we instinctively gravitate towards the echo chamber. Networks expert Mark Newman at the University of Michigan has uncovered the stark division in purchases of books on US politics through Amazon [4]. He studied a network of 105 recent books, linked if Amazon indicated that one book was often bought by those who purchased the other. Newman found a pretty clean split into communities containing only ‘liberal’ books and only ‘conservative’ ones, with just two small bridging groups that contained a mixture. There was a similar split in links between political blogs. This clear division, Newman says, ‘is perhaps testament not only to the widely noted polarization of the current political landscape in the United States but also to the cohesion of the two factions.’ Recommender systems that offer ‘more of the same’ can only encourage this Balkanization of the ever-growing universe of information, opinion and choice.
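Newman’s yardstick for such splits is ‘modularity’: the fraction of links that fall inside communities, minus what a random network with the same connectivity would give. A minimal sketch of the calculation – the book labels and links below are invented stand-ins, not Newman’s actual 105-book network:

```python
# Newman-style modularity Q for an undirected graph: the fraction of
# edges inside communities minus the expected fraction in a
# degree-matched random graph. Q near zero means no real structure.

def modularity(edges, community):
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for i in degree:
        for j in degree:
            if community[i] != community[j]:
                continue
            a_ij = sum(1 for u, v in edges if {u, v} == {i, j})
            q += a_ij - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)

# Two tight clusters of co-purchased books, plus one bridging link.
edges = [("lib1", "lib2"), ("lib2", "lib3"), ("lib1", "lib3"),
         ("con1", "con2"), ("con2", "con3"), ("con1", "con3"),
         ("lib3", "con1")]
split = {"lib1": 0, "lib2": 0, "lib3": 0, "con1": 1, "con2": 1, "con3": 1}
mixed = {"lib1": 0, "lib2": 1, "lib3": 0, "con1": 1, "con2": 0, "con3": 1}

print(round(modularity(edges, split), 3))  # the clean two-camp split
print(round(modularity(edges, mixed), 3))  # a scrambled partition scores lower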

Not everyone agrees there’s a problem. In an essay on Salon.com, David Weinberger disputed the notion of the Internet as an echo chamber [5]. He argues that some unspoken common assumptions – among liberals at that time, that George W. Bush was a bad president – allow online conversations to move on to more constructive matters, rather than becoming, say, a tedious litany of Bush-baiting. ‘If you want to see a real echo chamber’, said Weinberger, ‘open up your daily newspaper or turn on your TV.’

If people truly want more of the same, it’ll always be hard to make them hear the Fool’s wisdom. But most recommender systems do want to find what people will like, not just what they think they like. Throwing diversity into the mix is a good start, but the bigger challenge is to figure out how preferences are formed. What are the coordinates of ‘preference space’ and how do we negotiate them? There might, say, be something about the melodic contours or timbres in Beethoven’s music that a fan will find not in other early nineteenth-century composers but in twentieth-century modernists. Some music recommender systems are examining how we classify music according to non-traditional criteria, and using these as the compass directions for navigating music space. Understanding more about such preference-forming structures will not only improve the choices we’re offered but might also tell us something new about how the human brain partitions experience. And we could be in for some delicious surprises – just as when we used to browse through record stores.

References


1. Zhou, T. et al. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1000488107.
2. Fleder, D. & Hosanagar, K. Manag. Sci. 55, 697-712 (2009).
3. Guimerà, R., Uzzi, B., Spiro, J. & Amaral, L. A. N. Science 308, 697-702 (2005).
4. Newman, M. E. J. Proc. Natl Acad. Sci. USA 103, 8577-8582 (2006).
5. http://mobile.salon.com/tech/feature/2004/02/20/echo_chamber/index.html

Monday, February 22, 2010

So what did Darwin get wrong?

I have written a review for the Sunday Times of Jerry Fodor and Massimo Piattelli-Palmarini’s new book What Darwin Got Wrong. There was an awful lot to talk about here, and it was a devil of a job fitting it into the space available and getting it down to the appropriate level. Here’s how the review started (more or less). There’s considerably more to be said, but I’ve got too much else on the go at the moment. Suffice to say, the book is well worth a read, though it is not always easy going.

What Darwin Got Wrong

Jerry Fodor and Massimo Piattelli-Palmarini
Profile, 2010
ISBN 978 1 84668 219 3
Hardback, 262 pages
£20.00

Around 1.6 million years ago, our hairy ancestors began roaming further afield in search of food, and all that trekking got them hot and bothered. So they shed most of their hair and evolved into us, the naked ape.

Thus runs one of countless stories of how evolution is driven by genetic adaptation to the environment: the conventional narrative of Neodarwinism. But according to cognitive scientists Jerry Fodor and Massimo Piattelli-Palmarini, such stories are all mistaken.

Despite their book’s unobjectionable title – of course there were things Darwin, who knew nothing of genes and DNA, got wrong – Fodor and Piattelli-Palmarini don’t simply think he missed a few details. Although they agree, indeed insist, that all of today’s flora and fauna evolved from earlier species, they don’t think that Darwin’s natural selection from a pool of random mutations explains it.

The arguments warrant serious consideration, but let’s first be clear about one thing. An honest reading of this book offers not a shred of comfort to creationists, intelligent designers and other anti-evolutionary fantasists. That, as the authors must know, won’t prevent the book being misappropriated, nor will it save them from the opprobrium of their peers (Fodor has already had a spat with arch-Darwinist Daniel Dennett).

In Neodarwinian theory, genes mutate at random across generations, and those that bestow an advantageous physiological or behavioural trait (phenotype) spread through a population because they boost reproductive success. But there’s often no simple connection between genes and phenotype. A single gene may have several roles, for example, and genes tend to work in networks so tightly knit that evolution can’t necessarily tinker with them independently of one another.

Naïve accounts of natural selection tend to award it quasi-mystical omnipotence, whereby it can effect just about any change, and every change is interpreted as an adaptation. The Scottish zoologist D’Arcy Thompson rubbished this habit almost a century ago, but it hasn’t gone away. The palette of biology is surely constrained by other factors: perhaps, say, the reason we don’t have three arms or eyes is not that they are non-adaptive but that they are not within the repertoire of fundamental body-forming gene networks.

Fodor and Piattelli-Palmarini also point out how ‘evidence’ for Darwinism is often conflated with evidence for evolution: ‘just look at the fossil record’. And post hoc adaptationist accounts of evolutionary change (such as the one I began with) risk being merely that: plausible but unscientific Just So stories. To the authors, that’s all they can ever be, because Darwinism is a tautology: organisms are ‘adapted’ to their environment because that’s where they live. How well adapted birds are to the air, and fish to the sea!

All of this is good stuff, and convincingly calls time on simplistic Neodarwinism. But as Fodor and Piattelli-Palmarini admit, many biologists today will say ‘Oh, I’m not that kind of Darwinist’: they know (even if they rarely say it publicly) that evolution is much more complicated. They agree that there is more to life than Darwin.

But Fodor and Piattelli-Palmarini seem to want to banish him entirely, claiming that natural selection is logically flawed because it can’t possibly identify what exactly is selected for. Their argument is opaque, however. Are frogs selected to eat flies, or to eat buzzing black things which just happen invariably to be flies? The authors don’t explain why the simple answer – find out in an experiment with frogs and faux-flies – won’t do. Their objection seems to be that evolution can’t do the experiment, because it is non-intentional and can’t know what it is looking for (they say Darwin’s reliance on stock- and pigeon-breeding therefore involved a false analogy for evolution). And they worry that we can’t distinguish adaptations from genetic changes that ‘free-ride’ on them.

But blind natural selection does work in principle, as computer models unambiguously show. These models are highly, perhaps excessively simplified. But if the same thing doesn’t happen as a rule in real populations, vague logical arguments won’t tell us why not. And if we struggle to work out precisely what trait has ‘adapted’, surely that’s our problem, not nature’s.
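The point those models make is easy to demonstrate: a simulation with no foresight whatever – just differential reproduction plus random mutation – reliably increases fitness. The toy model below, with its arbitrary genome length, mutation rate and count-the-ones fitness function, is my own illustration of the principle, not any specific model from the literature:

```python
# Blind selection on a toy population: fitness is simply the number of
# 1-bits in a genome. Each generation, the fitter half reproduces (with
# random mutation) and the rest die. Nothing 'knows' what it is
# selecting for, yet mean fitness climbs. All parameters are arbitrary.
import random

random.seed(42)
GENES, POP, GENERATIONS, MUTATION = 20, 50, 60, 0.01

def fitness(genome):
    return sum(genome)  # count of 1-bits

def mean_fitness(pop):
    return sum(map(fitness, pop)) / len(pop)

population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP)]
start = mean_fitness(population)

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]          # 'selection': fitter half breeds
    population = [
        [1 - g if random.random() < MUTATION else g  # random mutation
         for g in random.choice(parents)]
        for _ in range(POP)
    ]

end = mean_fitness(population)
print(f"mean fitness: {start:.1f} -> {end:.1f}")
```

Run it and the population’s mean fitness rises towards the maximum, purely through non-intentional culling of the less fit – the question in real biology is how often, not whether, this mechanism operates.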

In any event, the authors admit that at least some of the many ‘textbook paradigms of adaptationist explanation’ might be perfectly correct. Some certainly are: superbugs have acquired antibiotic-busting genes, which is about as direct an adaptation as you can get. The authors don’t wholly exclude natural selection, then, but say it may simply fine-tune other mechanisms of evolutionary change (whatever they are). Specific adaptations, they say, are historical contingencies, not examples of a general law. In the same way, there may be good specific explanations for why your bus was late this morning, and also last Thursday, but they don’t in themselves amount to a natural law that buses are late. Fair enough, but then to say whether adaptation is the exception or the default we need statistics. The authors are silent on this.

So they don’t quite achieve a coherent story, nor are they able (or perhaps willing) to convey it at a non-specialist level. Even so, they make a persuasive case that the role of natural selection in evolution is ripe for reassessment. To say so should not be seen as scientific heresy or capitulation to the forces of unreason – it’s a brave and welcome challenge.

Monday, February 15, 2010

In which I become a Rock Legend

… or in which my past comes back to amuse me. In the course of a little research to prepare for my talk on The Music Instinct, I discover that buried within the Classic Rock Sequence played by BBC6 last Saturday is yours truly on keyboards. Now there’s a thing. Some day I might show you the photos. (No, that’s not me posing next to Dave Brock, but you know, it almost could have been.)

Sunday, February 14, 2010

The Music Instinct - the story so far

There are some reviews of The Music Instinct in the Sunday Times, the Independent, the Guardian, the Economist and Metro. Most are nice, but Steven Poole in the Guardian, while sending out some good vibes, has some big reservations too. When I first read his review, it struck me as basically friendly, with some intelligent criticisms with which I mostly disagreed. That interpretation just about survives a second reading, but there are some very odd things here.

Most of all, as someone who has long deplored the scientism-ist (you know what I mean) approach to art that denounces anything which doesn’t meet ‘scientific’ criteria (I’ve gently derided that kind of thing in print before), I was disappointed that Poole seemed so determined to impose this reading on the book. I hope anyone who reads it will recognize that the suggestion that I go through music’s repertoire dishing out gold stars or finger-wagging according to whether composers have obeyed or contravened the ‘laws of music cognition’ is a misrepresentation bordering on the grotesque.

He seems uncomfortable with anything that strays beyond the bounds of the physiology and acoustic physics of sound – that’s to say, with ideas about how we interpret music as a coherent sonic entity, why it moves us, what roles factors such as tonality play in our perception – in short, with most of the field of music psychology. Which is naturally a bit of a problem. Of course, some will prefer to leave all that stuff to the realm of the ineffable, but it’s abundantly clear that this would involve a denial of the evidence.

I agree that it’s crucial to maintain a distinction between understanding how the brain processes music and using that to define ‘scientific’ criteria of what is ‘good’ in music. So I’m frankly baffled as to why Poole thinks I am ‘judging’ music. On the contrary, one of my aims is to suggest ways that might make all kinds of music more accessible. The only instance where I might be considered to be using cognitive principles as a tool for criticism is in the case of total serialism (not simply all serialism – I took great pains to make the distinction). I do point out that Schoenberg was wrong to consider tonality as merely an obsolete convention – it is an aid to music cognition. But as I clearly say, being able to make sense of music doesn’t by any means stand or fall on the issue of whether the pitches as a whole have audible hierarchical organization, and so eliminating tonality doesn’t mean one is doomed to write incoherent music. I don’t even criticise total serialism as such, but only those proponents of it who suggest that audiences’ difficulty with it is simply due to their lack of musical education, thereby failing to understand that this technique tends systematically to undermine our natural modes of organizing sound. Their condescension is misplaced.

Speaking of condescension, Poole seems to detect it in the way I illustrate how cognitive principles can be discerned in the way many composers have organized their music. If one wanted to insist that anyone was being condescended to here (and I can’t for the life of me see why that’s necessary), it would more obviously have to be the music psychologists, given a pat on the back for finally figuring out, 300 years later, the aids to cognition that Baroque musicians had been codifying and using in their rules for polyphonic composition.

Mozart and Berg reduced to a series of arithmetical tricks: huh? Says who? Compare Bee Wilson in the Sunday Times: ‘Ball never presumes that music can be reduced to some kind of scientific formula’. Well, you can decide for yourself. In any case, what has arithmetic to do with it?

Now, one could certainly read some of the music psychology literature and come away with the impression that indeed all there is to Mozart is a graph of tension and release. But I criticise that view, and point out that not only is it problematic in its own terms but it clearly leaves out something important about music’s affective power that no one has even begun to quantify. Marek Kohn’s comment that I insist on taking the science no further than is warranted directly contradicts Poole’s accusation of scientism.

On performance: I can think of few less controversial statements about music than that performance technique can bring a piece to life or kill it stone dead. To interpret this as saying that the performer does all the work and the composer has next to nothing to do with the way a piece of music is perceived (to what Poole calls ‘superstitions about the supremacy of performance and improvisation’), seems wilfully perverse (not to mention being contradicted by just about everything else I say in the book). But this reflects the dismayingly adversarial way in which Poole seems to have read the whole book. It is science vs art, logic vs intuition, tonal vs atonal, composer vs performer, notated vs non-notated music. And he seems to feel that to praise one side of such dualisms is to condemn the other. I find such dichotomies pointless and unhelpful.

On ‘originality’ of melodies: I don’t ‘praise’ composers for scoring well in this measure, but on the contrary say explicitly that ‘originality’ in this sense bears no relation to musical quality.

On notation: Having played in a big band, I know very well that some jazz forms use and even depend on scored music. Poole is right to point out that my wording seems to suggest otherwise (especially to someone with absolutist tendencies). Must put that right. When I said that notated music can’t evolve (or more accurately, it can only do so within very narrow parameters), I didn’t mean to imply that all music should evolve. I meant only that some forms (such as ‘traditional’, or what tends to be called folk) are best served by reserving that freedom, and therefore by using only very sketchy forms of notation as aides-memoire where it is needed at all. (If my statement here struck Poole as ludicrous, didn’t it occur to him that he might have misconstrued it? Still, I’ll spell this out in the paperback edition too.) As for notation in pop music, I mean ‘pop music’ in the sense in which it is generally used: the popular music coeval with and dependent on the democratization of recording technology and radio, starting roughly in the 1950s, and not ‘popular music’ of the prewar era.

Blimey, all this sounds a bit aggrieved. I’ve no desire to start an argument, especially with someone whose reviews I always read avidly, and especially especially with someone who so recently had kind words for another of my books. But I’m genuinely puzzled about what is going on in this review, and simply want to make my position plain. It is no surprise that some people will recoil at the idea of ‘analysing’ music with scientific methods, but Poole is extremely technically savvy and not in the slightest a scientophobe. I wonder if there is some over-compensation from a technophile going on here (something I sometimes suspect in myself). And if you saw a double entendre in that, you’re right: Poole’s suggestion that techno is a good place to explore for examples of rhythmic violations and the significance of timbre is an excellent one – wish I’d thought of it.

Postscript: I've now had a constructive exchange with Steven. While we don't agree on everything, we're not so divergent in our views either, and I now have a better appreciation of the points of misunderstanding.

Wednesday, February 10, 2010

Sharks and Virgin Births

Brian Worley, who runs the entertaining ‘lapsed Christian’ site (I hope that is not an impolite way to describe it) called exminister, has asked if I might comment on a story about ‘virgin births’ in sharks. Brian wondered whether there was a possibility that Christians might be prompted by this report to say ‘look, virgin births are a proven scientific fact…’. It would be a very unwise Christian who did so, since this sort of asexual parthenogenetic reproduction has been known for a very long time in a variety of creatures, including vertebrates such as lizards and fish (it’s not even a new discovery in sharks). To say that it must therefore be possible in humans would be much the same as to say that humans might grow a new limb after amputation, or that they might lay eggs or breathe underwater. Now, far be it from me to underestimate people’s capacity to say some peculiar things, but I think even the most committed fundamentalist might have to admit this one is a bit of a non-starter.

Besides, if anyone did want to use parthenogenesis as a scientific defence of the Virgin Birth, they would also then have to deal with the tricky issue that it would make Jesus a clone of Mary, not to mention the treacherous theology of the nature of Jesus’s flesh and embryogenesis.

All the same, Brian is by no means out on a limb here. When parthenogenesis was first induced artificially, and thus proven as scientific fact, the Virgin Birth was most certainly invoked. This happened in the 1890s, through the work of the German biologist Jacques Loeb at the research centre for marine biology in Woods Hole, Massachusetts. He caused an unfertilized sea-urchin egg to divide by treating it with a mixture of simple salts such as sodium chloride and magnesium chloride – in essence, with a kind of reformulated sea water. In organisms that reproduce sexually, the development of an egg into a new organism generally proceeds only when it has united its genetic material with that of a spermatozoon. But Loeb’s discovery revealed that in some species this was strictly optional. It is not the provision of genes that constitutes the sperm’s role in triggering growth of an embryo, but some other function – one that can be carried out by other means. When an egg is thus provoked to commence parthenogenesis, the resulting organism is, as I say, a clone of the egg’s parent organism, with identical genetic constitution. Loeb had not so much created life as invented cloning.

In 1899, the Boston Herald reported on this work with the headline ‘Creation of Life. Startling Discovery of Prof. Loeb. Lower Animals Produced by Chemical Means. Process May Apply to Human Species. Immaculate Conception Explained.’ That might sound like hysterical over-extrapolation of the sort that makes scientists roll their eyes in despair. But in this case it seems fair enough, for look at what Loeb had written in his account of the discovery:
“The development of the unfertilized egg, that is an assured fact. I believe an immaculate conception may be a natural result of unusual but natural causes. The less a scientist says about that now the better. It is a wonderful subject, and in many ways an awful one. That the human species may be made artificially to reproduce itself by the withdrawal of chemical restraint by other than natural means is a matter we do not like to contemplate. But we have drawn a great step nearer to the chemical theory of life and may already see ahead of us the day when a scientist, experimenting with chemicals in a test tube, may see them unite and form a substance which shall live and move and reproduce itself.”

Loeb’s discovery was no chance affair. He had been experimenting for some years on the control and manipulation of sea-urchin development using salts, at first under the instruction of the American biologist Thomas Hunt Morgan (whose supporters later accused Loeb of stealing his ideas) at Bryn Mawr College in Pennsylvania. But the breakthrough put Loeb in the limelight, a position that he seemed rather to enjoy. Despite early scepticism, his work was widely lauded, and in 1901 he narrowly missed out on being awarded a Nobel prize.

The work was soon followed up by others, and in 1910 the French scientist Eugène Bataillon in Dijon discovered that frog eggs could be induced to start developing into embryos by being pricked with a needle. The embryologist Frank Rattray Lillie, then at the University of Chicago and later founder of the Woods Hole Oceanographic Institution, was particularly interested in whether the trick would work for humans, and hinted that this should be possible. (It is not, apparently – human development differs in some important ways from that of sea urchins and frogs.)

Loeb and other biologists viewed the prospect of human parthenogenesis triggered by salt with somewhat uneasy humour, joking that ‘maiden ladies’ might feel compelled to stop bathing in the sea. More telling was the notion that Loeb had revealed males to be redundant. He apparently received letters from women asking him to induce artificial parthenogenesis in their own ova, while the French embryologist Yves Delage, who worked on the problem, was sent letters congratulating him for freeing women from ‘the shameful bondage of needing a man to become a mother.’ These are prescient themes: artificial means of conceiving a baby, however hypothetical, are now seen both as removing women’s control of their own reproductive destiny (and placing it in the hands of male scientists) and as liberating them to take control unilaterally. Equally telling as a taste of what was to come, another report speculated about the possibility of raising ‘domestic animals and children born without help of a male through an operation which would be regulated scientifically and almost commercially, similar to raising the fry of trout.’ Aldous Huxley waits in the wings.

If you want to know more about this stuff, you’ll be able to get it in my next book Unnatural: The History of the Heretical Idea of Making People, which will be published by Bodley Head some time next year.

Listen up

The sound files (and podcasts) for my book The Music Instinct are now live. Hope they’re useful to anyone reading the book. And I discuss the book in a podcast for Blackwell’s by George Miller.

Morals don't come from God

[I have written a Muse for Nature News on a paper probing the origins of morality (and by extension, of religion). Here it is. This stuff is always provocative, but the most stimulating aspect for me was discovering Jesse Bering’s paper ‘The folk psychology of souls’.]


‘Religion’, novelist Mary McCarthy wrote, ‘is only good for good people.’ Weigh the Inquisition against Martin Luther King, homicidal fanatics against Oxfam, and you have to suspect that religion supplies a context for justifying or motivating moral choices rather than a reason for them.

Into this bitterly contested arena comes a new paper by psychologists Ilkka Pyysiäinen of the University of Helsinki and Marc Hauser at Harvard [1]. They point out that individuals presented with unfamiliar moral dilemmas show no difference in their responses whether or not they have a religious background.

They draw on tests of moral judgements using the web-based Moral Sense Test that Hauser and others have developed at Harvard [2-6], or variants thereof. These present dilemmas ranging from how to handle freeloaders at ‘bring a dish’ dinner parties to the propriety of killing someone to save others. Few if any of the answers can be looked up in holy books.

Thousands of people, with diverse backgrounds, age, education, religious affiliation and ethnicity, have taken the tests. Pyysiäinen and Hauser say the results (mostly still in the publication pipeline) indicate that ‘moral intuitions operate independently of religious background’, although religion may influence responses in a few highly specific cases.

This may speak to the origins of religion. Some have suggested it is an adaptation that promotes cooperation between unrelated individuals [7,8] – for example, discouraging cheating with the notion that ‘God is watching’. Others say that religious behaviour is not specifically selected for, but arises as a by-product of other cognitive functions and capacities [9,10]: for example, religion may have appropriated underlying psychological reasons for a belief in souls and an afterlife.

Since religion has little influence on moral judgements, say Pyysiäinen and Hauser, the latter hypothesis appears more likely. They argue that human populations evolved moral intuitions about behavioural norms – which themselves promoted group cooperation – before they became encoded in religious systems. The researchers suggest we may possess an innate ‘moral grammar’ that guides these intuitions.

The paper plays to a wider issue than this point of largely anthropological interest, for it challenges the assertion commonly made in defence of religion: that it inculcates a moral awareness [11]. If we follow the authors’ line of thinking, religious people are no more moral than atheists.

Pyysiäinen and Hauser do not wholly deny that religion is adaptive. They think that natural selection may have fine-tuned it, from an existing array of moral-determining cognitive functions, to optimize its benefits for cooperation. There is some evidence that religion promotes in-group altruism and self-sacrifice beyond what non-believers display [12].

Their paper may annoy both religious and atheistic zealots (which is usually a good sign). By taking it as given that religion is an evolved social behaviour rather than a matter of divine revelation, it tacitly adopts an atheistic framework. Yet at the same time it assumes that religiosity is a fundamental aspect of human psychology, thereby undermining those who see it as culturally imposed folly that can be erased with a cold shower of rationality.

It’s debatable, however, whether these moral tests are probing religion or culture as a moral-forming agency, since non-believers in a predominantly religious culture are likely to acquire the moral predispositions of the majority. Western culture, say, has long been shaped by Christian morality, as much as it has by the festivals and vocabulary of the church.

All the same, the tests show that neither culture nor religion matters very much: some other factors – presumed to be inherited – dictate our judgements.

That would explain why religious moral doctrine sometimes displays such illogic that one must suspect the judgement itself precedes it. Take, for example, the Catholic church’s early opposition to in vitro fertilization, which sat alongside a fierce prohibition against any other hindrance to procreation. And most religions have the same set of core moral principles about lying, thieving and murder, all with evident adaptive benefits to a group, beyond which the details (Christian original sin, say) are a question of historical contingency (Augustine was a powerful bishop, Pelagius an obscure monk).

But to uncover religion’s roots, is morality necessarily the place to look? It seems hard to credit that the immense cultural investment in religion was made merely to strengthen and fine-tune existing moral circuits. Some place more emphasis on the adaptive rationale for religious symbols and mystical beliefs, rather than morals [10]. And let’s not forget that religion is more than an expression of personal convictions: it is generally institutional, with a status structure.

Yet attempting to explain the origins of such a rich cultural phenomenon as religion is doomed to some extent to be a thankless task. For to ‘explain’ Chartres Cathedral and Bach’s B Minor Mass in terms of non-kin cooperation is obviously to have explained nothing.


References

1. Pyysiäinen, I. & Hauser, M. Trends Cogn. Sci. doi:10.1016/j.tics.2009.12.007 (2010).
2. Huebner, B. et al. Mind & Lang. (in press).
3. Huebner, B. & Hauser, M. D. Philos. Psychol. (in press).
4. Hauser, M. D. et al. Mind & Lang. 22, 1-21 (2007).
5. Banerjee, K. et al. J. Cogn. Cult. (in press).
6. Abarbanell, L. & Hauser, M. D. Cognition (in press).
7. Johnson, D. & Bering, J. Evol. Psych. 4, 219-233 (2006).
8. Johnson, D. & Krüger, O. Polit. Theol. 5, 159-176 (2004).
9. Boyer, P. The Naturalness of Religious Ideas: A Cognitive Theory of Religion (University of California Press, Berkeley, 1994).
10. Bering, J. M. Behav. Brain Sci. 29, 453-462 (2006).
11. Sinnott-Armstrong, W. Morality Without God (Oxford University Press, Oxford, 2009).
12. Bulbulia, J. & Mahoney, A. J. Cogn. Cult. 8, 295-320 (2008).

Tuesday, February 02, 2010

The Music Instinct

If you catch this within a week (or so?) of posting, there’s a little trailer here for my new book The Music Instinct on BBC Radio 3’s Nightwaves. It was the usual story, even on the delightfully thoughtful Nightwaves, i.e. barely time to garble the most basic of messages. But the music is nice. If you’re interested in more and are in striking distance of London, I’m speaking on this subject at the Royal Institution on 16th Feb. (There’s a list of other speaking dates on my web site.)

Wednesday, January 13, 2010

Is minor-key music sad for everyone?

[I wrote a recent Muse for Nature News on an interesting study of the emotional qualities of major and minor keys. Here it is (pre-edited). I should say that I could do no more here than hint at the problems I had with the Bowling et al. paper. It is very stimulating – I’d not seen a claim of this sort made before – but ultimately I find it unconvincing. Their procedure is pretty hard to follow, but I think I’ve got it right in the end. I find it very odd that they are apparently digging out some ‘implied fundamental’ for all the tonic intervals they consider, more or less regardless of whether there is any evidence that such a thing is heard (in the absence of the tonic actually being simultaneously played!). And as I say, the formant ratios for both types of speech are dominated by major intervals, but simply less so for ‘subdued’ speech – that’s to say, this speech doesn’t seem to have a ‘minor’ feel to it (if such a thing is meaningful anyway), but just less strongly major. So the issue is very much open. But in any event, empirical evidence surely shows us that music using modes close to the Western diatonic minor needn’t be sad at all in other cultures.]


Spinal Tap’s Nigel Tufnel famously declared that D minor is “the saddest of keys”. But is music in a minor key inevitably sad?

Why do Handel’s Water Music and the Beatles’ ‘Here Comes the Sun’ sound happy, while Albinoni’s Adagio and ‘Eleanor Rigby’ sound sad? The first two are in major keys, the second two in minor keys. But are the emotional associations of major and minor intrinsic to the notes themselves, or culturally imposed? Many music psychologists suspect the latter, but a new study suggests there’s something fundamentally similar about major and minor keys and the properties of typically happy and sad speech, respectively.

Neuroscientists Daniel Bowling and colleagues at Duke University in Durham, North Carolina, say in a paper in the Journal of the Acoustical Society of America that the sound spectra – the profiles of different acoustic frequencies – in major-key music are close to those in excited speech, while the spectra of minor-key music are more similar to subdued speech [1]. They compared the frequency ratios of the most prominent acoustic peaks in speech (called formants) with those in Western classical music and Finnish folk songs.
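To see what such a comparison involves, here is a minimal sketch in Python – my own illustration of the basic step as I understand it, not the authors’ code – that classifies the ratio of two formant frequencies as the nearest interval of the just-intonation chromatic scale (the interval ratios are standard textbook values, not taken from the paper):

```python
# Toy illustration: name the just-intonation interval nearest to a
# measured frequency ratio. Ratios are standard values (e.g. 6:5 for
# the minor third, 5:4 for the major third), not Bowling et al.'s data.
JUST_INTERVALS = {
    "unison": 1 / 1, "minor second": 16 / 15, "major second": 9 / 8,
    "minor third": 6 / 5, "major third": 5 / 4, "perfect fourth": 4 / 3,
    "tritone": 45 / 32, "perfect fifth": 3 / 2, "minor sixth": 8 / 5,
    "major sixth": 5 / 3, "minor seventh": 9 / 5, "major seventh": 15 / 8,
    "octave": 2 / 1,
}

def nearest_interval(f_high, f_low):
    """Return the named interval whose ratio best matches f_high / f_low."""
    ratio = f_high / f_low
    return min(JUST_INTERVALS, key=lambda name: abs(JUST_INTERVALS[name] - ratio))

# Formants at 600 Hz and 500 Hz have ratio 1.2, i.e. a 6:5 'minor third';
# 500 Hz over 400 Hz gives 1.25, a 5:4 'major third'.
```

The analysis in the paper is considerably more elaborate than this, but its core move – deciding whether a given frequency ratio is closer to a ‘major’ or a ‘minor’ interval – is of this kind.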

The acoustic characteristics of happy, excited speech, which is relatively fast and loud, are common in most cultures, while sadness elicits slower, quieter vocalizations. We have a natural tendency to project such physiognomic associations onto non-sentient objects: a drooping willow is seen as ‘weeping’. There’s good reason to believe that music mimics some of these universal emotional behaviours, supplying a universal vocabulary that permits listeners sometimes to deduce the intended emotion in unfamiliar music. For example, Western listeners were able to judge fairly reliably whether pieces of Kyrgyz, Hindustani and Navajo Native American music were meant to be joyous or sad [2,3], while the Mafa people of Cameroon, who had never heard Western music, could guess better than chance whether extracts were intended to be happy, sad or ‘fearful’ [4]. Here it seems that tempo was the main clue.

Of course, it’s simplistic to suppose that all music is ‘happy’ or ‘sad’, or that all ‘happy’ music is equally and identically ‘happy’, as opposed to joyous, blissful, contented and so forth. But these crude universal indicators of emotion do seem to work across borders.

Is mode (major/minor) another of them? The idea that the minor key, and in particular the musical interval between the first and third note of the scale (a so-called minor third) is intrinsically more anguished than the major (where the major third seems naturally ‘bright’ and optimistic) is so deeply ingrained in Western listeners that many have deemed this to be a ‘natural’ principle of music. This notion was influentially argued by musicologist Deryck Cooke in his 1959 book The Language of Music.

Cooke pointed out that musicians throughout the ages have used minor keys for vocal music with an explicitly sad content, and major keys for happy lyrics. But he failed to acknowledge that this might simply be conventional rather than innate. And when faced with the fact that some cultures, such as Spanish and Slavic, use minor keys for happy music, he offered the patronizing suggestion that such rustic people were inured to a hard life and didn’t expect to be happy.

No such chauvinism afflicts the latest work of Bowling and colleagues. But their conclusions are still open to question. For one thing, they don’t establish that people actually hear in music the characteristic spectral signatures that they identify. Also, they assume that the ratios of frequencies sounded simultaneously in speech (what in music are called harmonic intervals) can be compared with the ratios of frequencies sounded sequentially in music (melodic intervals). And most troublingly, major-type frequency ratios dominate the spectra of both excited and subdued speech, but merely less so in the latter case.

In any event, this work still faces the problem that some cultures (including Europe before the Renaissance, not to mention the ancient Greeks) don’t link minor keys to sadness. Western listeners sometimes misjudge the emotional quality of Javanese music that uses a scale with similarities to the minor mode yet is deemed ‘happy’ by the musicians. So even if a fundamental ‘sadness’ is present in the minor mode, it seems likely to be weak and easily overwritten by acculturation. It’s possible even in the Western idiom to write ‘happy’ minor-key music (for example, Van Morrison’s ‘Moondance’) or ‘sad’ major-key music (Billie Holiday’s ‘No Good Man’).

So let’s not conclude too soon that minor keys give everyone the blues.

References

1. Bowling, D. L., Gill, K., Choi, J. D., Prinz, J. & Purves, D. J. Acoust. Soc. Am. 127, 491-503 (2010).
2. Balkwill, L. L. & Thompson, W. F. Music Perception 17, 43-64 (1999).
3. Juslin, P. N. & Laukka, P. Psychological Bulletin 129, 770-814 (2003).
4. Fritz, T. et al. Curr. Biol. 19, 1-4 (2009).