Thursday, December 20, 2007

Wise words from the Vatican?
[I’m no fan of the pope. And what I don’t say below (because it would simply be cut out as irrelevant) is that his message for World Peace Day includes some typically hateful homophobic stuff in regard to families. AIDS-related contraception and stem-cell research are just two of the areas in which the papacy has put twisted dogma before human well-being. But I feel we should always be ready to give credit where it is due. And so here, in my latest Muse article for Nature News, I try to do so.]

When Cardinal Joseph Ratzinger became Pope Benedict XVI in 2005, many both inside and outside the Christian world feared that the Catholic church was set on a course of hardline conservatism. But in two recent addresses, Benedict XVI shows intriguing signs that he is keen to engage with the technological age, and that he has in some ways a surprisingly thoughtful position on the dialogue between faith and reason.

In his second Encyclical Letter, released on 30 November, the pope tackles the question of how Christian thought should respond to technological change. And in a message for World Peace Day on 1 January 2008, he considers the immense challenges posed by climate change.

Let’s take the latter first, since it is in some ways more straightforward. Benedict XVI’s comments on the environment have already been interpreted in some quarters as “a surprise attack on climate change prophets of doom” who are motivated by “dubious ideology.” According to the British newspaper the Daily Mail, the pope “suggested that fears over man-made emissions melting the ice caps and causing a wave of unprecedented disasters were nothing more than scare-mongering.”

Now, non-British readers may not be aware that the Daily Mail is itself a stalwart bastion of “dubious ideology”, but this claim plumbs new depths even by the newspaper’s impressive standards of distortion and fabrication. Here’s what the pope actually said: “Humanity today is rightly concerned about the ecological balance of tomorrow. It is important for assessments in this regard to be carried out prudently, in dialogue with experts and people of wisdom, uninhibited by ideological pressure to draw hasty conclusions, and above all with the aim of reaching agreement on a model of sustainable development capable of ensuring the well-being of all while respecting environmental balances.”

Hands up those who disagree with this proposition. I thought not. When you consider that the idea that human activities might affect climate has been around for over a century, and the possibility that this might now be occurring has received serious study for more than two decades – during which time the climate science community has resolutely resisted pressing any alarm buttons until they could draw as informed a conclusion as possible – you might just begin to doubt it is they, and their current consensus that human-induced climate change seems real, who are in the pope’s sights when he talks of “hasty conclusions”. Might the charge be levelled, on the contrary, at those who pounce on every new suggestion that there are other factors in climate, such as solar fluctuations, as evidence of a global scientific conspiracy to pin the blame on humanity? I leave you to judge.

The pope’s statement is simply the one that any reasonable person would make. He calls for investment in “sufficient resources in the search for alternative sources of energy and for greater energy efficiency”, for technologically advanced countries to “reassess the high levels of consumption due to the present model of development”, and for humankind not to “selfishly consider nature to be at the complete disposal of our own interests.” Doesn’t that just sound a little like the environmentalists whom the pope is said by some to be lambasting? Admittedly, one might ask whether the Judaeo-Christian notion of human stewardship of the earth has contributed to our current sense of entitlement over its resources; but that’s another debate.

So far, then, good on Benedict XVI. And there’s more: “One must acknowledge with regret the growing number of states engaged in the arms race: even some developing nations allot a significant proportion of their scant domestic product to the purchase of weapons. The responsibility for this baneful commerce is not limited: the countries of the industrially developed world profit immensely from the sale of arms… it is truly necessary for all persons of good will to come together to reach concrete agreements aimed at an effective demilitarization, especially in the area of nuclear arms.” Goodness me, it’s almost enough to make me consider going to Christmas Mass.

The Encyclical Letter, meanwhile (entitled “On Christian Hope”), bites into some more meaty and difficult pies. On one level, its message might sound rather prosaic, however valid: science cannot provide society with a moral compass. The pope is particularly critical of Francis Bacon’s vision of a technological utopia: he and his followers “were wrong to believe that man would be redeemed through science.” Even committed technophiles ought to find that unobjectionable.

Without doubt, Benedict XVI says, progress (for which we might here read science) “offers new possibilities for good, but it also opens up appalling possibilities for evil.” He cites social philosopher Theodor Adorno’s remark that one view of ‘progress’ leads us from the sling to the atom bomb.

More generally, the pope argues that there can be no ready-made prescription for utopia: “Anyone who promises the better world that is guaranteed to last for ever is making a false promise.” Of course, one can see what is coming next: “it is not science that redeems man: man is redeemed by love” – which the pope believes may come only through faith in God. Only with that last step, however, does he enter into his own closed system of reference, in which our own moral lack can be filled only from a divine source.

More interesting is the accompanying remark that “in the field of ethical awareness and moral decision-making… decisions can never simply be made for us in advance by others… in fundamental decisions, every person and every generation is a new beginning.” Now, like most spiritual statements this one is open to interpretation, but surely one way of reading it is to conclude that, when technologies such as stem cell science throw up new ethical questions, we won’t find the answers already written down in any book. The papacy has not been noted for its enlightened attitude to that particular issue, but we might draw a small bit of encouragement from the suggestion that such developments require fresh thinking rather than a knee-jerk response based on outmoded dogma.

Most surprising of all (though I don’t claim to have my finger on the pulse of theological fashion) is the pope’s apparent assertion that the ‘eternal life’ promised biblically is not to be taken literally. He seems concerned, and with good reason, that many people now regard this as a threat rather than a promise: “do we really want this – to live eternally?” he asks. In this regard, Benedict XVI seems to possess rather more wisdom than the rich people who look forward to resurrection of their frozen heads. ‘Eternal life’, he says, is merely a metaphor for an authentic and happy life lived on earth.

True, this then makes no acknowledgement of how badly generations of earlier churchmen have misled their flock. And it seems strange that a pope who believes this interpretation can at the same time feel so evidently fondly towards St Paul and St Augustine, who between them made earthly life a deservedly miserable existence endured by sinners, and towards the Cistercian leader Bernard of Clairvaux, who in consequence pronounced that “We are wounded as soon as we come into this world, while we live in it, and when we leave it; from the soles of our feet to the top of our heads, nothing is healthy in us.”

Perhaps this is one of the many subtle points of theology I don’t understand. All the same, the suggestion that we’d better look for our happiness on an earth managed responsibly, rather than deferring it to some heavenly eternity, gives me a little hope that faith and reason are not set on inevitably divergent paths.

Friday, December 14, 2007


Can Aladdin’s carpet fly?
[Here’s a seasonal news story I just wrote for Nature, which will appear (in edited form) in the last issue of the year. I gather, incidentally, that the original text of the ‘Arabian Nights’ doesn’t specify that the carpet flies as such, but only that anyone who sits on it is transported instantly to other lands.]


A team of scientists in the US and France has the perfect offering for the pantomime season: instructions for making a flying carpet.

The magical device may owe more to Walt Disney than to The Arabian Nights, but it is not pure fantasy, according to Lakshminarayanan Mahadevan of Harvard University, Mederic Argentina of the University of Nice, and Jan Skotheim of the Rockefeller University in New York. They have studied the aerodynamics of a flexible, rippling sheet moving through a fluid, and find that it should be possible to make one that will stay aloft in air, propelled by actively powered undulations much as a marine ray swims through water [1].

No such carpet is going to ferry humans around, though. The researchers say that, to stay aloft in air, a sheet would need to be typically about 10 cm long, 0.1 mm thick, and vibrate at about 10 Hz with an amplitude of about 0.25 mm. Making a heavier carpet ‘fly’ is not absolutely forbidden by physics, but it would require such a powerful engine to drive vibrations that the researchers say “our computations and scaling laws suggest it will remain in the magical, mystical and virtual realm.”
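Out of curiosity, the quoted numbers do survive a crude sanity check. Here is a back-of-envelope Python sketch; the sheet density and the lubrication-theory pressure scale are my own assumptions, not anything from the paper:

```python
# Rough sanity check on the quoted carpet figures (10 cm, 0.1 mm, 10 Hz,
# 0.25 mm). Assumptions of mine: a paper-like sheet density, and the
# crudest lubrication-theory pressure scale, p ~ mu * U * L / h**2, for a
# sheet rippling close to the ground. Order-of-magnitude only.

g = 9.81            # m/s^2
rho_sheet = 1000.0  # kg/m^3, assumed paper-like density
L = 0.10            # m, sheet length (quoted)
t = 1e-4            # m, sheet thickness (quoted)
f = 10.0            # Hz, vibration frequency (quoted)
a = 2.5e-4          # m, ripple amplitude (quoted)
mu_air = 1.8e-5     # Pa*s, viscosity of air

# Weight the lift pressure must support, per unit area of sheet:
weight_per_area = rho_sheet * t * g    # about 1 Pa

# Lubrication pressure scale for a wave travelling at c = f * L over a
# gap comparable to the ripple amplitude:
c = f * L                              # wave speed, 1 m/s
p_lub = mu_air * c * L / a**2          # about 30 Pa

print(f"weight per area: {weight_per_area:.2f} Pa")
print(f"lubrication pressure scale: {p_lub:.1f} Pa")
```

The available pressure scale comfortably exceeds the sheet’s weight per unit area, consistent with the claim that something this small and thin could stay aloft near the ground, while a human-carrying carpet clearly could not.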

The key to a magic carpet is to create uplift as the ripples push against the viscous fluid. If the sheet is close to a horizontal surface, like a piece of foil settling down onto the floor, then such movements can create a high pressure in the gap between the sheet and the floor. “As waves propagate along the flexible foil, they generate a fluid flow that leads to a pressure that lifts the foil, roughly balancing its weight”, Mahadevan explains.

But as well as lifting it, ripples can drive the foil forward – as any respectable magic carpet would require. “If the waves propagate from one edge”, says Mahadevan, “this causes the foil to tilt ever so slightly and then move in one direction, towards the edge that is slightly higher. Fluid is then squeezed from this end to the other, causing the sheet to progress like a submarine ray.”

To generate a big thrust and thus a high speed, the carpet has to undulate in big ripples, comparable to the carpet’s total size. This makes for a very bumpy ride. “If you want a smooth ride, you can generate a lot of small ripples”, says Mahadevan. “But you’ll be slower.” He points out that this is not so different from any other mode of transport, where speed tends to induce bumpiness while moving more smoothly means moving slower.

“It’s cute, it’s charming”, says physicist Tom Witten at the University of Chicago. He adds that the result is not very surprising, but says “the main interest is that someone would think to pose this problem.”

Could artificial flying mini-carpets really be made? Spontaneous undulating motions have already been demonstrated in ‘smart’ polymers suspended in fluids, which can be made to swell or shrink in response to external signals. In September, a team also at Harvard University described flexible sheets of plastic coated with cultured rat muscle cells that flex in response to electrical signals and could exhibit swimming movements [2]. “In air, it should be possible to make moving sheets – a kind of micro hovercraft – with very light materials, or with very powerful engines”, says Mahadevan.

Mahadevan has developed something of a speciality in looking for unusual effects from everyday physics – his previous papers have included a study of the ‘Cheerios effect’, where small floating rings (like the breakfast cereal) stick together through surface tension, and an analysis of the sawtooth shape made by ripping open envelopes with a finger.

“I think the most interesting questions are the ones that everyone has wondered about, usually idly”, he says. “I think that is what it means to be an applied mathematician – it is our responsibility to build mathematical tools and models to help explain and rationalize what we all see.”

References

1. Argentina, M. et al., Phys. Rev. Lett. 99, 224503 (2007).
2. Feinberg, A. W. et al., Science 317, 1366-1370 (2007).

Thursday, December 13, 2007

Surfers and stem cells
[This is the pre-edited version of my Lab Report column for the January issue of Prospect.]

Just when you thought that the Dancing Wu Li Masters and the Tao of Physics had finally been left in the 1970s, along comes a surfer living on the Hawaiian island of Maui who claims to have a simple theory of everything which shows that the universe is an ‘exceptionally beautiful shape’. Garrett Lisi has a physics PhD but no university affiliation, and lists his three most important things as physics, love and surfing – “and no, those aren’t in order.”

But Lisi is no semi-mystic drawing charming but ultimately unedifying analogies. He is being taken seriously by the theoretical physics community, and has been invited to the high-powered Perimeter Institute in Waterloo, Canada, where leading physicist Lee Smolin has called his work “fabulous.”

One rather fabulous thing is that it is almost comprehensible, at least by the standards of modern fundamental physics. Lisi himself admits that, in comparison to string theory, the main contender for a theory of everything, he uses only “baby mathematics.” That’s not to say it’s easy, though.

A theory of everything must unify the theory of general relativity, which describes gravity and the structure of spacetime on large scales, with quantum theory, which describes how fundamental particles behave at the subatomic scale. To put it another way, gravity must be mixed into the so-called Standard Model of particle physics, which explains the interactions between all known fundamental particles – quarks, electrons, photons and so forth.

Physicists typically attempt unification by using symmetry. To put it crudely, suppose there are two particles that look the same except that they spin in opposite directions. These can be ‘unified’ into a single particle by appreciating that they can be interconverted by reflection in a mirror – a symmetry operation.

The idea is that the proliferation of particles and forces in today’s universe happened in a series of ‘symmetry-breaking’ steps, just as lowering a square’s symmetry to rectangular creates two distinct pairs of sides from four identical ones. This is already known to be true of some forces and particles, but not all of them.
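For the curious, the square-to-rectangle example can be made concrete in a few lines of Python. This is a toy sketch of my own, nothing from Lisi’s paper: label the square’s four sides and see how the symmetry group sorts them into families before and after the ‘breaking’:

```python
# Symmetry breaking on the square -> rectangle example. Label the sides
# 0=top, 1=right, 2=bottom, 3=left, and write each symmetry as a
# permutation p where side i is sent to side p[i].

def orbits(perms, n=4):
    """Partition the n sides into orbits under the given permutations."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for p in perms:
        for i in range(n):
            ri, rj = find(i), find(p[i])
            if ri != rj:
                parent[ri] = rj
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())

# The square's full symmetry group: rotations plus a reflection.
rot90 = (1, 2, 3, 0)   # top -> right -> bottom -> left
rot180 = (2, 3, 0, 1)
rot270 = (3, 0, 1, 2)
mirror_v = (0, 3, 2, 1)  # reflect across the vertical axis: swaps left/right
square_group = [rot90, rot180, rot270, mirror_v]

# Stretching the square into a rectangle 'breaks' the 90-degree rotations,
# leaving only the 180-degree rotation and the two mirrors.
mirror_h = (2, 1, 0, 3)  # reflect across the horizontal axis: swaps top/bottom
rectangle_group = [rot180, mirror_v, mirror_h]

print(orbits(square_group))     # one family of four identical sides
print(orbits(rectangle_group))  # two distinct pairs of sides
```

Under the square’s full symmetry all four sides sit in a single orbit; under the rectangle’s reduced symmetry they split into two distinct pairs, which is exactly the ‘one thing becomes several’ pattern the unification argument relies on.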

Lisi claims that the primordial symmetry is a pattern called E8, known to mathematicians for over a century but fully understood only recently; it is rather like a multi-dimensional polyhedron with 248 ‘corners’. He has shown that all the known particles, plus descriptions of gravity, can be mapped onto the corners of E8. So a bit of it looks like the Standard Model, while a bit looks like gravity and spacetime. Twenty of the ‘corners’ remain empty, corresponding to hypothetical particles not yet known: the E8 model thus predicts their existence. It’s rather like the way nineteenth-century chemists found a pattern that brought coherence and order to the chemical elements – the periodic table – while noting that it had gaps, predicting elements that were later found.

Is E8 really the answer to everything? Physicists are reserving judgement, for Lisi’s paper, which is not yet peer-reviewed or published, is just a sketch – not a theory, and barely even a model. Mathematical physicist Peter Woit is unsure about the whole approach, saying that playing with symmetry just defers the question of what breaks it to make the world we know. But the trick worked before in the 1950s, when Murray Gell-Mann predicted a new particle by mapping a group of known ones onto a symmetry group called SU(3).

Lisi’s surfer-dude persona is fun, but so what, really? The real point is that his suggestion invigorates a field that, wandering in the thickets of string theory, sorely needs it.

*****

Stem-cell researchers in Shoukhrat Mitalipov’s team at the Oregon Health and Science University might be forgiven a little chagrin. No sooner had they reported the breakthrough that has eluded the field for years than they were trumped by two reports seeming to offer an even more attractive way of making human stem cells. Having sung the praises of Mitalipov’s achievement, Ian Wilmut, the University of Edinburgh cloning pioneer who created Dolly the sheep, announced that he was ditching their approach in favour of the new one.

Stem cells are the all-purpose cells present in the very early stages of embryo growth that can develop into just about any type of specialized tissue cells. The ‘traditional’ strategy for making them with DNA matched to the eventual recipient involves stripping the genetic material from an unfertilized egg and replacing it with donor DNA, and then prompting the egg to grow into a blastocyst, the initial stage of an embryo, from which stem cells can be extracted. This is called somatic cell nuclear transfer (SCNT), and is the method used in animal cloning. It works for sheep, dogs and mice, but there had previously been no success for humans or other primates.

On 14 November last year, Mitalipov and colleagues reported stem cells made by SCNT from rhesus macaques that could develop into other cell types. But a week later, teams based at the universities of Kyoto and Wisconsin-Madison independently reported the creation of human stem cells from ordinary skin cells, by treating them with proteins that reprogrammed them. In effect, the proteins switch the gene circuits from a ‘skin cell’ to a ‘stem cell’ setting. This reversal of normal developmental pathways is extraordinary.

The two teams used different cocktails of proteins to do the reprogramming – the Wisconsin team managed to avoid an agent that carries a cancer risk – showing that there is some scope for optimising the mix. Best of all, the method avoids the creation and destruction of embryos that has dogged the ethics of stem-cell research. But Mitalipov insists that starting with eggs is still best, and he has now started collaborating with a team in Newcastle licensed to work with human embryos. After years of frustrating effort, suddenly all options seem open.

Wednesday, December 12, 2007

Money for old rope

… except without the money. At no extra work to myself, I appear in a couple of recent books:
The Public Image of Chemistry, eds J. Schummer, B. Bensaude-Vincent & B. Van Tiggelen (World Scientific, 2007). This is a kind of proceedings volume of a conference of (almost) the same name in 2004, supplemented by contributions from a session at the 5th International Conference on the History of Chemistry in 2005. There’s lots of interesting stuff in it. It contains my paper ‘Chemistry and Power in Recent American Fiction’, which was published previously in the journal Hyle.
Futures from Nature, edited by my friend Henry Gee and published by Tor in January 2008. This is a collection of 100 of the short sci-fi stories published in Nature in recent years, and includes a contribution (I won’t say a short story, more of a pastiche) by one Theo von Hohenheim, who sounds vaguely familiar. Buy it here.

And while I’m at it, I recorded today a review of the year in science for the BBC World Service’s Science in Action. Don’t know when it is being broadcast… but before the year is out, clearly.

And while I'm at it at it, I have a piece in the latest issue of Seed on why RNA is the new DNA...

Sunday, December 09, 2007

We’re only after your money

There is a very sour little piece in this Saturday’s Guardian from Wendy Cope on copyright. First of all, I should acknowledge a couple of things:
1. Cope is right to say that a poem is much more likely to get copied (either digitally or on paper) and downloaded than an entire book – in that sense, poets are especially vulnerable to copyright violations.
2. It’s mostly damned hard making a living as a writer, and perhaps especially so as a poet, so some sensitivity to potential earnings lost seems reasonable.

But it seems rather sad to see a writer of any sort so bitterly possessive about their words. To read Cope’s piece, one might imagine that she sits scribbling away resentfully, thinking each time she finishes a poem, ‘Now, get out there and earn your keep, you little sod.’ Now, to be honest, my rather limited experience of Cope’s work tallies rather well with the notion that bitterness is one of her prime motivations, but this piece seemed so jealous of every last penny potentially denied her that one wonders why she doesn’t just throw in the towel and become a plumber. Indeed, it seems to me that she doesn’t even truly understand why people read or buy poetry. Why, if anyone genuinely loved her poems, would they be content to download a few from the web, and then – well, then what? File the printouts? Poetry lovers must be among the most bookish people in the world – they surely relish having the books on their shelves, rather than just scanning their eyes briefly over a piece of downloaded text and then binning it.

‘You want to read my poems? Then buy the book’, is Cope’s crabby refrain. Does she pull her volumes off the shelves of public libraries, I wonder? What is particularly dispiriting about this little rant is that it gives no sense of writing being about wanting to share with people ideas, images, thoughts and stories – and recognizing that this will never happen solely through the medium of books sold – but that it is instead about creating ‘word product’ that you buggers must pay for.

No source of income is so minor or incidental that its possible loss is not begrudged. Other people reading your poems at festivals is no good, because you might not get your little commission for it. (You get paid just for standing up and reading out old words? What the hell are you complaining about?) Another thing I find odd, although perhaps it just shows that things work differently in the poetry world, is that Cope is so covetous of every last book sale because of its financial rewards. In non-fiction at least, if you’re the kind of writer who gets a substantial part of your income from royalties, as opposed to pocketing a modest advance that might with great luck be paid off in ten years’ time, then you must be selling so many books that you shouldn’t need the supplement of £1.20 for a book sale that comes from someone’s refusal to copy one of your poems and give it to a friend.

But what caps it all – and indeed reveals the pathology of Cope’s obsession – is her anger and regret that all those possible royalties are going to be lost when you’re dead. “I sometimes feel a bit annoyed by the prospect of people making money out of my poems when I’m too dead to spend it”, she moans. Well personally, Wendy, if someone keeps my words alive when I’m not, I’ll be over the bloody moon, and I don’t give a damn what they make from doing so.

Thursday, December 06, 2007

Beyond recycling
[This is my Materials Witness column for the January 2008 issue of Nature Materials.]

It is surely ironic that global warming and environmental degradation now pose serious risks at a time when industry and technology are cleaner than at any other stage of the Industrial Revolution. Admittedly, that may not be globally true, but in principle we can manufacture products and generate energy more efficiently and with less pollution than ever before. So why the problem?

Partly, the answer is obvious: cleaner technologies struggle to keep pace with increased industrial activity as populations and economies grow. And green methodologies are typically costly, so aren’t universally available. But the equation is still more complex than that. For example, cars can be more fuel-efficient, less polluting and cheaper. But consumers who save money on fuel tend to spend it elsewhere: they drive more, say, or they spend it on holiday air flights. And cheap cars mean more cars. There is an ‘environmental rebound effect’ to such savings, counteracting the gains.

This is just one way in which ‘green’ manufacturing – using fewer materials and environmentally friendly processing, recycling wastes, and making products themselves recyclable or biodegradable – may fall short of its goal of making the world cleaner. All of these things are surely valuable, indeed essential, in making economic growth sustainable. But the problem goes beyond how things are made, to the issue of how they are used. We need to look not just at production, but at consumption.

One of the initiatives here is the so-called Product-Service System (PSS): a combination of product design and manufacture with the supply of related consumer services that has the potential to give consumers greater utility while reducing the ecological footprint. That might sound like marketing jargon, but it’s a tangible concept of proven value, enacted for example in formalized car-sharing schemes, leasing of temporary furnished office space, biological pest management services, and polystyrene recycling. It’s not mere philanthropy either: there’s a profit incentive too.

One of the key benefits of a PSS approach is that it might offer a way of simply making less stuff. You don’t need to be an eco-warrior to be shocked at the senseless excesses of current manufacturing. A splendid example of an alternative model is offered by a team in Sweden, who have outlined plans for a baby-pram leasing and remanufacturing scheme (O. Mont et al., J. Cleaner Prod. 14, 1509; 2006). Since baby prams generally last for much longer than they are needed (per child), why not lease one instead of buying it? If the infrastructure exists for repairing minor wear and tear, every customer gets an ‘as new’ product, and no prams end up on the waste tip in a near-pristine state.

Developing countries are often adept at informal schemes like this already: little gets thrown away there. But if implemented all the way from the product design stage, it is much more than recycling. What remains is to break our current cult of ‘product ownership’. Prams seem as good a place to start as any.

Thursday, November 29, 2007

Why 'Never Let Me Go' isn't really a 'science novel'

I have just finished reading Kazuo Ishiguro’s Never Let Me Go. What a strange book. First, there’s the tone – purposely amateurish writing (there can’t be any doubt, given his earlier books, that this is intentional), which creates an odd sense of flatness. As the Telegraph’s reviewer put it, “There is no aesthetic thrill to be had from the sentences – except that of a writer getting the desired dreary effect exactly right.” It’s a testament to Ishiguro that his control of this voice never slips, and that the story remains compelling in spite of the deliberately clumsy prose. That’s probably a far harder trick to pull off than it seems. Second, there are the trademark bits of childlike quasi-surrealism, where he develops an idea that seems utterly implausible yet is presented so deadpan that you start to think “Is he serious about this?” – for instance, Tommy’s theory about the ‘art gallery’. This sort of dreamlike riffing was put to wonderful effect in The Unconsoled, which was a dream world from start to finish. It jarred a little at the end of When We Were Orphans, because it didn’t quite fit with the rest of the book – but was still strangely compelling. Here it seems to be an expression of the enforced naivety of the characters, but is disorientating when it becomes so utterly a part of the world that Kathy H depicts.

But my biggest concern is that the plot just doesn’t seem plausible enough to sustain a strong critique of cloning and related biotechnologies. Is that even the intention? I’m still unsure, as were several reviewers. The situation of the donor children is so unethical and so deeply at odds with any current ethical perspectives on cloning and reproductive technologies that one can’t really imagine how a world could have got this way. After all, in other respects it seems to be a world just like ours. It is not even set in some dystopian future, but has a feeling of being more like the 1980s. The ‘normal’ humans aren’t cold-hearted dysfunctionals – they seem pretty much like ordinary people, except that they seem to accept this donor business largely without question – whereas nothing like this would be tolerated or even contemplated for an instant today. It feels as though Ishiguro just hasn’t worked hard enough to make an alternative reality that can support the terrible scenario he portrays. As a result, whatever broader point he is making loses its force. What we are left with is a well told tale of friendship and tragedy experienced by sympathetic characters put in a situation that couldn’t arise under the social conditions presented. I enjoyed the book, but I can’t see how it can add much to the cloning debate. Perhaps, as one reviewer suggested, this is all just an allegory about mortality – in which case it works rather well, but is somewhat perverse.

I’ve just taken a look at M John Harrison’s review in the Guardian, which puts these same points extremely well:
“Inevitably, it being set in an alternate Britain, in an alternate 1990s, this novel will be described as science fiction. But there's no science here. How are the clones kept alive once they've begun "donating"? Who can afford this kind of medicine, in a society the author depicts as no richer, indeed perhaps less rich, than ours?

Ishiguro's refusal to consider questions such as these forces his story into a pure rhetorical space. You read by pawing constantly at the text, turning it over in your hands, looking for some vital seam or row of rivets. Precisely how naturalistic is it supposed to be? Precisely how parabolic? Receiving no answer, you're thrown back on the obvious explanation: the novel is about its own moral position on cloning. But that position has been visited before (one thinks immediately of Michael Marshall Smith's savage 1996 offering, Spares). There's nothing new here; there's nothing all that startling; and there certainly isn't anything to argue with. Who on earth could be "for" the exploitation of human beings in this way?

Ishiguro's contribution to the cloning debate turns out to be sleight of hand, eye candy, cover for his pathological need to be subtle… This extraordinary and, in the end, rather frighteningly clever novel isn't about cloning, or being a clone, at all. It's about why we don't explode, why we don't just wake up one day and go sobbing and crying down the street, kicking everything to pieces out of the raw, infuriating, completely personal sense of our lives never having been what they could have been.”

Monday, November 26, 2007

Listen out

Let me now be rather less coy about media appearances. This Wednesday night at 9 pm I am presenting Frontiers on BBC Radio 4, looking at digital medicine. This meant that I got to strap a ‘digital plaster’ to my chest which relayed my heartbeat to a remote monitor through a wireless link. I am apparently alive and well.

Salt-free Paxo

No one can reasonably expect Jeremy Paxman to have a fluent knowledge of all the subjects on which he has to ask sometimes remarkably difficult questions on University Challenge. But if the topic is chemistry, you’d better get it word-perfect, because he’s got no latitude for interpretation. Tonight’s round had a moment that went something like this:
Paxman: “Which hydrated ferrous salt was once known as green vitriol?”
Hapless student: “Iron sulphate.”
Paxman: “No, it’s just sulphate.”
I’ve seen precisely the same thing happen before. How come someone doesn’t pick Paxo up on it? The fact is, contestants are advised that they can press their button to challenge if they think their answer was unfairly dismissed. The offending portion of the filming then gets snipped out. But I suspect no one ever does this – it’s just too intimidating to say to Paxo “I think you’ve got that wrong.”

Friday, November 23, 2007

War is not an exact science
[This is my latest muse column for news@nature.com]

General theories of why we go to war are interesting. But they'll never tell the whole story.

Why are we always fighting wars? That’s the kind of question expected from naïve peaceniks, to which historians will wearily reply “Well, it’s complicated.”

But according to a new paper by an international, interdisciplinary team, it isn’t that complicated. Their answer is: climate change. David Zhang of the University of Hong Kong and his colleagues show that, in a variety of geographical regions – Europe, China and the arid zones of the Northern Hemisphere – the frequency of war has fluctuated in step with major shifts in climate, particularly the Little Ice Age from the mid-fifteenth until the mid-nineteenth century [1].

Cold spells like this, they say, significantly reduced agricultural production, and as a result food prices soared, food became scarce – and nations went to war, whether to seize more land or as a result of famine-induced mass migration.

On the one hand, this claim might seem unexceptional, even trivial: food shortages heighten social tensions. On the other hand, it is outrageous: wars, it says, have little to do with ideology, political ambition or sheer greed, but are driven primarily by the weather.

Take, for example, the seventeenth century, when Europe was torn apart by strife. The Thirty Years War alone, between 1618 and 1648, killed around a third of the population in the German states. Look at the history books and you’ll find this to be either a religious conflict resulting from the Reformation of Martin Luther and Jean Calvin, or a political power struggle between the Habsburg dynasty and their rivals. Well, forget all that, Zhang and his colleagues seem to be saying: it’s all because we were suffering the frigid depths of the Little Ice Age.

I expect historians to respond to this sort of thing with lofty disdain. You can see their point. The analysis stops at 1900, and so says nothing about the two most lethal wars in history – which, as the researchers imply, took place in an age when economic, technological and institutional changes had reduced the impact of agricultural production on world affairs. Can you really claim to have anything like a ‘theory of war’ if it neglects the global conflicts of the twentieth century?

And historians will rightly say that grand synoptic theories of history are of little use to them. Clearly, not all wars are about food. Similarly, not all food shortages lead to war. There is, in historical terms, an equally compelling case to be made that famine leads to social unrest and potential civil war, not to the conflict of nation states. But more generally, the point of history (say most historians) is to explain why particular events happened, not why generic social forces sometimes lead to generic consequences. There is a warranted scepticism of the kind of thinking that draws casual parallels between, say, Napoleon’s imperialism and current US foreign policy.

Yet some of this resistance to grand historical theorizing may be merely a backlash. In particular, it stands in opposition to the Marxist position popular among historians around the middle of the last century, and which has now fallen out of fashion. And the Marxist vision of a ‘scientific’ socio-political theory was itself a product of nineteenth century mechanistic positivism, as prevalent among conservatives like Leo Tolstoy and liberals like John Stuart Mill as it was in the revolutionary socialism of Marx and Engels. It was Tolstoy who, in War and Peace, invoked Newtonian imagery in asking “What is the force that moves nations?”

Much of this can be traced to the famous proposal of Thomas Robert Malthus, outlined in An Essay on the Principle of Population (1826), that population growth cannot continue for ever on an exponential rise because it eventually falls foul of the necessarily slower rise in means of production – basically, the food runs out. That gloomy vision was also an inspiration to Charles Darwin, who saw that in the wild this competition for limited resources must lead to natural selection.
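Malthus’s logic is easy to caricature in a toy calculation (the numbers below are my own inventions, purely for illustration, and not drawn from anyone’s data): let population grow geometrically and food supply only arithmetically, and count the years until food per head falls short.

```python
# Illustrative sketch of the Malthusian argument (all parameters invented):
# population grows exponentially, food supply grows linearly, so per-capita
# food must eventually fall below subsistence.

def years_until_shortfall(pop0=1.0, food0=2.0, pop_rate=0.02,
                          food_step=0.03, need_per_head=1.0):
    """Return the first year in which food per head drops below need,
    or None if it never does within a thousand years."""
    pop, food = pop0, food0
    for year in range(1, 1000):
        pop *= 1.0 + pop_rate   # geometric (exponential) growth
        food += food_step       # arithmetic (linear) growth
        if food / pop < need_per_head:
            return year
    return None

print(years_until_shortfall())
```

However the parameters are chosen, the geometric curve eventually overtakes the arithmetic one; the only question is when – which is precisely the latitude that, as Zhang and colleagues note, agricultural ingenuity can exploit for a while.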

Zhang and colleagues state explicitly that their findings provide a partial vindication of Malthus. They point out that Malthus did not fully account for the economic pressures and sheer ingenuity that could boost agricultural production when population growth demanded it, but they say that such improvements have their limits, which were exceeded when climate cooling lowered crop yields in Europe and China.

For all their apparently impressive correlation indices, however, it is probably fair to say that responses to Zhang et al.’s thesis will be a matter of taste. In the end, an awful lot seems to hinge on the coincidence of minimal agricultural production (and maximal food prices), low average temperatures, and a peak in the number of wars (and fatalities) during the early to mid-seventeenth century in both Europe and China. The rest of the curves are suggestive, but don’t obviously create a compelling historical narrative. At best, they provoke a challenge: if one cannot now show a clear link between climate/agriculture and, say, the Napoleonic wars from the available historical records themselves, historians might be forgiven for questioning the value of this kind of statistical analysis.

Yet what if the study helps us to understand, even a little bit, what causes war? That itself is an age-old question – Zhang and colleagues identify it, for example, in Thucydides’ History of the Peloponnesian War in the 5th century BC. Nor are they by any means the first in modern times to look for an overarching theory of war. The issue motivated the physicist Lewis Fry Richardson between about 1920 and 1950 to plot size against frequency for many recent wars (including the two world wars), and thereby to identify the kind of power-law scaling that has led to the notion that wars are like landslides, where small disturbances can trigger events of any scale [2-4]. Other studies have focused on the cyclic nature of war and peace, as for example in ecologist Peter Turchin’s so-called cliodynamics, which attempts to develop a theory of the expansion and collapse of empires [5,6].
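Richardson’s bookkeeping is simple to sketch: he grouped conflicts by ‘magnitude’ – the base-10 logarithm of the death toll – and found each step up in magnitude substantially rarer than the last, the signature of a power law. The casualty figures below are hypothetical, chosen only to show the procedure.

```python
# A sketch of Richardson's size-frequency analysis. The casualty figures
# are invented for illustration; Richardson's "magnitude" of a conflict
# is the base-10 logarithm of its death toll.
import math
from collections import Counter

deaths = [1.2e3, 2.5e3, 4.0e3, 6.0e3, 8.5e3,   # hypothetical small wars
          2.0e4, 3.3e4, 7.0e4,                  # ...medium
          1.5e5, 4.0e5,                         # ...large
          2.0e6]                                # ...one catastrophe

# Count how many conflicts fall in each magnitude band.
magnitudes = Counter(int(math.log10(d)) for d in deaths)
for mag in sorted(magnitudes):
    print(f"magnitude {mag} (10^{mag} to 10^{mag+1} deaths): "
          f"{magnitudes[mag]} wars")
```

On a log-log plot of frequency against magnitude, Richardson’s real data fall on something close to a straight line – the same heavy-tailed statistics found in earthquakes and landslides, where there is no ‘typical’ event size.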

Perhaps most prominent in this arena is an international project called the Correlates of War, which has since 1963 been attempting to understand and quantify the factors that create (and mitigate) international conflict and thus to further the “scientific knowledge about war”. Its data sets have been used, for example, in quantitative studies of how warring nations form alliances [7], and they argue rather forcefully against any notion of collapsing the causative factors onto a single axis such as climate.

What, finally, do Zhang and colleagues have to tell us about future conflict in an anthropogenically warmed world? At face value, the study might seem to say little about that, given that it correlates war with cooling events. There is some reason to think that strong warming could be as detrimental to agriculture as strong cooling, but it’s not clear exactly how that would play out, especially in the face of both a more vigorous hydrological cycle and the possibility of more regional droughts. We already know that water availability will become a serious issue for agricultural production, but also that there’s a lot that can still be done to ameliorate that, for instance by improvements in irrigation efficiency.

We’d be wise to greet the provocative conclusions of Zhang et al. with neither naïve acceptance nor cynical dismissal. They do not amount to a theory of history, or of war, and it seems most unlikely that any such things exist. But their paper is at least a warning against a kind of fatalistic solipsism which assumes that all human conflicts are purely the result of human failings.

References

1. Zhang, D. D. et al. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.0703073104 (2007).
2. Richardson, L. F. Statistics of Deadly Quarrels, eds Q. Wright and C. C. Lienau (Boxwood Press, Pittsburgh, 1960).
3. Nicholson, M. Brit. J. Polit. Sci. 29, 541-563 (1999).
4. Buchanan, M. Ubiquity (Phoenix, London, 2001).
5. Turchin, P. Historical Dynamics (Princeton University Press, 2003).
6. Turchin, P. War and Peace and War (Pi Press, 2005).
7. Axelrod, R. & Bennett, D. S. Brit. J. Polit. Sci. 23, 211-233 (1993).

Thursday, November 22, 2007

Schrödinger’s cat is not dead yet

[This is an article I’ve written for news@nature. One of the things I found most interesting was that Schrödinger didn’t set up his ‘cat’ thought experiment with a gun, but with an elaborate poisoning scheme. Johannes Kofler says “He puts a cat into a steel chamber and calls it "hell machine" (German: Höllenmaschine). Then there is a radioactive substance in such a tiny dose that within one hour one atom might decay but with same likelihood nothing decays. If an atom decays, a Geiger counter reacts. In this case this then triggers a small hammer which breaks a tiny flask with hydrocyanic acid which poisons the cat. Schrödinger is really very detailed in describing the situation.” There’s a translation of Schrödinger’s original paper here, but as Johannes says, the wonderful “hell machine” is simply translated as “device”, which is a bit feeble.]

Theory shows how quantum weirdness may still be going on at the large scale.

Since the particles that make up the world obey the rules of quantum theory, allowing them to do counter-intuitive things such as being in several different places or states at once, why don’t we see this sort of bizarre behaviour in the world around us? The explanation commonly offered in physics textbooks is that quantum effects apply only at very small scales, and get smoothed away at the everyday scales we can perceive.

But that’s not so, say two physicists in Austria. They claim that we’d be experiencing quantum weirdness all the time – balls that don’t follow definite paths, say, or objects ‘tunnelling’ out of sealed containers – if only we had sharper powers of perception.

Johannes Kofler and Caslav Brukner of the University of Vienna and the Institute of Quantum Optics and Quantum Information, also in Vienna, say that the emergence of the ‘classical’ laws of physics, deduced by the likes of Galileo and Newton, from the quantum world is an issue not of size but of measurement [1]. If we could make every measurement with as much precision as we liked, there would be no classical world at all, they say.

Killing the cat

Austrian physicist Erwin Schrödinger famously illustrated the apparent conflict between the quantum and classical descriptions of the world. He imagined a situation where a cat was trapped in a box with a small flask of poison that would be broken if a quantum particle was in one state, and not broken if the particle was in another.

Quantum theory states that such a particle can exist in a superposition of both states until it is observed, at which point the quantum superposition ‘collapses’ into one state or the other. Schrödinger pointed out that this means that the cat is neither dead nor alive until someone opens the box to have a look – a seemingly absurd conclusion.

Physicists generally resolve this paradox through a process called decoherence, which happens when quantum particles interact with their environment. Decoherence destroys the delicately poised quantum state and leads to classical behaviour.

The more quantum particles there are in a system, the harder it is to prevent decoherence. So somewhere in the process of coupling a single quantum particle to a macroscopic object like a flask of poison, decoherence sets in and the superposition is destroyed. This means that Schrödinger’s cat is always unambiguously in a macroscopically ‘realistic’ state, either alive or dead, and not both at once.

But that’s not the whole story, say Kofler and Brukner. They think that although decoherence typically intervenes in practice, it need not do so in principle.

Bring back the cat

The fate of Schrödinger’s cat is an example of what in 1985 physicists Anthony Leggett and Anupam Garg called macrorealism [2]. In a macrorealistic world, they said, objects are always in a single state and we can make measurements on them without altering that state. Our everyday world seems to obey these rules. According to the macrorealistic view, “there are no Schrödinger cats allowed” says Kofler.

But Kofler and Brukner have proved that a quantum state can get as ‘large’ as you like, without conforming to macrorealism.

The two researchers consider a system akin to a magnetic compass needle placed in a magnetic field. In our classical world, the needle rotates around the direction of the field in a process called precession. That movement can be described by classical physics. But in the quantum world, there would be no smooth rotation – the needle could be in a superposition of different alignments, and would just jump instantaneously into a particular alignment once we tried to measure it.

So why don’t we see quantum jumps like this? The researchers show that it depends on the precision of measurement. If the measurements are a bit fuzzy, so that we can’t distinguish one quantum state from several other, similar ones, this smoothes out the quantum oddities into a classical picture. Kofler and Brukner show that, once a degree of fuzziness is introduced into measured values, the quantum equations describing the observed objects turn into classical ones. This happens regardless of whether there is any decoherence caused by interaction with the environment.
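A toy model conveys the flavour of this (my own illustrative sketch, not Kofler and Brukner’s actual calculation): imagine a large spin j precessing, whose measured z-component at time t lies near j·cos(t) with a quantum spread of order √j. A detector much coarser than √j reports only the smooth classical rotation; one sharper than √j would expose the quantum scatter.

```python
# Toy illustration of coarse-grained ("fuzzy") measurement. A large
# precessing spin gives z-component outcomes scattered around the
# classical value j*cos(t) with quantum spread ~ sqrt(j). A detector
# whose resolution is much coarser than that spread sees only the
# classical precession.
import math
import random

random.seed(0)
j = 10**6                   # a "large" spin
sigma = math.sqrt(j / 2)    # quantum spread of the z-component

def measure(t, resolution):
    """One fuzzy measurement: the underlying outcome, rounded to the
    nearest multiple of the detector resolution."""
    outcome = random.gauss(j * math.cos(t), sigma)  # crude outcome model
    return resolution * round(outcome / resolution)

for t in [0.0, 1.0, 2.0, 3.0]:
    coarse = measure(t, resolution=100 * sigma)  # fuzzy: looks classical
    print(f"t={t}: coarse reading {coarse:.0f}, "
          f"classical value {j * math.cos(t):.0f}")
```

With the resolution set well above the quantum spread, repeated readings at the same instant agree and trace out the smooth classical curve; shrink the resolution below √j and the discrete, scattered quantum outcomes would reappear.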

Having kittens

Kofler says that we should be able to see this transition between classical and quantum behaviour. The transition would be curious: classical behaviour would be punctuated by occasional quantum jumps, so that, say, the compass needle would mostly rotate smoothly, but sometimes jump instantaneously.

Seeing the transition for macroscopic objects like Schrödinger’s cat would require that we be able to distinguish an impractically large number of quantum states. For a ‘cat’ containing 10^20 quantum particles, say, we would need to be able to tell the difference between 10^10 states – just too many to be feasible.

But our experimental tools should already be good enough to look for this transition in much smaller ‘Schrödinger kittens’ consisting of large but not macroscopic numbers of particles, say Kofler and Brukner.

What, then, becomes of these kittens before the transition, while they are still in the quantum regime? Are they alive or dead? “We prefer to say that they are neither dead nor alive,” say Kofler and Brukner, “but in a new state that has no counterpart in classical physics.”

References

1. Kofler, J. & Brukner, C. Phys. Rev. Lett. 99, 180403 (2007).
2. Leggett, A. & Garg, A. Phys. Rev. Lett. 54, 857 (1985).

Not natural?
[Here’s a book review I’ve written for Nature, which I put here because the discussion is not just about the book!]

The Artificial and the Natural: An Evolving Polarity
Ed. Bernadette Bensaude-Vincent and William R. Newman
MIT Press, Cambridge, MA, 2007

The topic of this book – how boundaries are drawn between natural and synthetic – has received too little serious attention, both in science and in society. Chemists are notoriously (and justifiably) touchy about descriptions of commercial products as ‘chemical-free’; but the usual response, which is to lament media or public ignorance, fails to recognize the complex history and sociology that lies behind preconceptions about chemical artifacts. Roald Hoffmann has written sensitively on this matter in The Same and Not the Same (Columbia University Press, 1995), and he contributes brief concluding remarks to this volume. But the issue is much broader, touching on areas ranging from stem-cell therapy and assisted conception to biomimetic engineering, synthetic biology, machine intelligence and ecosystem management.

It is not, in fact, an issue for the sciences alone. Arguably the distinction between nature and artifice is equally fraught in what we now call the fine arts – where again it tends to be swept under the carpet. While some modern artists, such as Richard Long and Andy Goldsworthy, address the matter head-on with their interventions in nature such as the production of artificial rainbows, much popular art criticism now imposes a contemporary view even on the Old Masters. Through this lens, Renaissance writer Giorgio Vasari’s astonishment that Leonardo’s painted dewdrops “looked more convincing than the real thing” appears a little childish, as though he has missed the point of art – for no one now believes that the artist’s job is to mimic nature as accurately as possible. Perhaps with good reason, but it is left to art historians to point out that there is nothing absolute about this view.

At the heart of the matter is the fact that ‘art’ has not always meant what it does today. Until the late Enlightenment, it simply referred to anything human-made, whether that be a sculpture or an engine. The panoply of mutated creatures described in Francis Bacon’s The New Atlantis (1627) were the products of ‘art’, and so were the metals generated in the alchemist’s laboratory. The equivalent word in ancient Greece was techne, the root of ‘technology’ of course, but in itself a term that embraced subtle shades of meaning, examined here in ancient medicine by Heinrich von Staden and in mechanics by Francis Wolff.

The critical issue was how this ‘art’ was related to ‘nature’, approximately identified with what Aristotle called physis. Can art produce things identical to those in nature, or only superficial imitations of them? (That latter belief left Plato rather dismissive of the visual arts.) Does art operate using the same principles as nature, or does it violate them? Alchemy was commonly deemed to operate simply by speeding up natural processes: metals ripened into gold sooner in the crucible than they did in the ground, while (al)chemical medicines accelerated natural healing. And while some considered ‘artificial’ things to be always inferior to their ‘natural’ equivalents, it was also widely held that art could exceed nature, bringing objects to a greater state of perfection, as Roger Bacon thought of alchemical gold.

The emphasis in The Artificial and the Natural is historical, ranging from Hippocrates to nylon. These motley essays are full of wonders and insights, but are ultimately frustrating too in their microcosmic way. There is no real synthesis on offer, no vision of how attitudes have evolved and fragmented. There are too many conspicuous absences for the book to represent an overview. One can hardly feel satisfied with such a survey in which Leonardo da Vinci is not even mentioned. It would have been nice to see some analysis of changing ideas about experimentation, the adoption of which was surely hindered by Aristotle’s doubts that ‘art’ (and thus laboratory manipulation) was capable of illuminating nature. Prejudices about experiments often went even further: even in the Renaissance one could feel free to disregard what they said if it conflicted with a priori ‘truths’ gleaned from nature, rather as Pythagoras advocated studying music “setting aside the judgement of the ears”. And it would have been fascinating to see how these issues were discussed in other cultures, particularly in technologically precocious China.

But most importantly, the discussion sorely lacks a contemporary perspective, except for Bernadette Bensaude-Vincent’s chapter on plastics and biomimetics. This debate is no historical curiosity, but urgently needs airing today. Legislation on trans-species embryology, reproductive technology, genome engineering and environmental protection is being drawn up based on what sometimes seems like little more than a handful of received wisdoms (some of them scriptural) moderated by conventional risk analysis. There is, with the possible exception of biodiversity discussions, almost no conceptual framework to act as a support and guide. All too often, what is considered ‘natural’ assumes an absurdly idealized view of nature that owes more to the delusions of Rousseau’s romanticism than to any historically informed perspective. By revealing how sophisticated, and yet how transitory, the distinctions have been in the past, this book is an appealingly erudite invitation to begin the conversation.

Sunday, November 18, 2007


Astronomy: the dim view

One Brian Robinson contributes to human understanding on the letters page of this Saturday’s Guardian with the following:
“Providing funding for astronomers does not in any way benefit the taxpayer. Astronomy may be interesting, but the only mouths that will get fed are the children of the astronomers. Astronomy is a hobby, and as such should not be subsidised by the Treasury any more than trainspotting.”
The invitation is to regard this as the sort of Thatcherite anti-intellectualism that is now ingrained in our political system. And indeed, the notion that anything state-funded must ‘benefit the taxpayer’ – specifically, by putting food in mouths – is depressing not only in its contempt for learning but also in its ignorance of how the much-vaunted ‘wealth creation’ in a technological society works.

But then you say, hang on a minute. Why astronomy, of all things? Why not theology, archaeology, philosophy, and all the arts other than the popular forms that are mass-marketable and exportable? And then you twig: ‘astronomy is a hobby’ – like trainspotting. This bloke thinks that professional astronomers are sitting round their telescopes saying ‘Look, I’ve just got a great view of Saturn’s rings!’ They are like the funny men in their sheds looking at Orion, only with much bigger telescopes (and sheds). In other words, Mr Robinson hasn’t the faintest notion of what astronomy is.

Now, I have some gripes with astronomers. It is not just my view, but seems to be objectively the case, that the field is sometimes narrowly incestuous and lacks the fecundity that comes from collaborating with people in other fields, with the result that its literature is often far more barren than it has any right to be, given what’s being studied here. And the astronomical definition of ‘metals’ is so scientifically illiterate that it should be banned without further ado, or else all other scientists should retaliate by calling anything in space that isn’t the Earth a ‘star’. But astronomy is not only one of the oldest and most profound of human intellectual endeavours; it also enriches our broader culture in countless ways.

The presence of Mr Robinson’s letter on the letters page, then, is not a piece of cheeky provocation, but an example of the nearly ubiquitous ignorance of science among letters-page editors. They simply didn’t see what he was driving at, and thus how laughable it is. It is truly amazing what idiocies can get into even the most august of places – the equivalent, often, of a reader writing in to say that, oh I don’t know, that Winston Churchill was obviously a Kremlin spy or that Orwell wrote Cold Comfort Farm. Next we’ll be told that astronomers are obviously fakes because their horoscopes never come true.

Monday, November 12, 2007


Is this what writers’ studies really look like?

Here is another reason to love Russell Hoban (aside from his having written the totally wonderful Riddley Walker, and a lot of other great stuff too). It is a picture of his workplace, revealed in the Guardian’s series of ‘writers’ rooms’ this week. I love it. After endless shots of beautiful mahogany desks surrounded by elegant bookshelves and looking out onto greenery, like something from Home and Garden, here at last is a study that looks as though the writer works in it. It is the first one in the series that looks possibly even worse than mine.

The mystery is what all the other writers do. Sure, there may be little stacks of books being used for their latest project – but what about all the other ‘latest projects’? The papers printed out and unread for months? The bills unpaid (or paid and not filed)? The letters unanswered (or ditto)? The books that aren’t left out for any reason, other than that there is no other place to put them? The screwdrivers and sellotape and tissues and plastic bags and stuff I’d rather not even mention? What do these people do all day? These pictures seem to demand the image of a writer who, at the end of the day, stretches out his/her arms and says “Ah, now for a really good tidy-up”. That is where my powers of imagination fail me.

It all confirms that we simply do not deserve Russell Hoban.

Sunday, November 11, 2007

Minority report

Here’s an interesting factoid culled from the doubtless unimpeachable source of Wilson da Silva, editor of Australian science magazine Cosmos: the proportion of scientists who question that humans are responsible for global warming is about the same as the proportion who question that HIV is the cause of AIDS. Strange, then, that whenever AIDS is discussed on TV or radio, it is not considered obligatory to include an HIV sceptic for ‘balance’.

Of course, one reason for that is that people are not (yet) dying in their thousands from climate change (although even that, after the recent European heat waves, is debatable). This means it can remain fashionable, among over-educated media types with zero understanding of science, to be a climate sceptic. This, not the little band of scientific deniers, still less the so-called ignorant masses that some scientists lament, is the real problem. The intelligentsia still love to parade their ‘independent-mindedness’ on this score.

Here, for example, is Simon Hoggart a couple of weeks ago in the Guardian on ‘man-made global warming’: “I'm not going to plunge into this snakepit, except to say that there are more sceptics about than the Al Gores of this world acknowledge, and they are not all paid by carbon fuel lobbies. Also, if it's true, as Booker and North claim [in their book Scared to Death], that there is evidence of global warming on other planets, might it not be possible that the sun has at least as much effect on our climate as we do? I only ask.”

No, Simon, you do not only ask. If you genuinely wanted enlightenment on this matter, you could go to the wonderful Guardian science correspondents, who would put you straight in seconds. No, you want to show us all what a free thinker you are, and sow another little bit of confusion.

I’m not even going to bother to explain why ‘other planets’ (Venus, I assume) have experienced global warming in their past. It is too depressing. What is most depressing of all is the extent to which well-educated people can merrily display such utter absence of even the basics of scientific reasoning (such as comparing like with like). I’m generally optimistic about people’s ability to learn and reason, if they have the right facts in front of them. But I sometimes wonder if that ability declines the more you know, when that knowledge excludes anything to do with science.

Monday, October 22, 2007

Lucky Jim runs out of luck (at last)

[This is also posted on the Prospect blog .]

Jim Watson seems to be genuinely taken aback by the furore his recent comments on race and IQ have aroused. He looks a little like the teenage delinquent who, after years of being a persistent neighbourhood pest, finds himself suddenly hauled in front of a court and threatened with being sent to a detention centre. Priding himself on being a social irritant, he never imagined anyone would deal with him seriously.

The truth is that there is more than metaphor in this image. Watson has throughout his career combined the intelligence of a first-rate scientist and the influence of a Nobel laureate with the emotional maturity of a spoilt schoolboy. There is nothing particularly remarkable about that – it is not hard to find examples of immaturity among public figures – but the scientific community seems to find it particularly difficult to know how to accommodate such cases. For better or worse, there are plenty of niches for emotionally immature show-offs in politics and the media – the likes of Boris Johnson, Ann Widdecombe, Jeremy Clarkson and Ann Coulter all, in their own ways, manage it with aplomb. (It is not a trait unique to right-wingers, but somehow they seem to do it more memorably.) But although they can sometimes leave po-faced opponents spluttering, the silliness is usually too explicit to be mistaken for anything else.

Science, on the other hand, has tended to be blind to this facet of human variety, so that the likes of Watson come instead to be labelled “maverick” or “controversial”, which of course is precisely what they want. The scientific press tends to handle these figures with kid gloves, pronouncing gravely on the propriety of their “colourful” remarks, as though these are sober individuals who have made a bad error of judgement. Henry Porter is a little closer to the mark in the Observer, where he calls Watson an ‘elderly loon’ – the degree of ridicule is appropriate, except that Watson is no loon, and it has been a widespread mistake to imagine that his comments are a sign of senescence.

The fact is that Watson has always considered it great sport to say foolish things that will offend people. He is of the tiresome tribe that likes to display what they deem ‘political incorrectness’ as a badge of pride, forgetting that they would be ignored as bigoted boors if they did not have power and position. It is abundantly clear that behind the director of the Cold Spring Harbor Laboratory still stands the geeky young man depicted behind a model of DNA in the 1950s, whose (eminently deserved) Nobel has protected him from a need to grow up. “He was given licence to say anything that came into his mind and expect to be taken seriously,” said Harvard biologist E. O. Wilson (himself no stranger to controversy, but an individual who exudes far more wisdom and warmth than Watson ever has).

That’s a pitfall for all Nobel laureates, of course, and many are tripped by it. But few have embraced the licence with as much delight as Watson. For example, there was this little gem over a decade ago: “If you are really stupid, I would call that a disease. The lower 10 per cent who really have difficulty, even in elementary school, what’s the cause of it? A lot of people would like to say, ‘Well, poverty, things like that.’ It probably isn't. So I’d like to get rid of that, to help the lower 10 per cent.” Or this one: “Whenever you interview fat people, you feel bad, because you know you’re not going to hire them.”

Watson has been called “extraordinarily naïve” to have made his remarks about race and intelligence and expect to get away with them. But it is not exactly naivety – he probably just assumed that, since he has said such things in the past without major incident, he could do so again. Indeed, he almost did get away with it, until the Independent decided to make it front-page news.

Watson has apologized “unreservedly” for his remarks, which he says were misunderstood. This is mostly a public-relations exercise – it is not clear that there is a great deal of scope for misunderstanding, and evidently Watson now has a genuine concern that he will be dismissed from his post at Cold Spring Harbor. At least by admitting that there is “no scientific basis” for a belief that Africans are somehow “genetically inferior”, he has provided some ammunition to counter the opportunistic use of his remarks by racist groups. But it is inevitable that those groups will now make him a martyr, forced to recant in the manner of Galileo for speaking an unpalatable truth. (The speed with which support for Watson’s comments has come crawling out of the woodwork even in august forums such as Nature’s web site is disturbing.)

The more measured dismay that some, including Richard Dawkins, have voiced over the suppression of free speech implied by the cancellation of some of Watson’s intended UK talks is understandable, although it seems not unreasonable (indeed, it seems rather civil) for an institution to decide it does not especially want to host someone who has just expressed casual racist opinions. More to the point, it is not clear what ‘free speech’ is being suppressed here – Watson does not appear to be wanting to, and being prevented from, making a case that black people are less intelligent than other races. (In fact it is no longer clear what Watson wanted to say at all; the most likely interpretation is that he simply let a groundless prejudice slip out in an attempt to boost his ‘bad boy’ reputation, and that he now regrets it.) In a funny sort of way, Watson would be less deserving of scorn if he were now defending his remarks on the basis of the ‘evidence’ he alluded to. In that event, any kind of censorship would indeed be misplaced.

Beneath the sound and fury, however, we should remember that Watson’s immense achievements as a scientist do not oblige us to take him seriously in any other capacity. Those achievements are orthogonal to his bully-boy bigotry, and they put no distance at all between Watson and the pub boor.

The real casualty in all this is genetics research, for Watson’s comments past and present can only seem (and in fact not just seem) to validate claims that this research is in the hands of scientists with questionable judgement and sense of responsibility.

Friday, October 19, 2007

Swiss elections get spooky
[This is my latest column for muse@nature.com.]

High-profile applications of quantum trickery raise the question of what to call these new technologies. One proposal is unlikely to catch on.

The use of quantum cryptography in the forthcoming Swiss general elections on 21 October may be a publicity stunt, but it highlights the fact that the field of quantum information is now becoming an industry.

The invitation here is to regard Swiss democracy as being safeguarded by the fuzzy shroud of quantum physics, which can in principle provide a tamper-proof method of transmitting information. The reality is that just a single canton – Geneva – is using commercial quantum-cryptography technology already trialled by banks and financial institutions, and that it is doing so merely to send tallies from a vote-counting centre to the cantonal government’s repository.

The votes themselves are being delivered by paper ballot – which, given the controversies over electronic voting systems, is probably still the most secure way to collect them. In any event, with accusations of overt racism in the campaigning of the right-wing Swiss People’s Party (SVP), hacking of the voting system is perhaps the least of the worries in this election.

But it would be churlish to portray this use of quantum cryptography as worthless. There is no harm in using a high-profile event to advertise the potential benefits of the technology. If nothing else, it will get people asking what quantum cryptography is.

The technique doesn’t actually make transmitted data invulnerable to tampering. Instead, it makes it impossible to interfere with the transmission without leaving a detectable trace. Some quantum cryptographic schemes use the quantum-mechanical property of entanglement, whereby two or more quantum particles are woven together so that they become a single system. Then you can’t do something to one particle without affecting the others with which it is entangled.

Entanglement isn’t essential for quantum encryption – the first such algorithm, devised by physicists Charles Bennett and Gilles Brassard in 1984, instead relies on a property called quantum indeterminacy, denoting our fundamental inability to describe some quantum systems exactly. Entanglement, however, is the key to a popular scheme devised by Artur Ekert in 1991. Here, the sender and receiver each receive one of a pair of entangled particles, and can decode a message by comparing their measurements of the particles’ quantum states. Any eavesdropping tends to randomize the relationship between these states, and is therefore detectable.
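The tamper-evidence logic of the 1984 Bennett–Brassard scheme can be caricatured in a few lines of code. The toy simulation below is my own sketch (the function names and the classical random numbers standing in for quantum measurements are illustrative, not anything from the commercial systems): measuring a bit in the ‘wrong’ basis gives a random outcome, so an eavesdropper who must measure and re-send inevitably corrupts roughly a quarter of the bits that the legitimate parties keep after comparing bases.

```python
import random

def measure(value, prep_basis, meas_basis, rng):
    # Measuring in the preparation basis recovers the encoded bit;
    # measuring in the other basis yields a random outcome.
    return value if meas_basis == prep_basis else rng.randint(0, 1)

def bb84_error_rate(n, eavesdrop, seed=0):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]
    bob_bits = []
    for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases):
        value, basis = bit, ab
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            value = measure(value, basis, eve_basis, rng)  # Eve measures...
            basis = eve_basis                              # ...and re-sends in her basis
        bob_bits.append(measure(value, basis, bb, rng))
    # Sifting: keep only the positions where the two bases happened to agree
    kept = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in kept)
    return errors / len(kept)

print(bb84_error_rate(20000, eavesdrop=False))  # 0.0: sifted keys agree perfectly
print(bb84_error_rate(20000, eavesdrop=True))   # ~0.25: the intrusion leaves a trace
```

In other words, the scheme doesn’t stop the eavesdropper listening; it guarantees that the listening shows up as a tell-tale error rate when sender and receiver compare a sample of their key.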

Quantum cryptography is just one branch of the emerging discipline of quantum information technology, in which phenomena peculiar to the quantum world, such as entanglement, are used to manipulate information. Other applications include quantum computing, in which quantum particles are placed in superposition states – mixtures of the classical states that would correspond to the binary 1’s and 0’s of ordinary computers – to vastly boost the power and capacity of computation. Quantum teleportation – the exact replication of quantum particles at locations remote from the originals – also makes use of entanglement.

The roots of these new areas of quantum physics lie in the early days of quantum theory, when its founders were furiously debating what quantum theory implied about the physical world. Albert Einstein, whose Nobel-winning explanation of the photoelectric effect was one of the cornerstones of quantum mechanics, doubted that quantum particles could really have the fuzzy properties ascribed to them by the theory, to which one could do no more than assign probabilities.

In 1935 Einstein and his colleagues Boris Podolsky and Nathan Rosen proposed a thought experiment that they hoped would show quantum theory to be an incomplete account of physical reality. They showed how it seemed to predict what Einstein called ‘spooky action at a distance’ that operated instantaneously between two particles.

But we now know that this action at a distance is real – it is the result of quantum entanglement. What Einstein considered a self-evident absurdity is simply the way the world is. What’s more, entanglement and superpositions are now recognized as being key to the way our deterministic classical world, where events have definite outcomes, emerges from the murky haze of quantum probabilities.

Bennett was one of the pioneers who showed that these quantum effects aren’t just abstract curiosities, but can be exploited in applications. For this, he will surely get a Nobel prize some time soon.

So far, most researchers have been happy to talk about ‘quantum cryptography’, ‘quantum computing’ and so forth, vaguely gathered under the umbrella phrase of quantum information. But is that a good name for a technology? Charles Tahan, a physicist at the University of Cambridge who is working on these technologies, thinks not. In a recent preprint, he proposes to draw inspiration from Einstein and call it all ‘spookytechnology’.

This, says Tahan, would refer to “all functional devices, systems and materials whose utility relies in whole or in part on higher order quantum properties of matter and energy that have no counterpart in the classical world.” By higher-order, Tahan means things like entanglement and superposition. He argues that his definition is broad enough to contain more than quantum information technology, but not so broad as to be meaningless.

In that respect, Tahan points to the shortcomings of ‘nanotechnology’, a field that is not really a field at all but instead a ragbag of many areas of science and technology ranging from electronics to biomedicine.

But Tahan's label will never stick, because it violates one of the most fundamental prohibitions in scientific naming: don’t be cute. No scientist is going to want to tell people that he or she is working in a field that sounds as though it was invented by Casper the Friendly Ghost. True, the folksy ‘buckyballs’ gained some currency as a term for the fullerene carbon molecules (despite Nature’s best efforts) – but its usage remains a little marginal, and has thankfully never caught on for ‘buckytubes’, which everyone instead calls carbon nanotubes.

Attempts to label nascent fields rarely succeed, for names have a life of their own. ‘Nanotechnology’, when coined in 1974, had nothing like the meaning it has today. ‘Spintronics’, the field of quantum electronics that in some sense lies behind this year’s physics Nobel, is arguably a slightly ugly and brutal amalgam of electronics and the quantum property of electrons called spin – yet somehow it works.

Certainly, names need to be catchy: laboured plunderings of Greek and Latin are never popular. But catchiness is extremely hard to engineer. So somehow I don’t think we’re going to see the Geneva elections become a landmark in spookytechnology.

Thursday, October 18, 2007

How tortoises turn right-side up
[This is a story I’ve just written for Nature’s news site. But the deadline was such that we couldn’t include the researchers’ nice pics of tortoises and turtles doing their stuff. So here are some of them. The first is an ideal monostatic body, and a tortoise that approximates it. The second is a flat turtle righting itself by using its neck as a pivot. The last two are G. elegans shells, which are nearly monostatic.]

Study finds three ways that tortoises avoid getting stuck on their backs.

Flip a tortoise or a turtle over, and it’ll find its feet again. Two researchers have now figured out how they do it — they use a clever combination of shell shape and leg and neck manoeuvres.

As Franz Kafka’s Gregor discovered in Metamorphosis, lying on your back can be bad news if you’re cockroach-shaped. Both cockroaches and tortoises are potentially prone to getting stuck on their rounded backs, their feet flailing in the air.

For tortoises, this is more than an accidental hazard: belligerent males often try to flip opponents over during fights for territorial rights. Gábor Domokos of Budapest University of Technology and Economics and Péter Várkonyi of Princeton University in New Jersey took a mathematical look at real animals to see whether they had evolved sensible shapes to avoid getting stuck [1].

The ideal answer would seem to be to have a shell that can’t get stuck at all — one that will spontaneously roll back under gravity, like the wobbly children's toys that “won’t fall down”. Domokos and Várkonyi have investigated the rolling mechanics of idealized shell shapes, and show that in theory, such self-righting shells do exist. They would be tall domes with a cross-section like a half-circle slightly flattened on one side.

The shells of some tortoises, such as the star tortoise Geochelone elegans, come very close to this shape. They can still get stuck because of small imperfections in the shell shape, but it takes only a little leg-wagging to make the tortoise tip over and right itself. The researchers call tall shells that have a single stable resting orientation (on the tortoise's feet) monostatic, denoted as group S1.
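The monostatic idea can be illustrated with a simplified two-dimensional cartoon (my own sketch, not the method of Domokos and Várkonyi’s paper): for a convex cross-section resting on flat ground, the stable orientations correspond to local minima of the distance from the centre of mass to the boundary, so counting those minima counts the resting positions.

```python
import math

def resting_positions(dist, n=3600):
    # Sample the centre-of-mass-to-boundary distance as a function of
    # direction; local minima correspond to stable resting orientations.
    r = [dist(2 * math.pi * i / n) for i in range(n)]
    return sum(1 for i in range(n)
               if r[i] < r[i - 1] and r[i] < r[(i + 1) % n])

# Elliptical cross-section (semi-axes 2 and 1), centre of mass at the
# centre: two stable positions, like a flat S2 shell.
def ellipse(theta):
    return 2.0 / math.sqrt(math.cos(theta)**2 + 4.0 * math.sin(theta)**2)

# Circular cross-section with the centre of mass shifted towards the base,
# like the self-righting toy: a single stable position, i.e. monostatic.
def weighted_circle(theta):
    s = math.sin(theta)
    return 0.3 * s + math.sqrt(0.91 + 0.09 * s * s)

print(resting_positions(ellipse))          # 2
print(resting_positions(weighted_circle))  # 1
```

The real result is subtler – the paper works with genuinely three-dimensional bodies, where a tall, slightly asymmetric dome is needed – but the cartoon captures why lowering the centre of mass relative to the shell can eliminate the upside-down equilibrium altogether.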

The tall and the squat

So, tall shells are generally good for righting with minimal effort, and confer good protection against the jaws of predators. Could this be the best answer for all turtles and tortoises? No real chelonian has a perfectly monostatic shell, which Várkonyi says is probably because tall shells could have disadvantages too: you could be rolled over by wind, for instance. Also, he says, it takes quite a bit of fine-tuning to achieve a truly monostatic shape.

Flatter shells have other advantages: they can, for example, be better for swimming or for use as spade-like implements for digging. The side-necked turtle and the pancake tortoise are flat like this, with two stable resting positions (S2): right side up and on their back.

For such flat shells, righting requires more than a bit of thrashing around. These animals tend to have long necks, which they extend and use as a pivot while pushing with their legs. The longer the neck, the easier it is for the creature to right itself, in the same way that a long lever can be pushed down with less effort than a short one.

Stuck in the middle

In between these two extremes of tall and flat are shells that are moderately domed, as found in Terrapene box turtles. Surprisingly, these have three stable positions (S3): on the back, on the front or halfway between, where the shell rests on its curved side.

Turtles of the S3 class use a combination of both strategies: bobbing of their head or feet tips the shell from the back-down position to the sideways position, and from there the creature can use its neck and feet to pivot over into the belly-down state.

The work is sure to be of interest to tortoise keepers and kids with turtle pets. But it's unlikely that this tortoise-rolling work is going to suggest new ways to help robots pick themselves up — engineers already have a number of quite simple ways of ensuring that. "You can just put ballast in the bottom," Várkonyi admits.

Reference
1. Domokos, G. & Várkonyi, P. L. Proc. R. Soc. B, doi:10.1098/rspb.2007.1188.

Tuesday, October 16, 2007

We’ll never know how we began

[This is the pre-edited text of my Crucible column for the November issue of Chemistry World.]

Oddly, it is easier to explore the origin of the universe than the origin of life on Earth. ‘Easier’ is a relative term here, because the construction of the Large Hadron Collider at CERN in Geneva makes clear the increasing extravagance needed to push back the curtain ever closer to the singularity of the Big Bang. But we can now reconstruct the origin of our universe from about 10^-30 of a second onwards, and the LHC may take us back into the primordial quark-gluon plasma and the symmetry-breaking transition of the Higgs field that created particle masses.

Yet all this is possible precisely because there is so little room for contingency in the first instants of the Big Bang. The further back we go, the less variation we are likely to find between our universe and another one hypothetically sprung from a cosmic singularity – most of what happened then is constrained by physics. So while the LHC might produce some surprises, it could instead simply confirm what we expected.

The origin of life is totally different. There isn’t really any theory that can tell us about it. It might have happened in many different ways, depending on circumstances of which we know rather little. In this sense, it is a genuinely historical event, immune to first-principles deduction in the same way as are the shapes of the early continents or the events of the Hundred Years War. What we know about the former is largely a matter of extrapolating backwards from the present-day situation, and then searching for geological confirmation. We can do the same for the history of life, constructing phylogenetic trees from comparisons of extant organisms and supplementing that with data from the fossil record. But that approach can tell us little about what life was like before it was really life at all.

For the Hundred Years War there is ample documentary evidence. But for life’s origin around 3.8 billion years ago, the geological ‘documents’ tell us very little indeed. Life left its imprint in the rocks once it was fully fledged, but there is no real data on how it got going.

It is a testament to the tenacity and boldness of scientists that they have set out to explore the question anyway. In 1863 Charles Darwin concluded that there was little point in doing so: “It is mere rubbish”, he wrote, “thinking at present on the origin of life.” But he evidently had a change of heart, since eight years later he could be found musing on his “warm little pond” filled with a broth of prebiotic compounds. By the time Alexander Oparin and J. B. S. Haldane speculated about the formation of organic molecules in primitive atmospheres in the 1920s, experimentalists had already shown that substances such as formaldehyde and the amino acid glycine could be cooked up from carbon oxides, ammonia and water.

There was, then, a long tradition behind the ground-breaking experiment of Harold Urey and Stanley Miller at Chicago in 1953. They, however, were the first to use a reducing mixture, and that is why they found such a rich mélange of organics in their brew. Despite geological evidence suggesting that the early terrestrial atmosphere was mildly oxidizing, Miller remained convinced until his recent death that this was the only plausible way life’s building blocks could have been made – some say his stubbornness on this issue ended up hindering progress in the field.

In some ways, the recent study by Paul von Ragué Schleyer of the University of Georgia and his coworkers of the prebiotic synthesis of the nucleic acid base adenine from hydrogen cyanide (D. Roy et al., Proc. Natl Acad. Sci. USA, doi:10.1073/pnas.0708434104) is a far cry from Urey and Miller’s makeshift ‘bake and shake’ experiment. It uses state-of-the-art quantum chemical calculations to deduce the mechanism of this reaction, first reported by John Oró and coworkers in Texas in 1960, which produces one of the building blocks of life from five molecules of a single, simple ingredient.

But in another sense, the work might be read as an indication that the field initiated by Urey and Miller is close to having run its course in its present form. The most one could have asked of their approach – and it has amply fulfilled this demand – is that it alleviate George Wald’s objection in 1954 that “one only has to contemplate the magnitude of this task to concede that the spontaneous generation of a living organism is impossible.” There are now more or less plausibly ‘prebiotic’ ways to make most of the key molecular ingredients of proteins, RNA, DNA, carbohydrates and other complex biomolecules. There are ingenious ways of linking them together, in defiance of the deconstructive hydrolysis that dilute solution seems to threaten, ranging from surface catalysis on minerals to the use of electrochemical gradients at hot springs. There are theories of cascading complexification through autocatalytic cycles, and the whole framework of the RNA World (the answer to the chicken-and-egg problem of DNA’s dependence on proteins) seems increasingly well motivated.

And yet there is no more evidence than there was fifty years ago that this is how it all happened. Time has kicked over the tracks. The chemical origin of life has become a discipline of immense experimental and theoretical refinement, as this new paper testifies – and yet it all remains guesswork, barely constrained by hard evidence from the Hadaean eon of our planet. The true history is obliterated, and we may never glimpse it.

Sunday, October 07, 2007


Time to rethink the Outer Space Treaty
[This article on Nature’s news site formed part of the journal’s “Sputnik package”.]

An agreement forged 40 years ago can’t by itself keep space free of weaponry.

Few anniversaries have been celebrated with such mixed feelings as the launch of Sputnik-1 half a century ago. That beeping little metal orb, innocuously named “fellow traveller of Earth”, signalled the beginning of satellite telecommunications, global environmental monitoring, and space-based astronomy, as well as the dazzling saga of human journeys into the cosmos. But the flight of Sputnik was also a pivotal moment in the Cold War, a harbinger of intercontinental nuclear missiles and space-based surveillance and spying.

That’s why it seems surprising that another anniversary this year has gone relatively unheralded. In 1967, 90 nations signed the Outer Space Treaty (OST), in theory binding themselves to an agreement on the peaceful uses of space that prohibited the deployment there of weapons of mass destruction. Formally, the treaty remains in force; in practice, it is looking increasingly vulnerable as a protection against the militarization of space.

Updating and reinvigorating the commitments of the OST seems to be urgently needed, but this currently stands little chance of being realized. Among negotiators and diplomats there is now a sense of gloom, a feeling that the era of large-scale international cooperation and legislation on security issues (and perhaps more widely) may be waning.

Last year was the tenth anniversary of the Comprehensive Test Ban Treaty (CTBT), and next year the fortieth anniversary of the Nuclear Non-Proliferation Treaty. But the world’s strongest nuclear power, the United States, refuses to ratify the CTBT, while some commentators believe the world is entering a new phase of nuclear proliferation. No nuclear states have disarmed during the time of the NPT’s existence, despite the binding commitment of signatory states “to pursue negotiations in good faith on effective measures relating to nuclear disarmament”.

In this arena, the situation does seem to be in decline. For example, the US appears set on developing a new generation of nuclear weapons and deploying a ballistic missile defence system, and it withdrew from the Anti-Ballistic Missile Treaty in 2002. China and Israel have also failed to ratify the CTBT, while other nuclear powers (India, Pakistan) have not even signed it. North Korea, which withdrew from the NPT in 2003, now claims to have nuclear weapons.

Given how poorly we have done so close to home, what are the prospects for outer space? “For the past four decades”, says Sergei Ordzhonikidze, Director-General of the United Nations Office at Geneva, “the 1967 Outer Space Treaty has been the cornerstone of international space law. The treaty was a great historic achievement, and it still is. The strategic – and at the same time, noble and peaceful – idea behind [it] was to prevent the extension of an arms race into outer space.”

Some might argue that those goals were attained and that there has been no arms race in space. But a conference [1] convened in Geneva last April by the United Nations Institute for Disarmament Research suggested that the situation is increasingly precarious, and indeed that military uses of space are well underway and likely to expand.

Paradoxically, the thawing of the Cold War is one reason why the OST is losing its restraining power. During a confrontation of two nuclear superpowers, it is rather easy to see (and game theory confirms) that cooperation on arms limitation is in the national interest. But as Sergey Batsanov, Director of the Geneva Office of the Pugwash group for peaceful uses of science, pointed out at the UN meeting, “after the end of the Cold War, disarmament and non-proliferation in their traditional forms could no longer be considered as vital instruments for maintaining the over-all status quo.” Batsanov suggests we are now in a transitional phase of geopolitics in which new power structures are emerging and there is in consequence a “crisis in traditional international institutions, and the erosion, or perhaps evolution, of norms of international law (such as the inviolability of borders and non-interference in another state’s internal affairs).”

It’s not hard to see what he is alluding to there. Certainly, it seems clear that the US plans for maintaining “space superiority” – the “freedom to attack as well as the freedom from attack” – do much to harm international efforts on the demilitarization of space. The tensions created with Russia by US plans to site missile-defence facilities in eastern Europe are just one example of that. James Armor, Director of the US National Security Space Office, indicates that, following the “emergence of space-enabled transitional warfare” using satellite reconnaissance in Operation Desert Storm in Iraq in 1991, military space capabilities have now become “seamlessly integrated into the overall US military structure”.

But it would be unwise and unfair to imply that the United States is a lone ‘rogue agent’. China has made a clear display of military capability in space; as Xu Yansong of the National Space Administration of the People’s Republic of China explained at the UN conference, China’s space activities are aimed not only at “utilizing outer space for peaceful purposes” but “protecting China’s national interests and rights, and comprehensively building up the national strength” – which could be given any number of unsettling interpretations. Yet China, like Russia, has been supportive of international regulation of space activities, and it’s not clear how much of this muscle-flexing is meant to create a bargaining tool.

The real point is that the OST is an agreement forged in a different political climate from that of today. Its military commitments amount to a prohibition of nuclear weapons and other “weapons of mass destruction” in space, and the use of the Moon and other celestial bodies “exclusively for peaceful purposes.” That’s a long way from prohibiting all space weapons. As Kiran Nair of the Indian Air Force argued, “the OST made certain allowances for military uses of outer space [that] were exploited then, and are exploited now and will continue to be so until a balanced agreement on the military utilization of outer space is arrived at.”

What’s more, there was no explicit framework in the OST for consultations, reviews and other interactions that would sustain the treaty and ensure its continued relevance. And as Batsanov says, now there are more players in the arena, and a wider variety of potential threats.

Both Russia and China have called for a new treaty, and earlier this year President Putin announced the draft of such a document. But we don’t necessarily need to ditch the OST and start anew. Indeed, the treaty has already been the launch pad for various other agreements, for example on liability for damage caused by space objects and on the rescue of astronauts. It makes sense to build on structures already in place.

The key to success, however, is to find a way of engaging all the major players. In that respect, the United States still seems the most recalcitrant: its latest National Space Policy, announced in October 2006, states that the OST is sufficient and that the US “will oppose the development of new legal regimes or other restrictions that seek to prohibit or limit US access to or use of space.” In other words, only nuclear space weaponry is to be considered explicitly out of bounds. Armor made the prevailing Hobbesian attitude clear at the Geneva meeting: “In my view, attempts to create regimes or enforcement norms that do not specifically include and build upon military capabilities are likely to be stillborn, sterile and ultimately frustrating efforts.” Whatever framework he envisages, it’s not going to look much like the European Union.

But it needn’t be a matter of persuading nations to be more friendly and less hawkish. There are strong arguments for why pure self-interest in terms of national security (not to mention national expenditure) would be served by the renunciation of all plans to militarize space – just as was the case in 1967. Rebecca Johnson of the Acronym Institute for Disarmament Diplomacy pointed out that after the experience in Iraq, US strategists are “coming to see that consolidating the security of existing assets is more crucial than pursuing the chimera of multi-tiered invulnerability.” The recent Chinese anti-satellite test, far from being a red flag to a bullish military, might be recognized as an indication that no one stays ahead in this race for long, and the US knows well that arms races are debilitating and expensive.

The danger with the current Sputnik celebrations is that they might cast the events in 1957 as pure history, which has now given us a world of Google Earth and the International Space Station. The fact is that Sputnik and its attendant space technologies reveal a firm link between the last world war, with its rocket factories manned by slaves and its culmination in the instant destruction of two cities, and the world we now inhabit. The OST is not merely a legacy of Sputnik but the only real international framework for the way we use space. Unless it can be given fresh life and relevance, we have no grounds for imagining that the military space race is over.

Reference
1. Celebrating the Space Age: 50 Years of Space Technology, 40 Years of the Outer Space Treaty (United Nations Institute for Disarmament Research, Geneva, 2007).