Monday, October 22, 2007

Lucky Jim runs out of luck (at last)

[This is also posted on the Prospect blog.]

Jim Watson seems to be genuinely taken aback by the furore his recent comments on race and IQ have aroused. He looks a little like the teenage delinquent who, after years of being a persistent neighbourhood pest, finds himself suddenly hauled in front of a court and threatened with being sent to a detention centre. Priding himself on being a social irritant, he never imagined anyone would deal with him seriously.

The truth is that there is more than metaphor in this image. Watson has throughout his career combined the intelligence of a first-rate scientist and the influence of a Nobel laureate with the emotional maturity of a spoilt schoolboy. There is nothing particularly remarkable about that – it is not hard to find examples of immaturity among public figures – but the scientific community seems to find it particularly difficult to know how to accommodate such cases. For better or worse, there are plenty of niches for emotionally immature show-offs in politics and the media – the likes of Boris Johnson, Ann Widdecombe, Jeremy Clarkson and Ann Coulter all, in their own ways, manage it with aplomb. (It is not a trait unique to right-wingers, but somehow they seem to do it more memorably.) But although they can sometimes leave po-faced opponents spluttering, the silliness is usually too explicit to be mistaken for anything else.

Science, on the other hand, has tended to be blind to this facet of human variety, so that the likes of Watson come instead to be labelled “maverick” or “controversial”, which of course is precisely what they want. The scientific press tends to handle these figures with kid gloves, pronouncing gravely on the propriety of their “colourful” remarks, as though these are sober individuals who have made a bad error of judgement. Henry Porter is a little closer to the mark in the Observer, where he calls Watson an ‘elderly loon’ – the degree of ridicule is appropriate, except that Watson is no loon, and it has been a widespread mistake to imagine that his comments are a sign of senescence.

The fact is that Watson has always considered it great sport to say foolish things that will offend people. He is of the tiresome tribe that likes to display what they deem to be ‘political incorrectness’ as a badge of pride, forgetting that they would be ignored as bigoted boors if they did not have power and position. It is abundantly clear that behind the director of the Cold Spring Harbor Laboratory still stands the geeky young man depicted behind a model of DNA in the 1950s, whose (eminently deserved) Nobel has protected him from a need to grow up. “He was given licence to say anything that came into his mind and expect to be taken seriously,” said Harvard biologist E. O. Wilson (himself no stranger to controversy, but an individual who exudes far more wisdom and warmth than Watson ever has).

That’s a pitfall for all Nobel laureates, of course, and many are tripped by it. But few have embraced the licence with as much delight as Watson. For example, there was this little gem over a decade ago: “If you are really stupid, I would call that a disease. The lower 10 per cent who really have difficulty, even in elementary school, what’s the cause of it? A lot of people would like to say, ‘Well, poverty, things like that.’ It probably isn't. So I’d like to get rid of that, to help the lower 10 per cent.” Or this one: “Whenever you interview fat people, you feel bad, because you know you’re not going to hire them.”

Watson has been called “extraordinarily naïve” to have made his remarks about race and intelligence and expect to get away with them. But it is not exactly naivety – he probably just assumed that, since he has said such things in the past without major incident, he could do so again. Indeed, he almost did get away with it, until the Independent decided to make it front-page news.

Watson has apologized “unreservedly” for his remarks, which he says were misunderstood. This is mostly a public-relations exercise – it is not clear that there is a great deal of scope for misunderstanding, and evidently Watson now has a genuine concern that he will be dismissed from his post at Cold Spring Harbor. At least by admitting that there is “no scientific basis” for a belief that Africans are somehow “genetically inferior”, he has provided some ammunition to counter the opportunistic use of his remarks by racist groups. But it is inevitable that those groups will now make him a martyr, forced to recant in the manner of Galileo for speaking an unpalatable truth. (The speed with which support for Watson’s comments has come crawling out of the woodwork even in august forums such as Nature’s web site is disturbing.)

The more measured dismay that some, including Richard Dawkins, have voiced over the suppression of free speech implied by the cancellation of some of Watson’s intended UK talks is understandable, although it seems not unreasonable (indeed, it seems rather civil) for an institution to decide it does not especially want to host someone who has just expressed casual racist opinions. More to the point, it is not clear what ‘free speech’ is being suppressed here – Watson does not appear to be wanting to, and being prevented from, making a case that black people are less intelligent than other races. (In fact it is no longer clear what Watson wanted to say at all; the most likely interpretation is that he simply let a groundless prejudice slip out in an attempt to boost his ‘bad boy’ reputation, and that he now regrets it.) In a funny sort of way, Watson would be less deserving of scorn if he were now defending his remarks on the basis of the ‘evidence’ he alluded to. In that event, any kind of censorship would indeed be misplaced.

Beneath the sound and fury, however, we should remember that Watson’s immense achievements as a scientist do not oblige us to take him seriously in any other capacity. Those achievements are orthogonal to his bully-boy bigotry, and they put no distance at all between Watson and the pub boor.

The real casualty in all this is genetics research, for Watson’s comments past and present can only seem (and in fact not just seem) to validate claims that this research is in the hands of scientists with questionable judgement and sense of responsibility.

Friday, October 19, 2007

Swiss elections get spooky
[This is my latest column for muse@nature.com.]

High-profile applications of quantum trickery raise the question of what to call these new technologies. One proposal is unlikely to catch on.

The use of quantum cryptography in the forthcoming Swiss general elections on 21 October may be a publicity stunt, but it highlights the fact that the field of quantum information is now becoming an industry.

The invitation here is to regard Swiss democracy as being safeguarded by the fuzzy shroud of quantum physics, which can in principle provide a tamper-proof method of transmitting information. The reality is that just a single state – Geneva – is using commercial quantum-cryptography technology already trialled by banks and financial institutions, and that it is doing so merely to send tallies from a vote-counting centre to the state government’s repository.

The votes themselves are being delivered by paper ballot – which, given the controversies over electronic voting systems, is probably still the most secure way to collect them. In any event, with accusations of overt racism in the campaigning of the right-wing Swiss People’s Party (SVP), hacking of the voting system is perhaps the least of the worries in this election.

But it would be churlish to portray this use of quantum cryptography as worthless. There is no harm in using a high-profile event to advertise the potential benefits of the technology. If nothing else, it will get people asking what quantum cryptography is.

The technique doesn’t actually make transmitted data invulnerable to tampering. Instead, it makes it impossible to interfere with the transmission without leaving a detectable trace. Some quantum cryptographic schemes use the quantum-mechanical property of entanglement, whereby two or more quantum particles are woven together so that they become a single system. Then you can’t do something to one particle without affecting the others with which it is entangled.

Entanglement isn’t essential for quantum encryption – the first such algorithm, devised by physicists Charles Bennett and Gilles Brassard in 1984, instead relies on a property called quantum indeterminacy, denoting our fundamental inability to describe some quantum systems exactly. Entanglement, however, is the key to a popular scheme devised in 1991. Here, the sender and receiver each receive one of a pair of entangled particles, and can decode a message by comparing their measurements of the particles’ quantum states. Any eavesdropping tends to randomize the relationship between these states, and is therefore detectable.
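The reason eavesdropping is detectable can be demonstrated even in a toy classical simulation. The Python sketch below is a rough caricature of the 1984 Bennett–Brassard logic, of my own devising rather than from any real quantum-cryptography system (the function and parameter names are illustrative): an interceptor who guesses the wrong measurement basis randomizes the bit, so roughly a quarter of the bits that sender and receiver later compare will disagree.

```python
import random

def bb84_error_rate(n_bits=5000, eavesdrop=False, seed=1):
    """Toy BB84-style simulation: return the error rate among the
    'sifted' bits where sender and receiver chose the same basis.
    Purely illustrative -- no real quantum optics here."""
    rng = random.Random(seed)
    errors = matched = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)          # sender's raw bit
        a_basis = rng.randint(0, 1)      # sender's encoding basis
        bit_sent = bit
        if eavesdrop:
            # An eavesdropper measuring in the wrong basis
            # effectively randomizes the transmitted bit
            if rng.randint(0, 1) != a_basis:
                bit_sent = rng.randint(0, 1)
        b_basis = rng.randint(0, 1)      # receiver's measurement basis
        if b_basis == a_basis:           # only matching-basis rounds are kept
            matched += 1
            if bit_sent != bit:
                errors += 1
    return errors / matched

# With no eavesdropper the sifted key is error-free; with one,
# about 25% of the compared bits disagree, betraying the intrusion.
```

Checking the error rate on a random sample of the sifted key is exactly the step that reveals the intruder: a clean channel gives zero errors, a tapped one gives a rate far too high to be noise.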

Quantum cryptography is just one branch of the emerging discipline of quantum information technology, in which phenomena peculiar to the quantum world, such as entanglement, are used to manipulate information. Other applications include quantum computing, in which quantum particles are placed in superposition states – mixtures of the classical states that would correspond to the binary 1’s and 0’s of ordinary computers – to vastly boost the power and capacity of computation. Quantum teleportation – the exact replication of quantum particles at locations remote from the originals – also makes use of entanglement.

The roots of these new areas of quantum physics lie in the early days of quantum theory, when its founders were furiously debating what quantum theory implied about the physical world. Albert Einstein, whose Nobel-winning explanation of the photoelectric effect was one of the cornerstones of quantum mechanics, doubted that quantum particles could really have the fuzzy properties ascribed to them by the theory, to which one could do no more than assign probabilities.

In 1935 Einstein and his colleagues Boris Podolsky and Nathan Rosen proposed a thought experiment that they hoped would show quantum theory to be an incomplete account of physical reality. They showed how it seemed to predict what Einstein called ‘spooky action at a distance’ that operated instantaneously between two particles.

But we now know that this action at a distance is real – it is the result of quantum entanglement. What Einstein considered a self-evident absurdity is simply the way the world is. What’s more, entanglement and superpositions are now recognized as being key to the way our deterministic classical world, where events have definite outcomes, emerges from the murky haze of quantum probabilities.

Bennett was one of the pioneers who showed that these quantum effects aren’t just abstract curiosities, but can be exploited in applications. For this, he will surely get a Nobel prize some time soon.

So far, most researchers have been happy to talk about ‘quantum cryptography’, ‘quantum computing’ and so forth, vaguely gathered under the umbrella phrase of quantum information. But is that a good name for a technology? Charles Tahan, a physicist at the University of Cambridge who is working on these technologies, thinks not. In a recent preprint, he proposes to draw inspiration from Einstein and call it all ‘spookytechnology’.

This, says Tahan, would refer to “all functional devices, systems and materials whose utility relies in whole or in part on higher order quantum properties of matter and energy that have no counterpart in the classical world.” By higher-order, Tahan means things like entanglement and superposition. He argues that his definition is broad enough to contain more than quantum information technology, but not so broad as to be meaningless.

In that respect, Tahan points to the shortcomings of ‘nanotechnology’, a field that is not really a field at all but instead a ragbag of many areas of science and technology ranging from electronics to biomedicine.

But Tahan's label will never stick, because it violates one of the most fundamental prohibitions in scientific naming: don’t be cute. No scientist is going to want to tell people that he or she is working in a field that sounds as though it was invented by Casper the Friendly Ghost. True, the folksy ‘buckyballs’ gained some currency as a term for the fullerene carbon molecules (despite Nature’s best efforts) – but its usage remains a little marginal, and has thankfully never caught on for ‘buckytubes’, which everyone instead calls carbon nanotubes.

Attempts to label nascent fields rarely succeed, for names have a life of their own. ‘Nanotechnology’, when coined in 1974, had nothing like the meaning it has today. ‘Spintronics’, the field of quantum electronics that in some sense lies behind this year’s physics Nobel, is arguably a slightly ugly and brutal amalgam of electronics and the quantum property of electrons called spin – yet somehow it works.

Certainly, names need to be catchy: laboured plunderings of Greek and Latin are never popular. But catchiness is extremely hard to engineer. So somehow I don’t think we’re going to see the Geneva elections become a landmark in spookytechnology.

Thursday, October 18, 2007

How tortoises turn right-side up
[This is a story I’ve just written for Nature’s news site. But the deadline was such that we couldn’t include the researchers’ nice pics of tortoises and turtles doing their stuff. So here are some of them. The first is an ideal monostatic body, and a tortoise that approximates it. The second is a flat turtle righting itself by using its neck as a pivot. The last two are G. elegans shells, which are nearly monostatic.]

Study finds three ways that tortoises avoid getting stuck on their backs.

Flip a tortoise or a turtle over, and it’ll find its feet again. Two researchers have now figured out how they do it — they use a clever combination of shell shape and leg and neck manoeuvres.

As Franz Kafka’s Gregor discovered in Metamorphosis, lying on your back can be bad news if you’re cockroach-shaped. Both cockroaches and tortoises are potentially prone to getting stuck on their rounded backs, their feet flailing in the air.

For tortoises, this is more than an accidental hazard: belligerent males often try to flip opponents over during fights for territorial rights. Gábor Domokos of Budapest University of Technology and Economics and Péter Várkonyi of Princeton University in New Jersey took a mathematical look at real animals to see whether they had evolved sensible shapes to avoid getting stuck [1].

The ideal answer would seem to be to have a shell that can’t get stuck at all — one that will spontaneously roll back under gravity, like the wobbly children's toys that “won’t fall down”. Domokos and Várkonyi have investigated the rolling mechanics of idealized shell shapes, and show that in theory, such self-righting shells do exist. They would be tall domes with a cross-section like a half-circle slightly flattened on one side.

The shells of some tortoises, such as the star tortoise Geochelone elegans, come very close to this shape. They can still get stuck because of small imperfections in the shell shape, but it takes only a little leg-wagging to make the tortoise tip over and right itself. The researchers call tall shells that have a single stable resting orientation (on the tortoise's feet) monostatic, denoted as group S1.
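The classification by number of stable resting positions can be illustrated with a crude two-dimensional sketch. For a convex cross-section rolling on flat ground, stable orientations correspond to local minima of the distance from the centre of mass to the boundary (the centre of mass sits locally as low as it can). The Python fragment below is a toy of my own, not the researchers’ actual analysis; it counts those minima for an ellipse, which, like the flat S2 shells, has two:

```python
import math

def stable_orientations(boundary, n=3600):
    """Count stable resting orientations of a convex 2-D cross-section.
    `boundary` maps an angle t in [0, 2*pi) to a point (x, y).
    Stability is judged by strict local minima of the distance from
    the centroid to the boundary. Illustrative sketch only."""
    pts = [boundary(2 * math.pi * i / n) for i in range(n)]
    # centroid of the boundary samples (adequate for this sketch)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    r = [math.hypot(p[0] - cx, p[1] - cy) for p in pts]
    minima = 0
    for i in range(n):
        if r[i] < r[i - 1] and r[i] < r[(i + 1) % n]:
            minima += 1
    return minima

# A 2:1 ellipse rests stably on either long side, like an S2 shell;
# a monostatic S1 profile would return 1.
ellipse = lambda t: (2.0 * math.cos(t), 1.0 * math.sin(t))
```

In this toy picture a monostatic shape is simply one whose distance function has a single strict minimum; the real three-dimensional problem, which Domokos and Várkonyi treat properly, also has to track rolling about every axis.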

The tall and the squat

So, tall shells are generally good for righting with minimal effort, and confer good protection against the jaws of predators. Could this be the best answer for all turtles and tortoises? No real chelonian has a perfectly monostatic shell, which Várkonyi says is probably because tall shells could have disadvantages too: you could be rolled over by wind, for instance. Also, he says, it takes quite a bit of fine-tuning to achieve a truly monostatic shape.

Flatter shells have other advantages: they can, for example, be better for swimming or for use as spade-like implements for digging. The side-necked turtle and the pancake tortoise are flat like this, with two stable resting positions (S2): right side up and on their back.

For such flat shells, righting requires more than a bit of thrashing around. These animals tend to have long necks, which they extend and use as a pivot while pushing with their legs. The longer the neck, the easier it is for the creature to right itself, in the same way that a long lever can be pushed down with less effort than a short one.

Stuck in the middle

In between these two extremes of tall and flat are shells that are moderately domed, as found in Terrapene box turtles. Surprisingly, these have three stable positions (S3): on the back, on the front or halfway between, where the shell rests on its curved side.

Turtles of the S3 class use a combination of both strategies: bobbing of their head or feet tips the shell from the back-down position to the sideways position, and from there the creature can use its neck and feet to pivot over into the belly-down state.

The work is sure to be of interest to tortoise keepers and kids with turtle pets. But it's unlikely that this tortoise-rolling work is going to suggest new ways to help robots pick themselves up — engineers already have a number of quite simple ways of ensuring that. "You can just put ballast in the bottom," Várkonyi admits.

Reference
1. Domokos, G. & Várkonyi, P. L. Proc. R. Soc. B, doi:10.1098/rspb.2007.1188.

Tuesday, October 16, 2007

We’ll never know how we began

[This is the pre-edited text of my Crucible column for the November issue of Chemistry World.]

Oddly, it is easier to explore the origin of the universe than the origin of life on Earth. ‘Easier’ is a relative term here, because the construction of the Large Hadron Collider at CERN in Geneva makes clear the increasing extravagance needed to push back the curtain ever closer to the singularity of the Big Bang. But we can now reconstruct the origin of our universe from about 10^-30 of a second onwards, and the LHC may take us back into the primordial quark-gluon plasma and the symmetry-breaking transition of the Higgs field that created particle masses.

Yet all this is possible precisely because there is so little room for contingency in the first instants of the Big Bang. The further back we go, the less variation we are likely to find between our universe and another one hypothetically sprung from a cosmic singularity – most of what happened then is constrained by physics. So while the LHC might produce some surprises, it could instead simply confirm what we expected.

The origin of life is totally different. There isn’t really any theory that can tell us about it. It might have happened in many different ways, depending on circumstances of which we know rather little. In this sense, it is a genuinely historical event, immune to first-principles deduction in the same way as are the shapes of the early continents or the events of the Hundred Years War. What we know about the former is largely a matter of extrapolating backwards from the present-day situation, and then searching for geological confirmation. We can do the same for the history of life, constructing phylogenetic trees from comparisons of extant organisms and supplementing that with data from the fossil record. But that approach can tell us little about what life was like before it was really life at all.

For the Hundred Years War there is ample documentary evidence. But for life’s origin around 3.8 billion years ago, the geological ‘documents’ tell us very little indeed. Life left its imprint in the rocks once it was fully fledged, but there is no real data on how it got going.

It is a testament to the tenacity and boldness of scientists that they have set out to explore the question anyway. In 1863 Charles Darwin concluded that there was little point in doing so: “It is mere rubbish”, he wrote, “thinking at present on the origin of life.” But he evidently had a change of heart, since eight years later he could be found musing on his “warm little pond” filled with a broth of prebiotic compounds. By the time Alexander Oparin and J. B. S. Haldane speculated about the formation of organic molecules in primitive atmospheres in the 1920s, experimentalists had already shown that substances such as formaldehyde and the amino acid glycine could be cooked up from carbon oxides, ammonia and water.

There was, then, a long tradition behind the ground-breaking experiment of Harold Urey and Stanley Miller at Chicago in 1953. They, however, were the first to use a reducing mixture, and that is why they found such a rich mélange of organics in their brew. Despite geological evidence suggesting that the early terrestrial atmosphere was mildly oxidizing, Miller remained convinced until his recent death that this was the only plausible way life’s building blocks could have been made – some say his stubbornness on this issue ended up hindering progress in the field.

In some ways, the recent study by Paul von Ragué Schleyer of the University of Georgia and his coworkers of the prebiotic synthesis of the nucleic acid base adenine from hydrogen cyanide (D. Roy et al., Proc. Natl Acad. Sci. USA doi:10.1073/pnas.0708434104) is a far cry from Urey and Miller’s makeshift ‘bake and shake’ experiment. It uses state-of-the-art quantum chemical calculations to deduce the mechanism of this reaction, first reported by John Oró and coworkers in Texas in 1960, which produces one of the building blocks of life from five molecules of a single, simple ingredient.

But in another sense, the work might be read as an indication that the field initiated by Urey and Miller is close to having run its course in its present form. The most one could have asked of their approach – and it has amply fulfilled this demand – is that it alleviate George Wald’s objection in 1954 that “one only has to contemplate the magnitude of this task to concede that the spontaneous generation of a living organism is impossible.” There are now more or less plausibly ‘prebiotic’ ways to make most of the key molecular ingredients of proteins, RNA, DNA, carbohydrates and other complex biomolecules. There are ingenious ways of linking them together, in defiance of the deconstructive hydrolysis that dilute solution seems to threaten, ranging from surface catalysis on minerals to the use of electrochemical gradients at hot springs. There are theories of cascading complexification through autocatalytic cycles, and the whole framework of the RNA World (the answer to the chicken-and-egg problem of DNA’s dependence on proteins) seems increasingly well motivated.

And yet there is no more evidence than there was fifty years ago that this is how it all happened. Time has kicked over the tracks. The chemical origin of life has become a discipline of immense experimental and theoretical refinement, as this new paper testifies – and yet it all remains guesswork, barely constrained by hard evidence from the Hadaean eon of our planet. The true history is obliterated, and we may never glimpse it.

Sunday, October 07, 2007

Time to rethink the Outer Space Treaty
[This article on Nature’s news site formed part of the journal’s “Sputnik package”.]

An agreement forged 40 years ago can’t by itself keep space free of weaponry.

Few anniversaries have been celebrated with such mixed feelings as the launch of Sputnik-1 half a century ago. That beeping little metal orb, innocuously named “fellow traveller of Earth”, signalled the beginning of satellite telecommunications, global environmental monitoring, and space-based astronomy, as well as the dazzling saga of human journeys into the cosmos. But the flight of Sputnik was also a pivotal moment in the Cold War, a harbinger of intercontinental nuclear missiles and space-based surveillance and spying.

That’s why it seems surprising that another anniversary this year has gone relatively unheralded. In 1967, 90 nations signed the Outer Space Treaty (OST), in theory binding themselves to an agreement on the peaceful uses of space that prohibited the deployment there of weapons of mass destruction. Formally, the treaty remains in force; in practice, it is looking increasingly vulnerable as a protection against the militarization of space.

Updating and reinvigorating the commitments of the OST seems urgently needed, but this currently stands little chance of being realized. Among negotiators and diplomats there is now a sense of gloom, a feeling that the era of large-scale international cooperation and legislation on security issues (and perhaps more widely) may be waning.

Last year was the tenth anniversary of the Comprehensive Test Ban Treaty (CTBT), and next year the fortieth anniversary of the Nuclear Non-Proliferation Treaty. But the world’s strongest nuclear power, the United States, refuses to ratify the CTBT, while some commentators believe the world is entering a new phase of nuclear proliferation. No nuclear states have disarmed during the time of the NPT’s existence, despite the binding commitment of signatory states “to pursue negotiations in good faith on effective measures relating to nuclear disarmament”.

In this arena, the situation does seem to be in decline. For example, the US appears set on developing a new generation of nuclear weapons and deploying a ballistic missile defence system, and it withdrew from the Anti-Ballistic Missile Treaty in 2002. China and Israel have also failed to ratify the CTBT, while other nuclear powers (India, Pakistan) have not even signed it. North Korea, which withdrew from the NPT in 2003, now claims to have nuclear weapons.

Given how poorly we have done so close to home, what are the prospects for outer space? “For the past four decades”, says Sergei Ordzhonikidze, Director-General of the United Nations Office at Geneva, “the 1967 Outer Space Treaty has been the cornerstone of international space law. The treaty was a great historic achievement, and it still is. The strategic – and at the same time, noble and peaceful – idea behind [it] was to prevent the extension of an arms race into outer space.”

Some might argue that those goals were attained and that there has been no arms race in space. But a conference [1] convened in Geneva last April by the United Nations Institute for Disarmament Research suggested that the situation is increasingly precarious, and indeed that military uses of space are well underway and likely to expand.

Paradoxically, the thawing of the Cold War is one reason why the OST is losing its restraining power. During a confrontation of two nuclear superpowers, it is rather easy to see (and game theory confirms) that cooperation on arms limitation is in the national interest. But as Sergey Batsanov, Director of the Geneva Office of the Pugwash group for peaceful uses of science, pointed out in the UN meeting, “after the end of the Cold War, disarmament and non-proliferation in their traditional forms could no longer be considered as vital instruments for maintaining the over-all status quo.” Batsanov suggests we are now in a transitional phase of geopolitics in which new power structures are emerging and there is in consequence a “crisis in traditional international institutions, and the erosion, or perhaps evolution, of norms of international law (such as the inviolability of borders and non-interference in another state’s internal affairs).”

It’s not hard to see what he is alluding to there. Certainly, it seems clear that the US plans for maintaining “space superiority” – the “freedom to attack as well as the freedom from attack” – do much to harm international efforts on demilitarization of space. The tensions created with Russia by US plans to site missile defence facilities in eastern Europe are just one example of that. James Armor, Director of the US National Security Space Office, indicates that, following the “emergence of space-enabled transitional warfare” using satellite reconnaissance in Operation Desert Storm in Iraq in 1991, military space capabilities have now become “seamlessly integrated into the overall US military structure”.

But it would be unwise and unfair to imply that the United States is a lone ‘rogue agent’. China has made a clear display of military capability in space; as Xu Yansong of the National Space Administration of the People’s Republic of China explained at the UN conference, China’s space activities are aimed not only at “utilizing outer space for peaceful purposes” but “protecting China’s national interests and rights, and comprehensively building up the national strength” – which could be given any number of unsettling interpretations. Yet China, like Russia, has been supportive of international regulation of space activities, and it’s not clear how much of this muscle-flexing is meant to create a bargaining tool.

The real point is that the OST is an agreement forged in a different political climate from that of today. Its military commitments amount to a prohibition of nuclear weapons and other “weapons of mass destruction” in space, and the use of the Moon and other celestial bodies “exclusively for peaceful purposes.” That’s a long way from prohibiting all space weapons. As Kiran Nair of the Indian Air Force argued, “the OST made certain allowances for military uses of outer space [that] were exploited then, and are exploited now and will continue to be so until a balanced agreement on the military utilization of outer space is arrived at.”

What’s more, there was no explicit framework in the OST for consultations, reviews and other interactions that would sustain the treaty and ensure its continued relevance. And as Batsanov says, now there are more players in the arena, and a wider variety of potential threats.

Both Russia and China have called for a new treaty, and earlier this year President Putin announced the draft of such a document. But we don’t necessarily need to ditch the OST and start anew. Indeed, the treaty has already been the launch pad for various other agreements, for example on liability for damage caused by space objects and on the rescue of astronauts. It makes sense to build on structures already in place.

The key to success, however, is to find a way of engaging all the major players. In that respect, the United States still seems the most recalcitrant: its latest National Space Policy, announced in October 2006, states that the OST is sufficient and that the US “will oppose the development of new legal regimes or other restrictions that seek to prohibit or limit US access to or use of space.” In other words, only nuclear space weaponry is to be considered explicitly out of bounds. Armor made the prevailing Hobbesian attitude clear at the Geneva meeting: “In my view, attempts to create regimes or enforcement norms that do not specifically include and build upon military capabilities are likely to be stillborn, sterile and ultimately frustrating efforts.” Whatever framework he envisages, it’s not going to look much like the European Union.

But it needn’t be a matter of persuading nations to be more friendly and less hawkish. There are strong arguments for why pure self-interest in terms of national security (not to mention national expenditure) would be served by the renunciation of all plans to militarize space – just as was the case in 1967. Rebecca Johnson of the Acronym Institute for Disarmament Diplomacy pointed out that after the experience in Iraq, US strategists are “coming to see that consolidating the security of existing assets is more crucial than pursuing the chimera of multi-tiered invulnerability.” The recent Chinese anti-satellite test, far from being a red flag to a bullish military, might be recognized as an indication that no one stays ahead in this race for long, and the US knows well that arms races are debilitating and expensive.

The danger with the current Sputnik celebrations is that they might cast the events in 1957 as pure history, which has now given us a world of Google Earth and the International Space Station. The fact is that Sputnik and its attendant space technologies reveal a firm link between the last world war, with its rocket factories manned by slaves and its culmination in the instant destruction of two cities, and the world we now inhabit. The OST is not merely a legacy of Sputnik but the only real international framework for the way we use space. Unless it can be given fresh life and relevance, we have no grounds for imagining that the military space race is over.

Reference
1. Celebrating the Space Age: 50 Years of Space Technology, 40 Years of the Outer Space Treaty (United Nations Institute for Disarmament Research, Geneva, 2007).

Wednesday, October 03, 2007

Yet more memory of water

This month’s issue of Chemistry World carries a letter from Martin Chaplin and Peter Fisher in response to my column discussing the special issue of Homeopathy on the ‘memory of water’. Mark Peplow asked if I wanted to respond, but I told him that he should regard publication of my response as strictly optional. In the event, he rightly chose to use the space to include another letter on the topic. So here for the record is Martin and Peter’s letter, and my response. I suppose I could be a little annoyed by the misrepresentation of what I said at the end of their letter, but I’m happy to regard it as miscomprehension.


From Martin Chaplin and Peter Fisher


We put together the ‘Memory of water’ issue of the journal Homeopathy, the subject of Philip Ball’s recent column (Chemistry World, September 2007, p38), to show the current state of play. It contained all the current scientific views representing the different experimental and theoretical approaches to the ‘memory of water’ phenomena. Some may be important and others less so, but now the different areas of the field can be fairly judged. The papers mostly demonstrated the similar theme that water preparations may have unexpected properties, contain unexpected solutes and show unexpected changes with time; all very worthy of investigation. Although not the main purpose of the papers, we show the problems as much as the potential of these changed properties in relation to homeopathy.

Ball skirts over the unexpected experimental findings that he finds ‘puzzling’, so ignoring the very heart of the phenomena we are investigating and misinterpreting the issue. He backs up his argument with statements concerning pure water and silicate solutions that are clearly not relevant to the present discussion. Also, he uses Irving Langmuir to prop up his argument. This is fitting as Langmuir dismissed the Jones-Ray effect (http://www.lsbu.ac.uk/water/explan5.html#JR), whereby the surface tension of water is now known to be reduced by low concentrations of some ions, as this disagreed with his own theories. Finally Ball finishes with the amazing view that he knows the structure of water in such solutions with great confidence; I wish he would share that knowledge with the rest of us.

M F Chaplin CChem FRSC, London, UK

P Fisher, Editor, Homeopathy, Luton, UK



Response from Philip Ball

I have discussed elsewhere some of the experimental papers to which Chaplin and Fisher refer (see http://www.nature.com/news/2007/070806/full/070806-6.html and www.philipball.blogspot.com). Some of those observations are intriguing, but each raises its own unique set of questions and concerns, and they couldn’t possibly all be discussed in my column. Langmuir’s ideas feature nowhere in my argument; I simply point out that he coined the term ‘pathological science.’ If the issues I raise about silicate self-organization are not relevant to the discussion, why do Anick and Ives mention them in their paper? And I never stated that I or anyone else knows the structure of water or aqueous solutions with great confidence; I merely said that there are some things we do know with confidence about water’s molecular-scale structure (such as the timescale of hydrogen-bond making and breaking in the pure liquid), and they should not be ignored.

Monday, October 01, 2007

What’s God got to do with it

There’s a curious article in the September issue of the New Humanist by Yves Gingras, a historian and sociologist of science at the University of Quebec. Gingras is unhappy that scientists are using references to God to sell their science (or rather, their books), thereby “wrap[ping] modern scientific discoveries in an illusory shroud that insinuates a link between cutting-edge science and solutions to the mysteries of life, the origins of the universe and spirituality.” But who are these unscrupulous bounders? Well… Paul Davies, and… and Paul Davies, and… ah, and Frank Tipler. Well yes, Tipler. My colleagues and I decided recently that we should introduce the notion of the Tipler Point, being the point beyond which scientists lose the plot and start rambling about the soul/immortality/parallels between physics and Buddhism. A Nobel prize is apt to take you several notches closer to the Tipler Point, though clearly it’s not essential. And such mention of Buddhism brings us to Fritjof Capra, and if we’re going to admit him to the ranks of ‘scientists’ who flirt with mysticism then the game is over and we might as well bring in Carl Jung and Rudolf Steiner.

Gingras suggests that the anthropic principle is “bizarre and clearly unscientific”, and that it has affinities with intelligent design. Now, I’m no fan of the anthropic principle (see here), but I will concede that it is actually an attempt to do the very opposite of what intelligent design proposes – to obviate the need to interpret the incredible fine-tuning of the physical universe as evidence of design. The fact is that this fine-tuning is one of the most puzzling issues in modern physics, and if I were a Christian of the sort who believes in a Creator (not all have that materialist outlook), I’d seize on this as a pretty strong indication that my beliefs are on to something. The Templeton Foundation, another of Gingras’s targets, has hosted some thoughtful meetings on the theme of fine-tuning, and while I’m agnostic about the value and/or motives of the Templeton Foundation, I don’t obviously see a need to knock them for raising the question.

Paul Davies has indeed hit a lucrative theme in exploring theological angles of modern cosmology, but he does so in a measured and interesting way in which I don’t at all recognize Gingras’s description of “X-files science” or an “oscillation between science and the paranormal.” Frankly, I’m not sure Gingras is on top of his subject – when, as I expected resignedly, he fishes out Stephen Hawking’s famous “mind of God” allusion, he seems to see it as a serious suggestion, and not simply as an attempt by an excellent scientist but indifferent writer to inject a bit of pizzazz into his text. Hawking’s reference is obviously theologically naïve, and gains supposed gravitas only because of the oracular status that Hawking has, for rather disturbing reasons, been accorded.

Still, I suppose I will also be deemed guilty of peddling religious pseudo-science for daring to look, in my next book, at the theological origins of science in the twelfth century…

Friday, September 28, 2007


Space experiments should be a cheap shot
[This is the pre-edited version of my latest article for muse@nature.com - with some added comment at the end.]

We rarely learn anything Earth-shaking from space labs, which is why inexpensive missions like Foton-M3 are the way to go.

Space experiments have rarely seemed as much fun as they do on the European Space Agency’s Foton-M3 mission, which blasted off two weeks ago from the Russian launch site at Baikonur in Kazakhstan for a 12-day spell in low-Earth orbit. Among the experiments in the 400-kg payload were an exploration of capsule delivery to Earth using a 30-km space tether, a study of how human balance works and an investigation, made by sticking a chunk of Scottish rock onto the spacecraft’s side, of whether organic matter in space rocks can withstand the heat of orbital re-entry so that life could be transferred between planets, as posited in the ‘panspermia’ hypothesis.

None of these experiments seems likely by itself to lead to any major new discoveries or technological breakthroughs. And none can be considered terribly urgent – the balance study, which looks at how a balance organ called the otolith grows in larval fish in zero gravity, has been on the shelf for years, the first attempt being one of the minor casualties of the ill-fated Columbia space shuttle mission in early 2003.

But it would be churlish to criticize Foton-M3 for the incremental nature of its science. Most scientific research in general is like that, and the roster of experiments is not only impressively long for such a relatively cheap mission but also appealingly diverse, spanning subjects from microbiology to geoscience to condensed-matter physics.

What’s more, the tether experiment has arisen from a project called the Young Engineers’ Satellite 2 (YES2), involving more than 450 students throughout Europe. The aim is to use a tether to slow down an object falling back into Earth’s gravity from a spacecraft so that it continues falling instead of being captured in orbit. This could offer a cheap way of delivering payloads from space to Earth.
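The orbital mechanics behind the idea can be sketched in a few lines. A payload hanging at the lower tip of a gravity-gradient tether circles at the platform’s angular rate, so it moves slower than the circular speed for its own altitude; when released, it drops onto an ellipse whose perigee lies roughly seven tether lengths below the platform’s orbit. Here is a minimal sketch in Python (the 300 km circular orbit is an illustrative assumption, not the actual Foton-M3 parameters):

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def perigee_after_release(platform_alt_m, tether_len_m):
    """Perigee altitude of a payload released from the lower tip of a
    hanging (gravity-gradient) tether on a circular-orbit platform."""
    r0 = R_EARTH + platform_alt_m     # platform orbital radius
    r1 = r0 - tether_len_m            # release radius at the lower tip
    omega = math.sqrt(MU / r0**3)     # platform's angular rate
    v1 = omega * r1                   # tip speed: subcircular for radius r1
    energy = 0.5 * v1**2 - MU / r1    # specific orbital energy after release
    a = -MU / (2 * energy)            # semi-major axis of the new orbit
    # The release velocity is tangential and subcircular, so the release
    # point is the apogee of the new orbit: perigee radius = 2a - r1.
    return 2 * a - r1 - R_EARTH

# Illustrative numbers: a 300 km circular orbit and the 30 km tether
# quoted above. The perigee drops by roughly seven tether lengths
# (about 210 km), low enough for atmospheric re-entry.
perigee = perigee_after_release(300e3, 30e3)
```

This is why the tether need not be fully unreeled to achieve nothing at all: a partial deployment simply gives a smaller perigee drop, which may be why the YES2 capsule still came down, if not where intended.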

Admittedly the experiment seems not to have quite worked out as planned, because apparently not all the tether unreeled. And the notion of finding a cheap postal method for the indefensibly expensive white elephant known as the International Space Station, which has so far yielded very little worth delivering in the first place, is rather hard to swallow.

But as a way to engage students in serious space research that poses interesting scientific and technological questions and might conceivably find uses in the future, YES2 can’t be faulted.

Foton-M3 does evoke a degree of déjà vu – how many earlier space experiments have claimed to be “improving our understanding of protein structure by growing protein crystals in weightlessness”, or learning about loss of bone mass in astronauts? But there are bound to be some duds in over 40 experiments.

What’s curious about some of these is that they threaten to undermine their own justification. If we can design robotic instruments to look at the growth of bone or tissue cells so that we can predict how astronauts might fare on long-term space missions, can we not design robots to replace those very astronauts? Preparing the ground for human space exploration demands such advances in automation that, by the time we’re ready, we’ll have run out of good scientific reasons for it. There may be non-scientific arguments, such as the educational and inspirational value, but a mission like Foton-M3 at least raises doubts about whether there is any good reason for near-Earth manned spaceflight.

A report by the UK Royal Astronomical Society (RAS) Commission of the Scientific Case for Human Space Exploration, published in 2005, seems to challenge such scepticism. It claimed, for example, that “the capabilities of robotic spacecraft will fall well short of those of human explorers for the foreseeable future.”

But what this turns out to amount to is a statement of the obvious: robots are nowhere near achieving human-like intelligence and decision-making capabilities. There’s no doubt that having humans on site will permit more flexible, faster and more thoughtful responses to unexpected circumstances in lunar or planetary exploration. But since one can probably have ten robotic missions for the price of one manned (and since it might soon take as little as three months to get to Mars), that isn’t obviously a clinching argument, especially when you think about the cost of failure – the success rate for Mars missions is so far not much more than 1 in 4. And robots are, in many ways, considerably more robust than humans.

The RAS report also claimed that “there are benefits for medical science to be gained from studying the human physiological response to low and zero gravity [and] to the effects of radiation.” This claim drew heavily on a letter from the UK Space Biomedicine Group (UKSBG), whom one might imagine to be rather disposed to the idea in the first place. They claim that studying bone demineralization in micro- and zero gravity “could dramatically improve the understanding and treatment of osteoporosis.”

That’s why space experiments like those on board Foton-M3 are relevant to the debate. One experiment on the mission looks at precisely this question of bone mass loss using bovine bone; another involves bone-forming and bone-degrading cells cultured in vitro. In other words, one of the key putative health spinoffs of human spaceflight, according to the RAS Commission, is already being studied in cheap unmanned missions. It is conceivable that we would learn something (the UKSBG doesn’t specify what) from live humans that we would not from dead cows, or from live mice or human cell cultures. But should that unknown increment weigh heavily on the scales that the RAS were seeking to balance?

The considerations raised by the RAS report also bear on the question of why it is that such experiments have enjoyed sustained support in the past despite being pretty uninspiring and lacking in real technological payoff. If we think (rightly or wrongly) that it is intrinsically interesting to blast people into space, we’ll tend to feel that way about the stuff they do there too (so that a golf drive in space makes headlines).

Thus, many space experiments, such as the recent demonstration that Salmonella bacteria on the space shuttle Atlantis were more virulent in zero gravity [1], gain interest not because of the results in themselves but because of the very fact of their having been obtained in space. That particular result was already known from microgravity experiments on Earth, and in any event much of the interest centred on whether it means astronauts will suffer more from germs. The glamour that seems to attach to space experiments almost invariably distorts the import of what they find, all the more so because they are used as their own justification: “look at what space experiments can tell us about stuff that happens in space!”

As a result, Foton-M3 provides a nice illustration of proper cost-benefit thinking. The ‘panspermia’ tests, say, operated by a team from the University of Aberdeen, will at best provide a useful addition to a wealth of previous studies on space- and impact-resistance of organic matter and living organisms. A study of temperature and concentration fluctuations in fluids provides a nice verification of a result that was generally expected on theoretical grounds – it is the kind of experiment that would be undertaken without hesitation if it could be done in a lab, but which would certainly not warrant its own dedicated space mission.

In other words, when Foton-M3 plummeted back down to Earth near the Russian/Kazakh border on Wednesday [26 September], it should have blown a big hole in starry-eyed visions of space experimentation. This is how it should really be done: modest but intrinsically interesting investigations, realised at a modest cost, and performed by robots.

Reference

1. Wilson, J. W. et al. Proc. Natl Acad. Sci. USA online early edition, doi:10.1073/pnas.0707 (2007).

The more I think about it, the worse the RAS report seems. When it comes to space exploration generally, they do a fair job of taking into consideration the fact that robots can be guided remotely by human intelligence, and don’t need to be autonomous decision-makers. But even this was rather specious in its use of deep-sea engineering as a means of comparison – getting humans to the sea floor, and the hazards they face there, hardly compares with sending them to Mars. When, however, the discussion turned to biomedical spinoffs, the RAS Commission seemed to forget all about doing things robotically – they simply pleaded lack of expertise, which meant they seemingly relied entirely on the testimonies of the UKSBG and human spaceflight advocate Kevin Fong. At no point do they seem to ask whether the biomedical benefits proposed might be obtained equally in unmanned missions. As far as osteoporosis goes, for example, the question is not whether manned spaceflight might tell us something about it but whether:
1. there are critical questions about the condition that can be answered only by micro- or zero-gravity studies; and
2. these questions can only be answered by studying live human subjects and not animals or cell cultures.
The UKSBG point to no such specific questions, and I rather doubt that they could. (Certainly, it is not as though we need to study astronauts in order to monitor human bone mass loss in vivo.) If there are not good answers to these points, the RAS should not be using this line as a reason for human space exploration (as opposed to stuff you might as well do if you’re going up there anyway).
It’s the same story for the work on Salmonella that I mention. There are vague promises of improved understanding of the emergence of virulent strains on Earth, but no indication of why a space experiment will really tell you much more in this regard than a terrestrial simulation of zero G. Much of the interest seems to centre on the question of whether astronauts would face nastier bugs, which of course becomes an issue only if you put them up there in the first place. This is the kind of fuzzy thinking that defenders of human space exploration get away with all the time.

Wednesday, September 26, 2007

Hybrids and helium
[This is the pre-edited version of my Lab Report column for the October issue of Prospect.]

It’s not obvious that, when the Human Fertilisation and Embryology Authority was established in 1991, anyone involved had much inkling of the murky waters it would be required to patrol. The HFEA was envisaged primarily as a body for regulating assisted conception, and so it seemed sensible to give it regulatory powers over human embryo research more generally. Sixteen years later, the HFEA is having to pronounce on issues that have little bearing on fertility and conception, but instead concern biological research that some say is blurring the boundaries of what it means to be human.

So far, the HFEA has remained commendably aloof from the ill-founded fears that this research attracts. Its latest permissive ruling on the creation of human-animal hybrid cells is the outcome of sober and informed consideration of a sort that still threatens to elude the British government. It belies (in the UK, at least) the fashionable belief that Enlightenment ideals are in eclipse.

There are many different ways human and non-human components might be mixed in embryos. Some research requires human genetic material to be put into animal cells – for example, to create human embryonic stem cells without reliance on a very limited supply of human eggs. There are also arguments for putting animal genes into human cells, which could offer new ways to study the early stages of human development, and might even help assess embryo quality for assisted conception.

Certainly, there are dangers. For example, eviscerating an animal cell nucleus (where most DNA is housed) to make way for a human genome does not remove all the host’s genetic material. Such transfers, which produce so-called cytoplasmic hybrid (‘cybrid’) cells might, if used to make stem cells for medical implantation, run the risk of introducing animal diseases into human populations. Recent findings that genomes can be altered by ‘back-transfer’ from non-genetic material add to the uncertainties.

But no one is intending at this stage to use cybrids for stem-cell treatments; they are strictly a research tool. The HFEA has decided that there is no ‘fundamental reason’ to prohibit them – recognizing, it seems, that protests about human dignity and unnaturalness impose misplaced criteria. It stresses that the ruling is not a universal green light, however, and that licensing will be made on a case-by-case basis – as they surely should be. The first such applications are already being considered, and are likely to be approved.

The ruling says nothing yet about other human-animal fusions, such as embryos with mixtures of human and animal cells (true chimeras) or hybrids made by fertilization of eggs with sperm of another species. These too may be useful in research, but carry a higher yuk factor. On current form, it seems we can count on the HFEA not to succumb to squeamishness, panic or the mendacious rhetoric of the slippery slope.

*****

Was it vanity or bravery that prompted Craig Venter to allow his complete genome to be sequenced and made public? That probably depends on how you feel about Venter, whose company Celera controversially provided the privatized competition to the international Human Genome Project. Both those efforts constructed a composite genome from the DNA of several anonymous donors, and analysed only one chromosome from each of the 23 human pairs.

In contrast, Venter’s team has decoded both chromosomes of each pair, revealing the different versions of genes acquired from each parent. It is these variants, along with the way each is controlled within the genome and how they interact with the environment, that ultimately determine our physical characteristics. The analysis reveals other sources of difference between chromosomal ‘duplicates’, such as genes that have bits inserted or cut out. This is, you might say, a study of how much we differ from ourselves – and it should help to undermine the simplistic notion that we’re each built from a single instruction manual that is merely read again and again from conception to the grave.

Venter bares all in a paper in the free-access electronic journal PLoS Biology, joining Jim Watson, a co-discoverer of the structure of DNA, as one of the first individuals to have had his personal genome sequenced. Some have complained that this ‘celebrity’ sequencing sends out the message that personalized genomics will be reserved for the rich and privileged. But no one yet really knows whether such knowledge will prove a benefit or a burden – Venter has discovered a possible genetic propensity towards Alzheimer’s and cardiovascular diseases. The legal and ethical aspects of access to the information are a minefield. Venter himself says that his motive is partly to stimulate efforts to make sequencing cheaper. But right now, he has become in one sense the best-known man on the planet.

*****

The moon has always been a source of myth, and now we have some modern ones. Many people will swear blind, without the slightest justification, that the Apollo missions gave us Teflon and the instant fruit drink Tang. New calls for a moon base are routinely supported now with the claim that we can mine the lunar surface for nuclear-fusion fuel in the form of helium-3, a rare commodity on Earth. BBC’s Horizon bought the idea, and it’s been paraded in front of the US House of Representatives. But as physicist Frank Close pointed out recently, there is no sound basis to it. None of the large fusion projects uses helium-3 at all, and the suggestion that it would be a cleaner fuel simply doesn’t work, at least without a total reactor redesign. That’s not even to mention the cost of it all. But no straw is too flimsy for advocates of human spaceflight to grasp.

Friday, September 14, 2007

Burning water and other myths

[Here is my latest piece for muse@nature. This stuff dismays and delights me in equal measure. Dismays, because it shows how little critical thought is exercised in daily life (by the media, at least). Delights, because it vindicates my thesis that water’s mythological status will forever make it a magnet for pathological science. In any event, do watch the video clips – they’re a hoot.]

We will never stem the idea that water can act as a fuel.

Have you heard the one about the water-powered car? If not, don’t worry – the story will come round again. And again. Crusaders against pseudoscience can rant and rave as much as they like, but in the end they might as well accept that the myth of water as fuel is never going to go away.

Its latest manifestation comes from Pennsylvania, where a former broadcast executive named John Kansius claims to have found a way to turn salt water into a fuel. Expose it to a radiofrequency field, he says, and the water burns. There are videos to prove it, and scientists and engineers have apparently verified the result.

“He may have found a way to solve the world’s energy problems”, announced one local TV presenter. “Instead of paying four bucks for gas, how would you like to run your car on salt water?” asked another. “We want it now!” concludes a wide-eyed anchorwoman. Oh, don’t we just.

“I’d probably guess you could power an automobile with this eventually”, Kansius agrees. Water, he points out, is “the most abundant element in the world.”

It’s easy to scoff, but if the effect is genuine then it is also genuinely intriguing. Plain tap water apparently doesn’t work, but test tubes of salt water can be seen burning merrily with a bright yellow flame in the r.f. field. The idea, articulated with varying degrees of vagueness in news reports when they bother to think about such things at all, is that the r.f. field is somehow dissociating water into oxygen and hydrogen. Why salt should be essential to this process is far from obvious. You might think that someone would raise that question.

But no one does. No one raises any questions at all. The reports offer a testament to the awesome lack of enquiry that makes news media everywhere quite terrifyingly defenceless against bogus science.

And it’s not just the news media. Here is all this footage of labs and people in white coats and engineers testifying how amazing it is, and not one of them seems to wonder how this amazing phenomenon works. As a rule, it is always wise to be sceptical of people claiming great breakthroughs without the slightest indication of any intellectual curiosity about them.

This is not in itself to pass any judgement on Kansius’s claims; as ever, they must stand or fall on the basis of careful experiment. But the most fundamental, the most critical question about the whole business leaps out at you so immediately that its absence from these reports, whether they be on Pennsylvania’s JET-TV or on PhysOrg.com, is staggering. The effect relies on r.f. fields, right? So how much energy is needed to produce this effect, and how much do you get out?

I can answer that right now. You start with water, you break it apart into its constituent elements, and then you recombine them by burning. Where are you ever going to extract energy from that cycle, if you believe in the first law of thermodynamics? Indeed, how are you going to break even, if you believe in the second law of thermodynamics?
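The bookkeeping can be made concrete. The sketch below tallies the cycle per mole of water, using the standard enthalpy of formation of liquid water, about 286 kJ/mol; the efficiency figures are invented placeholders, not measurements of Kansius’s setup:

```python
# First-law bookkeeping for the water-as-fuel cycle, per mole of H2O.
# Splitting liquid water into H2 + 1/2 O2 costs at least its standard
# enthalpy of formation; burning the products back to liquid water
# returns at most that same amount.
DELTA_H_KJ_PER_MOL = 285.8   # standard enthalpy of formation of H2O(l), kJ/mol

rf_to_chemical_eff = 0.7     # hypothetical efficiency of the r.f. splitting step
burn_recovery_eff = 0.9      # hypothetical efficiency of recovering the heat

energy_in = DELTA_H_KJ_PER_MOL / rf_to_chemical_eff   # kJ you must supply
energy_out = DELTA_H_KJ_PER_MOL * burn_recovery_eff   # kJ you can get back

net = energy_out - energy_in   # negative for any real (sub-100%) efficiencies
```

Even with both efficiencies set to a physically impossible 100%, the net is exactly zero; with any real losses it is negative. That is the whole argument, independent of how the splitting is done.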

But ‘energy for free’ enthusiasts don’t want to know about thermodynamics. Thermodynamics is a killjoy. Thermodynamics is like big government or big industry, always out to squash innovation. Thermodynamics is the enemy of the Edisonian spirit of the backyard inventor.

Here, however (for what it is worth) is the definitive verdict of thermodynamics: water is not a fuel. It never has been, and it never will be. Water does not burn. Water is already burnt – it is spent fuel. It is exhaust.

Oh, it feels better to have said that, but I don’t imagine for a moment that it will end these claims of ‘water as fuel’. Why not? Because water is a mythical substance. Kansius’s characterization of water as an ‘element’ attests to that: yes, water is of course not a chemical element, but it will never shake off its Aristotelian persona, because Aristotle’s four classical elements accord so closely with our experiential relationship with matter.

Indeed, one of the most renowned ‘water as fuel’ prophets, the Austrian forester Viktor Schauberger, whose experiments on water flumes and turbulence led to a most astonishing history that includes audiences with Hitler and Max Planck and water-powered Nazi secret weapons, claimed that water is indeed in some sense elemental and not ‘compound’ at all.

And water has always looked like a fuel – for it turned the water wheels of the Roman empire, and still drives hydroelectric plants and wave turbines all over the world. No wonder it seems energy-packed, if you don’t know thermodynamics.

Water, we’re told, can unlock the hydrogen economy, and holds untold reserves of deuterium for nuclear fusion. Here is nuclear pioneer Francis Aston on the discovery of fusion in 1919: “To change the hydrogen in a glass of water into helium would release enough energy to drive the Queen Mary across the Atlantic and back at full speed.” Was it a coincidence that cold fusion involves the electrolysis of (heavy) water, or that the controversial recent claims of ‘bubble fusion’ now subject to investigations of malpractice took place in water? Of course not.

As for ‘burning water’, that has a long history in itself. This was what the alchemists called alcohol when they first isolated it, and they were astonished by a water that ignites. One of the recent sightings of ‘water fuel’ happened 11 years ago in Tamil Nadu in India, where a chemist named Ramar Pillai claimed to power a scooter on ‘herbal petrol’ made by boiling herbs in water at a cost of one rupee (three cents) a litre. Pillai was granted 20 acres of land by the regional government to cultivate his herbal additive before he was rumbled.

And then there is poor Stanley Meyer, inventor of the ‘water-powered car’. Meyer just wanted to give people cheap, clean energy, but the oil companies wouldn’t have it. They harassed and intimidated him, and in 1996 he was found guilty of “gross and egregious fraud” by an Ohio court. He died in 1998 after eating at a restaurant; the coroner diagnosed an aneurysm, but the conspiracy web still suspects he was poisoned.

It’s not easy to establish how Meyer’s car was meant to work, except that it involved a fuel cell that was able to split water using less energy than was released by recombination of the elements. Dig a little deeper and you soon find the legendary Brown’s gas, a modern chemical unicorn to rival phlogiston, in which hydrogen and oxygen are combined in a non-aqueous state called ‘oxyhydrogen’. Brown’s gas was allegedly used as a vehicle fuel by its discoverer, Australian inventor Yull Brown.

I think Kansius must be making Brown’s gas. How else can you extract energy by burning water, if not via a mythical substance? Unlike Stan Meyer’s car, this story will run and run.

Friday, September 07, 2007

Arthur Eddington was innocent!
[This is, pre-edited as usual, my latest article for muse@nature. I wonder whether I have been a little guilty of the sin described herein, of over-enthusiastic demolition of the classic stories of science. In my 2005 book Elegant Solutions I made merry use of Gerald Geison’s sceptical analysis of the Pasteur discovery of molecular chirality; but Geison’s criticisms of the popular tale have themselves been controversial. All the same, his argument seemed to make sense to me, and I’m quite sure that there was indeed some myth-spinning around this tale, abetted by Pasteur himself to boost his own legend.]

Dismissing the famous ‘verification’ of Einstein’s general relativity as a work of data-fudging is unwarranted, a new study argues.

There was once a time when the history of science was conventionally told as a succession of Eureka moments in which some stroke of experimental or theoretical genius led the scales to fall from our eyes, banishing old, false ideas to the dustbin.

Now we have been encouraged to think that things don’t really happen that way, and that in contrast scientific knowledge advances messily, one theory vanquishing another in a process that involves leaps of faith, over-extrapolated results and judicious advertising. Antoine Lavoisier’s oxygen theory, Friedrich Wöhler’s synthesis of urea and the ‘death of vitalism’, Louis Pasteur’s germ theory – all have been picked apart and reinterpreted this way.

Generally speaking, the picture that emerges is probably a more accurate reflection of how science works in practice, and is certainly preferable to the Whiggishness of classic popular ‘histories’ like Bernard Jaffe’s Crucibles: The Story of Chemistry. At its most extreme, however, this sceptical approach can lead to claims that scientific ‘understanding’ changes not because of any deepening insight into the nature of the universe but because of social and cultural factors.

One of the more recent victims of this revisionism is the ‘confirmation’ of Einstein’s theory of general relativity offered in 1919 by the British astronomer Arthur Eddington, who reported the predicted bending of light in observations made during a total eclipse. Eddington, it has been said, cooked his books to make sure that Einstein was vindicated over Newton, because he had already decided that this must be so.

This idea has become so widespread that even physicists who celebrate Einstein’s theory commonly charge Eddington with over-interpreting his data. In A Brief History of Time, Stephen Hawking says of the result that “Their measurement had been sheer luck, or a case of knowing the result they wanted to get.” Hawking reports the widespread view that the errors in the data were as big as the effect they were meant to probe. Some go further, saying that Eddington consciously excluded data that didn’t agree with Einstein’s prediction.

Is that true? According to a study by Daniel Kennefick, a physicist at the University of Arkansas [1], Eddington was in fact completely justified in asserting that his measurements matched the prediction of general relativity. Kennefick thinks that anyone now presented with the same data would have to share Eddington’s conclusion.

The story is no mere wrinkle in the history of science. Einstein’s theory rearranged everything we thought we knew about time and space, deepening his 1905 theory of special relativity so as to give a wholly new picture of what gravity is. In this sense, it transformed fundamental physics forever.

Crudely put, whereas special relativity dealt with objects moving at constant velocity, general relativity turned the spotlight on accelerating bodies. Special relativity argued that time and space are distorted once objects travel at close to the speed of light. This obliterated the Newtonian notion of an absolute reference frame with respect to which all positions, motions and times can be measured; one could only define these things in relative terms.

That was revolutionary enough. But in general relativity, Einstein asserted that gravity is the result of a distortion of spacetime by massive objects. The classic image, disliked by some physicists, is that of a cannonball (representing a star, say) on a trampoline (representing spacetime), creating a funnel-shaped depression that can trap a smaller rolling ball so that it circles like a planet in orbit.

Even light cannot ignore this remoulding of space by a massive body – the theory predicted that light rays from distant stars should be bent slightly as they skim past the Sun. We can’t hope to see this apparent ‘shifting’ of star positions close to the edge of the blazing Sun. But when it gets blotted out during a total solar eclipse, the bending should be visible.

This is what Eddington set out to investigate. He drew on two sets of observations made from equatorial locations during the eclipse of 29 May 1919: one at the town of Sobral in Brazil, the other on the island of Principe off Africa’s west coast.

With the technology then available, measuring the bending of starlight was very challenging. And contrary to popular belief, Newtonian physics did not predict that light would remain undeflected – Einstein himself pointed out in 1911 that Newtonian gravity should cause some deviation too. So the matter was not one of an all-or-nothing shift in the stars’ positions, but hinged on the exact numbers: general relativity predicted a deflection at the Sun’s limb of about 1.75 arcseconds, twice the roughly 0.87 arcseconds of the Newtonian calculation.
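As a back-of-envelope check on the numbers at stake (my own addition, not from the expedition reports), the two rival predictions can be computed directly from the standard formulae: the relativistic deflection at the Sun’s limb is 4GM/(c²R), and the ‘Newtonian’ value Einstein derived in 1911 is exactly half that.

```python
import math

# Light deflection at the Sun's limb: general relativity predicts
# alpha = 4GM/(c^2 R); the 1911 'Newtonian' calculation gives half that.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius, m
c = 2.998e8         # speed of light, m/s

rad_to_arcsec = math.degrees(1) * 3600  # radians -> arcseconds

alpha_einstein = 4 * G * M_sun / (c**2 * R_sun) * rad_to_arcsec
alpha_newton = alpha_einstein / 2

print(f"Einstein: {alpha_einstein:.2f} arcsec")  # about 1.75
print(f"Newton:   {alpha_newton:.2f} arcsec")    # about 0.87
```

The eclipse teams were thus trying to distinguish a shift of under two seconds of arc from one half as big, on photographic plates, in the field.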

The results from the two locations were conflicting. It has been claimed that those at Sobral showed little bending, and thus supported Newton, whereas those at Principe were closer to Einstein’s predictions. The case for prosecuting Eddington is that he is said to have rejected the former and concentrated on the latter.

This claim was made particularly strongly in a 1980 paper [2] by philosophers of science John Earman and Clark Glymour, whose position was made more widely known by Harry Collins and Trevor Pinch in their 1993 book The Golem [3]. Why would Eddington have done this? One possibility is that he had simply been won over by Einstein’s theory, and wanted to see it ‘proved’. But it’s also suggested that Eddington’s Quaker belief in pacifism predisposed him to see a British proof of a German theory as an opportunity for postwar reconciliation.

Kennefick has examined these claims in detail. It is true that the Principe data, which Eddington helped to collect himself, were poor: because of cloudy weather, there were only two useable photographic plates of star positions, with just five stars on each. When Eddington spoke about these measurements in a public talk in September, before he had had a chance to analyse them fully, he admitted that the deflection of starlight seemed to fall between the predictions of Newtonian and relativistic theories. He clearly needed the Sobral data to resolve the matter.

The latter came from two sets of astronomical measurements: one made with a so-called ‘Astrographic’ lens with a wide field of view, and the other using a 4-inch lens borrowed from the Royal Irish Academy. The Astrographic data were expected to be more reliable – and it seems that they supported the non-relativistic prediction. This is where the charges of data-fudging come in, because it has been asserted that Eddington ditched those results and focused instead on the ones collected with the 4-inch lens, which showed ‘full deflection’ in support of Einstein’s view.

The Sobral Astrographic data were discarded, for technical reasons described by Dyson and Eddington in their full account of the expeditions [4]. Kennefick argues that these reasons were sound – but he shows that in any case Eddington seemed to have played no part in the decision. He was merely informed of the analysis of the Sobral plates by the expedition leader, the Astronomer Royal Frank Watson Dyson of the Greenwich Observatory in London. Dyson, however, was cautious about Einstein’s theory (as were many astronomers, who struggled to understand it), suspecting it was too good to be true. So it’s not obvious why he would fiddle with the data.

In any event, a modern reanalysis of the plates, carried out in 1979, showed that, taken together, they support Einstein’s prediction rather well, and that the original teams made assumptions in their calculations that were justified even if they couldn’t be conclusively supported at the time.

Kennefick says that the ‘Eddington fudge’ story has mutated from the sober and nuanced analysis of Earman and Glymour to a popular view that the ‘victory’ of general relativity was nothing but a public-relations triumph. It is now sometimes cited as a reason why scientists should be distrusted in general. Kennefick admits that Eddington may well have had the biases attributed to him – but there is no evidence that he had the opportunity to indulge them, even if he had been so inclined.

It’s a salutary tale for all involved. Scientists need to be particularly careful that, in their eagerness to celebrate past achievements and to create coherent narratives for their disciplines, they do not construct triumphalist myths that invite demolition. (Crick and Watson’s discovery of the structure of DNA is shaping up as another candidate.)

But there is an undeniable attraction in exposing shams and parading a show of canny scepticism. In The Golem, Collins and Pinch imply that the ‘biases’ shown by Eddington are the norm in science. It would be foolish to claim that this kind of thing never happens, but the 1919 eclipse expeditions offer scant support for a belief that such preconceptions (or worse) are the key determinant of scientific ‘truth’.

The motto of the Royal Society – Nullius in verba, loosely translated as ‘take no one’s word for it’ – is often praised as an expression of science’s guiding principle of empiricism. But it should also be applied to tellings and retellings of history: we shouldn’t embrace cynicism just because it’s become cool to knock historical figures off their pedestals.

References
1. Kennefick, D. preprint http://xxx.arxiv.org/abs/0709.0685 (2007).
2. Earman, J. & Glymour, C. Hist. Stud. Phys. Sci. 11, 49 - 85 (1980).
3. Collins, H. M. & Pinch, T. The Golem: What Everyone Should Know About Science. Cambridge University Press, 1993.
4. Dyson, F. W. Eddington, A. S. & Davidson, C. R. Phil. Trans. R. Soc. Ser. A 220, 291-330 (1920).

Wednesday, September 05, 2007


Singing sands find a new tune
[Here’s the unedited version of my latest article for news@nature, which has a few more comments from the researchers than the final piece does (published in print this week).]

A new theory adds to the controversy over why some desert dunes emit sonorous booms.

A new theory for why sand dunes emit eerie booming drones seems likely to stir up fresh controversy, as rival theories contend to answer this ancient puzzle.

Research on this striking natural phenomenon has become something of a battleground after two groups in France, previously collaborators, published their opposing theories. Now a team at the California Institute of Technology, led by mechanical engineer Melany Hunt, says that they’re both wrong [1].

“There are strong feelings in this field”, says physicist Michael Bretz at the University of Michigan, who has studied the ‘song of the sands’. “It’ll take a while longer to get it sorted out. But the explanations keep getting better.”

The ‘singing’ of sand dunes has been known for a very long time. Marco Polo described it on his journeys through the Gobi desert in the thirteenth century, attributing the sound to evil desert spirits. The noise can be very loud, audible for over a kilometre. “It’s really magnificent”, says physicist Stéphane Douady at the Ecole Normale Supérieure in Paris, who has proposed one of the competing theories to explain it.

The effect is clearly related to avalanches of sand, and can be triggered by people sliding down the slopes to get the sand moving – as has been done since at least the ninth century during a festival on a sand-covered hill in northwestern China called Mingsha Shan (Sighing Sand Mountain). Charles Darwin heard the ‘song of the sands’ in Chile, saying that it was produced on a sandy hill “when people, by ascending it, put the sand in motion.”

In the twentieth century the doyen of dune science, Ralph Bagnold, an army engineer who fell in love with the North African deserts during the Second World War, suggested that the noise was caused by collisions between sand grains, the frequency being determined by the average time between collisions. This implies that the frequency of the boom depends on the size of the individual grains, increasing as the grains get smaller.
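Bagnold’s idea can be put in rough numbers. Here is a sketch of the implied scaling f ≈ 0.4·√(g/d) – my own illustration, not from the papers under discussion; the 0.4 prefactor is an often-quoted empirical estimate and the grain diameter is an assumed typical value:

```python
import math

# Bagnold-style scaling: boom frequency set by the mean time between
# grain collisions in the avalanching layer, f ~ 0.4 * sqrt(g/d).
# The 0.4 prefactor and grain diameter are assumed, illustrative values.
g = 9.81        # gravitational acceleration, m/s^2
d = 0.25e-3     # grain diameter, m (typical desert sand, assumed)

f_boom = 0.4 * math.sqrt(g / d)
print(f"Estimated boom frequency: {f_boom:.0f} Hz")
```

With these numbers the estimate lands in the tens-of-hertz range of real booming dunes; halving the grain size would raise the predicted frequency by a factor of √2 – exactly the grain-size dependence that Hunt’s measurements, described below, failed to find.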

The previous explanations of the French researchers focused on these collisions during sand avalanches. Douady and his coworkers Bruno Andreotti and Pascal Hersen began to study ‘singing dunes’ during a research trip in Morocco in 2001.

Douady decided that in order for the moving grains to generate a single sound frequency, their motions must become synchronized. This synchronization, he argued, comes from standing waves set up in the sliding layer. The loudness of the noise results from the way that the dune surface acts like a giant loudspeaker membrane.

But Andreotti came up with a slightly different explanation. The synchronization of grain motions, he said, comes from waves excited in the sand below the sliding layer, which then act back on the moving grains, ‘locking’ their movements together and thus converting random collisions to synchronized ones.

It might seem like a small distinction, but Douady and Andreotti found that they could not resolve their differences, and in the end they published separate papers offering their explanations [2,3]. Andreotti now works at another lab in Paris.

But both explanations have serious problems, according to Hunt. For one thing, the measurements made by her team on several booming dunes in Nevada and California seem to show that the booming frequency doesn’t depend on the grain size at all, as Bagnold suggested and as both Andreotti and Douady assumed.

What’s more, the previous theories imply that all dunes should be able to ‘sing’, since this is a general property of sand avalanches. But in fact some dunes sing while others don’t – that is, after all, why Mingsha Shan got its name. Why is that? Andreotti has proposed that ‘silent’ dunes aren’t dry enough, or have grains of the wrong shape. But Hunt and colleagues think that the answer lies literally deeper than this.

“Douady and Andreotti have focused on the grain sizes and the surface features of the grains, but did not take large-scale properties of the dunes into account”, says Hunt’s student Nathalie Vriend. “They have not found an explanation yet why the smaller dunes or dunes in the wintertime do not make this sound.”

The Caltech team says that dunes have to be covered in distinct layers of sand in order to create a boom. Their careful measurements of vibrations in the sand – made with an array of ‘geophones’ on the dune slopes, like those used to monitor seismic waves in earthquake studies – showed that the speed of these seismic waves increases in abrupt steps the deeper the sand is.

In particular, the speed of the seismic waves increases suddenly by almost a factor of two at a depth of about 1.5 m below the dune surface.

The Caltech researchers think that this layered structure, caused by variations in moisture content and bonding of the grains to one another, enables the surface layer to act as a kind of waveguide for acoustic energy, rather like the way an optical fibre channels light. So while they agree that the boom is transmitted to the air by a loudspeaker effect of the dune surface, they think that the frequency is set by the width of the waveguide layer of sand.
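The waveguide picture also permits an order-of-magnitude check. The sketch below is my own illustration, not the Caltech model itself: it simply treats the 1.5-metre surface layer as a resonator whose fundamental is f ≈ c/(2h), with an assumed seismic wave speed.

```python
# Illustrative order-of-magnitude check of the waveguide idea (not the
# Caltech team's actual model): treat the surface layer as a resonator
# with fundamental f ~ c / (2h). The wave speed is an assumed value.
c_layer = 230.0   # seismic wave speed in the surface sand, m/s (assumed)
h = 1.5           # depth of the reported speed jump, m

f_boom = c_layer / (2 * h)
print(f"Predicted boom frequency: {f_boom:.0f} Hz")
```

Under these assumptions the fundamental falls in the tens-of-hertz range reported for booming dunes, and – notably – without any reference to grain size.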

Dunes that lack this layered structure – as smaller ones do, for example – won’t ‘sing’ at all: the vibrations simply get dispersed within the sliding sands. The researchers also find that moisture condensing between the sand grains during the winter smears out the boundaries between the layers of singing dunes and silences them.

This is unlikely to be the last word on the matter, however. For one thing, the strange properties of the sand in ‘booming dunes’ don’t seem to rely on such large-scale influences. “You can take a cupful of this sand and excite it with your finger”, says Peter Haff, a geologist at Duke University in North Carolina who has studied it. “You can feel it vibrating, like running your finger over a washboard. But you can take sand from other parts of the dune, and there’s nothing you can do to make it boom.” Haff concludes that, while these theories may offer part of the answer, “there must be something else going on at a small scale.”

Douady agrees. “The problem for the Caltech theory is that we can recreate these sounds in the lab”, he says. He thinks that the sand layering might play a role in modifying the sound, but that it is “just a decoration” to the basic mechanism of booming. “It’s like the difference between singing in a small room and singing in a cathedral,” he says.

Andreotti also finds several reasons to remain unconvinced. In particular, he says “They use sensors only at the surface of the dune. We have made measurements with buried sensors about 20 cm below the surface, and didn’t detect any vibration. This is a strong and direct contradiction of the paper.” So it seems that, with everyone sticking to their own theory, the riddle of the dunes is not yet solved.

References
1. Vriend, N. M. et al. Geophys. Res. Lett. 34, L16306 (2007).
2. Andreotti, B. Phys. Rev. Lett. 93, 238001 (2004).
3. Douady, S. et al. Phys. Rev. Lett. 97, 018002 (2006).

The history of singing dunes

It is asserted as a well-known fact that this desert is the abode of many evil spirits, which amuse travellers to their destruction with most extraordinary illusions. If, during the daytime, any persons remain behind on the road, either when overtaken by sleep or detained by their natural occasions, until the caravan has passed a hill and is no longer in sight, they unexpectedly hear themselves called to by their names, and in a tone of voice to which they are accustomed. Supposing the call to proceed from their companions, they are led away by it from the direct road, and not knowing in what direction to advance, are left to perish. In the night-time they are persuaded they hear the march of a large cavalcade on one side or the other of the road, and concluding the noise to be that of the footsteps of their party, they direct theirs to the quarter from whence it seems to proceed; but upon the breaking of day, find they have been misled and drawn into a situation of danger... Marvellous indeed and almost passing belief are the stories related of these spirits of the desert, which are said at times to fill the air with the sounds of all kinds of musical instruments, and also of drums and the clash of arms; obliging the travellers to close their line of march and to proceed in more compact order.
Marco Polo (1295)

Somewhere, close to us, in an undefined direction, a drum was beating, the mysterious drum of the dunes; it was beating distinctly, sometimes more vibrating, sometimes weakened, stopping, then taking again its fantastic bearing.
The Arabs, terrified, looked at themselves; and one said, in its language: "Death is on us." And here is that suddenly my companion, my friend, almost my brother, fell from horse on the head, struck down ahead by sunstroke.
And during two hours, while I was in vain trying to save it, always this imperceptible drum filled up me the ear of its monotonous, intermittent and incomprehensible noise; and I felt the fear slip into my bones, the true fear, the hideous fear, close to this liked body, in this hole charred by the sun between four mounts of sand, while the unknown echo was throwing us, two hundred miles away of any French village, the fast beat of the drum.
Maupassant (1883)

Whilst staying in the town I heard an account from several of the inhabitants, of a hill in the neighborhood which they called "El Bramador," - the roarer or bellower. I did not at the time pay sufficient attention to the account; but, as far as I understood, the hill was covered by sand, and the noise was produced only when people, by ascending it, put the sand in motion. The same circumstances are described in detail on the authority of Seetzen and Ehrenberg, as the cause of the sounds which have been heard by many travellers on Mount Sinai near the Red Sea.
Charles Darwin (1889)

Update
Andreotti and his colleagues have submitted a comment on the paper by Vriend et al. to Geophys. Res. Lett., which is available here.

Wednesday, August 29, 2007

Letter to Prospect: a response

My column for the June issue of Prospect (available in the archives here) can be seen as somewhat sceptical about the value of the Large Hadron Collider, so it is right that Prospect should publish a letter defending it. But the one that appears in the September issue is a little odd:

“Philip Ball (June) says that "the only use of the LHC [Large Hadron Collider] that anyone ever hears about is the search for the Higgs boson." But this is not so. Physicists may look crazy, but they are not crazy enough to build such a complicated and technically demanding installation just to hunt down one particle. The LHC will be the world's most powerful instrument in particle physics for the next ten to 20 years, and it has been built to help us understand more about the 96 per cent of our universe that remains a mystery. The first thing physicists will be looking for is the Higgs boson, but this is just the beginning of a long journey into the unknown. As with earlier accelerators, there will be surprises.”

I’m glad that the author, Reinhard Budde, quoted my remark, because it reveals his non-sequitur. I did not say, as he implies, “all the LHC will do is look for the Higgs boson.” As a writer, I will make factual mistakes and no doubt also express opinions that are not wholly fair or justified. But I do try to choose my words carefully. Let me repeat them more fully:

“Particle physicists point out that because it will smash subatomic particles into one another with greater energy than ever before, it will open a window on a whole new swathe of reality. But the only use of the LHC that anyone ever hears about is the search for the Higgs boson… The LHC may turn up some surprises—evidence of extra dimensions, say, or of particles that lie outside the standard model.”

(It’s interesting that even Dr Budde doesn’t enlighten us about what else, exactly, the LHC might do, but I was happy to oblige.)

It’s a small point, but it does frustrate me; as I found out as a Nature editor, scientists seem peculiarly bad at comprehension of the written word (they have many other virtues to compensate).

For the record, I support the construction of the LHC, but with some reservations, as I stated in my piece. And by the way, I am a physicist, and I do not feel I look particularly crazy. Nor do I feel this is true of physicists as a whole, although many do have a tendency to look as though they belong in The Big Lebowski (this is a good thing). And the LHC was not built by “physicists” – it was built at the request of a rather small subsection of the global physics community. Not all physicists, or even most, are particle physicists.

Tuesday, August 28, 2007


Check out those Victorian shades, dude

For people interested in the cultural histories of materials, there is a lovely paper by Bill Brock in the latest issue of the Notes and Records of the Royal Society on the role of William Crookes in the development of sunglasses. Bill has written a new biography of Crookes (William Crookes (1832-1919) and the Commercialization of Science, in press with Ashgate), who was one of the most energetic and colourful figures in nineteenth-century British science.

Shortly to be made the octogenarian president of the Royal Society, Crookes became involved in the 1900s in a search for forms of glass that would block out infrared and ultraviolet radiation. This search was stimulated by the Workmen’s Compensation Act of 1897, which allowed workers to claim compensation for work-related injuries. Glassworkers were well known to suffer from cataracts, and the Home Office hoped that prevention of eye damage by tinted glass would obviate the need for compensation.

Crookes began to look into the question, and presented his results to the Royal Society in 1913: a glass formulation that was opaque to UV and reduced IR by 90 percent. Always with an eye on commercial possibilities, he suggested that lenses made of this stuff could have other applications too, for example to prevent snow-blindness. “During the brilliant weather of the late summer [of 1911]”, he said, “I wore some of these spectacles with great comfort; they took off the whole glare of the sun on chalk cliffs, and did not appreciably alter the natural colours of objects. Lady Crookes, whose eyes are more sensitive to glare or strong light than are my own, wore them for several hours in the sun with great comfort.”

Before long, these spectacles were being considered by London opticians, although commercialization was hindered by the war. Soon the original aim of cataract prevention in glassmakers was forgotten.