Friday, September 28, 2007


Space experiments should be a cheap shot
[This is the pre-edited version of my latest article for muse@nature.com - with some added comment at the end.]

We rarely learn anything Earth-shaking from space labs, which is why inexpensive missions like Foton-M3 are the way to go.

Space experiments have rarely seemed as much fun as they do on the European Space Agency’s Foton-M3 mission, which blasted off two weeks ago from the Russian launch site at Baikonur in Kazakhstan for a 12-day spell in low-Earth orbit. Among the experiments in the 400-kg payload were an exploration of capsule delivery to Earth using a 30-km space tether, a study of how human balance works, and an investigation (conducted by sticking a chunk of Scottish rock onto the spacecraft’s side) of whether organic matter in space rocks can withstand the heat of orbital re-entry, so that life could be transferred between planets as posited in the ‘panspermia’ hypothesis.

None of these experiments seems likely by itself to lead to any major new discoveries or technological breakthroughs. And none can be considered terribly urgent – the balance study, which looks at how a balance organ called the otolith grows in larval fish in zero gravity, has been on the shelf for years, the first attempt being one of the minor casualties of the ill-fated Columbia space shuttle mission in early 2003.

But it would be churlish to criticize Foton-M3 for the incremental nature of its science. Most scientific research is like that, and the roster of experiments is not only impressively long for such a relatively cheap mission but also appealingly diverse, spanning subjects from microbiology to geoscience to condensed-matter physics.

What’s more, the tether experiment has arisen from a project called the Young Engineers’ Satellite 2 (YES2), involving more than 450 students throughout Europe. The aim is to use a tether to slow down an object falling back into Earth’s gravity from a spacecraft so that it continues falling instead of being captured in orbit. This could offer a cheap way of delivering payloads from space to Earth.

Admittedly the experiment seems not to have quite worked out as planned, because apparently not all of the tether unreeled. And the notion of finding a cheap postal method for the indefensibly expensive white elephant known as the International Space Station, which has so far yielded very little worth delivering in the first place, is rather hard to swallow.

But as a way to engage students in serious space research that poses interesting scientific and technological questions and might conceivably find uses in the future, YES2 can’t be faulted.

Foton-M3 does evoke a degree of déjà vu – how many earlier space experiments have claimed to be “improving our understanding of protein structure by growing protein crystals in weightlessness”, or learning about loss of bone mass in astronauts? But there are bound to be a few duds among more than 40 experiments.

What’s curious about some of these is that they threaten to undermine their own justification. If we can design robotic instruments to look at the growth of bone or tissue cells so that we can predict how astronauts might fare on long-term space missions, can we not design robots to replace those very astronauts? Preparing the ground for human space exploration demands such advances in automation that, by the time we’re ready, we’ll have run out of good scientific reasons for it. There may be non-scientific arguments, such as the educational and inspirational value, but a mission like Foton-M3 at least raises doubts about whether there is any scientific case for near-Earth manned spaceflight.

A report by the UK Royal Astronomical Society (RAS) Commission on the Scientific Case for Human Space Exploration, published in 2005, seems to challenge such scepticism. It claimed, for example, that “the capabilities of robotic spacecraft will fall well short of those of human explorers for the foreseeable future.”

But what this turns out to amount to is a statement of the obvious: robots are nowhere near achieving human-like intelligence and decision-making capabilities. There’s no doubt that having humans on site will permit more flexible, faster and more thoughtful responses to unexpected circumstances in lunar or planetary exploration. But since one can probably have ten robotic missions for the price of one manned (and since it might soon take as little as three months to get to Mars), that isn’t obviously a clinching argument, especially when you think about the cost of failure – the success rate for Mars missions is so far not much more than 1 in 4. And robots are, in many ways, considerably more robust than humans.

The RAS report also claimed that “there are benefits for medical science to be gained from studying the human physiological response to low and zero gravity [and] to the effects of radiation.” This claim drew heavily on a letter from the UK Space Biomedicine Group (UKSBG), who one might imagine to be rather disposed to the idea in the first place. They claim that studying bone demineralization in micro- and zero gravity “could dramatically improve the understanding and treatment of osteoporosis.”

That’s why space experiments like those on board Foton-M3 are relevant to the debate. One experiment on the mission looks at precisely this question of bone mass loss using bovine bone; another involves bone-forming and bone-degrading cells cultured in vitro. In other words, one of the key putative health spinoffs of human spaceflight, according to the RAS Commission, is already being studied in cheap unmanned missions. It is conceivable that we would learn something (the UKSBG doesn’t specify what) from live humans that we would not from dead cows, or from live mice or human cell cultures. But should that unknown increment weigh heavily on the scales that the RAS were seeking to balance?

The considerations raised by the RAS report also bear on the question of why it is that such experiments have enjoyed sustained support in the past despite being pretty uninspiring and lacking in real technological payoff. If we think (rightly or wrongly) that it is intrinsically interesting to blast people into space, we’ll tend to feel that way about the stuff they do there too (so that a golf drive in space makes headlines).

Thus, many space experiments, such as the recent demonstration that Salmonella bacteria on the space shuttle Atlantis were more virulent in zero gravity [1], gain interest not because of the results in themselves but because of the very fact of their having been obtained in space. That particular result was already known from microgravity experiments on Earth, and in any event much of the interest centred on whether it means astronauts will suffer more from germs. The glamour that seems to attach to space experiments almost invariably distorts the import of what they find, all the more so because they are used as their own justification: “look at what space experiments can tell us about stuff that happens in space!”

Seen in this light, Foton-M3 provides a nice illustration of proper cost-benefit thinking. The ‘panspermia’ tests, say, operated by a team from the University of Aberdeen, will at best provide a useful addition to a wealth of previous studies on the space- and impact-resistance of organic matter and living organisms. A study of temperature and concentration fluctuations in fluids provides a nice verification of a result that was generally expected on theoretical grounds – it is the kind of experiment that would be undertaken without hesitation if it could be done in a lab, but which would certainly not warrant its own dedicated space mission.

In other words, when Foton-M3 plummeted back down to Earth near the Russian/Kazakh border on Wednesday [26 September], it should have blown a big hole in starry-eyed visions of space experimentation. This is how it should really be done: modest but intrinsically interesting investigations, realised at a modest cost, and performed by robots.

Reference

1. Wilson, J. W. et al. Proc. Natl Acad. Sci. USA online early edition, doi:10.1073/pnas.0707 (2007).

The more I think about it, the worse the RAS report seems. When it comes to space exploration generally, they do a fair job of taking into consideration the fact that robots can be guided remotely by human intelligence, and don’t need to be autonomous decision-makers. But even this was rather specious in its use of deep-sea engineering as a means of comparison – getting humans to the sea floor, and the hazards they face there, hardly compares with sending them to Mars. When, however, the discussion turned to biomedical spinoffs, the RAS Commission seemed to forget all about doing things robotically – they simply pleaded lack of expertise, which meant they seemingly relied entirely on the testimonies of the UKSBG and human spaceflight advocate Kevin Fong. At no point do they seem to ask whether the biomedical benefits proposed might be obtained equally in unmanned missions. As far as osteoporosis goes, for example, the question is not whether manned spaceflight might tell us something about it but whether:
1. there are critical questions about the condition that can be answered only by micro- or zero-gravity studies; and
2. these questions can only be answered by studying live human subjects and not animals or cell cultures.
The UKSBG point to no such specific questions, and I rather doubt that they could. (Certainly, it is not as though we need to study astronauts in order to monitor human bone mass loss in vivo.) If there are not good answers to these points, the RAS should not be using this line as a reason for human space exploration (as opposed to stuff you might as well do if you’re going up there anyway).
It’s the same story for the work on Salmonella that I mention. There are vague promises of improved understanding of the emergence of virulent strains on Earth, but no indication of why a space experiment will really tell you much more in this regard than a terrestrial simulation of zero G. Much of the interest seems to centre on the question of whether astronauts would face nastier bugs, which of course becomes an issue only if you put them up there in the first place. This is the kind of fuzzy thinking that defenders of human space exploration get away with all the time.

Wednesday, September 26, 2007

Hybrids and helium
[This is the pre-edited version of my Lab Report column for the October issue of Prospect.]

It’s not obvious that, when the Human Fertilisation and Embryology Authority was established in 1991, anyone involved had much inkling of the murky waters it would be required to patrol. The HFEA was envisaged primarily as a body for regulating assisted conception, and so it seemed sensible to give it regulatory powers over human embryo research more generally. Sixteen years later, the HFEA is having to pronounce on issues that have little bearing on fertility and conception, but instead concern biological research that some say is blurring the boundaries of what it means to be human.

So far, the HFEA has remained commendably aloof from the ill-founded fears that this research attracts. Its latest permissive ruling, on the creation of human-animal hybrid embryos, is the outcome of sober and informed consideration of a sort that still threatens to elude the British government. It belies (in the UK, at least) the fashionable belief that Enlightenment ideals are in eclipse.

There are many different ways human and non-human components might be mixed in embryos. Some research requires human genetic material to be put into animal cells – for example, to create human embryonic stem cells without reliance on a very limited supply of human eggs. There are also arguments for putting animal genes into human cells, which could offer new ways to study the early stages of human development, and might even help assess embryo quality for assisted conception.

Certainly, there are dangers. For example, removing the nucleus (where most DNA is housed) from an animal egg cell to make way for a human genome does not remove all the host’s genetic material – DNA remains in the mitochondria of the cytoplasm. Such transfers, which produce so-called cytoplasmic hybrid (‘cybrid’) cells, might, if used to make stem cells for medical implantation, run the risk of introducing animal diseases into human populations. Recent findings that genomes can be altered by ‘back-transfer’ from non-genetic material add to the uncertainties.

But no one is intending at this stage to use cybrids for stem-cell treatments; they are strictly a research tool. The HFEA has decided that there is no ‘fundamental reason’ to prohibit them – recognizing, it seems, that protests about human dignity and unnaturalness impose misplaced criteria. It stresses that the ruling is not a universal green light, however, and that licensing decisions will be made on a case-by-case basis – as they surely should be. The first such applications are already being considered, and are likely to be approved.

The ruling says nothing yet about other human-animal fusions, such as embryos with mixtures of human and animal cells (true chimeras) or hybrids made by fertilization of eggs with sperm of another species. These too may be useful in research, but carry a higher yuk factor. On current form, it seems we can count on the HFEA not to succumb to squeamishness, panic or the mendacious rhetoric of the slippery slope.

*****

Was it vanity or bravery that prompted Craig Venter to allow his complete genome to be sequenced and made public? That probably depends on how you feel about Venter, whose company Celera controversially provided the privatized competition to the international Human Genome Project. Both those efforts constructed a composite genome from the DNA of several anonymous donors, and analysed only one version of each of the 23 pairs of human chromosomes.

In contrast, Venter’s team has decoded both copies of each of his chromosomes, revealing the different versions of genes acquired from each parent. It is these variants, along with the way each is controlled within the genome and how they interact with the environment, that ultimately determine our physical characteristics. The analysis reveals other sources of difference between chromosomal ‘duplicates’, such as genes that have had segments inserted or cut out. This is, you might say, a study of how much we differ from ourselves – and it should help to undermine the simplistic notion that we’re each built from a single instruction manual that is merely read again and again from conception to the grave.

Venter bares all in a paper in the free-access electronic journal PLoS Biology, joining Jim Watson, a co-discoverer of the structure of DNA, as one of the first individuals to have had his personal genome sequenced. Some have complained that this ‘celebrity’ sequencing sends out the message that personalized genomics will be reserved for the rich and privileged. But no one yet really knows whether such knowledge will prove a benefit or a burden – Venter has discovered a possible genetic propensity towards Alzheimer’s and cardiovascular diseases. The legal and ethical aspects of access to the information are a minefield. Venter himself says that his motive is partly to stimulate efforts to make sequencing cheaper. But right now, he has become in one sense the best-known man on the planet.

*****

The moon has always been a source of myth, and now we have some modern ones. Many people will swear blind, without the slightest justification, that the Apollo missions gave us Teflon and the instant fruit drink Tang. New calls for a moon base are routinely supported now with the claim that we can mine the lunar surface for nuclear-fusion fuel in the form of helium-3, a rare commodity on Earth. The BBC’s Horizon bought the idea, and it’s been paraded in front of the US House of Representatives. But as physicist Frank Close pointed out recently, there is no sound basis to it. None of the large fusion projects uses helium-3 at all, and the suggestion that it would be a cleaner fuel simply doesn’t work, at least without a total reactor redesign. That’s not even to mention the cost of it all. But no straw is too flimsy for advocates of human spaceflight to grasp.

Friday, September 14, 2007

Burning water and other myths

[Here is my latest piece for muse@nature. This stuff dismays and delights me in equal measure. Dismays, because it shows how little critical thought is exercised in daily life (by the media, at least). Delights, because it vindicates my thesis that water’s mythological status will forever make it a magnet for pathological science. In any event, do watch the video clips – they’re a hoot.]

We will never stem the idea that water can act as a fuel.

Have you heard the one about the water-powered car? If not, don’t worry – the story will come round again. And again. Crusaders against pseudoscience can rant and rave as much as they like, but in the end they might as well accept that the myth of water as fuel is never going to go away.

Its latest manifestation comes from Pennsylvania, where a former broadcast executive named John Kanzius claims to have found a way to turn salt water into a fuel. Expose it to a radiofrequency field, he says, and the water burns. There are videos to prove it, and scientists and engineers have apparently verified the result.

“He may have found a way to solve the world’s energy problems”, announced one local TV presenter. “Instead of paying four bucks for gas, how would you like to run your car on salt water?” asked another. “We want it now!” concluded a wide-eyed anchorwoman. Oh, don’t we just.

“I’d probably guess you could power an automobile with this eventually”, Kanzius agrees. Water, he points out, is “the most abundant element in the world.”

It’s easy to scoff, but if the effect is genuine then it is also genuinely intriguing. Plain tap water apparently doesn’t work, but test tubes of salt water can be seen burning merrily with a bright yellow flame in the r.f. field. The idea, articulated with varying degrees of vagueness in news reports when they bother to think about such things at all, is that the r.f. field is somehow dissociating water into oxygen and hydrogen. Why salt should be essential to this process is far from obvious. You might think that someone would raise that question.

But no one does. No one raises any questions at all. The reports offer a testament to the awesome lack of enquiry that makes news media everywhere quite terrifyingly defenceless against bogus science.

And it’s not just the news media. Here is all this footage of labs and people in white coats and engineers testifying to how amazing it is, and not one of them seems to wonder how this amazing phenomenon works. As a rule, it is always wise to be sceptical of people claiming great breakthroughs without the slightest indication of any intellectual curiosity about them.

This is not in itself to pass any judgement on Kanzius’s claims; as ever, they must stand or fall on the basis of careful experiment. But the most fundamental, the most critical question about the whole business leaps out at you so immediately that its absence from these reports, whether they be on Pennsylvania’s JET-TV or on PhysOrg.com, is staggering. The effect relies on r.f. fields, right? So how much energy is needed to produce this effect, and how much do you get out?

I can answer that right now. You start with water, you break it apart into its constituent elements, and then you recombine them by burning. Where are you ever going to extract energy from that cycle, if you believe in the first law of thermodynamics? Indeed, how are you going to break even, if you believe in the second law of thermodynamics?
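
To put rough numbers on that cycle (these are standard textbook enthalpies, not figures from Kanzius’s set-up): splitting water costs at least as much energy as burning the resulting hydrogen gives back.

\[
\mathrm{2\,H_2O\,(l) \rightarrow 2\,H_2\,(g) + O_2\,(g)}, \qquad \Delta H \approx +572\ \mathrm{kJ}
\]
\[
\mathrm{2\,H_2\,(g) + O_2\,(g) \rightarrow 2\,H_2O\,(l)}, \qquad \Delta H \approx -572\ \mathrm{kJ}
\]

The best the full loop can do is break even; with a real r.f. generator and real losses, it comes out well behind.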

But ‘energy for free’ enthusiasts don’t want to know about thermodynamics. Thermodynamics is a killjoy. Thermodynamics is like big government or big industry, always out to squash innovation. Thermodynamics is the enemy of the Edisonian spirit of the backyard inventor.

Here, however (for what it is worth) is the definitive verdict of thermodynamics: water is not a fuel. It never has been, and it never will be. Water does not burn. Water is already burnt – it is spent fuel. It is exhaust.

Oh, it feels better to have said that, but I don’t imagine for a moment that it will end these claims of ‘water as fuel’. Why not? Because water is a mythical substance. Kanzius’s characterization of water as an ‘element’ attests to that: yes, water is of course not a chemical element, but it will never shake off its Aristotelian persona, because Aristotle’s four classical elements accord so closely with our experiential relationship with matter.

Indeed, one of the most renowned ‘water as fuel’ prophets, the Austrian forester Viktor Schauberger, whose experiments on water flumes and turbulence led to a most astonishing history that includes audiences with Hitler and Max Planck and water-powered Nazi secret weapons, claimed that water is indeed in some sense elemental and not ‘compound’ at all.

And water has always looked like a fuel – for it turned the water wheels of the Roman empire, and still drives hydroelectric plants and wave turbines all over the world. No wonder it seems energy-packed, if you don’t know thermodynamics.

Water, we’re told, can unlock the hydrogen economy, and holds untold reserves of deuterium for nuclear fusion. Here is nuclear pioneer Francis Aston on the discovery of fusion in 1919: “To change the hydrogen in a glass of water into helium would release enough energy to drive the Queen Mary across the Atlantic and back at full speed.” Was it a coincidence that cold fusion involves the electrolysis of (heavy) water, or that the controversial recent claims of ‘bubble fusion’ now subject to investigations of malpractice took place in water? Of course not.

As for ‘burning water’, that has a long history in itself. This was what the alchemists called alcohol when they first isolated it, and they were astonished by a water that ignites. One of the recent sightings of ‘water fuel’ happened 11 years ago in Tamil Nadu in India, where a chemist named Ramar Pillai claimed to power a scooter on ‘herbal petrol’ made by boiling herbs in water at a cost of one rupee (three cents) a litre. Pillai was granted 20 acres of land by the regional government to cultivate his herbal additive before he was rumbled.

And then there is poor Stanley Meyer, inventor of the ‘water-powered car’. Meyer just wanted to give people cheap, clean energy, but the oil companies wouldn’t have it. They harassed and intimidated him, and in 1996 he was found guilty of “gross and egregious fraud” by an Ohio court. He died in 1998 after eating at a restaurant; the coroner diagnosed an aneurysm, but the conspiracy web still suspects he was poisoned.

It’s not easy to establish how Meyer’s car was meant to work, except that it involved a fuel cell that was able to split water using less energy than was released by recombination of the elements. Dig a little deeper and you soon find the legendary Brown’s gas, a modern chemical unicorn to rival phlogiston, in which hydrogen and oxygen are combined in a non-aqueous state called ‘oxyhydrogen’. Brown’s gas was allegedly used as a vehicle fuel by its discoverer, Australian inventor Yull Brown.

I think Kanzius must be making Brown’s gas. How else can you extract energy by burning water, if not via a mythical substance? Unlike Stan Meyer’s car, this story will run and run.

Friday, September 07, 2007

Arthur Eddington was innocent!
[This is, pre-edited as usual, my latest article for muse@nature. I wonder whether I have been a little guilty of the sin described herein, of over-enthusiastic demolition of the classic stories of science. In my 2005 book Elegant Solutions I made merry use of Gerald Geison’s sceptical analysis of the Pasteur discovery of molecular chirality; but Geison’s criticisms of the popular tale have themselves been controversial. All the same, his argument seemed to make sense to me, and I’m quite sure that there was indeed some myth-spinning around this tale, abetted by Pasteur himself to boost his own legend.]

Dismissing the famous ‘verification’ of Einstein’s general relativity as a work of data-fudging is unwarranted, a new study argues.

There was once a time when the history of science was conventionally told as a succession of Eureka moments in which some stroke of experimental or theoretical genius led the scales to fall from our eyes, banishing old, false ideas to the dustbin.

Now we have been encouraged to think that things don’t really happen that way, and that in contrast scientific knowledge advances messily, one theory vanquishing another in a process that involves leaps of faith, over-extrapolated results and judicious advertising. Antoine Lavoisier’s oxygen theory, Friedrich Wöhler’s synthesis of urea and the ‘death of vitalism’, Louis Pasteur’s germ theory – all have been picked apart and reinterpreted this way.

Generally speaking, the picture that emerges is probably a more accurate reflection of how science works in practice, and is certainly preferable to the Whiggishness of classic popular ‘histories’ like Bernard Jaffe’s Crucibles: The Story of Chemistry. At its most extreme, however, this sceptical approach can lead to claims that scientific ‘understanding’ changes not because of any deepening insight into the nature of the universe but because of social and cultural factors.

One of the more recent victims of this revisionism is the ‘confirmation’ of Einstein’s theory of general relativity offered in 1919 by the British astronomer Arthur Eddington, who reported the predicted bending of light in observations made during a total eclipse. Eddington, it has been said, cooked his books to make sure that Einstein was vindicated over Newton, because he had already decided that this must be so.

This idea has become so widespread that even physicists who celebrate Einstein’s theory commonly charge Eddington with over-interpreting his data. In his Brief History of Time, Stephen Hawking says of the result that “Their measurement had been sheer luck, or a case of knowing the result they wanted to get.” Hawking reports the widespread view that the errors in the data were as big as the effect they were meant to probe. Some go further, saying that Eddington consciously excluded data that didn’t agree with Einstein’s prediction.

Is that true? According to a study by Daniel Kennefick, a physicist at the University of Arkansas [1], Eddington was in fact completely justified in asserting that his measurements matched the prediction of general relativity. Kennefick thinks that anyone now presented with the same data would have to share Eddington’s conclusion.

The story is no mere wrinkle in the history of science. Einstein’s theory rearranged everything we thought we knew about time and space, deepening his 1905 theory of special relativity so as to give a wholly new picture of what gravity is. In this sense, it transformed fundamental physics forever.

Crudely put, whereas special relativity dealt with objects moving at constant velocity, general relativity turned the spotlight on accelerating bodies. Special relativity argued that time and space are distorted once objects travel at close to the speed of light. This obliterated the Newtonian notion of an absolute reference frame with respect to which all positions, motions and times can be measured; one could only define these things in relative terms.

That was revolutionary enough. But in general relativity, Einstein asserted that gravity is the result of a distortion of spacetime by massive objects. The classic image, disliked by some physicists, is that of a cannonball (representing a star, say) on a trampoline (representing spacetime), creating a funnel-shaped depression that can trap a smaller rolling ball so that it circles like a planet in orbit.

Even light cannot ignore this remoulding of space by a massive body – the theory predicted that light rays from distant stars should be bent slightly as they skim past the Sun. We can’t hope to see this apparent ‘shifting’ of star positions close to the edge of the blazing Sun. But when it gets blotted out during a total solar eclipse, the bending should be visible.

This is what Eddington set out to investigate. He drew on two sets of observations made from equatorial locations during the eclipse of 29 May 1919: one at the town of Sobral in Brazil, the other on the island of Principe off Africa’s west coast.

With the technology then available, measuring the bending of starlight was very challenging. And contrary to popular belief, Newtonian physics did not predict that light would remain undeflected – Einstein himself pointed out in 1911 that Newtonian gravity should cause some deviation too. So the matter was not that of an all-or-nothing shift in stars’ positions, but hinged on the exact numbers.
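
For reference, the numbers at stake (these are the standard predicted values, quoted here only to show what ‘hinged on the exact numbers’ means): for starlight grazing the edge of the Sun, general relativity predicts a deflection

\[
\delta_{\mathrm{GR}} = \frac{4GM_\odot}{c^2 R_\odot} \approx 1.75'' ,
\]

while the ‘Newtonian’ calculation Einstein published in 1911 gives half that, about 0.87 arcseconds. The expeditions therefore had to distinguish a shift of under two seconds of arc from one of less than one, on plates exposed under eclipse conditions.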

The results from the two locations were conflicting. It has been claimed that those at Sobral showed little bending, and thus supported Newton, whereas those at Principe were closer to Einstein’s predictions. The case for prosecuting Eddington is that he is said to have rejected the former and concentrated on the latter.

This claim was made particularly strongly in a 1980 paper [2] by philosophers of science John Earman and Clark Glymour, whose position was made more widely known by Harry Collins and Trevor Pinch in their 1993 book The Golem [3]. Why would Eddington have done this? One possibility is that he had simply been won over by Einstein’s theory, and wanted to see it ‘proved’. But it’s also suggested that Eddington’s Quaker belief in pacifism predisposed him to see a British proof of a German theory as an opportunity for postwar reconciliation.

Kennefick has examined these claims in detail. It is true that the Principe data, which Eddington helped to collect himself, were poor: because of cloudy weather, there were only two usable photographic plates of star positions, with just five stars on each. When Eddington spoke about these measurements in a public talk in September, before he had had a chance to analyse them fully, he admitted that the deflection of starlight seemed to fall between the predictions of Newtonian and relativistic theories. He clearly needed the Sobral data to resolve the matter.

The Sobral results came from two sets of astronomical measurements: one made with a so-called ‘Astrographic’ lens with a wide field of view, and the other using a 4-inch lens borrowed from the Royal Irish Academy. The Astrographic data were expected to be more reliable – and it seems that they supported the non-relativistic prediction. This is where the charges of data-fudging come in, because it has been asserted that Eddington ditched those results and focused instead on the ones collected with the 4-inch lens, which showed ‘full deflection’ in support of Einstein’s view.

The Sobral Astrographic data were discarded, for technical reasons which Dyson and Eddington described in their full account of the expeditions [4]. Kennefick argues that these reasons were sound – but he shows that in any case Eddington seemed to have played no part in the decision. He was merely informed of the analysis of the Sobral plates by the expedition leader, the Astronomer Royal Frank Watson Dyson of the Greenwich Observatory in London. Dyson, however, was cautious about Einstein’s theory (as were many astronomers, who struggled to understand it), suspecting it was too good to be true. So it’s not obvious why he would fiddle with the data.

In any event, a modern analysis of these plates, carried out in 1979, showed that, taken together, they do support Einstein’s prediction rather well, and that the original teams made assumptions in their calculations that were justified even if they couldn’t be conclusively supported at the time.

Kennefick says that the ‘Eddington fudge’ story has mutated from the sober and nuanced analysis of Earman and Glymour to a popular view that the ‘victory’ of general relativity was nothing but a public-relations triumph. It is now sometimes cited as a reason why scientists should be distrusted in general. Kennefick admits that Eddington may well have had the biases attributed to him – but there is no evidence that he had the opportunity to indulge them, even if he had been so inclined.

It’s a salutary tale for all involved. Scientists need to be particularly careful that, in their eagerness to celebrate past achievements and to create coherent narratives for their disciplines, they do not construct triumphalist myths that invite demolition. (Crick and Watson’s discovery of the structure of DNA is shaping up as another candidate.)

But there is an undeniable attraction in exposing shams and parading a show of canny scepticism. In The Golem, Collins and Pinch imply that the ‘biases’ shown by Eddington are the norm in science. It would be foolish to claim that this kind of thing never happens, but the 1919 eclipse expeditions offer scant support for a belief that such preconceptions (or worse) are the key determinant of scientific ‘truth’.

The motto of the Royal Society – Nullius in verba, loosely translated as ‘take no one’s word for it’ – is often praised as an expression of science’s guiding principle of empiricism. But it should also be applied to tellings and retellings of history: we shouldn’t embrace cynicism just because it’s become cool to knock historical figures off their pedestals.

References
1. Kennefick, D. Preprint at http://xxx.arxiv.org/abs/0709.0685 (2007).
2. Earman, J. & Glymour, C. Hist. Stud. Phys. Sci. 11, 49-85 (1980).
3. Collins, H. M. & Pinch, T. The Golem: What Everyone Should Know About Science (Cambridge University Press, 1993).
4. Dyson, F. W., Eddington, A. S. & Davidson, C. R. Phil. Trans. R. Soc. Ser. A 220, 291-330 (1920).

Wednesday, September 05, 2007


Singing sands find a new tune
[Here’s the unedited version of my latest article for news@nature, which has a few more comments from the researchers than the final piece does (published in print this week).]

A new theory adds to the controversy over why some desert dunes emit sonorous booms.

A new theory for why sand dunes emit eerie booming drones seems likely to stir up fresh controversy, as rival groups compete to answer this ancient puzzle.

Research on this striking natural phenomenon has become something of a battleground after two groups in France, previously collaborators, published their opposing theories. Now a team at the California Institute of Technology, led by mechanical engineer Melany Hunt, says that they’re both wrong [1].

“There are strong feelings in this field”, says physicist Michael Bretz at the University of Michigan, who has studied the ‘song of the sands’. “It’ll take a while longer to get it sorted out. But the explanations keep getting better.”

The ‘singing’ of sand dunes has been known for a very long time. Marco Polo described it on his journeys through the Gobi desert in the thirteenth century, attributing the sound to evil desert spirits. The noise can be very loud, audible for over a kilometre. “It’s really magnificent”, says physicist Stéphane Douady at the Ecole Normale Supérieure in Paris, who has proposed one of the competing theories to explain it.

The effect is clearly related to avalanches of sand, and can be triggered by people sliding down the slopes to get the sand moving – as has been done since at least the ninth century during a festival on a sand-covered hill in northwestern China called Mingsha Shan (Sighing Sand Mountain). Charles Darwin heard the ‘song of the sands’ in Chile, saying that it was produced on a sandy hill “when people, by ascending it, put the sand in motion.”

In the twentieth century the doyen of dune science Ralph Bagnold, an army engineer who fell in love with the North African deserts during the Second World War, suggested that the noise was caused by collision of sand grains, the frequency being determined by the average time between collisions. This implies that the frequency of the boom depends on the size of the individual grains, increasing as the grains get smaller.
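
A back-of-the-envelope reading of that idea (my own dimensional estimate, not a calculation taken from Bagnold’s work): if grains of diameter d tumble over one another under gravity, the time between collisions scales as the free-fall time over a single grain, giving a frequency

\[
f \sim \sqrt{\frac{g}{d}} \approx \sqrt{\frac{9.8\ \mathrm{m\,s^{-2}}}{2\times 10^{-4}\ \mathrm{m}}} \approx 200\ \mathrm{Hz}
\]

for grains a couple of hundred micrometres across – the right order of magnitude for a booming dune, and a frequency that rises as the grains get smaller.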

The previous explanations of the French researchers focused on these collisions during sand avalanches. Douady and his coworkers Bruno Andreotti and Pascal Hersen began to study ‘singing dunes’ during a research trip in Morocco in 2001.

Douady decided that in order for the moving grains to generate a single sound frequency, their motions must become synchronized. This synchronization, he argued, comes from standing waves set up in the sliding layer. The loudness of the noise results from the way that the dune surface acts like a giant loudspeaker membrane.

But Andreotti arrived at a slightly different explanation. The synchronization of grain motions, he said, comes from waves excited in the sand below the sliding layer, which then act back on the moving grains, ‘locking’ their movements together and thus converting random collisions to synchronized ones.

It might seem like a small distinction, but Douady and Andreotti found that they could not resolve their differences, and in the end they published separate papers offering their explanations [2,3]. Andreotti now works at another lab in Paris.

But both explanations have serious problems, according to Hunt. For one thing, the measurements made by her team on several booming dunes in Nevada and California seem to show that the booming frequency doesn’t depend on the grain size at all – contrary to what Bagnold suggested and both Andreotti and Douady accepted.

What’s more, the previous theories imply that all dunes should be able to ‘sing’, since this is a general property of sand avalanches. But in fact some dunes sing while others don’t – that is, after all, why Mingsha Shan got its name. Why is that? Andreotti has proposed that ‘silent’ dunes aren’t dry enough, or have grains of the wrong shape. But Hunt and colleagues think that the answer lies literally deeper than this.

“Douady and Andreotti have focused on the grain sizes and the surface features of the grains, but did not take large-scale properties of the dunes into account”, says Hunt’s student Nathalie Vriend. “They have not found an explanation yet why the smaller dunes or dunes in the wintertime do not make this sound.”

The Caltech team says that dunes have to be covered in distinct layers of sand in order to create a boom. Their careful measurements of vibrations in the sand – made with an array of ‘geophones’ on the dune slopes, like those used to monitor seismic waves in earthquake studies – showed that the speed of these seismic waves increases in abrupt steps the deeper the sand is.

In particular, the speed of the seismic waves increases suddenly by almost a factor of two at a depth of about 1.5 m below the dune surface.

The Caltech researchers think that this layered structure, caused by variations in moisture content and bonding of the grains to one another, enables the surface layer to act as a kind of waveguide for acoustic energy, rather like the way an optical fibre channels light. So while they agree that the boom is transmitted to the air by a loudspeaker effect of the dune surface, they think that the frequency is set by the width of the waveguide layer of sand.
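
A crude dimensional estimate (mine, not the Caltech team’s detailed model) shows why a metre-and-a-half layer is at least a plausible frequency-setter. Taking a seismic wave speed of a couple of hundred metres per second, an assumed typical figure for loose dry sand rather than a value from the paper, a resonant layer of depth H ≈ 1.5 m has characteristic frequencies of order

\[
f \sim \frac{c}{2H} \approx \frac{200\text{--}300\ \mathrm{m\,s^{-1}}}{2 \times 1.5\ \mathrm{m}} \approx 70\text{--}100\ \mathrm{Hz},
\]

which is in the range reported for booming dunes.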

Dunes that lack this layered structure – as smaller ones do, for example – won’t ‘sing’ at all: the vibrations simply get dispersed within the sliding sands. The researchers also find that moisture condensing between the sand grains during the winter smears out the boundaries between the layers of singing dunes and silences them.

This is unlikely to be the last word on the matter, however. For one thing, the strange properties of the sand in ‘booming dunes’ don’t seem to rely on such large-scale influences. “You can take a cupful of this sand and excite it with your finger”, says Peter Haff, a geologist at Duke University in North Carolina who has studied it. “You can feel it vibrating, like running your finger over a washboard. But you can take sand from other parts of the dune, and there’s nothing you can do to make it boom.” Haff concludes that, while these theories may offer part of the answer, “there must be something else going on at a small scale.”

Douady agrees. “The problem for the Caltech theory is that we can recreate these sounds in the lab”, he says. He thinks that the sand layering might play a role in modifying the sound, but that it is “just a decoration” to the basic mechanism of booming. “It’s like the difference between singing in a small room and singing in a cathedral,” he says.

Andreotti also finds several reasons to remain unconvinced. In particular, he says: “They use sensors only at the surface of the dune. We have made measurements with buried sensors about 20 cm below the surface, and didn’t detect any vibration. This is a strong and direct contradiction of the paper.” So it seems that, with everyone sticking to their own theory, the riddle of the dunes is not yet solved.

References
1. Vriend, N. M. et al. Geophys. Res. Lett. 34, L16306 (2007).
2. Andreotti, B. Phys. Rev. Lett. 93, 238001 (2004).
3. Douady, S. et al. Phys. Rev. Lett. 97, 018002 (2006).

The history of singing dunes

It is asserted as a well-known fact that this desert is the abode of many evil spirits, which amuse travellers to their destruction with most extraordinary illusions. If, during the daytime, any persons remain behind on the road, either when overtaken by sleep or detained by their natural occasions, until the caravan has passed a hill and is no longer in sight, they unexpectedly hear themselves called to by their names, and in a tone of voice to which they are accustomed. Supposing the call to proceed from their companions, they are led away by it from the direct road, and not knowing in what direction to advance, are left to perish. In the night-time they are persuaded they hear the march of a large cavalcade on one side or the other of the road, and concluding the noise to be that of the footsteps of their party, they direct theirs to the quarter from whence it seems to proceed; but upon the breaking of day, find they have been misled and drawn into a situation of danger... Marvellous indeed and almost passing belief are the stories related of these spirits of the desert, which are said at times to fill the air with the sounds of all kinds of musical instruments, and also of drums and the clash of arms; obliging the travellers to close their line of march and to proceed in more compact order.
Marco Polo (1295)

Somewhere near us, in an indeterminate direction, a drum was beating, the mysterious drum of the dunes; it beat distinctly, now more resonant, now weaker, stopping, then resuming its fantastic roll.
The Arabs, terrified, looked at one another; and one said, in his own language: "Death is upon us." And then, all at once, my companion, my friend, almost my brother, fell head-first from his horse, struck down by sunstroke.
And for two hours, while I tried in vain to save him, that elusive drum filled my ears with its monotonous, intermittent and incomprehensible noise; and I felt fear creep into my bones, true fear, hideous fear, beside that beloved body, in that hole scorched by the sun between four mounds of sand, while the unknown echo brought to us, two hundred miles from any French village, the rapid beating of the drum.
Maupassant (1883)

Whilst staying in the town I heard an account from several of the inhabitants, of a hill in the neighborhood which they called "El Bramador," - the roarer or bellower. I did not at the time pay sufficient attention to the account; but, as far as I understood, the hill was covered by sand, and the noise was produced only when people, by ascending it, put the sand in motion. The same circumstances are described in detail on the authority of Seetzen and Ehrenberg, as the cause of the sounds which have been heard by many travellers on Mount Sinai near the Red Sea.
Charles Darwin (1889)

Update
Andreotti and his colleagues have submitted a comment on the paper by Vriend et al. to Geophys. Res. Lett., which is available here.