Yet more memory of water
This month’s issue of Chemistry World carries a letter from Martin Chaplin and Peter Fisher in response to my column discussing the special issue of Homeopathy on the ‘memory of water’. Mark Peplow asked if I wanted to respond, but I told him that he should regard publication of my response as strictly optional. In the event, he rightly chose to use the space to include another letter on the topic. So here for the record is Martin and Peter’s letter, and my response. I suppose I could be a little annoyed by the misrepresentation of what I said at the end of their letter, but I’m happy to regard it as miscomprehension.
From Martin Chaplin and Peter Fisher
We put together the ‘Memory of water’ issue of the journal Homeopathy, the subject of Philip Ball’s recent column (Chemistry World, September 2007, p38), to show the current state of play. It contained all the current scientific views representing the different experimental and theoretical approaches to the ‘memory of water’ phenomena. Some may be important and others less so, but now the different areas of the field can be fairly judged. The papers mostly demonstrated the similar theme that water preparations may have unexpected properties, contain unexpected solutes and show unexpected changes with time; all very worthy of investigation. Although not the main purpose of the papers, we show the problems as much as the potential of these changed properties in relation to homeopathy.
Ball skirts over the unexpected experimental findings that he finds ‘puzzling’, so ignoring the very heart of the phenomena we are investigating and misinterpreting the issue. He backs up his argument with statements concerning pure water and silicate solutions that are clearly not relevant to the present discussion. Also, he uses Irving Langmuir to prop up his argument. This is fitting as Langmuir dismissed the Jones-Ray effect (http://www.lsbu.ac.uk/water/explan5.html#JR), whereby the surface tension of water is now known to be reduced by low concentrations of some ions, as this disagreed with his own theories. Finally Ball finishes with the amazing view that he knows the structure of water in such solutions with great confidence; I wish he would share that knowledge with the rest of us.
M F Chaplin CChem FRSC, London, UK
P Fisher, Editor, Homeopathy, Luton, UK
Response from Philip Ball
I have discussed elsewhere some of the experimental papers to which Chaplin and Fisher refer (see http://www.nature.com/news/2007/070806/full/070806-6.html and www.philipball.blogspot.com). Some of those observations are intriguing, but each raises its own unique set of questions and concerns, and they couldn’t possibly all be discussed in my column. Langmuir’s ideas feature nowhere in my argument; I simply point out that he coined the term ‘pathological science.’ If the issues I raise about silicate self-organization are not relevant to the discussion, why do Anick and Ives mention them in their paper? And I never stated that I or anyone else knows the structure of water or aqueous solutions with great confidence; I merely said that there are some things we do know with confidence about water’s molecular-scale structure (such as the timescale of hydrogen-bond making and breaking in the pure liquid), and they should not be ignored.
Wednesday, October 03, 2007
Monday, October 01, 2007
What’s God got to do with it
There’s a curious article in the September issue of the New Humanist by Yves Gingras, a historian and sociologist of science at the University of Quebec. Gingras is unhappy that scientists are using references to God to sell their science (or rather, their books), thereby “wrap[ping] modern scientific discoveries in an illusory shroud that insinuates a link between cutting-edge science and solutions to the mysteries of life, the origins of the universe and spirituality.” But who are these unscrupulous bounders? Well… Paul Davies, and… and Paul Davies, and… ah, and Frank Tipler. Well yes, Tipler. My colleagues and I decided recently that we should introduce the notion of the Tipler Point, being the point beyond which scientists lose the plot and start rambling about the soul/immortality/parallels between physics and Buddhism. A Nobel prize is apt to take you several notches closer to the Tipler Point, though clearly it’s not essential. And such mention of Buddhism brings us to Fritjof Capra, and if we’re going to admit him to the ranks of ‘scientists’ who flirt with mysticism then the game is over and we might as well bring in Carl Jung and Rudolf Steiner.
Gingras suggests that the anthropic principle is “bizarre and clearly unscientific”, and that it has affinities with intelligent design. Now, I’m no fan of the anthropic principle (see here), but I will concede that it is actually an attempt to do the very opposite of what intelligent design proposes – to obviate the need to interpret the incredible fine-tuning of the physical universe as evidence of design. The fact is that this fine-tuning is one of the most puzzling issues in modern physics, and if I were a Christian of the sort who believes in a Creator (not all have that materialist outlook), I’d seize on this as a pretty strong indication that my beliefs are on to something. The Templeton Foundation, another of Gingras’s targets, has hosted some thoughtful meetings on the theme of fine-tuning, and while I’m agnostic about the value and/or motives of the Templeton Foundation, I don’t obviously see a need to knock them for raising the question.
Paul Davies has indeed hit a lucrative theme in exploring theological angles of modern cosmology, but he does so in a measured and interesting way in which I don’t at all recognize Gingras’s description of “X-files science” or an “oscillation between science and the paranormal.” Frankly, I’m not sure Gingras is on top of his subject – when, as I resignedly expected, he fishes out Stephen Hawking’s famous “mind of God” allusion, he seems to see it as a serious suggestion, and not simply as an attempt by an excellent scientist but indifferent writer to inject a bit of pizzazz into his text. Hawking’s reference is obviously theologically naïve, and gains supposed gravitas only because of the oracular status that Hawking has, for rather disturbing reasons, been accorded.
Still, I suppose I will also be deemed guilty of peddling religious pseudo-science for daring to look, in my next book, at the theological origins of science in the twelfth century…
Friday, September 28, 2007

Space experiments should be a cheap shot
[This is the pre-edited version of my latest article for muse@nature.com - with some added comment at the end.]
We rarely learn anything Earth-shaking from space labs, which is why inexpensive missions like Foton-M3 are the way to go.
Space experiments have rarely seemed as much fun as they do on the European Space Agency’s Foton-M3 mission, which blasted off two weeks ago from the Russian launch site at Baikonur in Kazakhstan for a 12-day spell in low-Earth orbit. Among the experiments in the 400-kg payload were an exploration of capsule delivery to Earth using a 30-km space tether, a study of how human balance works, and an investigation of whether organic matter in space rocks can withstand the heat of orbital re-entry so that life could be transferred between planets, as posited in the ‘panspermia’ hypothesis – the last conducted simply by sticking a chunk of Scottish rock onto the spacecraft’s side.
None of these experiments seems likely by itself to lead to any major new discoveries or technological breakthroughs. And none can be considered terribly urgent – the balance study, which looks at how a balance organ called the otolith grows in larval fish in zero gravity, has been on the shelf for years, the first attempt being one of the minor casualties of the ill-fated Columbia space shuttle mission in early 2003.
But it would be churlish to criticize Foton-M3 for the incremental nature of its science. Most scientific research in general is like that, and the roster of experiments is not only impressively long for such a relatively cheap mission but also appealingly diverse, spanning subjects from microbiology to geoscience to condensed-matter physics.
What’s more, the tether experiment has arisen from a project called the Young Engineers’ Satellite 2 (YES2), involving more than 450 students throughout Europe. The aim is to use a tether to slow down an object falling back into Earth’s gravity from a spacecraft so that it continues falling instead of being captured in orbit. This could offer a cheap way of delivering payloads from space to Earth.
Admittedly the experiment seems not to have quite worked out as planned, because apparently not all the tether unreeled. And the notion of finding a cheap postal method for the indefensibly expensive white elephant known as the International Space Station, which has so far yielded very little worth delivering in the first place, is rather hard to swallow.
But as a way to engage students in serious space research that poses interesting scientific and technological questions and might conceivably find uses in the future, YES2 can’t be faulted.
Foton-M3 does evoke a degree of déjà vu – how many earlier space experiments have claimed to be “improving our understanding of protein structure by growing protein crystals in weightlessness”, or learning about loss of bone mass in astronauts? But there are bound to be some duds among over 40 experiments.
What’s curious about some of these is that they threaten to undermine their own justification. If we can design robotic instruments to look at the growth of bone or tissue cells so that we can predict how astronauts might fare on long-term space missions, can we not design robots to replace those very astronauts? Preparing the ground for human space exploration demands such advances in automation that, by the time we’re ready, we’ll have run out of good scientific reasons for it. There may be non-scientific arguments, such as the educational and inspirational value, but a mission like Foton-M3 at least raises doubts about whether there is any real need for near-Earth manned spaceflight.
A report by the UK Royal Astronomical Society (RAS) Commission of the Scientific Case for Human Space Exploration, published in 2005, seems to challenge such scepticism. It claimed, for example, that “the capabilities of robotic spacecraft will fall well short of those of human explorers for the foreseeable future.”
But what this turns out to amount to is a statement of the obvious: robots are nowhere near achieving human-like intelligence and decision-making capabilities. There’s no doubt that having humans on site will permit more flexible, faster and more thoughtful responses to unexpected circumstances in lunar or planetary exploration. But since one can probably have ten robotic missions for the price of one manned (and since it might soon take as little as three months to get to Mars), that isn’t obviously a clinching argument, especially when you think about the cost of failure – the success rate for Mars missions is so far not much more than 1 in 4. And robots are, in many ways, considerably more robust than humans.
The RAS report also claimed that “there are benefits for medical science to be gained from studying the human physiological response to low and zero gravity [and] to the effects of radiation.” This claim drew heavily on a letter from the UK Space Biomedicine Group (UKSBG), who one might imagine to be rather disposed to the idea in the first place. They claim that studying bone demineralization in micro- and zero gravity “could dramatically improve the understanding and treatment of osteoporosis.”
That’s why space experiments like those on board Foton-M3 are relevant to the debate. One experiment on the mission looks at precisely this question of bone mass loss using bovine bone; another involves bone-forming and bone-degrading cells cultured in vitro. In other words, one of the key putative health spinoffs of human spaceflight, according to the RAS Commission, is already being studied in cheap unmanned missions. It is conceivable that we would learn something (the UKSBG doesn’t specify what) from live humans that we would not from dead cows, or from live mice or human cell cultures. But should that unknown increment weigh heavily on the scales that the RAS were seeking to balance?
The considerations raised by the RAS report also bear on the question of why it is that such experiments have enjoyed sustained support in the past despite being pretty uninspiring and lacking in real technological payoff. If we think (rightly or wrongly) that it is intrinsically interesting to blast people into space, we’ll tend to feel that way about the stuff they do there too (so that a golf drive in space makes headlines).
Thus, many space experiments, such as the recent demonstration that Salmonella bacteria on the space shuttle Atlantis were more virulent in zero gravity [1], gain interest not because of the results in themselves but because of the very fact of their having been obtained in space. That particular result was already known from simulated-microgravity experiments on Earth, and in any event much of the interest centred on whether it means astronauts will suffer more from germs. The glamour that seems to attach to space experiments almost invariably distorts the import of what they find, all the more so because they are used as their own justification: “look at what space experiments can tell us about stuff that happens in space!”
As a result, Foton-M3 provides a nice illustration of proper cost-benefit thinking. The ‘panspermia’ tests, for example, operated by a team from the University of Aberdeen, will at best provide a useful addition to a wealth of previous studies on space- and impact-resistance of organic matter and living organisms. A study of temperature and concentration fluctuations in fluids offers a neat verification of a result that was generally expected on theoretical grounds – it is the kind of experiment that would be undertaken without hesitation if it could be done in a lab, but which would certainly not warrant its own dedicated space mission.
In other words, when Foton-M3 plummeted back down to Earth near the Russian/Kazakh border on Wednesday [26 September], it should have blown a big hole in starry-eyed visions of space experimentation. This is how it should really be done: modest but intrinsically interesting investigations, realised at a modest cost, and performed by robots.
Reference
1. Wilson, J. W. et al. Proc. Natl Acad. Sci. USA online early edition, doi:10.1073/pnas.0707 (2007).
The more I think about it, the worse the RAS report seems. When it comes to space exploration generally, they do a fair job of taking into consideration the fact that robots can be guided remotely by human intelligence, and don’t need to be autonomous decision-makers. But even this was rather specious in its use of deep-sea engineering as a means of comparison – getting humans to the sea floor, and the hazards they face there, hardly compares with sending them to Mars. When, however, the discussion turned to biomedical spinoffs, the RAS Commission seemed to forget all about doing things robotically – they simply pleaded lack of expertise, which meant they seemingly relied entirely on the testimonies of the UKSBG and human spaceflight advocate Kevin Fong. At no point do they seem to ask whether the biomedical benefits proposed might be obtained equally in unmanned missions. As far as osteoporosis goes, for example, the question is not whether manned spaceflight might tell us something about it but whether:
1. there are critical questions about the condition that can be answered only by micro- or zero-gravity studies; and
2. these questions can only be answered by studying live human subjects and not animals or cell cultures.
The UKSBG point to no such specific questions, and I rather doubt that they could. (Certainly, it is not as though we need to study astronauts in order to monitor human bone mass loss in vivo.) If there are not good answers to these points, the RAS should not be using this line as a reason for human space exploration (as opposed to stuff you might as well do if you’re going up there anyway).
It’s the same story for the work on Salmonella that I mention. There are vague promises of improved understanding of the emergence of virulent strains on Earth, but no indication of why a space experiment will really tell you much more in this regard than a terrestrial simulation of zero G. Much of the interest seems to centre on the question of whether astronauts would face nastier bugs, which of course becomes an issue only if you put them up there in the first place. This is the kind of fuzzy thinking that defenders of human space exploration get away with all the time.
Wednesday, September 26, 2007
Hybrids and helium
[This is the pre-edited version of my Lab Report column for the October issue of Prospect.]
It’s not obvious that, when the Human Fertilisation and Embryology Authority was established in 1991, anyone involved had much inkling of the murky waters it would be required to patrol. The HFEA was envisaged primarily as a body for regulating assisted conception, and so it seemed sensible to give it regulatory powers over human embryo research more generally. Sixteen years later, the HFEA is having to pronounce on issues that have little bearing on fertility and conception, but instead concern biological research that some say is blurring the boundaries of what it means to be human.
So far, the HFEA has remained commendably aloof from the ill-founded fears that this research attracts. Its latest permissive ruling on the creation of human-animal cells is the outcome of sober and informed consideration of a sort that still threatens to elude the British government. It belies (in the UK, at least) the fashionable belief that Enlightenment ideals are in eclipse.
There are many different ways human and non-human components might be mixed in embryos. Some research requires human genetic material to be put into animal cells – for example, to create human embryonic stem cells without reliance on a very limited supply of human eggs. There are also arguments for putting animal genes into human cells, which could offer new ways to study the early stages of human development, and might even help assess embryo quality for assisted conception.
Certainly, there are dangers. For example, eviscerating an animal cell nucleus (where most DNA is housed) to make way for a human genome does not remove all the host’s genetic material. Such transfers, which produce so-called cytoplasmic hybrid (‘cybrid’) cells, might, if used to make stem cells for medical implantation, run the risk of introducing animal diseases into human populations. Recent findings that genomes can be altered by ‘back-transfer’ from non-genetic material add to the uncertainties.
But no one is intending at this stage to use cybrids for stem-cell treatments; they are strictly a research tool. The HFEA has decided that there is no ‘fundamental reason’ to prohibit them – recognizing, it seems, that protests about human dignity and unnaturalness impose misplaced criteria. It stresses that the ruling is not a universal green light, however, and that licensing decisions will be made on a case-by-case basis – as they surely should be. The first such applications are already being considered, and are likely to be approved.
The ruling says nothing yet about other human-animal fusions, such as embryos with mixtures of human and animal cells (true chimeras) or hybrids made by fertilization of eggs with sperm of another species. These too may be useful in research, but carry a higher yuk factor. On current form, it seems we can count on the HFEA not to succumb to squeamishness, panic or the mendacious rhetoric of the slippery slope.
*****
Was it vanity or bravery that prompted Craig Venter to allow his complete genome to be sequenced and made public? That probably depends on how you feel about Venter, whose company Celera controversially provided the privatized competition to the international Human Genome Project. Both those efforts constructed a composite genome from the DNA of several anonymous donors, and analysed only one chromosome from each of the 23 pairs.
In contrast, Venter’s team has decoded both chromosomes of each of his 23 pairs, revealing the different versions of genes acquired from each parent. It is these variants, along with the way each is controlled within the genome and how they interact with the environment, that ultimately determine our physical characteristics. The analysis reveals other sources of difference between chromosomal ‘duplicates’, such as genes with stretches of DNA inserted or cut out. This is, you might say, a study of how much we differ from ourselves – and it should help to undermine the simplistic notion that we’re each built from a single instruction manual that is merely read again and again from conception to the grave.
Venter bares all in a paper in the free-access electronic journal PLoS Biology, joining Jim Watson, a co-discoverer of the structure of DNA, as one of the first individuals to have had his personal genome sequenced. Some have complained that this ‘celebrity’ sequencing sends out the message that personalized genomics will be reserved for the rich and privileged. But no one yet really knows whether such knowledge will prove a benefit or a burden – Venter has discovered a possible genetic propensity towards Alzheimer’s and cardiovascular diseases. The legal and ethical aspects of access to the information are a minefield. Venter himself says that his motive is partly to stimulate efforts to make sequencing cheaper. But right now, he has become in one sense the best-known man on the planet.
*****
The moon has always been a source of myth, and now we have some modern ones. Many people will swear blind, without the slightest justification, that the Apollo missions gave us Teflon and the instant fruit drink Tang. New calls for a moon base are routinely supported now with the claim that we can mine the lunar surface for nuclear-fusion fuel in the form of helium-3, a rare commodity on Earth. BBC’s Horizon bought the idea, and it’s been paraded in front of the US House of Representatives. But as physicist Frank Close pointed out recently, there is no sound basis to it. None of the large fusion projects uses helium-3 at all, and the suggestion that it would be a cleaner fuel simply doesn’t work, at least without a total reactor redesign. That’s not even to mention the cost of it all. But no straw is too flimsy for advocates of human spaceflight to grasp.
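[For what it’s worth, the reaction energetics behind Close’s point run roughly as follows – these are standard textbook figures, not numbers quoted in the column itself:

\[
\mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV})
\]
\[
\mathrm{D} + {}^{3}\mathrm{He} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.6\ \mathrm{MeV}) + p\,(14.7\ \mathrm{MeV})
\]

The big projects such as ITER are built around the first reaction, deuterium-tritium, which is the easiest to ignite but puts most of its energy into neutrons. The supposed appeal of helium-3 is the second reaction, which delivers its energy in a charged proton instead; but it demands far higher temperatures, and deuterium-deuterium side reactions would still produce neutrons anyway – hence the ‘total reactor redesign’.]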
Friday, September 14, 2007
Burning water and other myths
[Here is my latest piece for muse@nature. This stuff dismays and delights me in equal measure. Dismays, because it shows how little critical thought is exercised in daily life (by the media, at least). Delights, because it vindicates my thesis that water’s mythological status will forever make it a magnet for pathological science. In any event, do watch the video clips – they’re a hoot.]
We will never stem the idea that water can act as a fuel.
Have you heard the one about the water-powered car? If not, don’t worry – the story will come round again. And again. Crusaders against pseudoscience can rant and rave as much as they like, but in the end they might as well accept that the myth of water as fuel is never going to go away.
Its latest manifestation comes from Pennsylvania, where a former broadcast executive named John Kanzius claims to have found a way to turn salt water into a fuel. Expose it to a radiofrequency field, he says, and the water burns. There are videos to prove it, and scientists and engineers have apparently verified the result.
“He may have found a way to solve the world’s energy problems”, announced one local TV presenter. “Instead of paying four bucks for gas, how would you like to run your car on salt water?” asked another. “We want it now!” concluded a wide-eyed anchorwoman. Oh, don’t we just.
“I’d probably guess you could power an automobile with this eventually”, Kanzius agrees. Water, he points out, is “the most abundant element in the world.”
It’s easy to scoff, but if the effect is genuine then it is also genuinely intriguing. Plain tap water apparently doesn’t work, but test tubes of salt water can be seen burning merrily with a bright yellow flame in the r.f. field. The idea, articulated with varying degrees of vagueness in news reports when they bother to think about such things at all, is that the r.f. field is somehow dissociating water into oxygen and hydrogen. Why salt should be essential to this process is far from obvious. You might think that someone would raise that question.
But no one does. No one raises any questions at all. The reports offer a testament to the awesome lack of enquiry that makes news media everywhere quite terrifyingly defenceless against bogus science.
And it’s not just the news media. Here is all this footage of labs and people in white coats and engineers testifying how amazing it is, and not one seems to be wondering about how this amazing phenomenon works. As a rule, it is always wise to be sceptical of people claiming great breakthroughs without the slightest indication of any intellectual curiosity about them.
This is not in itself to pass any judgement on Kansius’s claims; as ever, they must stand or fall on the basis of careful experiment. But the most fundamental, the most critical question about the whole business leaps out at you so immediately that its absence from these reports, whether they be on Pennsylvania’s JET-TV or on PhysOrg.com, is staggering. The effect relies on r.f. fields, right? So how much energy is needed to produce this effect, and how much do you get out?
I can answer that right now. You start with water, you break it apart into its constituent elements, and then you recombine them by burning. Where are you ever going to extract energy from that cycle, if you believe in the first law of thermodynamics? Indeed, how are you going to break even, if you believe in the second law of thermodynamics?
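To put rough numbers on that cycle (standard enthalpies of formation – a back-of-envelope sketch, not figures taken from any of the news reports):

\[
2\,\mathrm{H_2O\,(l)} \;\rightarrow\; 2\,\mathrm{H_2\,(g)} + \mathrm{O_2\,(g)}, \qquad \Delta H^{\circ} \approx +572\ \mathrm{kJ}
\]
\[
2\,\mathrm{H_2\,(g)} + \mathrm{O_2\,(g)} \;\rightarrow\; 2\,\mathrm{H_2O\,(l)}, \qquad \Delta H^{\circ} \approx -572\ \mathrm{kJ}
\]

At the thermodynamic ideal the round trip nets exactly nothing, and in practice the r.f. transmitter, like any electrolyser, wastes some of its input along the way.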
But ‘energy for free’ enthusiasts don’t want to know about thermodynamics. Thermodynamics is a killjoy. Thermodynamics is like big government or big industry, always out to squash innovation. Thermodynamics is the enemy of the Edisonian spirit of the backyard inventor.
Here, however (for what it is worth) is the definitive verdict of thermodynamics: water is not a fuel. It never has been, and it never will be. Water does not burn. Water is already burnt – it is spent fuel. It is exhaust.
Oh, it feels better to have said that, but I don’t imagine for a moment that it will end these claims of ‘water as fuel’. Why not? Because water is a mythical substance. Kanzius’s characterization of water as an ‘element’ attests to that: yes, water is of course not a chemical element, but it will never shake off its Aristotelian persona, because Aristotle’s four classical elements accord so closely with our experiential relationship with matter.
Indeed, one of the most renowned ‘water as fuel’ prophets, the Austrian forester Viktor Schauberger, whose experiments on water flumes and turbulence led to a most astonishing history that includes audiences with Hitler and Max Planck and water-powered Nazi secret weapons, claimed that water is indeed in some sense elemental and not ‘compound’ at all.
And water has always looked like a fuel – for it turned the water wheels of the Roman empire, and still drives hydroelectric plants and wave turbines all over the world. No wonder it seems energy-packed, if you don’t know thermodynamics.
Water, we’re told, can unlock the hydrogen economy, and holds untold reserves of deuterium for nuclear fusion. Here is nuclear pioneer Francis Aston on the discovery of fusion in 1919: “To change the hydrogen in a glass of water into helium would release enough energy to drive the Queen Mary across the Atlantic and back at full speed.” Was it a coincidence that cold fusion involves the electrolysis of (heavy) water, or that the controversial recent claims of ‘bubble fusion’ now subject to investigations of malpractice took place in water? Of course not.
As for ‘burning water’, that has a long history in itself. This was what the alchemists called alcohol when they first isolated it, and they were astonished by a water that ignites. One of the recent sightings of ‘water fuel’ happened 11 years ago in Tamil Nadu in India, where a chemist named Ramar Pillai claimed to power a scooter on ‘herbal petrol’ made by boiling herbs in water at a cost of one rupee (three cents) a litre. Pillai was granted 20 acres of land by the regional government to cultivate his herbal additive before he was rumbled.
And then there is poor Stanley Meyer, inventor of the ‘water-powered car’. Meyer just wanted to give people cheap, clean energy, but the oil companies wouldn’t have it. They harassed and intimidated him, and in 1996 he was found guilty of “gross and egregious fraud” by an Ohio court. He died in 1998 after eating at a restaurant; the coroner diagnosed an aneurysm, but the conspiracy web still suspects he was poisoned.
It’s not easy to establish how Meyer’s car was meant to work, except that it involved a fuel cell that was able to split water using less energy than was released by recombination of the elements. Dig a little deeper and you soon find the legendary Brown’s gas, a modern chemical unicorn to rival phlogiston, in which hydrogen and oxygen are combined in a non-aqueous state called ‘oxyhydrogen’. Brown’s gas was allegedly used as a vehicle fuel by its discoverer, Australian inventor Yull Brown.
I think Kanzius must be making Brown’s gas. How else can you extract energy by burning water, if not via a mythical substance? Unlike Stan Meyer’s car, this story will run and run.
Friday, September 07, 2007
Arthur Eddington was innocent!
[This is, pre-edited as usual, my latest article for muse@nature. I wonder whether I have been a little guilty of the sin described herein, of over-enthusiastic demolition of the classic stories of science. In my 2005 book Elegant Solutions I made merry use of Gerald Geison’s sceptical analysis of the Pasteur discovery of molecular chirality; but Geison’s criticisms of the popular tale have themselves been controversial. All the same, his argument seemed to make sense to me, and I’m quite sure that there was indeed some myth-spinning around this tale, abetted by Pasteur himself to boost his own legend.]
Dismissing the famous ‘verification’ of Einstein’s general relativity as a work of data-fudging is unwarranted, a new study argues.
There was once a time when the history of science was conventionally told as a succession of Eureka moments in which some stroke of experimental or theoretical genius led the scales to fall from our eyes, banishing old, false ideas to the dustbin.
Now we have been encouraged to think that things don’t really happen that way, and that in contrast scientific knowledge advances messily, one theory vanquishing another in a process that involves leaps of faith, over-extrapolated results and judicious advertising. Antoine Lavoisier’s oxygen theory, Friedrich Wöhler’s synthesis of urea and the ‘death of vitalism’, Louis Pasteur’s germ theory – all have been picked apart and reinterpreted this way.
Generally speaking, the picture that emerges is probably a more accurate reflection of how science works in practice, and is certainly preferable to the Whiggishness of classic popular ‘histories’ like Bernard Jaffe’s Crucibles: The Story of Chemistry. At its most extreme, however, this sceptical approach can lead to claims that scientific ‘understanding’ changes not because of any deepening insight into the nature of the universe but because of social and cultural factors.
One of the more recent victims of this revisionism is the ‘confirmation’ of Einstein’s theory of general relativity offered in 1919 by the British astronomer Arthur Eddington, who reported the predicted bending of light in observations made during a total eclipse. Eddington, it has been said, cooked the books to make sure that Einstein was vindicated over Newton, because he had already decided that this must be so.
This idea has become so widespread that even physicists who celebrate Einstein’s theory commonly charge Eddington with over-interpreting his data. In his Brief History of Time, Stephen Hawking says of the result that “Their measurement had been sheer luck, or a case of knowing the result they wanted to get.” Hawking reports the widespread view that the errors in the data were as big as the effect they were meant to probe. Some go further, saying that Eddington consciously excluded data that didn’t agree with Einstein’s prediction.
Is that true? According to a study by Daniel Kennefick, a physicist at the University of Arkansas [1], Eddington was in fact completely justified in asserting that his measurements matched the prediction of general relativity. Kennefick thinks that anyone now presented with the same data would have to share Eddington’s conclusion.
The story is no mere wrinkle in the history of science. Einstein’s theory rearranged everything we thought we knew about time and space, deepening his 1905 theory of special relativity so as to give a wholly new picture of what gravity is. In this sense, it transformed fundamental physics forever.
Crudely put, whereas special relativity dealt with objects moving at constant velocity, general relativity turned the spotlight on accelerating bodies. Special relativity argued that time and space are distorted once objects travel at close to the speed of light. This obliterated the Newtonian notion of an absolute reference frame with respect to which all positions, motions and times can be measured; one could only define these things in relative terms.
That was revolutionary enough. But in general relativity, Einstein asserted that gravity is the result of a distortion of spacetime by massive objects. The classic image, disliked by some physicists, is that of a cannonball (representing a star, say) on a trampoline (representing space time), creating a funnel-shaped depression that can trap a smaller rolling ball so that it circles like a planet in orbit.
Even light cannot ignore this remoulding of space by a massive body – the theory predicted that light rays from distant stars should be bent slightly as they skim past the Sun. We can’t hope to see this apparent ‘shifting’ of star positions close to the edge of the blazing Sun. But when it gets blotted out during a total solar eclipse, the bending should be visible.
This is what Eddington set out to investigate. He drew on two sets of observations made from equatorial locations during the eclipse of 29 May 1919: one at the town of Sobral in Brazil, the other on the island of Principe off Africa’s west coast.
With the technology then available, measuring the bending of starlight was very challenging. And contrary to popular belief, Newtonian physics did not predict that light would remain undeflected – Einstein himself pointed out in 1911 that Newtonian gravity should cause some deviation too. So it was not a matter of an all-or-nothing shift in the stars’ positions; everything hinged on the exact numbers.
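The numbers in question, for starlight grazing the edge of the Sun, are the standard textbook values (given here for orientation rather than taken from Kennefick’s paper):

\[
\delta_{\mathrm{Einstein}} = \frac{4GM_{\odot}}{c^{2}R_{\odot}} \approx 1.75'' , \qquad \delta_{\mathrm{Newton}} = \frac{2GM_{\odot}}{c^{2}R_{\odot}} \approx 0.87''
\]

General relativity predicts exactly twice the deflection of the ‘Newtonian’ (equivalence-principle) calculation, so the expeditions had to distinguish 1.75 arcseconds from 0.87 – less than a thousandth of the Sun’s apparent diameter – on photographic plates.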
The results from the two locations were conflicting. It has been claimed that those at Sobral showed little bending, and thus supported Newton, whereas those at Principe were closer to Einstein’s predictions. The case for prosecuting Eddington is that he is said to have rejected the former and concentrated on the latter.
This claim was made particularly strongly in a 1980 paper [2] by philosophers of science John Earman and Clark Glymour, whose position was made more widely known by Harry Collins and Trevor Pinch in their 1993 book The Golem [3]. Why would Eddington have done this? One possibility is that he had simply been won over by Einstein’s theory, and wanted to see it ‘proved’. But it’s also suggested that Eddington’s Quaker belief in pacifism predisposed him to see a British proof of a German theory as an opportunity for postwar reconciliation.
Kennefick has examined these claims in detail. It is true that the Principe data, which Eddington helped to collect himself, were poor: because of cloudy weather, there were only two useable photographic plates of star positions, with just five stars on each. When Eddington spoke about these measurements in a public talk in September, before he had had a chance to analyse them fully, he admitted that the deflection of starlight seemed to fall between the predictions of Newtonian and relativistic theories. He clearly needed the Sobral data to resolve the matter.
The latter came from two sets of astronomical measurements: one made with a so-called ‘Astrographic’ lens with a wide field of view, and the other using a 4-inch lens borrowed from the Royal Irish Academy. The Astrographic data were expected to be more reliable – and it seems that they supported the non-relativistic prediction. This is where the charges of data-fudging come in, because it has been asserted that Eddington ditched those results and focused instead on the ones collected with the 4-inch lens, which showed ‘full deflection’ in support of Einstein’s view.
The Sobral Astrographic data were discarded, for technical reasons which Dyson and Eddington described in their full account of the expeditions [4]. Kennefick argues that these reasons were sound – but he shows that in any case Eddington seems to have played no part in the decision. He was merely informed of the analysis of the Sobral plates by the expedition leader, the Astronomer Royal Frank Watson Dyson of the Greenwich Observatory in London. Dyson, however, was cautious of Einstein’s theory (as were many astronomers, who struggled to understand it), suspecting it was too good to be true. So it’s not obvious why he would fiddle with the data.
In any event, a modern reanalysis of these plates, carried out in 1979, showed that, taken together, they do support Einstein’s prediction rather well, and that the original teams made assumptions in their calculations that were justified even if they couldn’t be conclusively supported at the time.
Kennefick says that the ‘Eddington fudge’ story has mutated from the sober and nuanced analysis of Earman and Glymour to a popular view that the ‘victory’ of general relativity was nothing but a public-relations triumph. It is now sometimes cited as a reason why scientists should be distrusted in general. Kennefick admits that Eddington may well have had the biases attributed to him – but there is no evidence that he had the opportunity to indulge them, even if he had been so inclined.
It’s a salutary tale for all involved. Scientists need to be particularly careful that, in their eagerness to celebrate past achievements and to create coherent narratives for their disciplines, they do not construct triumphalist myths that invite demolition. (Crick and Watson’s discovery of the structure of DNA is shaping up as another candidate.)
But there is an undeniable attraction in exposing shams and parading a show of canny scepticism. In The Golem, Collins and Pinch imply that the ‘biases’ shown by Eddington are the norm in science. It would be foolish to claim that this kind of thing never happens, but the 1919 eclipse expeditions offer scant support for a belief that such preconceptions (or worse) are the key determinant of scientific ‘truth’.
The motto of the Royal Society – Nullius in verba, loosely translated as ‘take no one’s word for it’ – is often praised as an expression of science’s guiding principle of empiricism. But it should also be applied to tellings and retellings of history: we shouldn’t embrace cynicism just because it’s become cool to knock historical figures off their pedestals.
References
1. Kennefick, D. Preprint at http://xxx.arxiv.org/abs/0709.0685 (2007).
2. Earman, J. & Glymour, C. Hist. Stud. Phys. Sci. 11, 49-85 (1980).
3. Collins, H. M. & Pinch, T. The Golem: What Everyone Should Know About Science (Cambridge University Press, 1993).
4. Dyson, F. W., Eddington, A. S. & Davidson, C. R. Phil. Trans. R. Soc. Ser. A 220, 291-330 (1920).
[This is, pre-edited as usual, my latest article for muse@nature. I wonder whether I have been a little guilty of the sin described herein, of over-enthusiastic demolition of the classic stories of science. In my 2005 book Elegant Solutions I made merry use of Gerald Geison’s sceptical analysis of the Pasteur discovery of molecular chirality; but Geison’s criticisms of the popular tale have themselves been controversial. All the same, his argument seemed to make sense to me, and I’m quite sure that there was indeed some myth-spinning around this tale, abetted by Pasteur himself to boost his own legend.]
Dismissing the famous ‘verification’ of Einstein’s general relativity as a work of data-fudging is unwarranted, a new study argues.
There was once a time when the history of science was conventionally told as a succession of Eureka moments in which some stroke of experimental or theoretical genius led the scales to fall from our eyes, banishing old, false ideas to the dustbin.
Now we have been encouraged to think that things don’t really happen that way, and that instead scientific knowledge advances messily, one theory vanquishing another in a process that involves leaps of faith, over-extrapolated results and judicious advertising. Antoine Lavoisier’s oxygen theory, Friedrich Wöhler’s synthesis of urea and the ‘death of vitalism’, Louis Pasteur’s germ theory – all have been picked apart and reinterpreted this way.
Generally speaking, the picture that emerges is probably a more accurate reflection of how science works in practice, and is certainly preferable to the Whiggishness of classic popular ‘histories’ like Bernard Jaffe’s Crucibles: The Story of Chemistry. At its most extreme, however, this sceptical approach can lead to claims that scientific ‘understanding’ changes not because of any deepening insight into the nature of the universe but because of social and cultural factors.
One of the more recent victims of this revisionism is the ‘confirmation’ of Einstein’s theory of general relativity offered in 1919 by the British astronomer Arthur Eddington, who reported the predicted bending of light in observations made during a total solar eclipse. Eddington, it has been said, cooked his books to make sure that Einstein was vindicated over Newton, because he had already decided that this must be so.
This idea has become so widespread that even physicists who celebrate Einstein’s theory commonly charge Eddington with over-interpreting his data. In his Brief History of Time, Stephen Hawking says of the result that “Their measurement had been sheer luck, or a case of knowing the result they wanted to get.” Hawking reports the widespread view that the errors in the data were as big as the effect they were meant to probe. Some go further, saying that Eddington consciously excluded data that didn’t agree with Einstein’s prediction.
Is that true? According to a study by Daniel Kennefick, a physicist at the University of Arkansas [1], Eddington was in fact completely justified in asserting that his measurements matched the prediction of general relativity. Kennefick thinks that anyone now presented with the same data would have to share Eddington’s conclusion.
The story is no mere wrinkle in the history of science. Einstein’s theory rearranged everything we thought we knew about time and space, deepening his 1905 theory of special relativity so as to give a wholly new picture of what gravity is. In this sense, it transformed fundamental physics forever.
Crudely put, whereas special relativity dealt with objects moving at constant velocity, general relativity turned the spotlight on accelerating bodies. Special relativity argued that time and space are distorted once objects travel at close to the speed of light. This obliterated the Newtonian notion of an absolute reference frame with respect to which all positions, motions and times can be measured; one could only define these things in relative terms.
That was revolutionary enough. But in general relativity, Einstein asserted that gravity is the result of a distortion of spacetime by massive objects. The classic image, disliked by some physicists, is that of a cannonball (representing a star, say) on a trampoline (representing spacetime), creating a funnel-shaped depression that can trap a smaller rolling ball so that it circles like a planet in orbit.
Even light cannot ignore this remoulding of space by a massive body – the theory predicted that light rays from distant stars should be bent slightly as they skim past the Sun. We can’t hope to see this apparent ‘shifting’ of star positions close to the edge of the blazing Sun. But when it gets blotted out during a total solar eclipse, the bending should be visible.
This is what Eddington set out to investigate. He drew on two sets of observations made from equatorial locations during the eclipse of 29 May 1919: one at the town of Sobral in Brazil, the other on the island of Principe off Africa’s west coast.
With the technology then available, measuring the bending of starlight was very challenging. And contrary to popular belief, Newtonian physics did not predict that light would remain undeflected – Einstein himself pointed out in 1911 that Newtonian gravity should cause some deviation too. So the question was not one of an all-or-nothing shift in the stars’ positions, but hinged on the exact numbers.
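For a sense of the ‘exact numbers’ at stake: general relativity predicts that a light ray grazing the Sun’s limb is deflected by 4GM/(c²R), twice the value Einstein had obtained in 1911 from a Newtonian-style calculation. Here is a quick back-of-the-envelope check using standard solar constants (a sketch for orientation only, not part of Kennefick’s analysis):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
c = 2.998e8          # speed of light, m/s

rad_to_arcsec = (180 / math.pi) * 3600

# General relativity: deflection of a ray grazing the solar limb
alpha_einstein = 4 * G * M_sun / (c**2 * R_sun) * rad_to_arcsec

# The 'Newtonian' (1911) value is half of this
alpha_newton = alpha_einstein / 2

print(f"Einstein: {alpha_einstein:.2f} arcsec; Newtonian: {alpha_newton:.2f} arcsec")
# Roughly 1.75 versus 0.87 arcseconds -- the small difference the
# eclipse plates had to resolve.
```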
The results from the two locations were conflicting. It has been claimed that those at Sobral showed little bending, and thus supported Newton, whereas those at Principe were closer to Einstein’s predictions. The case for prosecuting Eddington is that he is said to have rejected the former and concentrated on the latter.
This claim was made particularly strongly in a 1980 paper [2] by philosophers of science John Earman and Clark Glymour, whose position was made more widely known by Harry Collins and Trevor Pinch in their 1993 book The Golem [3]. Why would Eddington have done this? One possibility is that he had simply been won over by Einstein’s theory, and wanted to see it ‘proved’. But it’s also suggested that Eddington’s Quaker belief in pacifism predisposed him to see a British proof of a German theory as an opportunity for postwar reconciliation.
Kennefick has examined these claims in detail. It is true that the Principe data, which Eddington helped to collect himself, were poor: because of cloudy weather, there were only two useable photographic plates of star positions, with just five stars on each. When Eddington spoke about these measurements in a public talk in September, before he had had a chance to analyse them fully, he admitted that the deflection of starlight seemed to fall between the predictions of Newtonian and relativistic theories. He clearly needed the Sobral data to resolve the matter.
The latter came from two sets of astronomical measurements: one made with a so-called ‘Astrographic’ lens with a wide field of view, and the other using a 4-inch lens borrowed from the Royal Irish Academy. The Astrographic data were expected to be more reliable – and it seems that they supported the non-relativistic prediction. This is where the charges of data-fudging come in, because it has been asserted that Eddington ditched those results and focused instead on the ones collected with the 4-inch lens, which showed ‘full deflection’ in support of Einstein’s view.
The Sobral Astrographic data were discarded, for technical reasons which Dyson and Eddington described in their full account of the expeditions [4]. Kennefick argues that these reasons were sound – but he shows that in any case Eddington seems to have played no part in the decision. He was merely informed of the analysis of the Sobral plates by the expeditions’ organizer, the Astronomer Royal Frank Watson Dyson of the Greenwich Observatory in London. Dyson, however, was cautious about Einstein’s theory (as were many astronomers, who struggled to understand it), suspecting it was too good to be true. So it’s not obvious why he would fiddle with the data.
In any event, a modern reanalysis of these plates, carried out in 1979, shows that, taken together, they support Einstein’s prediction rather well, and that the original teams made assumptions in their calculations that were justified even if they couldn’t be conclusively supported at the time.
Kennefick says that the ‘Eddington fudge’ story has mutated from the sober and nuanced analysis of Earman and Glymour to a popular view that the ‘victory’ of general relativity was nothing but a public-relations triumph. It is now sometimes cited as a reason why scientists should be distrusted in general. Kennefick admits that Eddington may well have had the biases attributed to him – but there is no evidence that he had the opportunity to indulge them, even if he had been so inclined.
It’s a salutary tale for all involved. Scientists need to be particularly careful that, in their eagerness to celebrate past achievements and to create coherent narratives for their disciplines, they do not construct triumphalist myths that invite demolition. (Crick and Watson’s discovery of the structure of DNA is shaping up as another candidate.)
But there is an undeniable attraction in exposing shams and parading a show of canny scepticism. In The Golem, Collins and Pinch imply that the ‘biases’ shown by Eddington are the norm in science. It would be foolish to claim that this kind of thing never happens, but the 1919 eclipse expeditions offer scant support for a belief that such preconceptions (or worse) are the key determinant of scientific ‘truth’.
The motto of the Royal Society – Nullius in verba, loosely translated as ‘take no one’s word for it’ – is often praised as an expression of science’s guiding principle of empiricism. But it should also be applied to tellings and retellings of history: we shouldn’t embrace cynicism just because it’s become cool to knock historical figures off their pedestals.
References
1. Kennefick, D. preprint http://xxx.arxiv.org/abs/0709.0685 (2007).
2. Earman, J. & Glymour, C. Hist. Stud. Phys. Sci. 11, 49-85 (1980).
3. Collins, H. M. & Pinch, T. The Golem: What Everyone Should Know About Science. Cambridge University Press, 1993.
4. Dyson, F. W., Eddington, A. S. & Davidson, C. R. Phil. Trans. R. Soc. Ser. A 220, 291-330 (1920).
Wednesday, September 05, 2007

Singing sands find a new tune
[Here’s the unedited version of my latest article for news@nature, which has a few more comments from the researchers than the final piece does (published in print this week).]
A new theory adds to the controversy over why some desert dunes emit sonorous booms.
A new explanation for why sand dunes emit eerie booming drones seems likely to stir up fresh controversy, as rival theories contend to answer this ancient puzzle.
Research on this striking natural phenomenon has become something of a battleground after two groups in France, previously collaborators, published their opposing theories. Now a team at the California Institute of Technology, led by mechanical engineer Melany Hunt, says that they’re both wrong [1].
“There are strong feelings in this field”, says physicist Michael Bretz at the University of Michigan, who has studied the ‘song of the sands’. “It’ll take a while longer to get it sorted out. But the explanations keep getting better.”
The ‘singing’ of sand dunes has been known for a very long time. Marco Polo described it on his journeys through the Gobi desert in the thirteenth century, attributing the sound to evil desert spirits. The noise can be very loud, audible for over a kilometre. “It’s really magnificent”, says physicist Stéphane Douady at the Ecole Normale Supérieure in Paris, who has proposed one of the competing theories to explain it.
The effect is clearly related to avalanches of sand, and can be triggered by people sliding down the slopes to get the sand moving – as has been done since at least the ninth century during a festival on a sand-covered hill in northwestern China called Mingsha Shan (Sighing Sand Mountain). Charles Darwin heard accounts of the ‘song of the sands’ in Chile, where it was said to be produced on a sandy hill “when people, by ascending it, put the sand in motion.”
In the twentieth century the doyen of dune science Ralph Bagnold, an army engineer who fell in love with the North African deserts during the Second World War, suggested that the noise was caused by collision of sand grains, the frequency being determined by the average time between collisions. This implies that the frequency of the boom depends on the size of the individual grains, increasing as the grains get smaller.
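Bagnold’s idea implies a simple scaling: if the boom frequency is set by the rate at which grains collide as they tumble under gravity, dimensional analysis suggests f should vary roughly as the square root of g/d, where d is the grain diameter. The sketch below illustrates that trend only; the prefactor of one is a placeholder for illustration, not Bagnold’s detailed result:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def bagnold_frequency(grain_diameter_m, prefactor=1.0):
    """Collision-rate scaling in the spirit of Bagnold: f ~ sqrt(g/d).

    The prefactor is left at 1.0 purely for illustration; the point is
    the trend, not the absolute number.
    """
    return prefactor * math.sqrt(g / grain_diameter_m)

for d_mm in (0.5, 0.3, 0.2):
    f = bagnold_frequency(d_mm * 1e-3)
    print(f"grain diameter {d_mm} mm -> f ~ {f:.0f} Hz")
# The predicted frequency rises as the grains get smaller -- the
# grain-size dependence that later measurements put to the test.
```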
The previous explanations of the French researchers focused on these collisions during sand avalanches. Douady and his coworkers Bruno Andreotti and Pascal Hersen began to study ‘singing dunes’ during a research trip in Morocco in 2001.
Douady decided that in order for the moving grains to generate a single sound frequency, their motions must become synchronized. This synchronization, he argued, comes from standing waves set up in the sliding layer. The loudness of the noise results from the way that the dune surface acts like a giant loudspeaker membrane.
But Andreotti found a slightly different explanation. The synchronization of grain motions, he said, comes from waves excited in the sand bed below the sliding layer, which then act back on the moving grains, ‘locking’ their movements together and thus converting random collisions into synchronized ones.
It might seem like a small distinction, but Douady and Andreotti found that they could not resolve their differences, and in the end they published separate papers offering their explanations [2,3]. Andreotti now works at another lab in Paris.
But both explanations have serious problems, according to Hunt. For one thing, the measurements made by her team on several booming dunes in Nevada and California seem to show that the booming frequency doesn’t depend on the grain size at all, contrary to Bagnold’s suggestion, with which both Andreotti and Douady had concurred.
What’s more, the previous theories imply that all dunes should be able to ‘sing’, since this is a general property of sand avalanches. But in fact some dunes sing while others don’t – that is, after all, why Mingsha Shan got its name. Why is that? Andreotti has proposed that ‘silent’ dunes aren’t dry enough, or have grains of the wrong shape. But Hunt and colleagues think that the answer lies literally deeper than this.
“Douady and Andreotti have focused on the grain sizes and the surface features of the grains, but did not take large-scale properties of the dunes into account”, says Hunt’s student Nathalie Vriend. “They have not found an explanation yet why the smaller dunes or dunes in the wintertime do not make this sound.”
The Caltech team says that dunes have to be covered in distinct layers of sand in order to create a boom. Their careful measurements of vibrations in the sand – made with an array of ‘geophones’ on the dune slopes, like those used to monitor seismic waves in earthquake studies – showed that the speed of these seismic waves increases in abrupt steps the deeper the sand is.
In particular, the speed of the seismic waves increases suddenly by almost a factor of two at a depth of about 1.5 m below the dune surface.
The Caltech researchers think that this layered structure, caused by variations in moisture content and bonding of the grains to one another, enables the surface layer to act as a kind of waveguide for acoustic energy, rather like the way an optical fibre channels light. So while they agree that the boom is transmitted to the air by a loudspeaker effect of the dune surface, they think that the frequency is set by the width of the waveguide layer of sand.
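As a crude illustration of why a layer about 1.5 m deep could plausibly set an audible pitch, one can treat the surface layer as a simple resonator whose fundamental frequency scales as the wave speed divided by twice the layer depth. This is a back-of-the-envelope sketch, not the Caltech group’s actual waveguide calculation, and the seismic speeds assumed below are only representative figures for dry surface sand:

```python
def layer_frequency(wave_speed_m_s, layer_depth_m):
    """Fundamental frequency of a simple half-wavelength resonator.

    A deliberately crude stand-in for a waveguide model: the point is
    only that f scales as (wave speed) / (layer depth), so a metre-scale
    layer and speeds of a few hundred m/s land in the audible range.
    """
    return wave_speed_m_s / (2 * layer_depth_m)

for speed in (150, 250, 350):   # assumed seismic speeds, m/s
    f = layer_frequency(speed, 1.5)
    print(f"wave speed {speed} m/s, layer depth 1.5 m -> f ~ {f:.0f} Hz")
```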
Dunes that lack this layered structure – smaller ones, for example – won’t ‘sing’ at all: the vibrations simply get dispersed within the sliding sands. The researchers also find that the extra moisture that condenses between the sand grains during winter smears out the boundaries between the layers of singing dunes and silences them.
This is unlikely to be the last word on the matter, however. For one thing, the strange properties of the sand in ‘booming dunes’ don’t seem to rely on such large-scale influences. “You can take a cupful of this sand and excite it with your finger”, says Peter Haff, a geologist at Duke University in North Carolina who has studied it. “You can feel it vibrating, like running your finger over a washboard. But you can take sand from other parts of the dune, and there’s nothing you can do to make it boom.” Haff concludes that, while these theories may offer part of the answer, “there must be something else going on at a small scale.”
Douady agrees. “The problem for the Caltech theory is that we can recreate these sounds in the lab”, he says. He thinks that the sand layering might play a role in modifying the sound, but that it is “just a decoration” to the basic mechanism of booming. “It’s like the difference between singing in a small room and singing in a cathedral,” he says.
Andreotti also finds several reasons to remain unconvinced. In particular, he says “They use sensors only at the surface of the dune. We have made measurements with buried sensors about 20 cm below the surface, and didn’t detect any vibration. This is a strong and direct contradiction of the paper.” So it seems that, with everyone sticking to their own theory, the riddle of the dunes is not yet solved.
References
1. Vriend, N. M. et al. Geophys. Res. Lett. 34, L16306 (2007).
2. Andreotti, B. Phys. Rev. Lett. 93, 238001 (2004).
3. Douady, S. et al. Phys. Rev. Lett. 97, 018002 (2006).
The history of singing dunes
It is asserted as a well-known fact that this desert is the abode of many evil spirits, which amuse travellers to their destruction with most extraordinary illusions. If, during the daytime, any persons remain behind on the road, either when overtaken by sleep or detained by their natural occasions, until the caravan has passed a hill and is no longer in sight, they unexpectedly hear themselves called to by their names, and in a tone of voice to which they are accustomed. Supposing the call to proceed from their companions, they are led away by it from the direct road, and not knowing in what direction to advance, are left to perish. In the night-time they are persuaded they hear the march of a large cavalcade on one side or the other of the road, and concluding the noise to be that of the footsteps of their party, they direct theirs to the quarter from whence it seems to proceed; but upon the breaking of day, find they have been misled and drawn into a situation of danger... Marvellous indeed and almost passing belief are the stories related of these spirits of the desert, which are said at times to fill the air with the sounds of all kinds of musical instruments, and also of drums and the clash of arms; obliging the travellers to close their line of march and to proceed in more compact order.
Marco Polo (1295)
Somewhere close to us, in an indeterminate direction, a drum was beating – the mysterious drum of the dunes; it beat distinctly, sometimes stronger, sometimes fainter, stopping, then resuming its fantastic rhythm.
The Arabs, terrified, looked at one another; and one said, in his own language: "Death is upon us." And then suddenly my companion, my friend, almost my brother, fell head-first from his horse, struck down by sunstroke.
And for two hours, while I tried in vain to save him, this imperceptible drum filled my ears with its monotonous, intermittent and incomprehensible noise; and I felt fear slip into my bones – true fear, hideous fear – beside this beloved body, in this hollow scorched by the sun between four mounds of sand, while the unknown echo threw at us, two hundred miles from any French village, the rapid beating of the drum.
Maupassant (1883)
Whilst staying in the town I heard an account from several of the inhabitants, of a hill in the neighborhood which they called "El Bramador," - the roarer or bellower. I did not at the time pay sufficient attention to the account; but, as far as I understood, the hill was covered by sand, and the noise was produced only when people, by ascending it, put the sand in motion. The same circumstances are described in detail on the authority of Seetzen and Ehrenberg, as the cause of the sounds which have been heard by many travellers on Mount Sinai near the Red Sea.
Charles Darwin (1889)
Update
Andreotti and his colleagues have submitted a comment on the paper by Vriend et al. to Geophys. Res. Lett., which is available here.
Wednesday, August 29, 2007
Letter to Prospect: a response
My column for the June issue of Prospect (available in the archives here) can be seen as somewhat sceptical about the value of the Large Hadron Collider, so it is right that Prospect should publish a letter defending it. But the one that appears in the September issue is a little odd:
“Philip Ball (June) says that "the only use of the LHC [Large Hadron Collider] that anyone ever hears about is the search for the Higgs boson." But this is not so. Physicists may look crazy, but they are not crazy enough to build such a complicated and technically demanding installation just to hunt down one particle. The LHC will be the world's most powerful instrument in particle physics for the next ten to 20 years, and it has been built to help us understand more about the 96 per cent of our universe that remains a mystery. The first thing physicists will be looking for is the Higgs boson, but this is just the beginning of a long journey into the unknown. As with earlier accelerators, there will be surprises.”
I’m glad that the author, Reinhard Budde, quoted my remark, because it reveals his non-sequitur. I did not say, as he implies, “all the LHC will do is look for the Higgs boson.” As a writer, I will make factual mistakes and no doubt also express opinions that are not wholly fair or justified. But I do try to choose my words carefully. Let me repeat them more fully:
“Particle physicists point out that because it will smash subatomic particles into one another with greater energy than ever before, it will open a window on a whole new swathe of reality. But the only use of the LHC that anyone ever hears about is the search for the Higgs boson… The LHC may turn up some surprises—evidence of extra dimensions, say, or of particles that lie outside the standard model.”
(It’s interesting that even Dr Budde doesn’t enlighten us about what else, exactly, the LHC might do, but I was happy to oblige.)
It’s a small point, but it does frustrate me; as I found out as a Nature editor, scientists seem peculiarly bad at comprehension of the written word (they have many other virtues to compensate).
For the record, I support the construction of the LHC, but with some reservations, as I stated in my piece. And by the way, I am a physicist, and I do not feel I look particularly crazy. Nor do I feel this is true of physicists as a whole, although many do have a tendency to look as though they belong in The Big Lebowski (this is a good thing). And the LHC was not built by “physicists” – it was built at the request of a rather small subsection of the global physics community. Not all physicists, or even most, are particle physicists.
Tuesday, August 28, 2007

Check out those Victorian shades, dude
For people interested in the cultural histories of materials, there is a lovely paper by Bill Brock in the latest issue of the Notes and Records of the Royal Society on the role of William Crookes in the development of sunglasses. Bill has written a new biography of Crookes (William Crookes (1832-1919) and the Commercialization of Science, in press with Ashgate), who was one of the most energetic and colourful figures in nineteenth-century British science. Shortly to be made the octogenarian president of the Royal Society, Crookes became involved in the 1900s in a search for forms of glass that would block out infrared and ultraviolet radiation. This search was stimulated by the Workmen’s Compensation Act of 1897, which allowed workers to claim compensation for work-related injuries. Glassworkers were well known to suffer from cataracts, and it was hoped by the Home Office that prevention of eye damage by tinted glass would obviate the need for compensation. Crookes began to look into the question, and presented his results to the Royal Society in 1913: a glass formulation that was opaque to UV and reduced IR by 90 percent. Always with an eye on commercial possibilities, he suggested that lenses made of this stuff could have other applications too, for example to prevent snow-blindness. “During the brilliant weather of the late summer [of 1911]”, he said, “I wore some of these spectacles with great comfort; they took off the whole glare of the sun on chalk cliffs, and did not appreciably alter the natural colours of objects. Lady Crookes, whose eyes are more sensitive to glare or strong light than are my own, wore them for several hours in the sun with great comfort.” Before long, these spectacles were being considered by London opticians, although commercialization was hindered by the war. Soon the original aim of cataract prevention in glassmakers was forgotten.
Friday, August 24, 2007

Spider-Man’s buddies and other elites
[This is the pre-edited version of my latest article for muse@nature.com.]
Marvel Universe reflects some of the undesirable properties of our social webs, while suppressing others for moral ends.
In which society do powerful males form a dominant, elitist network while the women are relegated to peripheral roles?
In which society are all the villains portrayed as warped loners while the heroes are a fraternal team united by their fight against evil?
Banish those wicked thoughts. This isn’t the real world, you cynics, but pure fantasy. We’re talking about the Marvel Universe.
This, as any comic-book geek will tell you, is where Spider-Man, Captain America and the Hulk do their thing. As indeed does the Thing. But no, not Batman or Superman – they are part of the DC Universe, and not on speaking terms with the Marvelites. Do keep up.
The story so far: in 2002 Spanish mathematician Ricardo Alberich and colleagues in Mallorca analysed the social network formed by the Marvel comic characters according to whether they have appeared together in the same story [1]. There are around 6,500 of these characters in nearly 13,000 Marvel comic books, so there’s plenty of data to reveal patterns and trends. Indeed, even the world of classical Greek and Roman mythology looks puny by comparison, with just 1,600 or so named characters in its pantheon.
What’s more, the Marvel Universe has become as incestuous as Hollywood in the way the stars hang out with one another – particularly after the relaunch of Marvel Comics in 1961, which spawned well-known gangs such as the Fantastic Four and the X-Men. Take the character Quicksilver, for instance, who first appeared as a member of Magneto’s Brotherhood of Evil Mutants (he is Magneto’s son). He later became a member of the Avengers, and then of the X-Factor, and finally the Knights of Wundagore. His twin sister is the Scarlet Witch, and his wife is Crystal, who was previously dating the Fantastic Four’s Human Torch and was a member of the Inhumans. Are you following this?
Perhaps fortunately, Alberich and team did not have to read every Marvel comic since they began (as Timely Comics) in 1939, for all these connections have been gathered into a database called the Marvel Chronology Project. They deduced that the Marvel network looks in many ways remarkably like those formed in real-world collaborations. Not only is the Marvel Universe a small world, where just about any character can be linked to any other by just a few ‘degrees of separation’, but it is a so-called scale-free world, where the distribution of links has a characteristic form that includes a few very highly connected hubs. In comparison, a random distribution of links would create no such network superstars.
This scale-free structure seems to arise when networks grow in a particular way: each new node forms links to existing nodes in a way that is probabilistic but biased, so that nodes that are already highly connected are more likely to receive the new links. In this way, the rich get richer (where richness is measured by how many links you have).
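For the curious, here is a minimal sketch of that ‘rich-get-richer’ growth rule, usually called preferential attachment. The numbers are illustrative rather than taken from the Marvel analysis:

```python
import random
from collections import Counter

def grow_network(n_nodes, links_per_new_node=2, seed=0):
    """Grow a network by preferential attachment.

    Each new node attaches to existing nodes chosen with probability
    proportional to their current degree, so already well-connected
    nodes tend to attract ever more links.
    """
    rng = random.Random(seed)
    edges = [(0, 1), (1, 2), (0, 2)]                      # small seed network
    targets = [node for edge in edges for node in edge]   # one entry per link end
    for new in range(3, n_nodes):
        chosen = set()
        while len(chosen) < links_per_new_node:
            chosen.add(rng.choice(targets))               # degree-biased pick
        for old in chosen:
            edges.append((new, old))
            targets.extend([new, old])
    return edges

edges = grow_network(6500)
degrees = Counter(node for edge in edges for node in edge)
print("most connected nodes:", degrees.most_common(5))
# A handful of hubs end up with far more links than the typical node --
# the network equivalent of a Spider-Man or a Captain America.
```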
The Marvel Universe, like our own societies, is unplanned: it has grown from the work of many comic-book story-writers who have made no attempt to engineer any overall social network. It seems that this joint effort guided them not towards a random network, as might have been expected, but towards one that (somewhat) mirrors reality. The same thing seems to have happened in classical mythology, another ‘multi-author’ story cycle that turns out to share the scale-free social network structure [2].
But Marvel Universe isn’t a perfect match for the way real people interact. In particular, a few of the most popular characters, such as Spider-Man and Captain America, have more connections and thus more influence than anyone would in the real social world. The Marvel writers appear to have succumbed to an unrealistic degree of favouritism.
The scale-free properties of social and other networks were discovered several years ago, but it’s become increasingly clear that looking simply at the statistics of linkages is a fairly crude way of interpreting the web’s structure. Researchers are now keen to ferret out the ways in which a network is divided into distinct communities – friendship circles, say, or professional collaborators – which might then be woven together by a more tenuous web of links. Social networks are, in this sense, hierarchically organized.
A key characteristic of human social networks, identified by Mark Newman of the University of Michigan, is that highly connected (‘rich’) nodes are more likely to be connected to other rich nodes than would be expected by chance – and likewise for ‘poor’ nodes [3]. In other words, the hub members are more likely to be pals with each other than with an individual selected at random. This is called assortative mixing, and contrasts with the structure of scale-free technological and biological networks, such as the Internet and food webs, for which the opposite is true.
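Newman’s measure of this is the degree assortativity coefficient, which is positive when hubs tend to link to other hubs and negative when they tend to link to poorly connected nodes. A small sketch using the networkx library shows how it is computed; the two model graphs below are generic stand-ins, not the real data sets:

```python
import networkx as nx

# A scale-free network grown by preferential attachment...
ba = nx.barabasi_albert_graph(2000, 2, seed=42)
# ...and a purely random network of similar size for comparison.
er = nx.erdos_renyi_graph(2000, 0.002, seed=42)

for name, graph in [("preferential attachment", ba), ("random", er)]:
    r = nx.degree_assortativity_coefficient(graph)
    print(f"{name}: assortativity r = {r:+.3f}")

# Real social networks typically give r > 0 (well-connected people know
# other well-connected people); technological and biological networks
# tend to give r < 0.
```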
Pablo Gleiser of the Centro Atómico Bariloche in Argentina has now delved into the community structure of the Marvel Universe, and shows that it too has assortative ‘rich clubs’ [4], whose well-connected members provide the ‘glue’ that binds distinct communities into a cohesive net. That in itself seems to suggest that here is another way in which the Marvel Universe is a pretty good mimic of reality.
But who are in these rich clubs? Their members are all heroes, and they’re all male. In this universe, women don’t bring communities together but sit on the fringes. That’s an old story – with a smattering of honourable exceptions, women have never fared well in comic books.
Yet the bad guys are no good at forming teams either. Why is that? Gleiser thinks the answer lies in the code created by the Comics Magazine Association of America in 1954 to govern the moral framework of the stories. It stipulates that criminals should not be glamorized, and that evil should be portrayed only to make a moral point: “In every instance good shall triumph over evil and the criminal punished for his misdeeds.”
This means, says Gleiser, that “villains are not destined to play leading roles”, and as a result they aren’t going to become hub characters. He thinks that the predestined victory of the good guys meanwhile encourages collaborations so as to avoid the impression that they are omnipotent and can do their job easily.
But in a world where the CIA is devoting considerable effort to understanding the structures of organized-crime and terrorist networks, it seems that Marvel Universe has become outdated in insisting that its villains work alone (and equally, one might add, that ‘heroes’ prefer collaboration over unilateralism). And sadly, in the real world there is no one to insist that good shall triumph over evil. Not even Captain America.
References
1. Alberich, R. et al. preprint http://xxx.arxiv.org/abs/cond-mat/0202174 (2002).
2. Choi, Y.-M. & Kim, H.-J. Physica A 382, 665-671 (2007).
3. Newman, M. E. J. Phys. Rev. Lett. 89, 208701 (2002).
4. Gleiser, P. M. preprint http://xxx.arxiv.org/abs/physics/0708.2410 (2007).
Wednesday, August 22, 2007
“Here lies one whose name was writ in water…”
[It’s been pointed out to me that my commentary on the Homeopathy special issue on the memory of water, posted on the Nature news site, is now available only to subscribers. For shame. So here it is. This is the version I returned to the editors, but I’ve not checked what final small changes they might have added subsequently.]
A survey of evidence for the ‘memory’ of liquid water casts little light on its putative role in homeopathy.
I suspect it will be news to most scientists that Elsevier publishes a peer-reviewed journal called Homeopathy. I also suspect that many, on discovering this, would doubt there is anything published there that it would profit them to read. But I propose that such prejudices be put aside for the current special issue, released this Friday, which collects a dozen papers devoted to the ‘memory of water’ [1]. It’s worth seeing what they have to say – if only because that reveals this alleged phenomenon to be as elusive as ever.
The inability of water to act as a memorial was a well-known poetical trope before the poet John Keats chose as his epitaph the quotation that serves as a headline here; its ephemerality was noted by Heraclitus in the fifth century BC. But ‘the memory of water’ is a phrase now firmly lodged in the public consciousness – it even supplied the title for a recent play in London’s West End. Scientists, though, tend to side with the poets in rejecting any notion that water can hold lasting impressions. Indeed, Homeopathy’s editor, Peter Fisher of the Royal London Homeopathic Hospital, admits that the memory of water “casts a long shadow over homeopathy and is just about all that many scientists recall about the scientific investigation of homeopathy, equating it with poor or even fraudulent science.”
The term was coined by the French newspaper Le Monde in the wake of the 1988 Nature paper [2] that kicked off the whole affair. The lead author was the late Jacques Benveniste, head of a biomedical laboratory in Clamart run by the French National Institute of Health and Medical Research (INSERM).
Benveniste’s team described experiments in which antibodies stimulated an allergic response in human white blood cells called basophils, even when the antibody solutions were diluted far beyond the point where they would contain a single antibody molecule. The activity seemed to disappear and then reappear periodically during serial dilutions.
The results seemed to offer some experimental justification for the use of such high-dilution remedies in homeopathy. But they defied conventional scientific understanding, specifically the law of mass action which demands that the rates of chemical reactions be proportional to the concentrations of reagents. How could this be? Benveniste and colleagues suggested that perhaps the antibody activity was ‘imprinted’ in some fashion on the structure of liquid water, and transferred with each dilution.
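The arithmetic behind ‘beyond the point where they would contain a single molecule’ is easy to sketch. The starting amount below is just a generous round number for illustration, not Benveniste’s actual protocol:

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules_remaining(start_moles, dilution_factor_per_step, steps):
    """Expected number of solute molecules left after serial dilution."""
    return start_moles * AVOGADRO / dilution_factor_per_step**steps

# Start from one mole of antibody and dilute 1:100 repeatedly, as in
# homeopathic 'C' potencies.
for c in (6, 12, 30):
    n = molecules_remaining(1.0, 100, c)
    print(f"{c}C dilution: ~{n:.3g} molecules expected")
# By 12C (a factor of 10^24) the expected count is already below one;
# at 30C it is around 10^-36 -- effectively pure solvent.
```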
The idea made no sense in terms of what was known about the structure of water – but what prevented it from being dismissed straight away was that liquid water has a complicated molecular-scale structure that is still not perfectly understood. Water molecules associate by means of weak chemical bonds called hydrogen bonds. Though in the main they form and break on timescales of about a trillionth of a second, nonetheless they appear to offer a vague possibility that water might form clusters of molecules with specific shapes and behaviours.
Benveniste’s experiments were investigated by a team of ‘fraud-busters’ led by Nature’s then editor John Maddox, who demanded that the studies be repeated under careful observation. Although Benveniste acquiesced (and the results proved utterly inconclusive), he complained of a witch-hunt. Certainly, it was an unprecedented act of scrutiny that not even the proponents of cold fusion a year later – another water-related pathology – had to endure.
In any event, the results were never unambiguously repeated by others. Benveniste, however, progressed from high-dilution experiments to the claim that the activity of biomolecules could be ‘digitally recorded’ and imprinted on water using radio waves. Until his death in 2004, he insisted that this would lead to a new age of ‘digital biology.’
There are many good reasons – too many to fit in this column – to doubt that water molecules in the liquid state could mimic the behaviour of antibodies or other complex biomolecules in a way that persists through dilution after dilution. As water expert José Teixeira, who bravely contributes a sceptic’s perspective to the Homeopathy special issue, says, “Any interpretation calling for ‘memory’ effects in pure water must be totally excluded.” But the idea won’t be squashed that easily, as some of the other papers show.
They report several experimental results that, at face value, are intriguing and puzzling. Louis Rey, a private researcher in Switzerland, reports that salt solutions show markedly different thermoluminescence signals, for different homeopathic dilutions, when frozen and then rewarmed. Bohumil Vybíral and Pavel Vorácek of the University of Hradec Králové in the Czech Republic describe curious viscosity changes in water left to stand undisturbed. And Benveniste’s collaborator Yolène Thomas of the Institut Andre Lwoff in France reports some of the results of radiofrequency ‘programming’ of water with specific biomolecular behaviour, including the induction of E. coli-like ‘signals’, the inhibition of protein coagulation, and blood vessel dilation in a guinea pig heart.
The volume is, in other words, a cabinet of curiosities. There is rarely even a token effort to explain the relevance of these experiments to the supposed workings of homeopathy, with its archaic rituals of shaking (‘succussion’) and ‘magic-number’ dilutions (one must always use factors of ten, and generally only specific ones, such as 100**6, 100**12 and 100**30). The procedures and protocols on display here are often unusual if not bizarre, because it seems the one thing you must not do on any account is the simplest experiment that would probe any alleged ‘memory’ effect: to look for the persistent activity of a single, well-defined agent in a simple reaction – say an enzyme or an inorganic catalyst – as dilution clears the solution of any active ingredient.
If that sounds bad, it is nothing compared with the level of theoretical discussion. This ‘field’ has acquired its own deus ex machina, an unsubstantiated theory of ‘quantum coherent domains’ in water proposed in 1988 [3] that is vague enough to fit anything demanded of it. Aside from that, the ‘explanations’ on offer seem either to consider that water physics can be reinvented from scratch by replacing decades of careful research with wishful thinking, or they call on impurities to perform the kind of miraculous feats of biomolecular mimicry and replication that chemists have been striving to achieve for many years.
The French philosopher Gaston Bachelard once wrote “We attribute to water virtues that are antithetic to the ills of a sick person. Man projects his desire to be cured and dreams of a compassionate substance.” On this evidence, that dream is as strong as ever.
References
1. Homeopathy 96, 141-226 (2007).
2. Davenas, E. et al., Nature 333, 816 (1988).
3. Del Giudice, E. et al. Phys. Rev. Lett. 61, 1085 (1988).
Tuesday, August 21, 2007

After the flood
[This is the pre-edited version of my Lab Report column for the September issue of Prospect.]
Can there be a pub in the country that has not witnessed some sage shaking his head over his pint and opining “Well, if you will build on a flood plain…”? These bar-room prophets are, as usual, merely parroting the phrases they have heard from Westminster, where flood plains have become the talk of the House. “Gordon Brown has to accept the inconvenient truth that if you build houses on flood plains it increases the likelihood that people will be flooded”, says shadow local government secretary Eric Pickles. But the chief executive of the National Housing Federation counters that “there’s simply no way we can’t build any more new homes because of concerns about flood plains… much of the country is a flood plain.”
But what exactly is a flood plain? Perhaps the most pertinent answer is that it is a reminder that rivers are not, like canals, compelled to respect fixed boundaries. They are, in fact, not things at all, but processes. Surface water flow from rain or snow melt, erosion, and sediment transport combine to produce a river channel that constantly shifts, redefining its own landscape. The meanders gradually push back the surrounding hill slopes and smooth out a broad, flat valley floor, thick with fertile sediment: the perfect setting for agrarian settlements, or so it seems. The catch is that when the river waters rise above the banks, there is nothing to hold them back from washing across this wide plain. Levees may try, but they are fighting the fact that a river’s curves are always on the move: the Mississippi shifts its course by up to 20 m a year. One of the fundamental problems for building near rivers is that buildings stay put, but rivers don’t.
What’s the solution? To judge from recent events, it hasn’t changed much in a hundred years: you pile up sandbags. But some precautions are still little heeded: replacing soil with concrete exacerbates the dangers by increasing runoff, and the inadequacies of Britain’s Victorian drainage system are no secret. There’s nothing particularly sophisticated about flood defence: it’s largely a question of installing physical barriers and gates. But permanent walls can create conflicts with access and amenity – no one would tolerate a three-foot wall all along the Thames. And some areas are simply impossible to protect this way. So there’s no real call for new science or technology: it’s more a matter of recognizing that flood threats now have to be considered routine, not once-in-a-lifetime risks.
The UK floods were the worst for 60 years, and claimed at least nine lives. But the tribulations of a soggy summer in Gloucester are put in perspective by the situation in Asia. In China, heavy rainfall in the north brought flooding to the Yangtze, and the combined effects of storms have affected one tenth of the population. In a reversal of the usual situation where the parched north envies the moist south, a heatwave in the southern provinces has left more than a million short of drinking water. Meanwhile, an unusually intense monsoon has devastated parts of India and Bangladesh, killing more than 2000, displacing hundreds of thousands from their homes and affecting millions more. A map of the flooded areas of Bangladesh is almost surreal, showing more than half the country ‘under water’.
There’s little new in this, however. Low-lying Bangladesh floods to some degree most years. The Yellow River, commonly known as China’s Sorrow, has brought recurrent catastrophe to the country’s Great Plain well over a thousand times in history, despite herculean efforts to contain its flow with dikes. A flood in 1887-8 created a lake the size of Lake Ontario and, one way or another, killed an estimated six million.
But perhaps surprisingly, some in China have been more ready than in the West to blame the recent events on global warming. Dong Wenjie, director-general of the Beijing Climate Centre, claims that the frequency and intensity of extreme weather events are increasing, and that this “is closely associated with global warming.” Well, maybe. No single event can itself be interpreted one way or the other. The most one can really say is that it is in line with what global warming predicts, as the hydrological cycle that moves water between the seas and skies intensifies – although that by no means implies more rain everywhere. That regional variation, in fact, was a central component of the recent claim by scientists to have detected the imprint of global warming on 20th-century rainfall: computer models predict that this influence has a particular geographical fingerprint that has now been identified in the data. It’s a clear sign that the future predictions of more extreme weather – droughts as well as floods – need to be taken seriously.
One question so far given rather little consideration is what this implies for the major hydraulic engineering projects underway in Asia. Ten years ago, specialists in water-resource management were predicting that the problems evident with existing big projects, such as the Aswan Dam on the Nile, might curtail the era of mega-dams and suchlike. Now that looks unlikely: China’s Three Gorges dam is basically complete, and both China and India seem set on ambitious and controversial schemes to transfer waters between their major rivers. The South-North Water Diversion Project in China is scheduled to deliver water to Beijing in time for the Olympics from over 1,000 km away, while the massive Interlinking Rivers project in India would convert the entire country into a grid of waterways controlled by dams, with the aim of alleviating both flooding and drought.
Both of these schemes are already fraught with economic, environmental, social and scientific questions. The prospect of greater variability and more extremes of rainfall can only make the issues more uncertain, and prompts us to shed the illusion that we understand what rivers can and will do.
Sunday, August 19, 2007
The Hydra lives:
more on homeopathy
There’s no rest for the wicked, it seems. My wickedness was to voice criticisms, here and on the Nature site, of a collection of papers on the ‘memory of water’ published in the journal Homeopathy, and I return from holiday to find many responses (see Nature’s weblog and the comments on my article below) to attend to. So here goes.
I am gratified that I found the right metaphor: the ‘memory of water’ does indeed seem to be a many-headed Hydra on which new heads appear as fast as you can lop them off. I’ve discussed several of the papers in the journal, but it seems that I’m being called upon to address them all. Peter Fisher complains that I don’t discuss the experiments at all, but surely he must now know about my Nature column, in which I say:
“These papers report several experimental results that, at face value, are intriguing and puzzling. Louis Rey, a private researcher in Switzerland, reports that salt solutions show markedly different thermoluminescence signals, for different homeopathic dilutions, when frozen and then rewarmed. Bohumil Vybíral and Pavel Vorácek of the University of Hradec Králové in the Czech Republic describe curious viscosity changes in water left to stand undisturbed. And Benveniste's collaborator Yolène Thomas, of the Andre Lwoff Institute in Villejuif, outside Paris, reports some of the results of radiofrequency 'programming' of water with specific biomolecular behaviour, including the induction of Escherichia coli -like 'signals', the inhibition of protein coagulation, and blood-vessel dilation in a guinea pig heart.”
To do a thorough analysis of all the papers would require far more words than I can put into a Nature news article, or could reasonably post even on my own blog (the original piece below already ran to over 2000 words). The problem is that, as I’ve said before, the devil is in the details – and there are a lot of details.
Let me illustrate that with reference to Rustum Roy’s paper (Rao et al.), which Martin Chaplin, Dana Ullman (apologies for the gender confusion) and Rustum himself all seem keen that I talk about. I’m all too happy to acknowledge Rustum’s credentials. I have the highest respect for his work, and in fact I once attempted to organize a symposium for a Materials Research Society meeting with him on the ethics of that topic (something that was shamefully declined by the MRS, of which I am otherwise a huge fan, on the grounds that it would arouse too much controversy).
The paper is hard to evaluate on its own – it indicates that the full details will be published elsewhere. The key experimental claim is that the UV-Vis spectra of different remedies (Natrum muriaticum and Nux vomica) are distinguishable not only from one another but also among the different potencies (6C, 12C, 30C) of remedy. That is surprising if, chemically speaking, the solutions are all ‘identical’ mixtures of 95% ethanol in water. But are they? Who knows. There is no way of evaluating that here. There is no analysis of chemical composition – it looks as though the remedies were simply bought from suppliers and not analysed by any means other than those reported. So I find these to be a really odd set of experiments: in effect, someone hands you a collection of bottles without any clear indication of what’s in them, you conduct spectroscopy on them and find that the spectra are different, and then you conclude, without checking further, that the differences cannot be chemical. If indeed these solutions are all nominally identical ethanol solutions that differ only in the way they have been prepared, these findings are hard to explain. But this paper alone does not make that case – it simply asks us to believe it. One does not have to be a resolute sceptic to demand more information.
There is a troubling issue, however. In searching around to see what else had been written about this special issue, I came across a comment on Paul Wilson’s web site suggesting that the comparisons shown in Figures 1 and 2 are misleading. In short, the comparisons of spectra of Nat mur and Nux vom in Figure 1 are said to be “representative”. Figure 2, meanwhile, shows the range of variation for 10 preparations of each of these two remedies. But the plot for Nat mur in Figure 1 corresponds to the lowest boundary of the range shown in Figure 2, while the plot for Nux vom corresponds to the uppermost boundary. In other words, Figure 1 shows not representative spectra at all, but the two that are the most different out of all 10 samples. I have checked this suggestion for myself, and found it to be true, at least for the 30C samples. I may simply be misunderstanding something here, but if not, it’s hard not to see this aspect of the paper as very misleading, whatever the explanation for it. Why wasn’t it picked up in peer review?
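(For anyone who wants to try that kind of comparison themselves, here is a minimal sketch of one way to do it – purely illustrative, using made-up numbers rather than the actual data from Rao et al., and not necessarily the way anyone has actually done the check. The idea is simply to compute the per-wavelength minimum and maximum of the ten spectra and ask where the curve labelled ‘representative’ sits within that envelope: a value close to 0 or 1 means it hugs the bottom or top of the range, i.e. it is an extreme of the set rather than a typical member.)

```python
# Illustrative sketch only: hypothetical spectra, not data from Rao et al.
import numpy as np

def envelope_position(spectra, candidate):
    """Mean fractional position of `candidate` within the min/max envelope
    of `spectra` (0 = lower boundary, 1 = upper boundary)."""
    lo = spectra.min(axis=0)                 # lower edge of the envelope
    hi = spectra.max(axis=0)                 # upper edge of the envelope
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against zero spread
    return float(((candidate - lo) / span).mean())

# Ten made-up absorbance curves over 200-400 nm, with a little noise.
rng = np.random.default_rng(0)
wavelengths = np.linspace(200, 400, 201)
base = np.exp(-((wavelengths - 260) / 40) ** 2)
spectra = base + 0.05 * rng.standard_normal((10, wavelengths.size))

# Score each curve: values very close to 0 or 1 flag an extreme of the set.
for i, curve in enumerate(spectra):
    print(i, round(envelope_position(spectra, curve), 2))
```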
I’m not going to comment at any length on the hypotheses put forward in Rao et al., because they aren’t in any way directly connected to the experiments, and so there’s simply no support for them at all in the results. I don’t see for a moment, however, either how these hypotheses can be sustained in their own right, or (less still) how they can explain any physiological effects of the remedies.
I don’t, as Rustum implies, demand an explanation for alleged ‘memory of water’ effects before I will accept them as genuine – I agree that experiment should take primacy. I merely want to point out that the ‘explanations’ on offer do not offer much cause to think that a great deal of critical thinking is going on here. Rustum is perhaps right to suggest that I may have been too acquiescent to the Nature news editor’s erudite suggestion for the title of my column. I’m not sure, however, that Keats really meant to imply that his name would so soon be forgotten…
On the Nature site, George Vithoulkas gives me great delight, for it seems that homeopaths aren’t even sufficiently agreed about how their remedies are supposed to work to be able to distinguish ‘evidence’ for a mechanism from its opposite. My only other comment in this regard is to use Vithoulkas’s comment to point out that the common attribution of this ‘like cures like’ notion, as a general principle of medicine, to Paracelsus is wrong (not that this would give it any greater credibility!).
OK, am I excused now?
Update:
The Faculty of Homeopathy has just issued the following rejoinder to Richard Dawkins' TV programme last night in which he exposed the lack of scientific credibility of homeopathy. This special issue of Homeopathy on the memory of water has been cited as 'evidence' that there is some scientific weight to the field after all. This is exactly what I knew would happen: the mere fact of the papers' existence will now be used to defend homeopathy as a science. Let's hope that some people, at least, will be moved to examine the quality of that evidence.
I hope to comment on Richard's series in a later post. It was nice to see him being more charming and less bristling than he tends to be when talking about religion – his points carry much more force this way.
Statement in response to The Enemies of Reason - “The Irrational Health Service” Channel 4, Monday 20 August
The Faculty of Homeopathy and British Homeopathic Association support an easily understood approach to difficult scientific issues. However, Professor Richard Dawkins’ Channel 4 programme “The Irrational Health Service” presented an unbalanced and biased picture of the facts and evidence about homeopathy.
Contrary to the impression given by the programme, there has never been more evidence for the effectiveness of homeopathy than now: http://www.trusthomeopathy.org/pdf/Summaryofresearchevidence.pdf This comes from audits and outcome studies, cost effectiveness studies, narrative medicine and statistical overviews (or meta-analyses). Four out of five meta-analyses of homeopathy as a whole show positive effect for homeopathy, as do several focusing on specific conditions.
There is also an increasing body of work about the scientific properties of highly diluted substances, which Professor Dawkins dismissed. The most recent issue of the Faculty of Homeopathy’s journal Homeopathy contains articles by scientists from around the world, which are a timely reminder about how much there is still to learn about the science of these dilutions. The outright dismissal of any potential activity of these substances is increasingly untenable.
Thursday, August 09, 2007

Chemistry in pictures
Joachim Schummer and Tami Spector have just published in Hyle a paper based on their presentation at the 2004 meeting ‘The Public Images of Chemistry’ in Paris. This was one of the most interesting talks of the conference, looking at how the images used over the past several centuries to portray chemists and their profession – by chemists themselves and by others – have both influenced and been influenced by public perceptions. They look at tropes drawn (often subconsciously) from aesthetics in the visual arts, and at how the classic ‘brochure’ photos of today often still allude to the images of flask-gazing cranks found in depictions of alchemists, imagery that itself derives from uroscopy. (See, for example, the logo for my ‘Lab Report’ column in Prospect.) I shamelessly plagiarize these ideas at every opportunity. Recommended reading.
Monday, August 06, 2007

A wardrobe for Mars
[This is my Material Witness column for the September issue of Nature Materials.]
No one has a date booked for a party on the moon or on Mars, but that hasn’t stopped some from thinking about what to wear. One thing is clear: there is nothing fashionably retro about the Apollo look. If, as seems to be the plan, we are going out there this time to do some serious work, the bulky gas bags in which Alan Shepard and his buddies played golf and rode around in buggies aren’t up to the job. Pressurized with oxygen, the suits could be bent at the arm and leg joints only with considerable effort. A few hours of lunar hiking and you’d be exhausted.
In comparison, the fetching silver suits worn for the pre-Apollo Mercury missions look almost figure-hugging. But that’s because they were worn ‘soft’ – the astronauts didn’t venture outside their pressurized cabins, and the suits would have inflated only in the event of a pressure loss. In the vacuum of space, pressurization is needed to stop body fluids from boiling.
But pressurization is only part of the problem. Space-suit design presents a formidable, multi-faceted materials challenge, and the solution has to involve a many-layered skin – sometimes more than a dozen layers, each with a different function. This makes the suit inevitably bulky and expensive.
While the Mercury suits were basically souped-up high-altitude pilots’ suits, made from Neoprene-coated and aluminized nylon, today’s spacewear tends to follow the Apollo principle of several distinct garments worn in layers. A liquid cooling and ventilation garment (LCVG) offers protection from temperatures that can reach 135 °C in the Sun’s glare, while allowing body moisture to escape; a pressure suit (PS) acts as a gas-filled balloon; and a thermomechanical garment (TMG) protects against heat loss, energetic radiation, puncture by micrometeoroids, and abrasion.
These suits initially made use of the materials to hand, but inevitably this resulted in some ‘lock-in’, whereby ‘tradition’ dominated materials choices rather than their being reconsidered with each redesign. Some Apollo materials, such as the polyimide Kapton and the polyamide Kevlar, are still used – Kapton’s rigidity and low gas permeability recommend it for containing the ballooning PS, while Kevlar’s strength is still hard to beat for the TMG. But not all the choices are ideal: a Spandex LCVG has rather poor wicking and ventilation properties. Indeed, a reanalysis from scratch suggests superior replacements for most of the ‘traditional’ materials (J. L. Marcy et al., J. Mat. Eng. Perf. 13, 208; 2004).
Space suits have grown heavier since Apollo, because they are now used in zero gravity, where their mass is hardly a burden, rather than in lunar gravity. But martian gravity is a third that of Earth’s, so mass will matter again. To improve suit flexibility and reduce mass, Dava Newman and coworkers at the Massachusetts Institute of Technology are reconsidering the basic principles: using tight-fitting garments rather than gas to exert pressure, while strengthening the suit with a stiff skeleton along lines of non-extension. These BioSuits are several years away from being ready for Mars – but there’s plenty of time yet to prepare for that party.