Tuesday, May 19, 2009
Whatever you do, don’t call them militant
Blimey, it’s a lot worse than I thought. I had worried somewhat that I had unfairly prejudged the Reason Project in my Nature Muse column, suggesting (mildly, I thought) that this might be the same old line: seeking out the worst in religion to expose the urgent need for its destruction, and then imagining that this can be done simply by telling people the facts, so far as we currently know them, about the origins of humanity and the universe. But on current showing, that is precisely what it seems to be.
As a firm atheist, I don’t particularly object to that. I just find it a bit over-optimistic, and a tad intellectually lame. It reminds me of the old deficit model that used to motivate the Public Understanding of Science movement: just give people the right facts, and then they’ll agree with us. I am in favour of any movement that campaigns to kick out of schools the invidious misinformation of creationism, intelligent design and the rest of the shoddy fundamentalist agenda. I am very much in favour of a movement that aims to denounce religious intolerance and that attacks the kind of harmful and ignorant nonsense that seems increasingly to be coming from the Vatican. And I believe I said that in my article.
But what depresses me is that the Reason Project and many of its supporters are so sure of the battle-lines that they have lost the capacity for basic English comprehension. It is this that has earned me the delightful honour of a place in the Reason Project’s Hall of Shame, no less – because it has decided that I am placing the irenic BioLogos Foundation, the Templeton Foundation, and other apologists on a pedestal, making them the nice, friendly good guys who only want us all to get along. Does my article say that? No, it simply quotes from the BioLogos mission statement (just as it quotes from the Reason Project mission statement). That this is taken as registering approval is a bit disturbing. The fact that I suggest the Reason Project in some respects ‘should be applauded’, and say no such thing about the BioLogos Foundation, doesn’t seem to be noticed. (The fact is that I’m utterly indifferent to the BioLogos Foundation. I find its aims uninspiring and its current statements about the relation of science and religion somewhat shallow.)
Did I say (as most of the comments on the Reason Project page imply) that science and religion are compatible? No. They are systems of thought that seem to me to stem from quite different axioms, and are bound to run into logical contradictions. But humans seem remarkably good at living with contradictions. We all do it. It is not a particularly laudable attribute, but it is what we are like (most of us). Many people (not me) are apparently able to reconcile religious belief with a deep trust in science. I’m not sure how, but they do. I suspect they just take the bits they like and ignore the bits that clash. On current evidence, this seems to offend a lot of people associated with the Reason Project.
There doesn’t seem to be much to be gained from responding to the various comments on the Reason Project site, since they are so lamentable. (A sample: ‘It is sad to see that, in their desperation to recapitalize, a journal with the prestige of Nature is whoring around looking for some of that Tempelton prize money. Go write and editorialize for a religious newspaper or magazine if you want to espouse religious viewpoints.’ ‘Dr Ball essentially states that while religion, admittedly, wrongly picks on certain aspects of science, religion and science can coexist, and thus we should not eliminate religion.’ ‘I find it particularly disturbing that this article [sic] was in a science magazine. Once again we have a document declairing [sic] ignorance as a right of passage.’ ‘I found this part objectionable, “atheistic absolutism works as long as it ignores what people are like.” I feel it misses the point that people are not like anything. It is the memes that make people believe in sky fairies and all the other wishful thinking crap.’ ‘Philip Ball says it himself—“religion is a social construct.” Science is not. In defending faith without evidence, does he really not see the irony in this statement?!’ [Oh yes, the irony!] ‘Philip Ball presents Francis Collins as a happy peacemaker vs the “militant atheists.”’ Stop now, it’s too depressing.)
But hey, that’s the blogosphere for you. If I were Sam Harris, however, I’d be worried. And what truly depresses me is that this may actually reflect the level of comprehension and reflection found ‘at the top’ of the Reason Project. I suspect we haven’t heard the last of this.
Saturday, May 16, 2009
Do we need another reason?
[This is the pre-edited version of my latest piece for Nature's online news. Do we really need more about science and religion? Probably not, although my excuse for this piece is the recent launch of two fairly high-profile projects pertaining to that topic. Richard Holmes puts the case much more succinctly in his splendid book The Age of Wonder: “The old, rigid debates and boundaries – science versus religion, science versus the arts, science versus traditional ethics – are no longer enough. We should be impatient with them. We need a wider, more generous, more imaginative perspective.”]
The ‘war’ between science and religion is stuck in a rut. Can we change the record now?
The 50th anniversary of C.P. Snow’s famous ‘Two Cultures’ lecture has elicited mixed views. Some feel that the divide between the sciences and the humanities is as broad and uncomfortable as it was in 1959; others say the world has moved on. But perhaps we need instead to acknowledge that today’s divisions run between more than two quite different cultures.
To my mind, the most problematic of these is the distinction between those who believe in the value of knowledge and learning, whether artists, scientists, historians or politicians, and those who reject, even denigrate, intellectualism in world affairs. Some have suggested that these poles are personified by the present and previous incumbents of the White House.
But others feel that the most serious disparity is now between those who trust in science and Enlightenment rationalism, and those guided by religious scripture. This feeling has apparently motivated the recent launch of the Reason Project, an initiative organized by neuroscientist and writer Sam Harris, which boasts a stellar advisory board that includes Richard Dawkins, Daniel Dennett, Steven Weinberg, Harry Kroto, Craig Venter and Steven Pinker, along with Salman Rushdie, Ayaan Hirsi Ali and Ian McEwan.
The project aims ‘to spread scientific knowledge and secular values in society’ and ‘to encourage critical thinking and erode the influence of dogmatism, superstition, and bigotry in our world.’ It is not hard, given the list of backers, to see what that means: doing battle with religion.
There are plenty of reasons why this may be necessary. They are well rehearsed, pertaining mostly to the conflict between scientific and fundamentalist ways of understanding human origins. And it’s perilously easy, to the east of the Atlantic, to get complacent about this: when a wealthy, treacle-voiced American said proudly to me recently ‘I’m a creationist’, I was reminded that there are places where this isn’t deemed tantamount to announcing ‘I’m impressionable and ignorant’.
Important though such issues are, the Reason Project’s supporters would probably agree that they pale in comparison with the use (or generally, abuse) of religious dogma to justify suppression of human rights, maltreatment and murder. To the extent that those are in the project’s sights, it should be applauded. But with Dawkins (The God Delusion) and Christopher Hitchens (God Is Not Great) on board, one can’t help suspecting that the Almighty Himself is the prime target.
This debate now tends to cluster into two camps. One, exemplified by the Reason Project, insists that science and religion are fundamentally incompatible, and that the world ain’t big enough for the both of them.
The other side is exemplified by another recently launched project, the BioLogos Foundation, established by the former leader of the Human Genome Project Francis Collins. In this view, science and religion can and should make their peace: there is no reason why they cannot coexist. The mission statement of BioLogos speaks of ‘America’s escalating culture war between science and faith’, and explains that the Foundation ‘emphasizes the compatibility of Christian faith with what science has discovered about the origins of the universe and life.’
(There is, incidentally, a third camp too, which insists that religion must expunge heretical science such as Darwinism. Without denying that this is a dangerously widespread view, its vacuity disqualifies it from discussion here.)
BioLogos is funded by the Templeton Foundation, which likewise seeks to identify common ground between science and religion. To the militant atheists, this is sheer appeasement, if not indeed capitulation, in an insidious war of stealth where religion insinuates itself into the heartlands of science.
That is what evolutionary biologist Jerry Coyne, a board member of the Reason Project, laments in an essay called ‘Truckling to the Faithful: A Spoonful of Jesus Helps Darwin Go Down.’ Coyne accuses the US National Academy of Sciences, and especially its National Center for Science Education, of irenic pandering to the religious masses.
What the Reason Project has in its favour is philosophical rigour. That may also be its failing, because it looks unlikely to venture beyond those walls. Like most utopian ideas, atheistic absolutism works so long as it ignores what people are like and remains in a cultural and historical vacuum. Logical neatness and self-consistency are, unfortunately, not enough.
Sadly, when that is pointed out – as for example when the Royal Society’s former director of education Michael Reiss suggested that it was best to understand religiously motivated delusions such as creationism as world views, rather than as mere ignorance that the right information would correct – scientists tend to react badly. Reiss, a biologist and an ordained Christian clergyman, was forced to resign, I suspect because some scientists found a whiff of relativism in his remarks.
I’m glad people make it their business to expose bigotry and oppression. If some choose to focus on instances where those things are religiously motivated – well, why not? But it seems important to acknowledge that the supposed conflict between science and faith is actually not that big a deal. What is a big deal is the relatively recent strength of fundamentalist opposition to selected aspects of scientific thought, which has made the USA and Turkey the two Western countries with the lowest proportions of their populations believing in evolution. Were it not for such developments, science and religion could continue their wary truce, with no compulsion to iron out the differences.
In other words, this is not a matter of science versus faith, but of the rejection of aspects of science that challenge power structures. (After all, fundamentalism rarely objects to technology per se, and indeed is often disturbingly keen to acquire it.) That’s not to minimize the problem, but recognizing it for what it is will avoid false dichotomies, and perhaps make it easier to find solutions. The over-exposed example of Galileo’s trial can still serve here to illustrate the point. If we choose to believe that the Catholic Church condemned Galileo’s heliocentrism because it conflicted with scripture, we have an unassailable case against superstitious dogma. If we recognize that the issue was at least as much about maintaining the Church’s authority, we have to concede some rationality in the papal position, however repugnant the motives.
So there is little to be gained from trying to topple the temple – it’s the false priests who are the menace. If we can recognize that religion, like any ideology, is a social construct – with benefits, dangers, arbitrary inventions and, most of all, roots in human nature – then we might forgo a lot of empty argument and get back to the worldly wonders of the lab bench. Given the ‘usual suspects’ feeling that attends both the Reason Project and most Templeton initiatives, I suspect many have come to that conclusion already.
Thursday, May 14, 2009
What I’m reading
I have come across a diverting blog called Writers Read, which posts short pieces about, well, what writers are reading. Some might regard that as solipsistic, but personally I’m always intrigued by what other writers think about books. I was asked to contribute my own list, which right now makes me sound like a voluminous reader. The fact is, it takes me forever to finish a book when I’m not reading for professional purposes. I would add, hopefully without giving anything away, that I’m currently working through Richard Holmes’ much-praised The Age of Wonder, and it is truly spectacular. I feel dwarfed by his achievement. But in a good way.
Saturday, May 09, 2009
Not so fantastic
I have reviewed Eugenie Samuel Reich’s new book Plastic Fantastic in the Sunday Times (here). The book tells the story of the fraud perpetrated by physicist Jan Hendrik Schön between around 1997 and 2002, during which time he fabricated data in a string of papers about organic microelectronics and nanoelectronics. It was something of an eye-opener to discover the details behind the affair, even though I thought I was fairly well aware of the basic facts. I missed, by the skin of my teeth, being implicated in the matter as a Nature editor, since I quit that job around the time that Schön’s papers started to roll in.
For understandable reasons of space, the Sunday Times lopped off the final sentence in my review, which cited a quote from Cambridge physicist Peter Littlewood that, to my mind, captures the essence of what went on, as Reich makes clear: ‘For a long time I didn’t believe it could have been fraud, because I didn’t believe one person could make all that up. Then I realized, we all made it up.’
Saturday, April 25, 2009
A circular argument?
[Here’s the pre-edited version of my latest Muse for Nature News. I have a book review of the stimulating reference 8 appearing shortly in Nature Physics.]
A new proposal for the signature of life on alien worlds resurrects an old idea linking light and life.
The difficulty of saying with scientific rigour what constitutes ‘life’ brings to mind Justice Potter Stewart’s famous description of pornography in 1964: it is hard to define, but we know it when we see it.
Yet do we really? Astrobiologists are haunted by the suspicion of terracentricity: we imagine that life on other planets will look like life here, and bias our searches for extraterrestrials accordingly.
Some of this community struggle nobly to free themselves from such prejudice, questioning for example the complacent conviction that life depends on water. Others seek very general signatures that make no assumptions about biochemical specifics.
One of the first such proposals, made by James Lovelock in the context of lander explorations of Mars [1], argued that sustained chemical disequilibrium in the planetary environment should be a telltale sign. This proposal had the virtue that it could rely on surveying a planet’s atmosphere alone. On Earth, the high proportion of oxygen, along with the presence of other trace gases, should be a giveaway, relying as it does on the operation of photosynthesis to prevent oxygen from getting locked into minerals.
One of the most ingenious ideas is that life affects the topography of a planet, for example by mediating chemical reactions that erode rock, by forming soil and protecting it from erosion, and by dictating climate. Geomorphologists William Dietrich and J. Taylor Perron have argued [2] that the types and distributions of landforms on Earth probably carry an imprint of life’s influence, and that a better understanding of their formation processes might lead to a clear distinction between the contours of planets with and without life.
In 1993, the idea of ‘remote sensing’ of the fingerprints of life from space was explored experimentally when Carl Sagan and coworkers used the data from a flyby of the Earth by NASA’s Galileo spacecraft to investigate our planet as though it were an unknown world [3]. From the chemistry of Earth’s atmosphere, images of the planetary surface, and detection of radio-wave emissions, the researchers inferred – somewhat reassuringly – that the presence of water-based life, probably of an intelligent kind, was highly likely.
Now a new kind of fingerprint has been proposed by William Sparks of the Space Telescope Science Institute in Baltimore and his coworkers. They suggest we search for a characteristic signature of life in the light scattered from the surfaces of extrasolar planets [4]. They say that living organisms are likely to make the light circularly polarized, meaning that the plane of its oscillating electromagnetic fields is not fixed or random but rotates with a characteristic twist, either to the left or the right. This shouldn’t happen if the light simply bounces off inorganic surfaces.
Circular polarization is a feature of light scattered from organisms on Earth, where it has its origin in the ‘handedness’ or chirality of the building blocks of biological molecules. All natural proteins are made from amino acids that have a ‘left-handed’ molecular shape – the mirror-image right-handed amino acids can’t be used by cells to build proteins. And all nucleic acids use only ‘right-handed’ sugar molecules in their backbones. This molecular-scale twist means, for example, that light circularly polarized to the left or the right is absorbed to different degrees by the photosynthetic molecular apparatus of bacteria and plants, creating a net circular polarization in the scattered light.
It’s not obvious that this will be evident when the light is measured from afar, however, because the scattering process is complicated. Light rays that are scattered many times tend to have their polarization randomized, and scattering from reflective surfaces reverses the polarization. That’s why the researchers needed to check the light that bounces off cultures of marine photosynthetic bacteria, to make sure that the signature remains evident. It does – as indeed they found also for reflected light from a maple leaf. In contrast, light scattered from inorganic iron oxide shows no significant circular polarization.
Whether this method will work in real exoplanet searches is another matter. It depends on how much surface scattering would come from living organisms as opposed to inorganic substances, and on whether this light can be distinguished clearly enough from that of the parent star. Some of the planned astronomical instruments that might conduct planet searches could have sufficient resolution for this, however – probably not NASA’s Terrestrial Planet Finder (which is currently postponed indefinitely in any case), say Sparks and colleagues, but perhaps the ground-based European Extremely Large Telescope, which might begin operating around 2018. The Hubble Space Telescope has already revealed some of the chemical ingredients of an extrasolar planet [5,6].
But who says life must have a chiral molecular basis? Sparks and colleagues do. ‘Homochirality’, they say, ‘is thought to be generic to all forms of biochemical life as a necessity for self-replication.’ This statement relies on the work of astrobiologist Radu Popa of Portland State University in Oregon [7]. But what Popa offers is a plausibility argument based on the idea that homochirality simplifies polymer structure in a way that promotes the efficiency of copying information. This doesn’t imply that homochirality is essential, but only that it might help. And we know that life does not always do things in the most efficient way.
However, the notion that Sparks and colleagues are invoking actually goes back much further. An intimate association between life, chirality and light polarization was made in the nineteenth century, first by the French scientist Jean-Baptiste Biot and then by Louis Pasteur, who sought Biot’s advice on his seminal discovery of handedness in organic molecules. Biot, a pioneer in the study of optics and polarization, coined the term ‘optical activity’ to describe a substance that rotates the plane of polarized light, and it was no coincidence that this in itself suggested the operation of some vital, ‘active’ agent, rather than lifeless passive matter. Biot came to believe that optical activity was ‘the sole means in man’s possession of confronting the otherwise indefinable limit between life and nonlife on the molecular level’ [8]. Pasteur became a staunch advocate of this view, to the extent that (contrary to the popular view) he developed something of an anti-materialist, vitalist stance on what life is: he felt that optical rotation must result from ‘the play of vital forces’.
We now know, partly through Pasteur’s own work, that he and Biot were wrong. Sparks and colleagues are on sounder ground, but their idea could be seen to support the suspicion that life is everywhere built in our own image.
References
1. Lovelock, J. E. Nature 207, 568-570 (1965).
2. Dietrich, W. E. & Perron, J. T. Nature 439, 411-418 (2006).
3. Sagan, C. et al., Nature 365, 715-721 (1993).
4. Sparks, W. B. et al., Proc. Natl Acad. Sci. USA doi: 10.1073/pnas.0810215106 (2009).
5. Charbonneau, D. et al., Astrophys. J. 568, 377–384 (2002).
6. Vidal-Madjar, A. et al., Astrophys. J. 604, L69–L72 (2004).
7. Popa, R. Between Necessity and Probability: Searching for the Definition and Origin of Life (Springer, Berlin, 2004).
8. Levitt, T. The Shadow of Enlightenment (Oxford University Press, Oxford, 2009).
Friday, April 24, 2009
You daren’t make it up
One of the things I’ve learnt from writing a novel is that it’s not a sufficient excuse to justify unconvincing aspects of a plot by saying that something like that happened in real life. It’s the job of the author to make the implausible sound, if not plausible, then at least not jarring or undermining to the narrative. I was repeatedly reminded of this when I received comments from editors, and later from readers and reviewers, on events and situations in The Sun and Moon Corrupted that I’d taken straight from life. But I would never have dared invent something apropos of the mysterious red mercury, on which parts of the plot hinge, that was as bizarre as this story, brought to my attention by Ivan Vince. Is nothing too strange to be associated with this stuff?
Tuesday, April 07, 2009
Physics by numbers
[This is the full version of my latest Muse for Nature News.]
A suggestion that the identification of physical laws can be automated raises questions about what it means to do science.
Two decades ago, computer scientist Kemal Ebcioglu at IBM described a computer program that wrote music like J. S. Bach. Now I know what you’re thinking: no one has ever written music like Bach. And Ebcioglu’s algorithm had a somewhat more modest goal: given the bare melody of a Bach chorale, it could fill in the rest (the harmony) in the style of the maestro. The results looked entirely respectable [1], although sadly no ‘blind tasting’ by music experts ever put them to the test.
Ebcioglu’s aim was not to rival Bach, but to explore whether the ‘laws’ governing his composition could be abstracted from the ‘data’. The goal was really no different from that attempted by scientists all the time: to deduce underlying principles from a mass of observations. Writing ‘Bach-like music’, however, highlights the constant dilemma in this approach. Even if the computerized chorales had fooled experts, there would be no guarantee that the algorithm’s rules bore any relation to the mental processes of Johann Sebastian Bach. To put it crudely, we couldn’t know if the model captured the physics of Bach.
That issue has become increasingly acute in recent years, especially in the hazily defined area of science labelled complexity. Computer models can now supply convincing mimics of all manner of complex behaviours, from the flocking of birds to traffic jams to the dynamics of economic markets. And the question repeatedly put to such claims is: do the rules of the model bear any relation to the real world, or are the resemblances coincidental?
This matter is raised by a recent paper in Science that reports on a technique to ‘automate’ the identification of ‘natural laws’ from experimental data [2]. As the authors Michael Schmidt and Hod Lipson of Cornell University point out, this is much more than a question of data-fitting – it examines what it means to think like a physicist, and perhaps even interrogates the issue of what natural laws are.
The basic conundrum is that, as is well known, it’s always possible to find a mathematical equation that will fit any data set to arbitrary precision. But that’s often pointless, since the resulting equations may be capturing contingent noise as well as meaningful physical processes. What’s needed is a law that obeys Einstein’s famous dictum, being as simple as possible but not simpler.
‘Simpler’ means here that you don’t reduce the data to a trivial level. In complex systems, it has become common, even fashionable, to find power laws (y ∝ x^n) that link two variables [3]. But the very ubiquity of such laws in systems ranging from economics to linguistics is now leading to suspicions that power laws might in themselves lack much physical significance. And some alleged power laws might in fact be different mathematical relationships that look similar over small ranges [4].
Ideally, the mathematical laws governing a process should reflect the physically meaningful invariants of that process. They might, for example, stem from conservation of energy or of momentum. But it can be terribly hard to distinguish true invariants from trivial patterns. A recent study showed that the similarity of various dimensionless parameters from the life histories of different species, such as the ratio of average life span to age at maturity, has no fundamental significance [5].
It’s not always easy to separate the trivial or coincidental from the profound. Isaac Newton showed that Kepler’s laws identifying mathematical regularities in the parameters of planetary orbits have a deep origin in the inverse-square law of gravity. But the notorious Titius-Bode ‘law’ that alleges a mathematical relationship between the semi-major axes and the ranking of planets in the solar system remains contentious and is dismissed by many astronomers as mere numerology.
As Schmidt and Lipson point out, some of the invariants embedded in natural laws aren’t at all intuitive because they don’t actually relate to observable quantities. Newtonian mechanics deals with quantities such as mass, velocity and acceleration, while its more fundamental formulation by Joseph-Louis Lagrange invokes the principle of minimal action – yet ‘action’ is an abstract mathematical quantity, an integral that can be calculated but not really ‘measured’ directly.
And many of the seemingly fundamental constructs of ‘natural law’ – the concept of force, say, or the Schrödinger equation in quantum theory – turn out to be unphysical conveniences or arbitrary (if well motivated) guesses that merely work well. The question of whether one ascribes any physical reality to such things, or just uses them as theoretical conveniences, is often still unresolved.
Schmidt and Lipson present a clever way to narrow down the list of candidate ‘laws’ describing a data set by using additional criteria, such as whether partial derivatives of the equations also fit those of the data. Their approach is Darwinian: the best candidates are selected, on such grounds, from a pool of trial functions, and refined by iteration with mutation until reaching some specified level of predictive ability. Then parsimony pulls out the preferred solution. This process often generates a sharp drop in predictive ability as the parsimony crosses some threshold, suggesting that the true physics of the problem disappears at that point.
The key point is that the method seems to work. When used to deduce mathematical laws describing the data from two experiments in mechanics – an oscillator made from two masses linked by springs, and a pendulum with two hinged arms – it came up with precisely the equations of motion that physicists would construct from first principles using Newton’s laws of motion and Lagrangian mechanics. In other words, the solutions encode not just the observed data but the underlying physics.
Their experience with this system leads Schmidt and Lipson to suggest ‘seeding’ the selection process by drawing on an ‘alphabet’ of physically motivated building blocks. For example, if the algorithm is sent fishing for equations incorporating kinetic energy, it should seek expressions involving the square of velocities (since kinetic energy is proportional to velocity squared). In this way, the system would start to think increasingly like a physicist, giving results that we can interpret intuitively.
But perhaps the arena most in need of a tool like this is not physics but biology. Another paper in Science, by researchers at Aberystwyth and Cambridge, reports a ‘robot scientist’ named Adam that can frame and experimentally test hypotheses about the genomics of yeast [6] (see here). By identifying connections between genes and enzymes, Adam could channel post-docs away from such donkey-work towards more creative endeavours. But the really deep questions, about which we remain largely ignorant, concern what one might call the physics of genomics: whether there are equivalents of Newtonian and Lagrangian principles, and if so, what they are. Despite the current fads for banking vast swathes of biological data, theories of this sort are not going to simply fall out of the numbers. So we need all the help we can get – even from robots.
References
1. Ebcioglu, K. Comput. Music J. 12(3), 43-51 (1988).
2. Schmidt, M. & Lipson, H. Science 324, 81-85 (2009).
3. Newman, M. E. J. Contemp. Phys. 46, 323-351 (2005).
4. Clauset, A., Shalizi, C. R. & Newman, M. E. J. SIAM Rev. (in press).
5. Nee, S. et al., Science 309, 1236-1239 (2005).
6. King, R. D. et al., Science 324, 85-89 (2009).
Friday, March 27, 2009
Physics, ultimate reality, and an awful lot of money
[A dramatically truncated version of this comment appears in the Diary section of the latest issue of Prospect.]
If you’re a non-believer, it’s easy to mock or even despise efforts to bridge science and religion. But you don’t need to be Richard Dawkins to sense that there’s an imbalance in these often well-meaning initiatives: science has no need of religion in its quest to understand the universe (the relevance to scientific ethics might be more open to debate), whereas religion appears sometimes to crave the intellectual force of science’s rigour. And since it seems hard to imagine how science could ever supply supporting evidence for religion (as opposed to simply unearthing new mysteries), mustn’t any contribution it might make to the logical basis of belief be inevitably negative?
That doesn’t stop people from trying to build bridges, and nor should it. Yet overtures from the religious side are often seen as attempts to sneak doctrine into places where it has no business: witness the controversy over the Royal Society hosting talks and events sponsored by the Templeton Foundation. Philosopher A. C. Grayling recently denounced as scandalous the willingness of the Royal Society to offer a launching pad for a new book exploring the views of one of its Fellows, the Christian minister and physicist John Polkinghorne, on the interactions of science and religion.
The US-based Templeton Foundation has been in the middle of some of the loudest recent controversies about religion and science. Created by ‘global investor and philanthropist’ Sir John Templeton, it professes to ‘serve as a philanthropic catalyst for discovery in areas engaging life’s biggest questions, ranging from explorations into the laws of nature and the universe to questions on the nature of love, gratitude, forgiveness, and creativity.’ For some sceptics, this simply means promoting religion, particularly Christianity, from a seemingly bottomless funding barrel. Templeton himself, a relatively liberal Christian by US standards and a supporter of inter-faith initiatives, once claimed that ‘scientific revelations may be a gold mine for revitalizing religion in the 21st century’. That’s precisely what makes many scientists nervous.
The Templeton Foundation awards an annual prize of £1 million to ‘outstanding individuals who have devoted their talents to those aspects of human experience that, even in an age of astonishing scientific advance, remain beyond the reach of scientific explanation.’ This is the world’s largest annual award given to an individual – bigger than a Nobel. And scientists have been prominent among the recipients, especially in recent years: they include cosmologist John Barrow, physicist Freeman Dyson, physics Nobel laureate Charles H. Townes, physicist Paul Davies – and Polkinghorne. That helps to explain why the Royal Society has previously been ready to host the prize’s ceremonials.
I must declare an interest here, because I have taken part in a meeting funded by the Templeton Foundation. In 2005 it convened a gathering of scientists to consider the question of whether water seems ‘fine-tuned’ to support the existence of life. This was an offshoot of an earlier symposium that investigated the broader question of ‘fine tuning’ in the laws of physics, a topic now very much in vogue thanks to recent discoveries in cosmology. That first meeting considered how the basic constants of nature seem to be finely poised to an absurd degree: just a tiny change would seem to make the universe uninhabitable. (The discovery in the 1990s of the acceleration of the expanding universe, currently attributed to a mysterious dark energy, makes the cosmos seem even more improbable than before.) This is a genuine and deep mystery, and at present there is no convincing explanation for it. The issue of water is different, as we concluded at the 2005 meeting: there is no compelling argument for it being a unique solvent for life, or for it being especially fine-tuned even if it were. More pertinently here, this meeting had first-rate speakers and a sound scientific rationale, and even somewhat wary attendees like me detected no hidden agenda beyond an exploration of the issues. If Templeton money is to be used for events like that, I have no problem with it. And it was rather disturbing, even shameful, to find that at least one reputable university press subsequently shied away from publishing the meeting proceedings (soon to be published by Taylor & Francis) not on any scientific grounds but because of worries about Templeton involvement.
So while I worry about the immodesty of the Templeton Prize, I don’t side with those who consider it basically a bribe to attract good scientists to a disreputable cause. All the same, there is something curious going on. Five of the seven most recent winners have been scientists, and all are listed in the Physics and Cosmology Group of the Center for Theology and the Natural Sciences (CTNS), affiliated to the Graduate Theological Union, an inter-faith centre in Berkeley, California. This includes the latest winner, announced on Monday: French physicist Bernard d’Espagnat, ‘whose explorations of the philosophical implications of quantum physics have’ (according to the prize announcement) ‘cast new light on the definition of reality and the potential limits of knowable science.’ D’Espagnat has suggested ‘the possibility that the things we observe may be tentatively interpreted as signs providing us with some perhaps not entirely misleading glimpses of a higher reality and, therefore, that higher forms of spirituality are fully compatible with what seems to emerge from contemporary physics.’ (See more here and here.) Others might consider this an unnecessary addendum to modern quantum theory, not so far removed from the vague and post hoc analogies of Fritjof Capra’s The Tao of Physics (which was very much a product of its time).
But why this preference for CTNS affiliates? Perhaps it simply means that the people interested in this stuff are a rather small group who are almost bound to get co-opted onto any body with similar interests. Or you might want to view it as an indication that the fastest way to make a million is to join the CTNS’s Physics and Cosmology group. More striking, though, is the fact that all these chaps (I’m afraid so) are physicists of some description. That, it appears, is pretty much the only branch of the natural sciences either willing or able to engage in matters of faith. Of course, American biologists have been given more than enough reason to flee any hint of religiosity; but that alone doesn’t quite seem sufficient to explain this skewed representation of the sciences. I have some ideas about that… but another time.
Wednesday, March 18, 2009
Nature’s Patterns
The first volume (Shapes) of my trilogy on pattern formation, Nature’s Patterns (OUP), is now out. Sort of. In any event, it should be in the shops soon. Nearly all the hiccups with the figures got ironed out in the end (thank you Chantal for your patience) – there are one or two things to put right in the reprints/paperback. Sorry, I tried my best. The second and third volumes (Flow and Branches) are not officially available until (I believe) July and September respectively. But if you talk to OUP sweetly enough, you might get lucky. Better still, they should be on sale at talks, such as the one I’m scheduled to give at the Cheltenham Science Festival on 4 June (8 pm). Maybe see you there.
The right honourable Nigel Lawson
At a university talk I gave recently, a member of the department suggested that I might look at Nigel Lawson’s book An Appeal to Reason: A Cool Look at Climate Change. It’s not that Lawson is necessarily right to be sceptical about climate change and the need to mitigate (rather than adapt to) it, he said. It’s simply that you have to admire the way he makes his case, with the tenacity and rhetorical flair characteristic of his lawyer’s training.
And as chance would have it, I soon thereafter came across some pages of Lawson’s 2006 essay from which the book sprang: ‘The Economics and Politics of Climate Change: An Appeal to Reason’, published by the right-wing think-tank the Centre for Policy Studies. (My daughter was drawing on the other side.) And I was reminded why I doubted that there was indeed very much to admire in Lawson’s methodology. There seems nothing admirable in a bunch of lies; anyone can make nonsense sound correct and reasonable if they are prepared to tell enough bare-faced fibs.
For example, Lawson quotes the Met Office’s Hadley Centre for Climate Prediction and Research:
“Although there is considerable year-to-year variability in annual-mean global temperature, an upward trend can be clearly seen; firstly over the period from about 1920-1940, with little change or a small cooling from 1940-1975, followed by a sustained rise over the last three decades since then.”
He goes on to say: “This last part is a trifle disingenuous, since what the graph actually shows is that the sustained rise took place entirely during the last quarter of the last century.” No. The quote from the Hadley Centre says it exactly as it is, and Lawson’s comment is totally consistent with that. There is nothing disingenuous. Indeed, Lawson goes on to say
“The Hadley Centre graph shows that, for the first phase, from 1920 to 1940, the increase was 0.4 degrees centigrade. From 1940 to 1975 there was a cooling of about 0.2 degrees… Finally, since 1975 there has been a further warming of about 0.5 degrees, making a total increase of some 0.7 degrees over the 20th century as a whole (from 1900 to 1920 there was no change).”
Right. And that is what they said. Lawson has cast aspersions on grounds that are transparently specious. Am I meant to admire this?
It gets worse, of course. Carbon dioxide, he tells us, is only the second most important greenhouse gas, after water vapour. Correct, if you don’t worry about how one technically defines ‘greenhouse gas’ (many scientists don’t usually regard water vapour that way). And your point is? My point is that we are not directly pumping water vapour into the atmosphere in a way that makes much difference to its atmospheric concentration (although anthropogenic warming will increase evaporation). We are doing that for carbon dioxide. What matters for climate change is not the amounts, but whether or not there’s a steady state. Who is being disingenuous?
“It is the published view of the Met Office that is it likely that more than half the warming of recent decades (say 0.3 degrees centigrade out of the overall 0.5 degrees increase between 1975 and 2000) is attributable to man-made sources of greenhouse gases – principally, although by no means exclusively, carbon dioxide”, says Lawson. “But this is highly uncertain, and reputable climate scientists differ sharply over the subject.”
What he means here is that a handful of climate scientists at professional institutions disagree with just about all the others everywhere in the world in maintaining that the warming is not anthropogenic. ‘Reputable’ scientists differ over almost everything – but when the difference is in the ratio of 1 to 1000, say, who would you trust?
And then: “the recent attempt of the Royal Society, of all bodies, to prevent the funding of climate scientists who do not share its alarmist view of the matter is truly shocking.” No, what is truly shocking is that Lawson is so unashamed at distorting the facts. The Royal Society asked ExxonMobil when it intended to honour its promise to stop funding lobby groups that promote disinformation about climate change. There was no suggestion of stopping any funds to scientists.
“Yet another uncertainty derives from the fact that, while the growth in manmade carbon dioxide emissions, and thus carbon dioxide concentrations in the atmosphere, continued relentlessly during the 20th century, the global mean surface temperature, as I have already remarked, increased in fits and starts, for which there us [sic] no adequate explanation.” Sounds pretty dodgy – until you hear that there is a perfectly adequate explanation in terms of the effects of sulphate aerosols. Perhaps Lawson doesn’t believe this – that’s his prerogative (although he’s then obliged to say why). But to pretend that this issue has just been swept under the carpet, and lacks any plausible explanation, is utterly dishonest.
But those mendacious climate scientists are denying that past warming such as the Medieval Warm Period ever happened, don’t you know: “A rather different account of the past was given by the so-called “hockey-stick” chart of global temperatures over the past millennium, which purported to show that the earth’s temperature was constant until the industrialisation of the 20th century. Reproduced in its 2001 Report by the supposedly authoritative Intergovernmental Panel on Climate Change, set up under the auspices of the United Nations to advise governments on what is clearly a global issue, the chart featured prominently in (among other publications) the present Government’s 2003 energy white paper. It has now been comprehensively discredited.” No. It has been largely supported (see here and here). And it was never the crux of any argument about whether 20th century climate warming is real. What’s more, it never showed that ‘the earth’s temperature was constant until the industrialisation of the 20th century’; the Medieval Warm Period and the Little Ice Age are both there. As you said, Mr Lawson, we’re talking here about relatively small changes of fractions of a degree. That, indeed, is the whole point: even such apparently small changes are sufficient to make a difference between a ‘warm period’ and a ‘little ice age’.
Phew. I am now on page 3. Excuse me, but I don’t think I have the stamina to wade through a whole book of this stuff. One’s spirit can only withstand a certain amount of falsehood. Admirable? I don’t think so. Imagine if a politician was caught being as dishonest as this. No, hang on a minute, that can’t be right…
I’m moved to write some of this, however, because in the face of such disinformation it becomes crucial to get the facts straight. The situation is not helped, for example, when the Independent says, as it did last Saturday, “The melting of Arctic sea ice could cause global sea levels to rise by more than a metre by the end of the century.” Perhaps there’s some indirect effect here that I’m not aware of; but to my knowledge, melting sea ice has absolutely no effect on sea level. The ice merely displaces the equivalent volume of water. We need to get this stuff right.
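The reasoning is just Archimedes’ principle, and it is easy to check with round numbers. A sketch (the densities here are illustrative, and the fresh-versus-salt subtlety is noted in the comments):

```python
# Archimedes in two lines: floating ice displaces its own MASS of water,
# so its meltwater refills the volume the ice was displacing.
rho_water = 1000.0             # kg/m^3, treating melt and sea water alike
m_ice = 1.0e6                  # kg of floating sea ice (any value will do)

displaced = m_ice / rho_water  # m^3 of water pushed aside while afloat
melted = m_ice / rho_water     # m^3 occupied by the same mass once melted
print(melted - displaced)      # 0.0 -- no first-order change in sea level

# (A tiny second-order rise does occur because meltwater is fresher, and so
# slightly less dense, than seawater -- but nothing remotely like a metre.
# Land-based ice sheets and glaciers are another matter entirely.)
```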
Friday, March 13, 2009
There’s more to life than sequences
[This is the pre-edited version of my latest Muse for Nature News.]
Shape might be one of the key factors in the function of mysterious ‘non-coding’ DNA.
Everyone knows what DNA looks like. Its double helix decorates countless articles on genetics, has been celebrated in sculpture, and was even engraved on the Golden Record, our message to the cosmos on board the Voyager spacecraft.
The entwined strands, whose form was deduced in 1953 by James Watson and Francis Crick, are admired as much for their beauty as for the light they shed on the mechanism of inheritance: the complementarity between juxtaposed chemical building blocks on the two strands, held together by weak ‘hydrogen’ bonds like a zipper, immediately suggested to Crick and Watson how information encoded in the sequence of blocks could be transmitted to a new strand assembled on the template of an existing one.
With the structure of DNA ‘solved’, genetics switched its focus to the sequence of the four constituent units (called nucleotide bases). By using biotechnological methods to deduce this sequence, researchers claimed to be ‘reading the book of life’, with the implication that all the information needed to build an organism was held within this abstract linear code.
But beauty has a tendency to inhibit critical thinking. There is now increasing evidence that the molecular structure of DNA is not a delightfully ordered epiphenomenon of its function as a digital data bank but a crucial – and mutable – aspect of the way genomes work. A new study in Science [1] underlines that notion by showing that the precise shape of some genomic DNA has been determined by evolution. In other words, genetics is not simply about sequence, but about structure too.
The standard view – indeed, part of biology’s ‘central dogma’ – is that in its sequence of nucleotide bases DNA encodes corresponding sequences of the amino-acid units that are strung together to make a protein enzyme, with the protein’s compact folded shape (and thus its function) being uniquely determined by that sequence.
This is basically true enough. Yet as the human genome was unpicked nucleotide base by base, it became clear that most of the DNA doesn’t ‘code for’ proteins at all. Fully 98 percent of the human genome is non-coding. So what does it do?
We don’t really know, except to say that it’s clearly not all ‘junk’, as was once suspected – the detritus of evolution, like obsolete files clogging up a computer. Much of the non-coding DNA evidently has a role in cell function, since mutations (changes in nucleotide sequence) in some of these regions have observable (phenotypic) consequences for the organism. We don’t know, however, how the former leads to the latter.
This is the question that Elliott Margulies of the National Institutes of Health in Bethesda, Maryland, Tom Tullius of Boston University, and their coworkers set out to investigate. According to the standard picture, the function of non-coding regions, whatever it is, should be determined by their sequence. Indeed, one way of identifying important non-coding regions is to look for ones that are sensitive to sequence, with the implication that the sequence has been finely tuned by evolution.
But Margulies and colleagues wondered if the shape of non-coding DNA might also be important. As they point out, DNA isn’t simply a uniform double helix: it can be bent or kinked, and may have a helical pitch of varying width, for example. These differences depend on the sequence, but not in any straightforward manner. Two near-identical sequences can adopt quite different shapes, or two very different sequences can have a similar shape.
The researchers used a chemical method to deduce the relationship between sequence and shape. They then searched for shape similarities between analogous non-coding regions in the genomes of 36 different species. Such similarity implies that the shapes have been selected and preserved by evolution – in other words, that shape, rather than sequence per se, is what is important. They found twice as many evolutionarily constrained (and thus functionally important) parts of the non-coding genome as were evident from trans-species correspondences using sequence data alone.
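To make that comparison logic concrete, here is a schematic sketch in Python – emphatically not the authors’ pipeline, and using random numbers as stand-ins for real cleavage-derived structural profiles – of how an aligned region can look unconstrained by sequence yet strongly conserved in shape:

```python
import numpy as np

rng = np.random.default_rng(1)

def seq_identity(a, b):
    """Fraction of aligned positions carrying the same nucleotide."""
    return float(np.mean([x == y for x, y in zip(a, b)]))

def shape_similarity(p, q):
    """Pearson correlation between two per-base structural profiles."""
    return float(np.corrcoef(p, q)[0, 1])

# Stand-ins for data: an aligned 50-base non-coding region in two species.
human_seq = rng.choice(list("ACGT"), 50)
mouse_seq = rng.choice(list("ACGT"), 50)                   # sequence has drifted...
human_shape = rng.standard_normal(50)                      # per-base 'shape' profile
mouse_shape = human_shape + 0.1 * rng.standard_normal(50)  # ...but shape has not

print(f"sequence identity: {seq_identity(human_seq, mouse_seq):.2f}")          # ~0.25
print(f"shape correlation: {shape_similarity(human_shape, mouse_shape):.2f}")  # ~0.99
# A region scoring high on shape similarity but low on sequence identity would
# be flagged as structurally constrained -- invisible to sequence-only scans.
```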
So in these non-coding regions, at least, sequence appears to be important only insofar as it specifies a certain molecular shape, and not because of its intrinsic information content – a different sequence with the same shape might do just as well.
That doesn’t answer why shape matters to DNA. But it suggests that we are wrong to imagine that the double helix is the beginning and end of the story.
There are plenty of other good reasons to suspect that is true. For example, DNA can adopt structures quite different from Watson and Crick’s helix, which is called the B-form. It can, under particular conditions of saltiness or temperature, switch to at least two other double-helical structures, called the A and Z forms. It may also form triple- and quadruple-stranded variants, linked by different types of hydrogen-bonding matches between nucleotides. One such is called Hoogsteen base-pairing.
Biochemist Naoki Sugimoto and colleagues at Konan University in Kobe, Japan, have recently shown that, when DNA in solution is surrounded by large polymer molecules, mimicking the crowded conditions of a real cell, Watson-Crick base pairing seems to be less stable than it is in pure, dilute solution, while Hoogsteen base-pairing, which favours the formation of triple and quadruple helices, becomes more stable [2-4].
The researchers think that this is linked to the way water molecules surround the DNA in a ‘hydration shell’. Hoogsteen pairing demands less water in this shell, and so is promoted when molecular crowding makes water scarce.
Changes to the hydration shell, for example induced by ions, may alter DNA shape in a sequence-dependent manner, perhaps being responsible for the sequence-structure relationships studied by Margulies and his colleagues. After all, says Tullius, the method they use to probe structure is a measure of “the local exposure of the surface of DNA to the solvent.”
The importance of DNA’s water sheath on its structure and function is also revealed in work that uses small synthetic molecules as drugs that bind to DNA and alter its behaviour, perhaps switching certain genes on or off. It is conventionally assumed that these molecules must fit snugly into the screw-like groove of the double helix. But some small molecules seem able to bind and show useful therapeutic activity even without such a fit, apparently because they can exploit water molecules in the hydration shell as ‘bridges’ to the DNA itself [5]. So here there is a subtle and irreducible interplay between sequence, shape and ‘environment’.
Then there are mechanical effects too. Some proteins bend and deform DNA significantly when they dock, making the molecule’s stiffness (and its dependence on sequence) a central factor in that process. And the shape and mechanics of DNA can influence gene function at larger scales. For example, the packaging of DNA and associated proteins into a compact form, called chromatin, in cells can affect whether particular genes are active or not. Special ‘chromatin-remodelling’ enzymes are needed to manipulate its structure and enable processes such as gene expression or DNA repair.
None of this is yet well understood. But it feels reminiscent of the way early work on protein structure in the 1930s and 40s grasped for dimly sensed principles before an understanding of the factors governing shape and function transformed our view of life’s molecular machinery. Are studies like these, then, a hint at some forthcoming insight that will reveal gene sequence to be just one element in the logic of life?
References
1. Parker, S. C. J. et al., Science Express doi:10.1126/science.1169050 (2009). Paper here.
2. Miyoshi, D., Karimata, H. & Sugimoto, N. J. Am. Chem. Soc. 128, 7957-7963 (2006). Paper here.
3. Nakano, S. et al., J. Am. Chem. Soc. 126, 14330-14331 (2004). Paper here.
4. Miyoshi, D. et al., J. Am. Chem. Soc. doi:10.1021/ja805972a (2009). Paper here.
5. Nguyen, B., Neidle, S. & Wilson, W. D. Acc. Chem. Res. 42, 11-21 (2009). Paper here.
Wednesday, March 11, 2009
Who should bear the carbon cost of exports?
[This is the pre-edited version of my latest Muse column for Nature News. (So far it seems only to have elicited outraged comment from some chap who rants against ‘Socialist warming alarmists’, which I suppose says it all.)]
China has become the world’s biggest carbon emitter partly because of its exports. So whose responsibility is that?
There was once a town with a toy factory. Everyone loved the toys, but hated the smell and noise of the factory. ‘That factory boss doesn’t care about us’, they grumbled. ‘He’s getting rich from our pockets, but he should be fined for all the muck he creates.’ Then one entrepreneur decided he could make the same toys without the pollution, using windmills and water filters and so forth. So he did; but they cost twice as much, and no one bought them.
Welcome to the world. Right now, our toy factory is in China. And according to an analysis by Dabo Guan of the University of Cambridge and his colleagues, these exports have helped to turn China into the world’s biggest greenhouse-gas emitting nation [1,2 – papers here and here].
That China now occupies this slot is no surprise; the nation tops the list for most national statistics, simply because it is so big. Its per capita emissions of CO2 are still only about a quarter of those of the USA, and its gasoline consumption per person in 2005 was less than 5 percent of the American figure (but rising fast).
It’s no shocker either that China’s CO2 emissions have surged since it became an economic superpower. In 1981 it was responsible for 8 percent of the global total; in 2002 this reached 14 percent, and by 2007, 21 percent.
But what is most revealing in the new study is that about half of recent emissions increases from China can be attributed to the boom in exports. Their production now accounts for 6 percent of all global CO2 emissions. This invites the question: who is responsible?
Needless to say, China can hardly throw up its hands and say “Don’t blame us – we’re only giving you rich folks what you want.” After all, the revenues from exports are contributing to the remarkable rise in China’s prosperity.
But equally, it would be hypocritical for Western nations to condemn China for the pollution generated in supplying them with the cheap goods that they no longer care to make themselves. Let’s not forget, though, that China imports a lot too, thereby shifting those carbon costs of production somewhere else.
Part of the problem is that China continues to rely on coal for its energy, which provides 70 percent of the total. Nuclear and renewables supply only 7 percent, and while Chinese energy production has become somewhat more efficient, any gains there are vastly overwhelmed by increased demand.
One response to these figures is that they underline the potential value of a globally agreed carbon tax. In theory, this builds the global-warming cost of a product – whether a computer or an airplane flight – into its price. Worries that this enables producers simply to pass on that cost to the consumer might be valid for the production of essentials such as foods. But much of China’s export growth has been in consumer electronics (which have immense ‘embodied energy’) – exports of Chinese-built televisions increased from 21 million in 2002 to 86 million in 2005. Why shouldn’t consumers feel the environmental cost of luxury items? And won’t the hallowed laws of the marketplace ultimately cut sales and profits for manufacturers who simply raise their prices?
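In code, the principle is almost trivial – which is part of its appeal. A minimal sketch, with invented figures for the embodied emissions of a television and the tax rate:

```python
def price_with_carbon_tax(base_price, embodied_kg_co2, tax_per_tonne_co2):
    """Fold a product's embodied emissions into its retail price."""
    return base_price + (embodied_kg_co2 / 1000.0) * tax_per_tonne_co2

# Invented numbers: a $400 television embodying 350 kg of CO2,
# taxed at $30 per tonne of CO2.
print(price_with_carbon_tax(400.0, 350.0, 30.0))  # -> 410.5
```

The hard part, of course, is not the arithmetic but agreeing on the embodied-emissions accounting and the rate.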
Some environmentalists are wary of carbon taxes because they fail to guarantee explicit emissions limits. But the main alternative, cap-and-trade, seems to have bigger problems. The idea here is that carbon emitters – nations, industrial sectors, even individual factories or plants – are given a carbon allocation but can exceed it by buying credits from others. That’s the scheme currently adopted in the European Union, and preferred by the Obama administration in the USA.
The major drawback is that it makes costs of emissions virtually impossible to predict, and susceptible to outside influences such as weather or other economic variables. The result would be a dangerously volatile carbon market, with prices that could soar or plummet (the latter a dream case for polluters). We hardly need any reminder now of the hazards of such market mechanisms.
Both a carbon tax and cap-and-trade schemes arguably offer a ‘fair’ way of sharing the carbon cost of exports (although there may be no transparent way to set the cap levels in the latter). But surely the Chinese picture reinforces the need for a broader view too, in which there is rational self-interest in international collaboration on and sharing of technologies that reduce emissions and increase efficiency. The issue also brings some urgency to debates about the best reward mechanisms for stimulating innovation [3].
These figures also emphasize the underlying dilemma. As Laura Bodey puts it in Richard Powers’ 1998 novel Gain, as she is dying of cancer possibly caused by proximity to a chemical plant that has given her all kinds of convenient domestic products: “People want everything. That’s their problem.”
References
1. Guan, D., Peters, G. P., Weber, C. L. & Hubacek, K. Geophys. Res. Lett. 36, L04709 (2009).
2. Weber, C. L., Peters, G. P., Guan, D. & Hubacek, K. Energy Policy 36, 3572-3577 (2008).
3. Meloso, D., Copic, J. & Bossaerts, P. Science 323, 1335-1339 (2009).
Wednesday, March 04, 2009
What does it all mean?
[This is the pre-edited version of my latest Muse for Nature News.]
Science depends on clear terms and definitions – but the world doesn’t always oblige.
What’s wrong with this statement: ‘The acceleration of an object is proportional to the force acting on it.’ You might think no one could object to this expression of Newton’s second law. But Nobel laureate physicist Frank Wilczek does. This law, he admits, ‘is the soul of classical mechanics.’ But he adds that, ‘like other souls, it is insubstantial’ [1].
Bertrand Russell went further. In 1925 he called for the abolition of the concept of force in physics, and claimed that if people learnt to do without it, this ‘would alter not only their physical imagination, but probably also their morals and politics.’ [2]
That seems an awfully heavy burden for a word that most scientists will use unquestioningly. Wilczek does not go as far as Russell, but he agrees that the concept of ‘force’ acquires meaning only through convention – through the culture of physics – and not because it refers to anything objective. He suspects that only ‘intellectual inertia’ accounts for its continued use.
It’s a disconcerting reminder that scientific terminology, supposed to be so precise and robust, is often much more mutable and ambiguous than we think – which makes it prone to misuse, abuse and confusion [3,4]. But why should that be so?
There are, broadly speaking, several potential problems with words in science. Let’s take each in turn.
Misuse
Some scientific words are simply misapplied, often because their definition is ignored in favour of something less precise. Can’t we just stamp out such transgressions? Not necessarily, for science can’t expect to evade the transformations that any language undergoes through changing conventions of usage. When misuse becomes endemic, we must sometimes accept that a word’s definition has changed de facto. ‘Fertility’ now often connotes birth rate, not just in general culture but among demographers. That is simply not its dictionary meaning, but is it now futile to argue against it? Similarly, it is now routine to speak of protein molecules undergoing phase transitions, which they cannot in the strict sense since phase transitions are only defined in systems that can be extrapolated to infinite size. Here, however, the implication is clear, and inventing a new term is arguably unhelpful.
Perhaps word misuse matters less when it simply alters or broadens meaning – the widespread use of ‘momentarily’ to indicate ‘in a moment’ is wrong and ugly, but it is scarcely disastrous to tolerate it. It’s more problematic when misuse threatens to traduce logic, as for example when the new meaning attached to ‘fertility’ allows the existence of fertile people who have zero fertility.
Everyday words used in science
In 1911 the geologist John W. Gregory, chairman of the British Association for the Advancement of Science, warned of the dangers of appropriating everyday words into science [5]. Worms, elements, rocks – all, he suggested, run risks of securing ‘specious simplicity at the price of subsequent confusion.’ Interestingly, Gregory also worried about the differing uses of ‘metal’ in chemistry and geology; what would he have said, one wonders, about the redefinition later placed on the term by astronomers (any element heavier than helium), which, whatever the historical justification, shows a deplorable lack of self-discipline? Such Humpty Dumpty-style assertions that a familiar word can mean whatever one chooses are more characteristic of the excesses of postmodern philosophy that scientists often lament.
There are hazards in trying to assign new and precise meanings to old and imprecise terms. Experts in nonlinear dynamics can scarcely complain about misuses of ‘chaos’ when it already had several perfectly good meanings before they came along. On the other hand, by either refusing or failing to provide a definition of everyday words that they appropriate – ‘life’ being a prime victim here – scientists risk breeding confusion. In this regard, science can’t win.
Fuzzy boundaries
When scientific words become fashionable, haziness is an exploitable commodity. One begins to suspect there are few areas of science that cannot be portrayed as complexity or nanotechnology. It recently became popular to assert a fractal nature in almost any convoluted shape, until some researchers eventually began to balk at the term being awarded to structures (like ferns) whose self-similarity barely extends beyond a couple of levels of magnification [6].
Heuristic value
The reasons for Wilczek’s scepticism about force are too subtle to describe here, but they don’t leave him calling for its abolition. He points out that it holds meaning because it fits our intuitions – we feel forces and see their effects, even if we don’t strictly need them theoretically. In short, the concept of force is easy to work with: it has heuristic value.
Science is full of concepts that lack sharp definition or even logic but which help us understand the world. Genes are another. The way things are going, it is possible that one day the notion of a gene may create more confusion than enlightenment [7], but at present it doesn’t seem feasible to understand heredity or evolution without their aid – and there’s nothing better yet on offer.
Chemists have recently got themselves into a funk over the concept of oxidation state [8,9]. Some say it is a meaningless measure of an atom’s character; but the fact remains that oxidation states bring into focus a welter of chemical facts, from balancing equations to understanding chemical colour and crystal structure. One could argue that ‘wrong’ ideas that nonetheless systematize observations are harmful only when they refuse to give way to better ones (pace Aristotelian physics and phlogiston), while teaching science is a matter of finding useful (as opposed to ‘true’) hierarchies of knowledge that organize natural phenomena.
The world doesn’t fit into boxes
We’ve known that for a long time: race and species are terms guaranteed to make biologists groan. Now astronomers fare little better, as the furore over the meaning of ‘planet’ illustrated [10] – a classic example of the tension between word use sanctioned by definition or by convention.
The same applies to ‘meteorite’. According to one, perfectly logical, definition of a meteorite, it is not possible for a meteorite ever to strike the Earth (since it becomes one only after having done so). Certainly, the common rule of thumb that meteors are extraterrestrial bodies that enter the atmosphere but don’t hit the surface, while meteorites do, is not one that planetary scientists will endorse. There is no apparent consensus about what they will endorse, which seems to be a result of trying to define processes on the basis of the objects they involve.
All of this suggests some possible rules of thumb for anyone contemplating a scientific neologism. Don’t invent a new word without really good reason (for example, don’t use it to patch over ignorance). Don’t neglect to check if one exists already (we don’t want both amphiphilic and amphipathic). Don’t assume you can put an old word to new use. Make the definition transparent, and think carefully about its boundaries. Oh, and try to make it easy to pronounce – not just in Cambridge but in Tokyo too.
References
1. Wilczek, F. Physics Today 57(10), 11-12 (2004).
2. Russell, B. The ABC of Relativity, 5th edn, p.135 (Routledge, London, 1997).
3. Nature 455, 1023-1028 (2008).
4. Parsons, J. & Wand, Y., Nature 455, 1040-1041 (2008).
5. Gregory, J. W. Nature 87, 538-541 (1911).
6. Avnir, D., Biham, O., Lidar, D. & Malcai, O. Science 279, 39-40 (1998).
7. Pearson, H. Nature 441, 398-401 (2006).
8. Raebiger, H., Lany, S. & Zunger, A. Nature 453, 763 (2008).
9. Jansen, M. & Wedig, U. Angew. Chem. Int. Ed. doi:10.1002/anie.200803605 (2008).
10. Giles, J. Nature 437, 456-457 (2005).
Friday, February 20, 2009
Catching up
The lack of activity here in the past month or so doesn’t reflect any idleness on my part; rather, frantic preparations in respect of the previous item have left me not a moment free. I now know the East Midlands line and Platform 1b of Derby station rather better than I might have wished. I’m about to head back up that way with a bag full of powdered magnesium, but don’t tell the guard. If the village hall of Matlock Bath doesn’t vanish in a puff of smoke, Paracelsus and his strange world will emerge at the end of next week. There are more details here.
In the meantime, I have been writing some things. There is an article in New Scientist here on using carbon nanotubes for desalination. I used to be a bit sceptical when ‘desalination’ got thrown in as one of the putative applications of nanotechnology; now I’m persuaded that it is a real and exciting possibility.
If you can bear to hear another word about Darwin, my round-up of the crop of books on the great man (but mostly the magisterial new volume by Desmond and Moore), published in the Observer, is here.
Everyone seems to be talking about ‘science and Islam’ – BBC4 has done a series, the World Service is working on another, and I have reviewed two books on the subject in the Sunday Times here. One is Ehsan Masood’s nice little history, which accompanies the BBC series and is as good a primer as one could wish for.
Then there is my monthly column for Prospect here, but you’ll need to be a subscriber to see it. Sorry, I usually post them up here before editing, but there’s no time this month…
There has been a smattering of reviews of my novel thanks to the release of the paperback. The Observer was a bit sniffy (here), but more troublingly, failed to understand the main themes (“the censoring effect of scientific orthodoxy [and] the questionable morals behind scientific research” – makes it sound like a crank’s manifesto). The Telegraph was nicer (here).
And I discovered a very interesting paper questioning the supposedly unique origin of silk technology in ancient China (here). As a committed Sinophile, I find this news arouses mixed feelings – but heck, China has enough innovations to its credit regardless.
Finally, I’m speaking on pattern formation at one or two places in the coming weeks: first at the Words by the Water literary festival in Keswick, Cumbria on 1 March (Patricia Fara, who is speaking before me that morning, has a very nice new history of science coming out soon), then at the Royal Institution on 10 March. This is in connection with my three books on the subject, which are due to start appearing at the start of March, published by OUP.
Friday, January 16, 2009

The Devil’s Doctor on tour
The title of this blog reiterates the one I used for my ‘virtual’ theatre company, in which guise I put on several productions some years ago. One of these was a one-man play about the sixteenth-century alchemist and physician Paracelsus, which turned out to be the precursor to my biography The Devil’s Doctor (Heinemann/Farrar Straus & Giroux, 2006).
Well, now Paracelsus is about to ride again. Whether any of that earlier show will survive remains to be seen, but from the end of January I’ll be attending rehearsals, as a consultant, for a new devised piece created by the wonderful company Shifting Sands, directed by Gerry Flanagan. Anyone who has seen previous shows by Shifting Sands, such as their adaptations of Great Expectations, Romeo and Juliet, or Faust, will know that this should be a riot of visual extravagance, clowning, physical ingenuity and pathos. Just, in fact, what the subject of Paracelsus cries out for, which indeed is why I approached Gerry in the first place to suggest a collaboration. We have generous funding from the Wellcome Trust to develop and perform the piece, and here is where you can see it from the end of February:
Feb 28th Matlock Bath Youth Centre, Derbyshire. 8 pm 01629 55795
March 3rd Arena Theatre Wolverhampton. 01902 321321
March 5th Rose Theatre, Edge Hill University, Ormskirk, Lancs. 01695 584480
March 6th Glasshouse College, Stourbridge. 7.30 pm 01384 399430
March 10th Norden Farm Arts Centre, Maidenhead. 7.30 pm 01628 788997
March 11th Bradon Forest School. Swindon. 7.30pm 01793 770570
March 12th Hamsterley Village Hall, Rural touring Cumbria. 7.30pm 01388 488323
March 13th Kirkoswald Village Hall, Rural touring Cumbria. 7.30pm 01768 898187
March 18th Riverhead Theatre, Louth, Lincs. 7.30 pm 01507 600350
March 19th Great Budworth Village Hall, Cheshire. 7.30 pm 01606 891019
March 20th Gawsworth Village Hall, Cheshire. 7.30 pm 01260 223352
March 21st Square Chapel Arts Centre, Halifax. 7.30 pm 01442 349422
March 23rd Highfields School, Matlock. Two shows, morning & afternoon.
March 24th Drill Hall, Lincoln. 7.30 pm 01502 873894
March 31st Dana Centre, Science Museum, London. 7 pm
April 1st South Hill Park Arts Centre, Bracknell. 8 pm 01344 416241
April 17th Borough Theatre, Abergavenny. 7.30pm 01873 850805
April 22nd South Street Arts Centre, Reading.
April 23rd South Street, Reading.
April 24th Christ’s Hospital College, Horsham.
May 2nd Redbridge Drama Centre. 8 pm 0208 504 5451
May 3rd Darwin Suite, Assembly Rooms, Derby.
I would love to add another London date, as I think the Science Museum is going to be heavily booked. (Suggestions welcomed.)
In the course of researching this project, Gerry and I went along to the Wellcome Trust’s centre in Euston Road just before Christmas to watch two earlier biopics of Paracelsus. One was the 1943 film by G. W. Pabst, better known for Pandora’s Box. True to its function as something of a Nazi propaganda movie, this portrayed Paracelsus as a wise sage and hero of the common Volk, unfairly maligned by the authorities but always knowing best. All the same, it has interesting visual moments. The other was something else: a seven-part series made by the UK’s Channel 4 in 1989.
It’s no surprise that, in the early days of Channel 4, the quality of its output varied hugely, and much of it was made on a minimal budget. All the same, seeing this series left me incredulous. For one thing, it beggars belief that someone could have come along and said ‘I have this great idea for a major series. It’s about a Swiss doctor from the Renaissance and how he got caught up in the political turmoil of the age…’, and the commissioners would say ‘Sounds great!’ Nothing like this would ever be entertained for an instant today. But it seemed even more remarkable when I discovered that the script, acting and production are possibly the worst I have ever seen on British television. This would be a candidate for cult status if it weren’t simply so dull. Paracelsus is played by a young man with a hairstyle reminiscent of Kevin Keegan in his heyday. He spends much time gazing into space and straining to make us believe that the Deep and Mystical things he is spouting are Profound. Then we get shots of the Peasants’ War, which consist of half a dozen of the most half-hearted, self-conscious and obviously dirt-cheap rent-a-mob extras I have ever seen outside of Ricky Gervais’s series. They are falling over as other chaps in armour give them delicate blows with wooden swords. The scene is perhaps being filmed on Wimbledon Common. What budget there is has been lavished on (1) hats, and (2) a Star, namely Philip Madoc, who hams as though his life depends on it and who, having presumably signed in blood, is then given at least two different parts, one a crazed old seer and the other some noble or other whose identity I can’t even be bothered to recall. Anything set in this near-medieval period struggles against the spectre of Monty Python and the Holy Grail, but this production positively begs for such comparisons.
My favourite scene was the book-burning in Basle, where Paracelsus lugs some unfeasibly immense tome that has painted on the front, in big white Gothic script, ‘Canon of Galen’. His syllabus for teaching at Basle is helpfully pinned up on the wall of the lecture theatre, written in English and in big bold letters that are for some reason in Gaelic script (well, it looks kind of olde – indeed, like the dinner menu at Rivendell). There are seven hours of this stuff. Needless to say, we’ll be shamelessly stealing material from it.
Sunday, December 21, 2008
Nature versus naturoid
[This is my Materials Witness column for the January 2009 issue of Nature Materials.]
Are there metameric devices in the same way that there are metameric colours? The latter are colours that look identical to the eye but have different spectra. Might we make devices that, while made up of different components, perform identically?
Of course we can, you might say. A vacuum tube performs the same function as a semiconductor diode. Clocks can be driven by springs or batteries. But the answer may depend on how much similarity you want. Semiconductor diodes will survive a fall on a hard floor. Battery-operated clocks don’t need winding. And what about something considerably more ambitious, such as an artificial heart?
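For colours, at least, the effect is easy to make concrete: the eye collapses a whole spectrum into three channel responses, so any spectral difference lying in the mathematical null space of the three sensitivity curves is invisible. Here is a minimal sketch in Python; the Gaussian ‘sensitivity curves’ are made-up stand-ins for the real CIE colour-matching functions, purely for illustration:

```python
import numpy as np

wl = np.linspace(400, 700, 301)   # wavelength grid, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Three toy detector sensitivity curves (stand-ins for the CIE functions)
cmf = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 25)])

s1 = gauss(530, 60) + 0.5 * gauss(630, 30)   # first spectrum

# A 'metameric black': a spectral difference in the null space of the
# sensitivity matrix, contributing nothing to the three channel responses.
_, _, vt = np.linalg.svd(cmf)
black = vt[-1] / np.abs(vt[-1]).max()
s2 = s1 + 0.5 * black                        # second, genuinely different spectrum

print(np.allclose(cmf @ s1, cmf @ s2))   # True: identical colour signals
print(np.allclose(s1, s2))               # False: different spectra
```

(A physically realizable pair would also need both spectra to stay non-negative; and swapping in different curves – a different observer or illuminant – generally breaks the match, a failure mode returned to below.)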
These thoughts are prompted by a recent article by sociologist Massimo Negrotti of the University of Urbino in Italy (Design Issues 24(4), 26-36; 2008). Negrotti has for several years pondered the question of what, in science and engineering, is commonly called biomimesis, trying to develop a general framework for what this entails and what its limitations might be. His vision is informed less by the usual engineering concern, evident in materials science, to learn from nature and imitate its clever solutions to design problems; rather, Negrotti wants to develop something akin to a philosophy of the artificial, analogous to (but different from) that expounded by Herbert Simon in his 1969 book The Sciences of the Artificial.
To this end, Negrotti has coined the term ‘naturoid’ to describe “all devices that are designed with natural objects in mind, by means of materials and building procedures that differ from those that nature adopts.” A naturoid could be a robot, but also a synthetic-polymer-based enzyme, an artificial-intelligence program, even a simulant of a natural odour. This concept was explored in Negrotti’s 2002 book Naturoids: On the Nature of the Artificial (World Scientific, New Jersey).
Can one say anything useful about a category so broad? That might remain a matter of taste. But Negrotti’s systematic analysis of the issues has the virtue of stripping away some of the illusions and myths that attach to attempts to ‘copy nature’.
It won’t surprise anyone that these attempts will always fall short of perfect mimicry; indeed that is often explicitly not intended. Biomimetic materials are generally imitating just one function of a biological material or structure, such as adhesion or toughness. Negrotti calls this the ‘essential performance’, which itself implies also a selected ‘observation level’ – we might make the comparison solely at the level of bulk mechanical behaviour, irrespective of, say, microstructure or chemical composition.
This inevitably means that the mimicry breaks down at some other observation level, just as colour metamerism can fail depending on the observing conditions (daylight or artificial illumination, say, or different viewing angles).
This reasoning leads Negrotti to conclude that there is no reason to suppose the capacities of naturoids can ever converge on those of the natural models. In particular, the idea that robots and computers will become ever more humanoid in features and function, forecast by some prophets of AI, has no scientific foundation.
Dark matter and DIY genomics
[This is my column for the January 2009 issue of Prospect.]
Physicists’ understandable embarrassment that we don’t know what most of the universe is made of prompts an eagerness, verging on desperation, to identify the missing ingredients. Dark energy – the stuff apparently causing an acceleration of cosmic expansion – is currently a matter of mere speculation, but dark matter, which is thought to comprise around 85 percent of tangible material, is very much on the experimental agenda. This invisible substance is inferred on several grounds, especially that galaxies ought to fall apart without its gravitational influence. The favourite idea is that dark matter consists of unknown fundamental particles that barely interact with visible matter – hence its elusiveness.
One candidate is a particle predicted by theories that invoke extra dimensions of spacetime (beyond the familiar four). So there was much excitement at the recent suggestion that the signature of these particles has been detected in cosmic rays, which are electrically charged particles (mostly protons and electrons) that whiz through all of space. Cosmic rays can be detected when they collide with atoms in the Earth’s atmosphere. Some are probably produced in high-energy astrophysical environments such as supernovae and neutron stars, but their origins are poorly understood.
An international experiment called ATIC, which floats balloon-borne cosmic-ray detectors high over Antarctica, has found an unexpected excess of cosmic-ray electrons with high energies, which might be the debris of collisions between the hypothetical dark-matter particles. That’s the sexy interpretation. They might instead come from more conventional sources, although it’s not then clear whence this excess above the normal cosmic-ray background comes.
The matter is further complicated by an independent finding, from a detector called Milagro near Los Alamos in New Mexico, that high-energy cosmic-ray protons seem to be concentrated in a couple of bright patches in the sky. It’s not clear if the two results are related, but if the ATIC electrons come from the same source as the Milagro protons, that rules out dark matter, which is expected to produce no such patchiness. On the other hand, no other source is expected to do so either. It’s all very perplexing, but nonetheless a demonstration that cosmic rays, whose energies can exceed those of equivalent particles in Cern’s new Large Hadron Collider, offer an unparalleled natural resource for particle physicists.
*****
A Californian biotech company is promising, within five years, to be able to sequence your entire personal genome while you wait. In under an hour, a doctor could deduce from a swab or blood sample all of your genetic predispositions to disease. At least, that’s the theory.
Pacific Biosciences in Menlo Park has developed a technique for replicating a piece of DNA in a form that contains fluorescent chemical markers attached to each ‘base’, the fundamental building block of genes. Each of the four types of base gets a differently coloured marker, and so the DNA sequence – the arrangement of bases along the strand – can be discerned as a string of fairy lights, using a microchip-based light sensor that can image individual molecules.
With a readout rate of about 4.7 bases per second, the method would currently take much longer than an hour to sequence all three billion bases of a human genome. And it is plagued by errors – mistakes about the ‘colour’ of the fluorescent markers – which might wrongly identify as many as one in five of the bases. But these are early days; the basic technology evidently works. The company hopes to start selling commercial products by 2010.
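It’s worth seeing just how much longer. A back-of-the-envelope sketch: the genome size and readout rate are the figures quoted above, while the parallelism estimate is my own extrapolation, not the company’s:

```python
GENOME = 3e9   # bases in a human genome
RATE = 4.7     # bases read per second in a single sequencing reaction

seconds = GENOME / RATE
print(f"single reaction: {seconds/3600:,.0f} hours (~{seconds/3.15e7:.0f} years)")

# To finish in an hour, the chip must watch many molecules at once:
reactions = GENOME / (RATE * 3600)
print(f"parallel reactions needed for a one-hour genome: ~{reactions:,.0f}")
```

One molecule at a time would take about twenty years; a while-you-wait genome therefore implies imaging well over a hundred thousand reactions in parallel.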
Faster genome sequencing should do wonders for our fundamental understanding of, say, the relationships between species and how these have evolved, or the role of genetic diversity in human populations. There’s no doubt that it would be valuable in medicine too – for example, potential drugs that are currently unusable because of genetically based side-effects in a minority of cases could be rescued by screening that identifies those at risk. But many researchers admit that the notion of a genome-centred ‘personalized medicine’ is easily over-hyped. Not all diseases have a genetic component, and those that do may involve complex, poorly understood interactions of many genes. Worse still, DIY sequencing kits could saddle people with genetic data that they don’t know how to interpret or deal with, as well as running into a legal morass about privacy and disclosure. At this rate, the technology is far ahead of the ethics.
*****
Besides, it is becoming increasingly clear that the programme encoded in genes can be over-ridden: to put it crudely, an organism can ‘disobey’ its genes. There are now many examples of ‘epigenetic’ inheritance, in which phenotypic characteristics (hair colour, say, or susceptibility to certain diseases) can be manifested or suppressed despite a genetic imperative to the contrary (see Prospect May 2008). Commonly, epigenetic inheritance is induced by small strands of RNA, the intermediary between genes and the proteins they encode, which are acquired directly from a parent and can modify the effect of genes in the offspring.
An American team have now shown a new type of such behaviour, in which a rogue gene that can cause sterility in crossbreeds of wild and laboratory-bred fruit flies may be silenced by RNA molecules if the gene is maternally inherited, maintaining fertility in the offspring despite a ‘genetic’ sterility. Most strikingly, this effect may depend on the conditions in which the mothers are reared: warmth boosts the fertility of progeny. It’s not exactly inheritance of acquired characteristics, but is a reminder, amidst the impending Darwin celebrations, of how complicated the story of heredity has now become.
Monday, December 08, 2008

Who knows what ET is thinking?
[My early New Year resolution is to stop giving my Nature colleagues a hard time by forcing them to edit stories that are twice as long as they should be. It won’t stop me writing them that way (so that I can stick them up here), but at least I should do the surgery myself. Here is the initial version of my latest Muse column, before it was given a much-needed shave.]
Attempts to identify the signs of astro-engineering by advanced civilizations aren’t exactly scientific. But it would be sad to rule them out on that score.
“Where is everybody?” Fermi’s famous question about intelligent extraterrestrials still taunts us. Even if the appearance of intelligent life is rare, the vast numbers of Sun-like stars in the Milky Way alone should compensate overwhelmingly, and make it a near certainty that we are not alone. So why does it look that way?
Everyone likes a good Fermi story, but this account of the origins of the ‘Fermi Paradox’ seems to be true [1]. In the summer of 1950, Fermi was walking to lunch at Los Alamos with Edward Teller, Emil Konopinski and Herbert York. They were discussing a recent spate of UFO reports, and Konopinski recalled a cartoon he had seen in the New Yorker blaming the disappearance of garbage bins from the streets of New York City on extraterrestrials. And so the group fell to debating the feasibility of faster-than-light travel (which Fermi considered quite likely to be found soon). Then they sat down to lunch and spoke of other things.
Suddenly, Fermi piped up, out of the blue, with his question. Everyone knew what he meant, and they laughed. Fermi apparently then did a back-of-the-envelope calculation (his forte) to show that we should have been visited by aliens long ago. Since we haven’t been (nobody mention Erich von Daniken, please), this must mean either that interstellar travel is impossible, or deemed not worthwhile, or that technological civilizations don’t last long.
Fermi’s thinking was formalized and fleshed out in the 1960s by astronomer Frank Drake of Cornell University, whose celebrated equation estimates the number of extraterrestrial technological civilizations in our galaxy by breaking the question down into the product of the various factors involved: the fraction of habitable planets, the number of them on which life appears, and so on.
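The equation itself is nothing more than a chain of multiplied estimates, which a few lines of Python make plain. The parameter values below are illustrative placeholders of my own, not figures from Drake or from this article:

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L
# All values below are illustrative guesses, chosen only to show the form.
R_star = 7        # star-formation rate in the galaxy, stars per year
f_p    = 0.5      # fraction of stars with planetary systems
n_e    = 2        # habitable planets per such system
f_l    = 0.3      # fraction of habitable planets where life appears
f_i    = 0.1      # fraction of those where intelligence evolves
f_c    = 0.1      # fraction of those producing detectable technology
L      = 10_000   # mean lifetime of a communicating civilization, years

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"N = {N:.0f} communicating civilizations")   # N = 210 with these guesses
```

Tweak the last few factors – the ones about which we know precisely nothing – and N swings across orders of magnitude, which is the root of the ‘unfalsifiable’ charge discussed below.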
Meanwhile, the question of extraterrestrial visits was broadened into the problem of whether we can see signs of technological civilizations from afar, for example via radio broadcasts of the sort that are currently sought by the SETI Project, based in Mountain View, California. This raises the issue of whether we would know signs of intelligence if we saw them. The usual assumption is that a civilization aiming to communicate would broadcast some distinctive universal pattern such as an encoding of the mathematical constant pi.
A new angle on that issue is now provided in a preprint [2] by physicist Richard Carrigan of (appropriately enough) the Fermi National Accelerator Laboratory in Batavia, Illinois. He has combed through the data from 250,000 astronomical sources found by the IRAS infrared satellite – which scanned 96 percent of the sky – to look for the signature of solar systems that have been technologically manipulated after a fashion proposed in the 1960s by physicist Freeman Dyson.
Dyson suggested that a sufficiently advanced civilization would baulk at the prospect of its star’s energy being mostly radiated uselessly into space. They could capture it, he said, by breaking up other planets in the solar system into rubble that formed a spherical shell around the star, creating a surface on which the solar energy could be harvested [3].
Can we see a Dyson Sphere from outside? It would be warm, re-radiating the star’s energy at a much lower temperature – for a shell of roughly the dimensions of the Earth’s orbit around a Sun-like star, an effective temperature of a few hundred kelvin (Dyson estimated 200-300 K for a shell at about twice the Earth’s distance). This would show up as a far-infrared object unlike any other currently known. If Dyson spheres exist in our galaxy, said Dyson, we should be able to see them – and he proposed that we look.
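The temperature follows from a simple energy balance: in steady state the shell must re-radiate the star’s whole luminosity L from a surface of area 4πR², so L = 4πR²σT⁴. A quick check of the numbers – a minimal sketch assuming an ideal black-body shell and standard solar values, nothing taken from Carrigan’s paper:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def shell_temperature(radius_m, luminosity=L_SUN):
    """Black-body temperature of a shell re-radiating the whole stellar
    output from its outer surface: L = 4*pi*R^2 * sigma * T^4."""
    return (luminosity / (4 * math.pi * radius_m**2 * SIGMA)) ** 0.25

for r_au in (1.0, 2.0):
    print(f"shell at {r_au:.0f} AU: T = {shell_temperature(r_au * AU):.0f} K")
# ~394 K at 1 AU, ~279 K at 2 AU: waste heat glowing in the infrared,
# which is why the IRAS data are the natural place to hunt.
```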
That’s what Carrigan has done. He reported a preliminary search in 2004 [4], but the new data set is sufficient to spot any Dyson Spheres around Sun-like bodies out to 300 parsecs – a volume that encompasses a million such stars. It will probably surprise no one that Carrigan finds no compelling candidates. One complication is that some types of star can mimic a Dyson Sphere, such as those in the late stages of their evolution, when they become surrounded by thick dust clouds. But there are ways to weed these out, for example by looking at the spectral signatures such objects are expected to exhibit. Winnowing out such false positives left just 17 candidate objects, of which most, indeed perhaps all, could be given more conventional interpretations. It’s not quite the same as saying that the results are wholly negative – Carrigan argues that the handful of remaining candidates warrant closer inspection – but there’s currently no reason to suppose that there are indeed Dyson Spheres out there.
Dyson says that he didn’t imagine in 1960 that a search like this would be complicated by so many natural mimics of Dyson Spheres. “I had no idea that the sky would be crawling with millions of natural infrared sources”, he says. “So a search for artificial sources seemed reasonable. But after IRAS scanned the sky and found a huge number of natural sources, a search for artificial sources based on infrared data alone was obviously hopeless.”
All the same, he feels that Carrigan may be rather too stringent in whittling down the list of candidates. Carrigan basically excludes any source that doesn’t radiate energy pretty much like a ‘black body’. “I see no reason to expect that an artificial source should have a Planck [black-body] spectrum”, says Dyson. “The spectrum will depend on many unpredictable factors, such as the paint on the outside of the radiating surface.”
So although he agrees that there is no evidence that any of the IRAS sources is artificial, he says that “I do not agree that there is evidence that all of them are natural. There are many IRAS sources for which there is no evidence either way.”
Yet the obvious question hanging over all of this is: who says advanced extraterrestrials will want to make Dyson Spheres anyway? Dyson’s proposal carries a raft of assumptions about the energy requirements and sources of such a civilization. It seems an enormously hubristic assumption that we can second-guess what beings considerably more technologically advanced than us will choose to do (which, in fairness, was never Dyson’s aim). After all, history shows that we find it hard enough to predict where technology will take us in just a hundred years’ time.
Carrigan concedes that it’s a long shot: “It is hard to predict anything about some other civilization”. But he says that the attraction of looking for the Dyson Sphere signature is that “it is a fairly clean case of an astroengineering project that could be observable.”
Yet the fact is that we know absolutely nothing about civilizations more technologically advanced than ours. In that sense, while it might be fun to speculate about what is physically possible, one might charge that this strays beyond science. The Drake equation has itself been criticized as being unfalsifiable, even a ‘religion’ according to Michael Crichton, the late science-fiction writer.
All that is an old debate. But it might be more accurate to say that what we really have here is an attempt to extract knowledge from ignorance: to apply the trappings of science, such as equations and data sets, to an arena where there is nothing to build on.
There are, however, some conceptual – one might say philosophical – underpinnings to the argument. By assuming that human reasoning and agendas can be extrapolated to extraterrestrials, Dyson was in a sense leaning on the Copernican principle, which assumes that the human situation is representative rather than extraordinary. It has recently been proposed [5,6] that this principle may be put to the experimental test in a different context, to examine whether our cosmic neighbourhood is or is not unusual – whether we are, say, at the centre of a large void, which might provide a prosaic, ‘local’ explanation for the apparent cosmic acceleration that motivates the idea of dark energy.
But the Copernican principle can be considered to have a broader application than merely the geographical. Astrophysicist George Ellis has pointed out how arguments over the apparent fine-tuning of the universe – the fact, for example, that the ratio of the observed to the theoretical ‘vacuum energy’ is the absurdly small 10^-120 rather than the more understandable zero – entail an assumption that our universe should not be ‘extraordinary’. With a sample of one, says Ellis, there is no logical justification for that belief: ‘there simply is no proof the universe is probable’ [7]. He argues that cosmological theories that use the fine-tuning as justification are therefore drawing on philosophical rather than scientific arguments.
It would be wrong to imagine that a question lies beyond the grasp of science just because it seems very remote and difficult – we now have well-motivated accounts of the origins of the moon, the solar system, and the universe itself from just a fraction of a second onward. But when contingency is involved – in the origin of life, say, or some aspects of evolution, or predictions of the future – the dangers of trying to do science in the absence of discriminating evidence are real. It becomes a little like trying to figure out the language of Neanderthals, or the thoughts of Moses.
It is hard to see that a survey like Carrigan’s could ever claim definitive, or even persuasive, proof of a Dyson Sphere; in that sense, the hypothesis that the paper probes might indeed be called ‘unscientific’ in a Popperian sense. And in the end, the Fermi Paradox that motivates it is not a scientific proposition either, because we know precisely nothing about the motives of other civilizations. Astronomer Glen David Brin suggested in 1983, for example, that they might opt to stay hidden from less advanced worlds, like adults speaking softly in a nursery ‘lest they disturb the infant’s extravagant and colourful time of dreaming’ [8]. We simply don’t know if there is a paradox at all.
But how sad it would be to declare out of scientific bounds speculations like Dyson’s, or experimental searches like Carrigan’s. So long as we see them for what they are, efforts to gain a foothold on metaphysical questions are surely a valid part of the playful creativity of the sciences.
References
1. Jones, E. M. Los Alamos National Laboratory report LA-10311-MS (1985).
2. Carrigan, R. http://arxiv.org/abs/0811.2376
3. Dyson, F. J. Science 131, 1667-1668 (1960).
4. Carrigan, R. IAC-04-IAA-1.1.1.06, 55th International Astronautical Congress, Vancouver (2004).
5. Caldwell, R. R. & Stebbins, A. Phys. Rev. Lett. 100, 191302 (2008).
6. Clifton, T., Ferreira, P. G. & Land, K. Phys. Rev. Lett. 101, 131302 (2008).
7. Ellis, G. F. R. http://arxiv.org/abs/0811.3529 (2008).
8. Brin, G. D. Q. J. R. Astr. Soc. 24, 283-309 (1983).