Physics, ultimate reality, and an awful lot of money
[A dramatically truncated version of this comment appears in the Diary section of the latest issue of Prospect.]
If you’re a non-believer, it’s easy to mock or even despise efforts to bridge science and religion. But you don’t need to be Richard Dawkins to sense that there’s an imbalance in these often well-meaning initiatives: science has no need of religion in its quest to understand the universe (the relevance to scientific ethics might be more open to debate), whereas religion appears sometimes to crave the intellectual force of science’s rigour. And since it seems hard to imagine how science could ever supply supporting evidence for religion (as opposed to simply unearthing new mysteries), mustn’t any contribution it might make to the logical basis of belief be inevitably negative?
That doesn’t stop people from trying to build bridges, and nor should it. Yet overtures from the religious side are often seen as attempts to sneak doctrine into places where it has no business: witness the controversy over the Royal Society hosting talks and events sponsored by the Templeton Foundation. The philosopher A. C. Grayling recently denounced as scandalous the willingness of the Royal Society to offer a launching pad for a new book exploring the views of one of its Fellows, the Christian minister and physicist John Polkinghorne, on the interactions of science and religion.
The US-based Templeton Foundation has been in the middle of some of the loudest recent controversies about religion and science. Created by ‘global investor and philanthropist’ Sir John Templeton, it professes to ‘serve as a philanthropic catalyst for discovery in areas engaging life’s biggest questions, ranging from explorations into the laws of nature and the universe to questions on the nature of love, gratitude, forgiveness, and creativity.’ For some skeptics, this simply means promoting religion, particularly Christianity, from a seemingly bottomless funding barrel. Templeton himself, a relatively liberal Christian by US standards and a supporter of inter-faith initiatives, once claimed that ‘scientific revelations may be a gold mine for revitalizing religion in the 21st century’. That’s precisely what makes many scientists nervous.
The Templeton Foundation awards an annual prize of £1 million to ‘outstanding individuals who have devoted their talents to those aspects of human experience that, even in an age of astonishing scientific advance, remain beyond the reach of scientific explanation.’ This is the world’s largest annual award given to an individual – bigger than a Nobel. And scientists have been prominent among the recipients, especially in recent years: they include cosmologist John Barrow, physicist Freeman Dyson, physics Nobel laureate Charles H. Townes, physicist Paul Davies – and Polkinghorne. That helps to explain why the Royal Society has previously been ready to host the prize’s ceremonials.
I must declare an interest here, because I have taken part in a meeting funded by the Templeton Foundation. In 2005 it convened a gathering of scientists to consider the question of whether water seems ‘fine-tuned’ to support the existence of life. This was an offshoot of an earlier symposium that investigated the broader question of ‘fine tuning’ in the laws of physics, a topic now very much in vogue thanks to recent discoveries in cosmology. That first meeting considered how the basic constants of nature seem to be finely poised to an absurd degree: just a tiny change would seem to make the universe uninhabitable. (The discovery in the 1990s of the acceleration of the expanding universe, currently attributed to a mysterious dark energy, makes the cosmos seem even more improbable than before.) This is a genuine and deep mystery, and at present there is no convincing explanation for it. The issue of water is different, as we concluded at the 2005 meeting: there is no compelling argument for it being a unique solvent for life, or for it being especially fine-tuned even if it were. More pertinently here, this meeting had first-rate speakers and a sound scientific rationale, and even somewhat wary attendees like me detected no hidden agenda beyond an exploration of the issues. If Templeton money is to be used for events like that, I have no problem with it. And it was rather disturbing, even shameful, to find that at least one reputable university press subsequently shied away from publishing the meeting proceedings (soon to be published by Taylor & Francis) not on any scientific grounds but because of worries about Templeton involvement.
So while I worry about the immodesty of the Templeton Prize, I don’t side with those who consider it basically a bribe to attract good scientists to a disreputable cause. All the same, there is something curious going on. Five of the seven most recent winners have been scientists, and all are listed in the Physics and Cosmology Group of the Center for Theology and the Natural Sciences (CTNS), affiliated to the Graduate Theological Union, an inter-faith centre in Berkeley, California. This includes the latest winner, announced on Monday: French physicist Bernard d’Espagnat, ‘whose explorations of the philosophical implications of quantum physics have’ (according to the prize announcement) ‘cast new light on the definition of reality and the potential limits of knowable science.’ D’Espagnat has suggested ‘the possibility that the things we observe may be tentatively interpreted as signs providing us with some perhaps not entirely misleading glimpses of a higher reality and, therefore, that higher forms of spirituality are fully compatible with what seems to emerge from contemporary physics.’ (See more here and here.) Others might consider this an unnecessary addendum to modern quantum theory, not so far removed from the vague and post hoc analogies of Fritjof Capra’s The Tao of Physics (which was very much a product of its time).
But why this preference for CTNS affiliates? Perhaps it simply means that the people interested in this stuff are a rather small group who are almost bound to get co-opted onto any body with similar interests. Or you might want to view it as an indication that the fastest way to make a million is to join the CTNS’s Physics and Cosmology group. More striking, though, is the fact that all these chaps (I’m afraid so) are physicists of some description. That, it appears, is pretty much the only branch of the natural sciences either willing or able to engage in matters of faith. Of course, American biologists have been given more than enough reason to flee any hint of religiosity; but that alone doesn’t quite seem sufficient to explain this skewed representation of the sciences. I have some ideas about that… but another time.
Friday, March 27, 2009
Wednesday, March 18, 2009
Nature’s Patterns
The first volume (Shapes) of my trilogy on pattern formation, Nature’s Patterns (OUP), is now out. Sort of. At any rate, it should be in the shops soon. Nearly all the hiccups with the figures got ironed out in the end (thank you, Chantal, for your patience) – there are one or two things to put right in the reprints/paperback. Sorry, I tried my best. The second and third volumes (Flow and Branches) are not officially available until (I believe) July and September respectively. But if you talk to OUP sweetly enough, you might get lucky. Better still, they should be on sale at talks, such as the one I’m scheduled to give at the Cheltenham Science Festival on 4 June (8 pm). Maybe see you there.
The right honourable Nigel Lawson
At a university talk I gave recently, a member of the department suggested that I might look at Nigel Lawson’s book An Appeal to Reason: A Cool Look at Climate Change. It’s not that Lawson is necessarily right to be sceptical about climate change and the need to mitigate (rather than adapt to) it, he said. It’s simply that you have to admire the way he makes his case, with the tenacity and rhetorical flair characteristic of his lawyer’s training.
And as chance would have it, I soon thereafter came across some pages of Lawson’s 2006 essay from which the book sprang: ‘The Economics and Politics of Climate Change: An Appeal to Reason’, published by the right-wing think-tank the Centre for Policy Studies. (My daughter was drawing on the other side.) And I was reminded why I doubt there is really very much to admire in Lawson’s methodology. There is nothing admirable in a bunch of lies; anyone can make nonsense sound correct and reasonable if they are prepared to tell enough bare-faced fibs.
For example, Lawson quotes the Met Office’s Hadley Centre for Climate Prediction and Research:
“Although there is considerable year-to-year variability in annual-mean global temperature, an upward trend can be clearly seen; firstly over the period from about 1920-1940, with little change or a small cooling from 1940-1975, followed by a sustained rise over the last three decades since then.”
He goes on to say: “This last part is a trifle disingenuous, since what the graph actually shows is that the sustained rise took place entirely during the last quarter of the last century.” No. The Hadley Centre’s statement describes the graph exactly as it is, and Lawson’s own summary is entirely consistent with it. There is nothing disingenuous here. Indeed, Lawson goes on to say:
“The Hadley Centre graph shows that, for the first phase, from 1920 to 1940, the increase was 0.4 degrees centigrade. From 1940 to 1975 there was a cooling of about 0.2 degrees… Finally, since 1975 there has been a further warming of about 0.5 degrees, making a total increase of some 0.7 degrees over the 20th century as a whole (from 1900 to 1920 there was no change).”
Right. And that is what they said. Lawson has cast aspersions on grounds that are transparently specious. Am I meant to admire this?
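Just to spell out the arithmetic of the figures that Lawson himself quotes and accepts – a trivial check, but it shows the phases summing to the overall warming he cites:

```
# The phase-by-phase temperature changes quoted from the Hadley Centre graph,
# in degrees C; summing them recovers the rise over the 20th century as a whole.
phases = {
    "1900-1920": 0.0,    # no change
    "1920-1940": +0.4,   # first warming phase
    "1940-1975": -0.2,   # slight cooling
    "1975-2000": +0.5,   # sustained recent warming
}
print(round(sum(phases.values()), 1))   # 0.7 degrees C over the century, as Lawson himself states
```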
It gets worse, of course. Carbon dioxide, he tells us, is only the second most important greenhouse gas, after water vapour. Correct, if you don’t worry about how one technically defines ‘greenhouse gas’ (many scientists don’t usually count water vapour that way). And your point is? My point is that we are not directly pumping water vapour into the atmosphere in a way that makes much difference to its atmospheric concentration (although anthropogenic warming will increase evaporation). We are doing that for carbon dioxide. What matters for climate change is not the relative amounts but whether or not there is a steady state – whether we are adding a gas faster than natural processes remove it. Who is being disingenuous?
“It is the published view of the Met Office that it is likely that more than half the warming of recent decades (say 0.3 degrees centigrade out of the overall 0.5 degrees increase between 1975 and 2000) is attributable to man-made sources of greenhouse gases – principally, although by no means exclusively, carbon dioxide”, says Lawson. “But this is highly uncertain, and reputable climate scientists differ sharply over the subject.”
What he means here is that a handful of climate scientists at reputable institutions maintain, against just about all the others everywhere in the world, that the warming is not anthropogenic. ‘Reputable’ scientists differ over almost everything – but when the difference of opinion is in the ratio of 1 to 1000, say, whom would you trust?
And then: “the recent attempt of the Royal Society, of all bodies, to prevent the funding of climate scientists who do not share its alarmist view of the matter is truly shocking.” No, what is truly shocking is that Lawson is so unashamed at distorting the facts. The Royal Society asked ExxonMobil when it intended to honour its promise to stop funding lobby groups who promote disinformation about climate change. There was no suggestion of stopping any funds to scientists.
“Yet another uncertainty derives from the fact that, while the growth in manmade carbon dioxide emissions, and thus carbon dioxide concentrations in the atmosphere, continued relentlessly during the 20th century, the global mean surface temperature, as I have already remarked, increased in fits and starts, for which there is no adequate explanation.” Sounds pretty dodgy – until you hear that there is a perfectly adequate explanation in terms of the effects of sulphate aerosols. Perhaps Lawson doesn’t believe this – that’s his prerogative (although he’s then obliged to say why). But to pretend that this issue has just been swept under the carpet, and lacks any plausible explanation, is utterly dishonest.
But those mendacious climate scientists are denying that past warming such as the Medieval Warm Period ever happened, don’t you know: “A rather different account of the past was given by the so-called “hockey-stick” chart of global temperatures over the past millennium, which purported to show that the earth’s temperature was constant until the industrialisation of the 20th century. Reproduced in its 2001 Report by the supposedly authoritative Intergovernmental Panel on Climate Change, set up under the auspices of the United Nations to advise governments on what is clearly a global issue, the chart featured prominently in (among other publications) the present Government’s 2003 energy white paper. It has now been comprehensively discredited.” No. It has been largely supported (see here and here). And it was never the crux of any argument about whether 20th century climate warming is real. What’s more, it never showed that ‘the earth’s temperature was constant until the industrialisation of the 20th century’; the Medieval Warm Period and the Little Ice Age are both there. As you said, Mr Lawson, we’re talking here about relatively small changes of fractions of a degree. That, indeed, is the whole point: even such apparently small changes are sufficient to make a difference between a ‘warm period’ and a ‘little ice age’.
Phew. I am now on page 3. Excuse me, but I don’t think I have the stamina to wade through a whole book of this stuff. One’s spirit can only withstand a certain amount of falsehood. Admirable? I don’t think so. Imagine if a politician was caught being as dishonest as this. No, hang on a minute, that can’t be right…
I’m moved to write some of this, however, because in the face of such disinformation it becomes crucial to get the facts straight. The situation is not helped, for example, when the Independent says, as it did last Saturday, “The melting of Arctic sea ice could cause global sea levels to rise by more than a metre by the end of the century.” Perhaps there’s some indirect effect here that I’m not aware of; but to my knowledge, melting sea ice has absolutely no effect on sea level. The ice merely displaces the equivalent volume of water. We need to get this stuff right.
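For anyone who wants the displacement point spelt out, here is a minimal numerical sketch of the Archimedes argument (round-number densities; the small density difference between fresh meltwater and seawater, which gives a genuinely tiny effect, is deliberately ignored here):

```
# A floating block of sea ice displaces a mass of water equal to its own mass
# (Archimedes), so the volume of water it pushes aside while afloat equals the
# volume of meltwater it becomes. To first order, melting it changes nothing.
RHO_ICE = 917.0      # kg/m3, typical density of ice
RHO_WATER = 1000.0   # kg/m3, treating meltwater and seawater as the same (an approximation)

ice_volume = 1.0                              # m3 of floating ice
ice_mass = RHO_ICE * ice_volume               # kg
displaced_volume = ice_mass / RHO_WATER       # m3 of water displaced while the ice floats
meltwater_volume = ice_mass / RHO_WATER       # m3 of water produced when it melts

print(displaced_volume, meltwater_volume)     # identical, so no net change in sea level
```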
Friday, March 13, 2009
There’s more to life than sequences
[This is the pre-edited version of my latest Muse for Nature News.]
Shape might be one of the key factors in the function of mysterious ‘non-coding’ DNA.
Everyone knows what DNA looks like. Its double helix decorates countless articles on genetics, has been celebrated in sculpture, and was even engraved on the Golden Record, our message to the cosmos on board the Voyager spacecraft.
The entwined strands, whose form was deduced in 1953 by James Watson and Francis Crick, are admired as much for their beauty as for the light they shed on the mechanism of inheritance: the complementarity between juxtaposed chemical building blocks on the two strands, held together by weak ‘hydrogen’ bonds like a zipper, immediately suggested to Crick and Watson how information encoded in the sequence of blocks could be transmitted to a new strand assembled on the template of an existing one.
With the structure of DNA ‘solved’, genetics switched its focus to the sequence of the four constituent units (called nucleotide bases). By using biotechnological methods to deduce this sequence, geneticists claimed to be ‘reading the book of life’, with the implication that all the information needed to build an organism was held within this abstract linear code.
But beauty has a tendency to inhibit critical thinking. There is now increasing evidence that the molecular structure of DNA is not a delightfully ordered epiphenomenon of its function as a digital data bank but a crucial – and mutable – aspect of the way genomes work. A new study in Science [1] underlines that notion by showing that the precise shape of some genomic DNA has been determined by evolution. In other words, genetics is not simply about sequence, but about structure too.
The standard view – indeed, part of biology’s ‘central dogma’ – is that in its sequence of these four building blocks DNA encodes corresponding sequences of amino-acid units that are strung together to make a protein enzyme, with the protein’s compact folded shape (and thus its function) being uniquely determined by that sequence.
This is basically true enough. Yet as the human genome was unpicked nucleotide base by base, it became clear that most of the DNA doesn’t ‘code for’ proteins at all. Fully 98 percent of the human genome is non-coding. So what does it do?
We don’t really know, except to say that it’s clearly not all ‘junk’, as was once suspected – the detritus of evolution, like obsolete files clogging up a computer. Much of the non-coding DNA evidently has a role in cell function, since mutations (changes in nucleotide sequence) in some of these regions have observable (phenotypic) consequences for the organism. We don’t know, however, how the former leads to the latter.
This is the question that Elliott Margulies of the National Institutes of Health in Bethesda, Maryland, Tom Tullius of Boston University, and their coworkers set out to investigate. According to the standard picture, the function of non-coding regions, whatever it is, should be determined by their sequence. Indeed, one way of identifying important non-coding regions is to look for ones that are sensitive to sequence, with the implication that the sequence has been finely tuned by evolution.
But Margulies and colleagues wondered if the shape of non-coding DNA might also be important. As they point out, DNA isn’t simply a uniform double helix: it can be bent or kinked, and may have a helical pitch of varying width, for example. These differences depend on the sequence, but not in any straightforward manner. Two near-identical sequences can adopt quite different shapes, or two very different sequences can have a similar shape.
The researchers used a chemical method to deduce the relationship between sequence and shape. They then searched for shape similarities between analogous non-coding regions in the genomes of 36 different species. Such similarity implies that the shapes have been selected and preserved by evolution – in other words, that shape, rather than sequence per se, is what is important. They found twice as many evolutionarily constrained (and thus functionally important) parts of the non-coding genome as were evident from trans-species correspondences using only sequence data.
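To make the logic of that comparison concrete, here is a purely schematic sketch. Everything in it – the aligned sequences, the ‘structural profiles’, the similarity measures and the thresholds – is invented for illustration; it is not the study’s data or method, only the shape of the argument: a region can look poorly conserved at the sequence level yet strongly conserved in shape.

```
# Schematic only: compare an aligned non-coding region in two species by
# (a) raw sequence identity and (b) similarity of a per-base structural profile
# (e.g. some measure of backbone exposure). All numbers are made up.
from difflib import SequenceMatcher

def seq_identity(a, b):
    """Crude sequence similarity between two aligned stretches of DNA."""
    return SequenceMatcher(None, a, b).ratio()

def profile_correlation(p, q):
    """Pearson correlation between two structural profiles of equal length."""
    n = len(p)
    mp, mq = sum(p) / n, sum(q) / n
    cov = sum((x - mp) * (y - mq) for x, y in zip(p, q))
    norm = (sum((x - mp) ** 2 for x in p) * sum((y - mq) ** 2 for y in q)) ** 0.5
    return cov / norm if norm else 0.0

seq_a  = "ATGCGTATTACG"                        # hypothetical region in species A
seq_b  = "TTACCTATGCGA"                        # quite different sequence in species B...
prof_a = [0.9, 0.8, 0.2, 0.1, 0.9, 0.8, 0.3, 0.2, 0.9, 0.9, 0.2, 0.1]
prof_b = [0.8, 0.9, 0.1, 0.2, 0.8, 0.9, 0.2, 0.3, 0.9, 0.8, 0.1, 0.2]   # ...but a very similar shape

print(seq_identity(seq_a, seq_b) > 0.8)            # False: not obviously constrained by sequence
print(profile_correlation(prof_a, prof_b) > 0.8)   # True: constrained when judged by shape
```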
So in these non-coding regions, at least, sequence appears to be important only insofar as it specifies a certain molecular shape, and not because of its intrinsic information content – a different sequence with the same shape might do just as well.
That doesn’t answer why shape matters to DNA. But it suggests that we are wrong to imagine that the double helix is the beginning and end of the story.
There are plenty of other good reasons to suspect that is true. For example, DNA can adopt structures quite different from Watson and Crick’s helix, called the B-form. It can, under particular conditions of saltiness or temperature, switch to at least two other double-helical structures, called the A and Z forms. It may also form triple- and quadruple-stranded variants, linked by different types of hydrogen-bonding matches between nucleotides. One such is called Hoogsteen base-pairing.
Biochemist Naoki Sugimoto and colleagues at Konan University in Kobe, Japan, have recently shown that, when DNA in solution is surrounded by large polymer molecules, mimicking the crowded conditions of a real cell, Watson-Crick base pairing seems to be less stable than it is in pure, dilute solution, while Hoogsteen base-pairing, which favours the formation of triple and quadruple helices, becomes more stable [2-4].
The researchers think that this is linked to the way water molecules surround the DNA in a ‘hydration shell’. Hoogsteen pairing demands less water in this shell, and so is promoted when molecular crowding makes water scarce.
Changes to the hydration shell, for example induced by ions, may alter DNA shape in a sequence-dependent manner, perhaps being responsible for the sequence-structure relationships studied by Margulies and his colleagues. After all, says Tullius, the method they use to probe structure is a measure of “the local exposure of the surface of DNA to the solvent.”
The importance of DNA’s water sheath on its structure and function is also revealed in work that uses small synthetic molecules as drugs that bind to DNA and alter its behaviour, perhaps switching certain genes on or off. It is conventionally assumed that these molecules must fit snugly into the screw-like groove of the double helix. But some small molecules seem able to bind and show useful therapeutic activity even without such a fit, apparently because they can exploit water molecules in the hydration shell as ‘bridges’ to the DNA itself [5]. So here there is a subtle and irreducible interplay between sequence, shape and ‘environment’.
Then there are mechanical effects too. Some proteins bend and deform DNA significantly when they dock, making the molecule’s stiffness (and its dependence on sequence) a central factor in that process. And the shape and mechanics of DNA can influence gene function at larger scales. For example, the packaging of DNA and associated proteins into a compact form, called chromatin, in cells can affect whether particular genes are active or not. Special ‘chromatin-remodelling’ enzymes are needed to manipulate its structure and enable processes such as gene expression or DNA repair.
None of this is yet well understood. But it feels reminiscent of the way early work on protein structure in the 1930s and 40s grasped for dimly sensed principles before an understanding of the factors governing shape and function transformed our view of life’s molecular machinery. Are studies like these, then, a hint at some forthcoming insight that will reveal gene sequence to be just one element in the logic of life?
References
1. Parker, S. C. J. et al., Science Express doi:10.1126/science.1169050 (2009). Paper here.
2. Miyoshi, D., Karimata, H. & Sugimoto, N. J. Am. Chem. Soc. 128, 7957-7963 (2006). Paper here.
3. Nakano, S. et al., J. Am. Chem. Soc. 126, 14330-14331 (2004). Paper here.
4. Miyoshi, D. et al., J. Am. Chem. Soc. doi:10.1021/ja805972a (2009). Paper here.
5. Nguyen, B., Neidle, S. & Wilson, W. D. Acc. Chem. Res. 42, 11-21 (2009). Paper here.
Wednesday, March 11, 2009
Who should bear the carbon cost of exports?
[This is the pre-edited version of my latest Muse column for Nature News. (So far it seems only to have elicited outraged comment from some chap who rants against ‘Socialist warming alarmists’, which I suppose says it all.)]
China has become the world’s biggest carbon emitter partly because of its exports. So whose responsibility is that?
There was once a town with a toy factory. Everyone loved the toys, but hated the smell and noise of the factory. ‘That factory boss doesn’t care about us’, they grumbled. ‘He’s getting rich from our pockets, but he should be fined for all the muck he creates.’ Then one entrepreneur decided he could make the same toys without the pollution, using windmills and water filters and so forth. So he did; but they cost twice as much, and no one bought them.
Welcome to the world. Right now, our toy factory is in China. And according to an analysis by Dabo Guan of the University of Cambridge and his colleagues, these exports have helped to turn China into the world’s biggest greenhouse-gas emitting nation [1,2 – papers here and here].
That China now occupies this slot is no surprise; the nation tops the list for most national statistics, simply because it is so big. Its per capita emissions of CO2 are still only about a quarter of those of the USA, and its gasoline consumption per person in 2005 was less than 5 percent of that of Americans (but rising fast).
It’s no shocker either that China’s CO2 emissions have surged since it became an economic superpower. In 1981 it was responsible for 8 percent of the global total; in 2002 this reached 14 percent, and by 2007, 21 percent.
But what is most revealing in the new study is that about half of recent emissions increases from China can be attributed to the boom in exports. Their production now accounts for 6 percent of all global CO2 emissions. This invites the question: who is responsible?
Needless to say, China can hardly throw up its hands and say “Don’t blame us – we’re only giving you rich folks what you want.” After all, the revenues from exports are contributing to the remarkable rise in China’s prosperity.
But equally, it would be hypocritical for Western nations to condemn China for the pollution generated in supplying them with the cheap goods that they no longer care to make themselves. Let’s not forget, though, that China imports a lot too, thereby shifting those carbon costs of production somewhere else.
Part of the problem is that China continues to rely on coal for its energy, which provides 70 percent of the total. Nuclear and renewables supply only 7 percent, and while Chinese energy production has become somewhat more efficient, any gains there are vastly overwhelmed by increased demand.
One response to these figures is that they underline the potential value of a globally agreed carbon tax. In theory, this builds the global-warming cost of a product – whether a computer or an airplane flight – into its price. Worries that this enables producers simply to pass on that cost to the consumer might be valid for the production of essentials such as foods. But much of China’s export growth has been in consumer electronics (which have immense ‘embodied energy’) – exports of Chinese-built televisions increased from 21 million in 2002 to 86 million in 2005. Why shouldn’t consumers feel the environmental cost of luxury items? And won’t the hallowed laws of the marketplace ultimately cut sales and profits for manufacturers who simply raise their prices?
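As a back-of-envelope illustration of the mechanism (the tax rate and the embodied-emissions figure for a television below are entirely hypothetical):

```
# A minimal sketch of how a carbon tax folds the climate cost of a product into
# its price. Both numbers below are invented for illustration only.
CARBON_TAX_PER_KG = 0.05   # currency units per kg of embodied CO2 (hypothetical rate)

def taxed_price(base_price, embodied_kg_co2):
    """Retail price with the embodied carbon cost added on top."""
    return base_price + embodied_kg_co2 * CARBON_TAX_PER_KG

# e.g. a television whose manufacture and shipping embodied ~400 kg of CO2:
print(taxed_price(300.0, 400.0))   # 300 + 400 * 0.05 = 320.0
```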
Some environmentalists are wary of carbon taxes because they fail to guarantee explicit emissions limits. But the main alternative, cap-and-trade, seems to have bigger problems. The idea here is that carbon emitters – nations, industrial sectors, even individual factories or plants – are given a carbon allocation but can exceed it by buying credits off others. That’s the scheme currently adopted in the European Union, and preferred by the Obama administration in the USA.
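For comparison, the cap-and-trade logic sketched in the same spirit (the emitters, caps and credit price are again invented numbers):

```
# Schematic cap-and-trade bookkeeping: each emitter gets an allocation (cap) and
# settles the difference between actual emissions and the cap on a carbon market.
credit_price = 20.0   # currency units per tonne of CO2, set by the market (hypothetical)

emitters = {
    # name: (allocated cap, actual emissions), both in tonnes of CO2 (hypothetical)
    "steel_plant":   (1_000_000, 1_200_000),
    "power_station": (2_000_000, 1_700_000),
}

for name, (cap, actual) in emitters.items():
    excess = actual - cap
    if excess > 0:
        # over the cap: must buy credits from emitters (or auctions) with a surplus
        print(f"{name}: buys {excess:,} credits for {excess * credit_price:,.0f}")
    else:
        # under the cap: can sell the unused allocation
        print(f"{name}: can sell {-excess:,} spare credits for {-excess * credit_price:,.0f}")
```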
The major drawback is that it makes costs of emissions virtually impossible to predict, and susceptible to outside influences such as weather or other economic variables. The result would be a dangerously volatile carbon market, with prices that could soar or plummet (the latter a dream case for polluters). We hardly need any reminder now of the hazards of such market mechanisms.
Both a carbon tax and cap-and-trade schemes arguably offer a ‘fair’ way of sharing the carbon cost of exports (although there may be no transparent way to set the cap levels in the latter). But surely the Chinese picture reinforces the need for a broader view too, in which there is rational self-interest in international collaboration on and sharing of technologies that reduce emissions and increase efficiency. The issue also brings some urgency to debates about the best reward mechanisms for stimulating innovation [3].
These figures also emphasize the underlying dilemma. As Laura Bodey puts it in Richard Powers’ 1998 novel Gain, when she declines with cancer possibly caused by proximity to a chemical plant that has given her all kinds of convenient domestic products: “People want everything. That’s their problem.”
References
1. Guan, D., Peters, G. P., Weber, C. L. & Hubacek, K. Geophys. Res. Lett. 36, L04709 (2009).
2. Weber, C. L., Peters, G. P., Guan, D. & Hubacek, K. Energy Policy 36, 3572-3577 (2008).
3. Meloso, D., Copic, J. & Bossaerts, P. Science 323, 1335-1339 (2009).
Wednesday, March 04, 2009
What does it all mean?
[This is the pre-edited version of my latest Muse for Nature News.]
Science depends on clear terms and definitions – but the world doesn’t always oblige.
What’s wrong with this statement: ‘The acceleration of an object is proportional to the force acting on it’? You might think no one could object to this expression of Newton’s second law. But Nobel laureate physicist Frank Wilczek does. This law, he admits, ‘is the soul of classical mechanics.’ But he adds that, ‘like other souls, it is insubstantial’ [1].
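For reference, the law in its usual modern form, with the mass m as the constant of proportionality, is

$$\mathbf{F} = m\,\mathbf{a}, \qquad \text{or equivalently} \qquad \mathbf{a} = \frac{\mathbf{F}}{m},$$

which is just the statement above written in symbols.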
Bertrand Russell went further. In 1925 he called for the abolition of the concept of force in physics, and claimed that if people learnt to do without it, this ‘would alter not only their physical imagination, but probably also their morals and politics.’ [2]
That seems an awfully heavy burden for a word that most scientists will use unquestioningly. Wilczek does not go as far as Russell, but he agrees that the concept of ‘force’ acquires meaning only through convention – through the culture of physics – and not because it refers to anything objective. He suspects that only ‘intellectual inertia’ accounts for its continued use.
It’s a disconcerting reminder that scientific terminology, supposed to be so precise and robust, is often much more mutable and ambiguous than we think – which makes it prone to misuse, abuse and confusion [3,4]. But why should that be so?
There are, broadly speaking, several potential problems with words in science. Let’s take each in turn.
Misuse
Some scientific words are simply misapplied, often because their definition is ignored in favour of something less precise. Can’t we just stamp out such transgressions? Not necessarily, for science can’t expect to evade the transformations that any language undergoes through changing conventions of usage. When misuse becomes endemic, we must sometimes accept that a word’s definition has changed de facto. ‘Fertility’ now often connotes birth rate, not just in general culture but among demographers. That is simply not its dictionary meaning, but is it now futile to argue against it? Similarly, it is now routine to speak of protein molecules undergoing phase transitions, which they cannot in the strict sense since phase transitions are only defined in systems that can be extrapolated to infinite size. Here, however, the implication is clear, and inventing a new term is arguably unhelpful.
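The ‘strict sense’ here is the thermodynamic limit: a phase transition is conventionally defined as a point where the free energy per particle,

$$f(T) = -k_{\mathrm{B}} T \lim_{N \to \infty} \frac{1}{N} \ln Z_N(T),$$

becomes non-analytic. For any finite number of molecules N the partition function Z_N is a finite sum of smooth terms, so f has no true singularities – and hence, strictly speaking, no phase transitions.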
Perhaps word misuse matters less when it simply alters or broadens meaning – the widespread use of ‘momentarily’ to indicate ‘in a moment’ is wrong and ugly, but it is scarcely disastrous to tolerate it. It’s more problematic when misuse threatens to traduce logic, as for example when the new meaning attached to ‘fertility’ allows the existence of fertile people who have zero fertility.
Everyday words used in science
In 1911 the geologist John W. Gregory, chairman of the British Association for the Advancement of Science, warned of the dangers of appropriating everyday words into science [5]. Worms, elements, rocks – all, he suggested, run the risk of securing ‘specious simplicity at the price of subsequent confusion.’ Interestingly, Gregory also worried about the differing uses of ‘metal’ in chemistry and geology; what would he have said, one wonders, about the redefinition later placed on the term by astronomers (any element heavier than helium), which, whatever the historical justification, shows a deplorable lack of self-discipline? Such Humpty Dumpty-style assertions that a familiar word can mean whatever one chooses are more characteristic of the excesses of postmodern philosophy that scientists often lament.
There are hazards in trying to assign new and precise meanings to old and imprecise terms. Experts in nonlinear dynamics can scarcely complain about misuses of ‘chaos’ when it already had several perfectly good meanings before they came along. On the other hand, by either refusing or failing to provide a definition of everyday words that they appropriate – ‘life’ being a prime victim here – scientists risk breeding confusion. In this regard, science can’t win.
Fuzzy boundaries
When scientific words become fashionable, haziness is an exploitable commodity. One begins to suspect there are few areas of science that cannot be portrayed as complexity or nanotechnology. It recently became popular to assert a fractal nature in almost any convoluted shape, until some researchers eventually began to balk at the term being awarded to structures (like ferns) whose self-similarity barely extends beyond a couple of levels of magnification [6].
Heuristic value
The reasons for Wilczek’s scepticism about force are too subtle to describe here, but they don’t leave him calling for its abolition. He points out that it holds meaning because it fits our intuitions – we feel forces and see their effects, even if we don’t strictly need them theoretically. In short, the concept of force is easy to work with: it has heuristic value.
Science is full of concepts that lack sharp definition or even logic but which help us understand the world. Genes are another. The way things are going, it is possible that one day the notion of a gene may create more confusion than enlightenment [7], but at present it doesn’t seem feasible to understand heredity or evolution without their aid – and there’s nothing better yet on offer.
Chemists have recently got themselves into a funk over the concept of oxidation state [8,9]. Some say it is a meaningless measure of an atom’s character; but the fact remains that oxidation states bring into focus a welter of chemical facts, from balancing equations to understanding chemical colour and crystal structure. One could argue that ‘wrong’ ideas that nonetheless systematize observations are harmful only when they refuse to give way to better ones (pace Aristotelian physics and phlogiston), while teaching science is a matter of finding useful (as opposed to ‘true’) hierarchies of knowledge that organize natural phenomena.
The world doesn’t fit into boxes
We’ve known that for a long time: race and species are terms guaranteed to make biologists groan. Now astronomers fare little better, as the furore over the meaning of ‘planet’ illustrated [10] – a classic example of the tension between word use sanctioned by definition or by convention.
The same applies to ‘meteorite’. According to one, perfectly logical, definition of a meteorite, it is not possible for a meteorite ever to strike the Earth (since it becomes one only after having done so). Certainly, the common rule of thumb that meteors are extraterrestrial bodies that enter the atmosphere but don’t hit the surface, while meteorites do, is not one that planetary scientists will endorse. There is no apparent consensus about what they will endorse, which seems to be a result of trying to define processes on the basis of the objects they involve.
All of this suggests some possible rules of thumb for anyone contemplating a scientific neologism. Don’t invent a new word without really good reason (for example, don’t use it to patch over ignorance). Don’t neglect to check whether one exists already (we don’t want both amphiphilic and amphipathic). Don’t assume you can put an old word to new use. Make the definition transparent, and think carefully about its boundaries. Oh, and try to make it easy to pronounce – not just in Cambridge but in Tokyo too.
References
1. Wilczek, F. Physics Today 57(10), 11-12 (2004).
2. Russell, B. The ABC of Relativity, 5th edn, p.135 (Routledge, London, 1997).
3. Nature 455, 1023-1028 (2008).
4. Parsons, J. & Wand, Y., Nature 455, 1040-1041 (2008).
5. Gregory, J. W. Nature 87, 538-541 (1911).
6. Avnir, D., Biham, O., Lidar, D. & Malcai, O. Science 279, 39-40 (1998).
7. Pearson, H. Nature 441, 398-401 (2006).
8. Raebiger, H., Lany, S. & Zunger, A. Nature 453, 763 (2008).
9. Jansen, M. & Wedig, U. Angew. Chem. Int. Ed. doi:10.1002/anie.200803605.
10. Giles, J. Nature 437, 456-457 (2005).
Friday, February 20, 2009
Catching up
The lack of activity here in the past month or so doesn’t reflect any idleness on my part; rather, frantic preparations in respect of the previous item have left me not a moment free. I now know the East Midlands line and Platform 1b of Derby station rather better than I might have wished. I’m about to head back up that way with a bag full of powdered magnesium, but don’t tell the guard. If the village hall of Matlock Bath doesn’t vanish in a puff of smoke, Paracelsus and his strange world will emerge at the end of next week. There are more details here.
In the meantime, I have been writing some things. There is an article in New Scientist here on using carbon nanotubes for desalination. I used to be a bit sceptical when ‘desalination’ got thrown in as one of the putative applications of nanotechnology; now I’m persuaded that it is a real and exciting possibility.
If you can bear to hear another word about Darwin, my round-up of the crop of books on the great man (but mostly the magisterial new volume by Desmond and Moore), published in the Observer, is here.
Everyone seems to be talking about ‘science and Islam’ – BBC4 has done a series, the World Service is working on another, and I have reviewed two books on the subject in the Sunday Times here. One is Ehsan Masood’s nice little history, which accompanies the BBC series and is as good a primer as one could wish for.
Then there is my monthly column for Prospect here, but you’ll need to be a subscriber to see it. Sorry, I usually post them up here before editing, but there’s no time this month…
There has been a smattering of reviews of my novel thanks to the release of the paperback. The Observer was a bit sniffy (here), but more troublingly, failed to understand the main themes (“the censoring effect of scientific orthodoxy [and] the questionable morals behind scientific research” – makes it sound like a crank’s manifesto). The Telegraph was nicer (here).
And I discovered a very interesting paper questioning the supposedly unique origin of silk technology in ancient China (here). As a committed Sinophile, I find this news arouses mixed feelings – but heck, China has enough innovations to its credit regardless.
Finally, I’m speaking on pattern formation at one or two places in the coming weeks: first at the Words by the Water literary festival in Keswick, Cumbria on 1 March (Patricia Fara, who is speaking before me that morning, has a very nice new history of science coming out soon), then at the Royal Institution on 10 March. This is in connection with my three books on the subject, which are due to start appearing at the start of March, published by OUP.
Friday, January 16, 2009

The Devil’s Doctor on tour
The title of this blog reiterates that which I used for my ‘virtual’ theatre company, in which guise I put on several productions some years ago. One of these was a one-man play about the sixteenth-century alchemist and physician Paracelsus, which turned out to be the precursor to my biography The Devil’s Doctor (Heinemann/Farrar Straus & Giroux, 2006).
Well, now Paracelsus is about to ride again. Whether any of that earlier show will survive remains to be seen, but from the end of January I’ll be attending rehearsals, as a consultant, for a new devised piece created by the wonderful company Shifting Sands, directed by Gerry Flanagan. Anyone who has seen previous shows by Shifting Sands, such as their adaptations of Great Expectations, Romeo and Juliet, or Faust, will know that this should be a riot of visual extravagance, clowning, physical ingenuity and pathos. Just, in fact, what the subject of Paracelsus cries out for, which indeed is why I approached Gerry in the first place to suggest a collaboration. We have generous funding from the Wellcome Trust to develop and perform the piece, and here is where you can see it from the end of February:
Feb 28th Matlock Bath Youth Centre, Derbyshire. 8 pm 01629 55795
March 3rd Arena Theatre Wolverhampton. 01902 321321
March 5th Rose Theatre, Edge Hill University, Ormskirk, Lancs. 01695 584480
March 6th Glasshouse College, Stourbridge. 7.30 pm 01384 399430
March 10th Norden Farm Arts Centre, Maidenhead. 7.30 pm 01628 788997
March 11th Bradon Forest School. Swindon. 7.30pm 01793 770570
March 12th Hamsterley Village Hall, Rural touring Cumbria. 7.30pm 01388 488323
March 13th Kirkoswald Village Hall, Rural touring Cumbria. 7.30pm 01768 898187
March 18th Riverhead Theatre, Louth, Lincs. 7.30 pm 01507 600350
March 19th Great Budworth Village Hall, Cheshire. 7.30 pm 01606 891019
March 20th Gawsworth Village Hall, Cheshire. 7.30 pm 01260 223352
March 21st Square Chapel Arts Centre, Halifax. 7.30 pm 01442 349422
March 23rd Highfields School Matlock. Two shows, morning & afternoon.
March 24th Drill Hall, Lincoln. 7.30 pm 01502 873894
March 31st Dana Centre, Science Museum, London. 7 pm
April 1st South Hill Park Arts Centre, Bracknell. 8 pm 01344 416241
April 17th Borough Theatre, Abergavenny. 7.30pm 01873 850805
April 22nd South Street Arts Centre, Reading.
April 23rd South Street, Reading.
April 24th Christ’s Hospital College, Horsham.
May 2nd Redbridge Drama Centre. 8 pm 0208 504 5451
May 3rd Darwin Suite, Assembly Rooms, Derby.
I would love to add another London date, as I think the Science Museum is going to be heavily booked. (Suggestions welcomed.)
In the course of researching this project, Gerry and I went along to the Wellcome Trust’s centre in Euston Road just before Christmas to watch two earlier biopics of Paracelsus. One was the 1943 film by G. W. Pabst, better known for Pandora’s Box. True to its function as something of a Nazi propaganda movie, this portrayed Paracelsus as a wise sage and hero of the common Volk, unfairly maligned by the authorities but always knowing best. All the same, it has interesting visual moments. The other was something else: a seven-part series made by the UK’s Channel 4 in 1989.
It’s no surprise that, in the early days of Channel 4, the quality of its output varied hugely, and much of it was made on a minimal budget. All the same, seeing this series left me incredulous. For one thing, it beggars belief that someone could have come along and said ‘I have this great idea for a major series. It’s about a Swiss doctor from the Renaissance and how he got caught up in the political turmoil of the age…’, and the commissioners would say ‘Sounds great!’ Nothing like this would ever be entertained for an instant today. But it seemed even more remarkable when I discovered that the script, acting and production are possibly the worst I have ever seen on British television. This would be a candidate for cult status if it weren’t simply so dull. Paracelsus is played by a young man with a hairstyle reminiscent of Kevin Keegan in his heyday. He spends much time gazing into space and straining to make us believe that the Deep and Mystical things he is spouting are Profound. Then we get shots of the Peasants’ War, which consists of half a dozen of the most half-hearted, self-conscious and obviously dirt-cheap rent-a-mob extras I have ever seen outside of Ricky Gervais’s series. They are falling over as other chaps in armour give them delicate blows with wooden swords. The scene is perhaps being filmed on Wimbledon Common. What budget there is has been lavished on (1) hats, and (2) a Star, namely Philip Madoc, who hams as though his life depends on it and who, having presumably signed in blood, is then given at least two different parts, one a crazed old seer and the other some noble or other whose identity I can’t even be bothered to recall. Anything set in this near-medieval period struggles against the spectre of Monty Python and the Holy Grail, but this production positively begs for such comparisons. My favourite scene was the book-burning in Basle, where Paracelsus lugs some unfeasibly immense tome that has painted on the front, in big white Gothic script, ‘Canon of Galen’. His syllabus for teaching at Basle is helpfully pinned up on the wall of the lecture theatre, written in English and in big bold letters that are for some reason in Gaelic script (well, it looks kind of olde – indeed, like the dinner menu at Rivendell). There is seven hours of this stuff. Needless to say, we’ll be shamelessly stealing material from it.
Sunday, December 21, 2008
Nature versus naturoid
[This is my Materials Witness column for the January 2009 issue of Nature Materials.]
Are there metameric devices in the same way that there are metameric colours? The latter are colours that look identical to the eye but have different spectra. Might we make devices that, while made up of different components, perform identically?
Of course we can, you might say. A vacuum tube performs the same function as a semiconductor diode. Clocks can be driven by springs or batteries. But the answer may depend on how much similarity you want. Semiconductor diodes will survive a fall on a hard floor. Battery-operated clocks don’t need winding. And what about something considerably more ambitious, such as an artificial heart?
These thoughts are prompted by a recent article by sociologist Massimo Negrotti of the University of Urbino in Italy (Design Issues 24(4), 26-36; 2008). Negrotti has for several years pondered the question of what, in science and engineering, is commonly called biomimesis, trying to develop a general framework for what this entails and what its limitations might be. His vision is informed less by the usual engineering concern, evident in materials science, to learn from nature and imitate its clever solutions to design problems; rather, Negrotti wants to develop something akin to a philosophy of the artificial, analogous to (but different from) that expounded by Herbert Simon in his 1969 book The Sciences of the Artificial.
To this end, Negrotti has coined the term ‘naturoid’ to describe “all devices that are designed with natural objects in mind, by means of materials and building procedures that differ from those that nature adopts.” A naturoid could be a robot, but also a synthetic-polymer-based enzyme, an artificial-intelligence program, even a simulant of a natural odour. This concept was explored in Negrotti’s 2002 book Naturoids: On the Nature of the Artificial (World Scientific, New Jersey).
Can one say anything useful about a category so broad? That might remain a matter of taste. But Negrotti’s systematic analysis of the issues has the virtue of stripping away some of the illusions and myths that attach to attempts to ‘copy nature’.
It won’t surprise anyone that these attempts will always fall short of perfect mimicry; indeed that is often explicitly not intended. Biomimetic materials are generally imitating just one function of a biological material or structure, such as adhesion or toughness. Negrotti calls this the ‘essential performance’, which itself implies also a selected ‘observation level’ – we might make the comparison solely at the level of bulk mechanical behaviour, irrespective of, say, microstructure or chemical composition.
This inevitably means that the mimicry breaks down at some other observation level, just as colour metamerism can fail depending on the observing conditions (daylight or artificial illumination, say, or different viewing angles).
This reasoning leads Negrotti to conclude that there is no reason to suppose the capacities of naturoids can ever converge on those of the natural models. In particular, the idea that robots and computers will become ever more humanoid in features and function, forecast by some prophets of AI, has no scientific foundation.
Dark matter and DIY genomics
[This is my column for the January 2009 issue of Prospect.]
Physicists’ understandable embarrassment that we don’t know what most of the universe is made of prompts an eagerness, verging on desperation, to identify the missing ingredients. Dark energy – the stuff apparently causing an acceleration of cosmic expansion – is currently a matter of mere speculation, but dark matter, which is thought to comprise around 85 percent of tangible material, is very much on the experimental agenda. This invisible substance is inferred on several grounds, especially that galaxies ought to fall apart without its gravitational influence. The favourite idea is that dark matter consists of unknown fundamental particles that barely interact with visible matter – hence its elusiveness.
One candidate is a particle predicted by theories that invoke extra dimensions of spacetime (beyond the familiar four). So there was much excitement at the recent suggestion that the signature of these particles has been detected in cosmic rays, which are electrically charged particles (mostly protons and electrons) that whiz through all of space. Cosmic rays can be detected when they collide with atoms in the Earth’s atmosphere. Some are probably produced in high-energy astrophysical environments such as supernovae and neutron stars, but their origins are poorly understood.
An international experiment called ATIC, which floats balloon-borne cosmic-ray detectors high over Antarctica, has found an unexpected excess of cosmic-ray electrons with high energies, which might be the debris of collisions between the hypothetical dark-matter particles. That’s the sexy interpretation. They might instead come from more conventional sources, although it’s not then clear whence this excess above the normal cosmic-ray background.
The matter is further complicated by an independent finding, from a detector called Milagro near Los Alamos in New Mexico, that high-energy cosmic-ray protons seem to be concentrated in a couple of bright patches in the sky. It’s not clear if the two results are related, but if the ATIC electrons come from the same source as the Milagro protons, that rules out dark matter, which is expected to produce no such patchiness. On the other hand, no other source is expected to do so either. It’s all very perplexing, but nonetheless a demonstration that cosmic rays, whose energies can exceed those of equivalent particles in Cern’s new Large Hadron Collider, offer an unparalleled natural resource for particle physicists.
*****
A Californian biotech company is promising, within five years, to be able to sequence your entire personal genome while you wait. In under an hour, a doctor could deduce from a swab or blood sample all of your genetic predispositions to disease. At least, that’s the theory.
Pacific Biosciences in Menlo Park has developed a technique for replicating a piece of DNA in a form that contains fluorescent chemical markers attached to each ‘base’, the fundamental building blocks of genes. Each of the four types of base gets a differently coloured marker, and so the DNA sequence – the arrangement of bases along the strand – can be discerned as a string of fairy lights, using a microchip-based light sensor that can image individual molecules.
With a readout rate of about 4.7 bases per second, the method would currently take much longer than an hour to sequence all three billion bases of a human genome. And it is plagued by errors – mistakes about the ‘colour’ of the fluorescent markers – which might wrongly identify as many as one in five of the bases. But these are early days; the basic technology evidently works. The company hopes to start selling commercial products by 2010.
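For what it’s worth, here is the back-of-the-envelope sum behind that statement, as a little Python sketch. The genome size and readout rate are the figures quoted above; the degree of parallelism is my own invented number, there purely to illustrate how the company might hope to close the gap.

# Rough arithmetic only: how long a single reading head would take, and how
# parallelism changes the picture. The one-million-strand figure is a
# hypothetical illustration, not a company specification.
GENOME_BASES = 3e9        # approximate number of bases in a human genome
READ_RATE = 4.7           # bases read per second from one molecule
SECONDS_PER_YEAR = 3.15e7

single_read = GENOME_BASES / READ_RATE
print(f"One molecule at a time: about {single_read / SECONDS_PER_YEAR:.0f} years")

parallel_strands = 1e6    # assumed, for illustration
print(f"With {parallel_strands:.0e} strands read in parallel: "
      f"about {single_read / parallel_strands / 60:.0f} minutes")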
Faster genome sequencing should do wonders for our fundamental understanding of, say, the relationships between species and how these have evolved, or the role of genetic diversity in human populations. There’s no doubt that it would be valuable in medicine too – for example, potential drugs that are currently unusable because of genetically based side-effects in a minority of cases could be rescued by screening that identifies those at risk. But many researchers admit that the notion of a genome-centred ‘personalized medicine’ is easily over-hyped. Not all diseases have a genetic component, and those that do may involve complex, poorly understood interactions of many genes. Worse still, DIY sequencing kits could saddle people with genetic data that they don’t know how to interpret or deal with, as well as running into a legal morass about privacy and disclosure. At this rate, the technology is far ahead of the ethics.
*****
Besides, it is becoming increasingly clear that the programme encoded in genes can be over-ridden: to put it crudely, an organism can ‘disobey’ its genes. There are now many examples of ‘epigenetic’ inheritance, in which phenotypic characteristics (hair colour, say, or susceptibility to certain diseases) can be manifested or suppressed despite a genetic imperative to the contrary (see Prospect May 2008). Commonly, epigenetic inheritance is induced by small strands of RNA, the intermediary between genes and the proteins they encode, which are acquired directly from a parent and can modify the effect of genes in the offspring.
An American team have now shown a new type of such behaviour, in which a rogue gene that can cause sterility in crossbreeds of wild and laboratory-bred fruit flies may be silenced by RNA molecules if the gene is maternally inherited, maintaining fertility in the offspring despite a ‘genetic’ sterility. Most strikingly, this effect may depend on the conditions in which the mothers are reared: warmth boosts the fertility of progeny. It’s not exactly inheritance of acquired characteristics, but is a reminder, amidst the impending Darwin celebrations, of how complicated the story of heredity has now become.
Monday, December 08, 2008

Who knows what ET is thinking?
[My early New Year resolution is to stop giving my Nature colleagues a hard time by forcing them to edit stories that are twice as long as they should be. It won’t stop me writing them that way (so that I can stick them up here), but at least I should do the surgery myself. Here is the initial version of my latest Muse column, before it was given a much-needed shave.]
Attempts to identify the signs of astro-engineering by advanced civilizations aren’t exactly scientific. But it would be sad to rule them out on that score.
“Where is everybody?” Fermi’s famous question about intelligent extraterrestrials still taunts us. Even if the appearance of intelligent life is rare, the vast numbers of Sun-like stars in the Milky Way alone should compensate overwhelmingly, and make it a near certainty that we are not alone. So why does it look that way?
Everyone likes a good Fermi story, but it seems that the origins of the ‘Fermi Paradox’ are true [1]. In the summer of 1950, Fermi was walking to lunch at Los Alamos with Edward Teller, Emil Konopinski and Herbert York. They were discussing a recent spate of UFO reports, and Konopinski recalled a cartoon he had seen in the New Yorker blaming the disappearance of garbage bins from the streets of New York City on extraterrestrials. And so the group fell to debating the feasibility of faster-than-light travel (which Fermi considered quite likely to be found soon). Then they sat down to lunch and spoke of other things.
Suddenly, Fermi piped up, out of the blue, with his question. Everyone knew what he meant, and they laughed. Fermi apparently then did a back-of-the-envelope calculation (his forte) to show that we should have been visited by aliens long ago. Since we haven’t been (nobody mention Erich von Daniken, please), this must mean either that interstellar travel is impossible, or deemed not worthwhile, or that technological civilizations don’t last long.
Fermi’s thinking was formalized and fleshed out in the 1960s by astronomer Frank Drake of Cornell University, whose celebrated equation estimates the probability of extraterrestrial technological civilizations in our galaxy by breaking it down into the product of the various factors involved: the fraction of habitable planets, the number of them on which life appears, and so on.
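For readers who haven’t met it, the Drake equation is simply that product written out. Here it is as a minimal Python sketch; the numbers plugged in below are placeholders of my own choosing, there only to show how the factors multiply up, not estimates anyone has defended.

# N = R* x f_p x n_e x f_l x f_i x f_c x L  (Drake's product of factors)
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Purely illustrative inputs:
N = drake(R_star=7,     # rate of star formation, stars per year
          f_p=0.5,      # fraction of stars with planetary systems
          n_e=2,        # habitable planets per such system
          f_l=0.3,      # fraction of those on which life appears
          f_i=0.01,     # fraction of those developing intelligence
          f_c=0.1,      # fraction of those that become detectable
          L=10_000)     # years a detectable civilization lasts
print(f"N is roughly {N:.0f} with these made-up inputs")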
Meanwhile, the question of extraterrestrial visits was broadened into the problem of whether we can see signs of technological civilizations from afar, for example via radio broadcasts of the sort that are currently sought by the SETI Project, based in Mountain View, California. This raises the issue of whether we would know signs of intelligence if we saw them. The usual assumption is that a civilization aiming to communicate would broadcast some distinctive universal pattern such as an encoding of the mathematical constant pi.
A new angle on that issue is now provided in a preprint [2] by physicist Richard Carrigan of (appropriately enough) the Fermi National Accelerator Laboratory in Batavia, Illinois. He has combed through the data from 250,000 astronomical sources found by the IRAS infrared satellite – which scanned 96 percent of the sky – to look for the signature of solar systems that have been technologically manipulated after a fashion proposed in the 1960s by physicist Freeman Dyson.
Dyson suggested that a sufficiently advanced civilization would baulk at the prospect of its star’s energy being mostly radiated uselessly into space. They could capture it, he said, by breaking up other planets in the solar system into rubble that formed a spherical shell around the star, creating a surface on which the solar energy could be harvested [3].
Can we see a Dyson Sphere from outside? It would be warm, re-radiating some of the star’s energy at a much lower temperature – for a shell with a radius of the Earth’s orbit around a Sun-like star, the temperature should be around 300 K. This would show up as a far-infrared object unlike any other currently known. If Dyson spheres exist in our galaxy, said Dyson, we should be able to see them – and he proposed that we look.
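As a rough check on why such a shell would show up in the infrared, here is a back-of-the-envelope sketch in Python. It assumes an isothermal black-body shell that absorbs the star’s entire output and re-radiates it from its outer surface; the exact temperature depends on the radius and emissivity one assumes, but it comes out at a few hundred kelvin either way – far cooler than any ordinary star, with the emission peaking at infrared wavelengths.

import math

L_SUN = 3.828e26     # luminosity of a Sun-like star, W
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11        # astronomical unit, m

def shell_temperature(radius_m):
    # Energy balance for the outer surface: L = 4 * pi * R^2 * sigma * T^4
    return (L_SUN / (4 * math.pi * radius_m**2 * SIGMA)) ** 0.25

for r_au in (1, 2):
    T = shell_temperature(r_au * AU)
    peak_microns = 2.898e-3 / T * 1e6   # Wien displacement law
    print(f"Shell at {r_au} AU: T ~ {T:.0f} K, "
          f"emission peaking near {peak_microns:.0f} micrometres")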
That’s what Carrigan has done. He reported a preliminary search in 2004 [4], but the new data set is sufficient to spot any Dyson Spheres around sun-like bodies out to 300 parsecs – a volume that encompasses a million such stars. It will probably surprise no one that Carrigan finds no compelling candidates. One complication is that some types of star can mimic a Dyson Sphere, such as those in the late stages of their evolution, when they become surrounded by thick dust clouds. But there are ways to weed these out, for example by looking at the spectral signatures such objects are expected to exhibit. Winnowing out such false positives left just 17 candidate objects, of which most, indeed perhaps all, could be given more conventional interpretations. It’s not quite the same as saying that the results are wholly negative – Carrigan argues that the handful of remaining candidates warrant closer inspection – but there’s currently no reason to suppose that there are indeed Dyson Spheres out there.
Dyson says that he didn’t imagine in 1960 that a search like this would be complicated by so many natural mimics of Dyson Spheres. “I had no idea that the sky would be crawling with millions of natural infrared sources”, he says. “So a search for artificial sources seemed reasonable. But after IRAS scanned the sky and found a huge number of natural sources, a search for artificial sources based on infrared data alone was obviously hopeless.”
All the same, he feels that Carrigan may be rather too stringent in whittling down the list of candidates. Carrigan basically excludes any source that doesn’t radiate energy pretty much like a ‘black body’. “I see no reason to expect that an artificial source should have a Planck [black-body] spectrum”, says Dyson. “The spectrum will depend on many unpredictable factors, such as the paint on the outside of the radiating surface.”
So although he agrees that there is no evidence that any of the IRAS sources is artificial, he says that “I do not agree that there is evidence that all of them are natural. There are many IRAS sources for which there is no evidence either way.”
Yet the obvious question hanging over all of this is: who says advanced extraterrestrials will want to make Dyson Spheres anyway? Dyson’s proposal carries a raft of assumptions about the energy requirements and sources of such a civilization. It seems an enormously hubristic assumption that we can second-guess what beings considerably more technologically advanced than us will choose to do (which, in fairness, was never Dyson’s aim). After all, history shows that we find it hard enough to predict where technology will take us in just a hundred years’ time.
Carrigan concedes that it’s a long shot: “It is hard to predict anything about some other civilization”. But he says that the attraction of looking for the Dyson Sphere signature is that “it is a fairly clean case of an astroengineering project that could be observable.”
Yet the fact is that we know absolutely nothing about civilizations more technologically advanced than ours. In that sense, while it might be fun to speculate about what is physically possible, one might charge that this strays beyond science. The Drake equation has itself been criticized as being unfalsifiable, even a ‘religion’ according to Michael Crichton, the late science-fiction writer.
All that is an old debate. But it might be more accurate to say that what we really have here is an attempt to extract knowledge from ignorance: to apply the trappings of science, such as equations and data sets, to an arena where there is nothing to build on.
There are, however, some conceptual – one might say philosophical – underpinnings to the argument. By assuming that human reasoning and agendas can be extrapolated to extraterrestrials, Dyson was in a sense leaning on the Copernican principle, which assumes that the human situation is representative rather than extraordinary. It has recently been proposed [5,6] that this principle may be put to the experimental test in a different context, to examine whether our cosmic neighbourhood is or is not unusual – whether we are, say, at the centre of a large void, which might provide a prosaic, ‘local’ explanation for the apparent cosmic acceleration that motivates the idea of dark energy.
But the Copernican principle can be considered to have a broader application than merely the geographical. Astrophysicist George Ellis has pointed out how arguments over the apparent fine-tuning of the universe – the fact, for example, that the ratio of the observed to the theoretical ‘vacuum energy’ is the absurdly small 10^-120 rather than the more understandable zero – entail an assumption that our universe should not be ‘extraordinary’. With a sample of one, says Ellis, there is no logical justification for that belief: ‘there simply is no proof the universe is probable’ [7]. He argues that cosmological theories that use the fine-tuning as justification are therefore drawing on philosophical rather than scientific arguments.
It would be wrong to imagine that a question lies beyond the grasp of science just because it seems very remote and difficult – we now have well-motivated accounts of the origins of the moon, the solar system, and the universe itself from just a fraction of a second onward. But when contingency is involved – in the origin of life, say, or some aspects of evolution, or predictions of the future – the dangers of trying to do science in the absence of discriminating evidence are real. It becomes a little like trying to figure out the language of Neanderthals, or the thoughts of Moses.
It is hard to see that a survey like Carrigan’s could ever claim definitive, or even persuasive, proof of a Dyson Sphere; in that sense, the hypothesis that the paper probes might indeed be called ‘unscientific’ in a Popperian sense. And in the end, the Fermi Paradox that motivates it is not a scientific proposition either, because we know precisely nothing about the motives of other civilizations. Astronomer Glen David Brin suggested in 1983, for example, that they might opt to stay hidden from less advanced worlds, like adults speaking softly in a nursery ‘lest they disturb the infant’s extravagant and colourful time of dreaming’ [8]. We simply don’t know if there is a paradox at all.
But how sad it would be to declare out of scientific bounds speculations like Dyson’s, or experimental searches like Carrigan’s. So long as we see them for what they are, efforts to gain a foothold on metaphysical questions are surely a valid part of the playful creativity of the sciences.
References
1. Jones, E. M. Los Alamos National Laboratory report LA-10311-MS (1985).
2. Carrigan, R. http://arxiv.org/abs/0811.2376
3. Dyson, F. J. Science 131, 1667-1668 (1960).
4. Carrigan, R. IAC-04-IAA-1.1.1.06, 55th International Astronautical Congress, Vancouver (2004).
5. Caldwell, R. R. & Stebbins, A. Phys. Rev. Lett. 100, 191302 (2008).
6. Clifton, T., Ferreira, P. G. & Land, K. Phys. Rev. Lett. 101, 131302 (2008).
7. Ellis, G. F. R. http://arxiv.org/abs/0811.3529 (2008).
8. Brin, G. D. Q. J. R. Astr. Soc. 24, 283-309 (1983).
Una poca lettura
For any of you who reads Italian (I am sure there are many), there is a little essay of mine up on the Italian science & culture site Fortepiano here. This is basically the text of the short talk I gave in Turin for the receipt of one of the Lagrange prizes for complexity last April. At least, I hope it is – my Italian is non-existent, I fear. Which is a shame, because the Fortepiano site looks kind of intriguing.
Thursday, November 20, 2008
DIY economics
There I am, performing a bit of rotary sanding on top of a piece of the newspaper that I’d considered disposable (the Business pages of the Guardian), when something catches my eye. Namely, a reference to ‘the science weekly Nature’. What’s all this?
It is an article by the Guardian’s Management Editor Simon Caulkin, explaining why ‘self-interest is bad for the economy’. Needless to say, that’s not quite right, and presumably not quite intended. The economy relies on self-interest. What Caulkin is really saying is that self-interest without restraint or regulation is bad for the economy, especially when it generates the kind of absurd salaries that promote reckless short-termism and erosion of trust (not to mention outright arrogant malpractice). Caulkin rightly points out that Adam Smith never condoned any such unfettered selfishness.
But where does Nature feature in this? Caulkin refers to the recent article by Jean-Philippe Bouchaud that points to some of the shortcomings of conventional economic thinking, based as it is on unproven (indeed, fallacious) axioms. In physics, models that don’t fit with reality are thrown out. “Not so in economics”, says Caulkin, “whose central tenets – rational agents, the invisible hand, efficient markets – derive from economic work done in the 1950s and 1960s”. Bouchaud says that these, in hindsight, look “more like propaganda against communism than plausible science” (did anyone hear Hayek’s name whispered just then?).
Now, the last time I said any such thing (with, I hope, a little more circumspection), I was told by several economists (here and here) that this was a caricature of what economists think, and that I was just making it up. Economists know that markets are often not efficient! They know that agents aren’t always rational (in the economic sense)! Get up to date, man! Look at the recent Nobel prizes!
In fact I had wanted, in my FT article above, to mention the curious paradox that several recent Nobels have been for work decidedly outside the neoclassical paradigm, while much of economics labours doggedly within it. But there was no room. In any event, there is some justification in such responses, if the implication (not my intention) is that all economists still think as they did in the 1950s. These days I am happy to be more irenic, not only because that’s the sort of fellow I am but because it seems to me that thoughtful, progressive economists and those who challenge the neoclassical ‘rational agent’ tradition from outside should be natural allies, not foes, in the fight against the use of debased economic ideas in policy making.
But look, economists: do you think all is really so fine when a journalist paid to comment on the economy (and not just some trumped-up physicist-cum-science writer) not only possesses these views about your discipline but regards it as something of an eye-opener when someone points out in a science journal that the economy is not like this at all? Are you still so complacently sure that you are communicating your penetrating insights about economic markets to the world beyond? Are you so sure that your views are common knowledge not just in academia but to the people who actually run the economy? Maybe you are. But Nobel laureates like Joe Stiglitz and Paul Krugman aren’t.
Thursday, November 06, 2008
What you don’t learn at school about the economy
[Why, you might wonder, would I want to expose myself to more flak from economists by writing this column for Nature’s online news? Well, if any economists get to see it at all, I hope they will recognize that it is not an attack at all but a call to make common cause in driving out simplistic myths from the world of economic policy. I was particularly taken with the article by Fred Argy on the use of economic theory in policy advising, and I guess I am moved to what I hope is the irenic position of recognizing that his statement that ‘economics does not lend itself to doctrinaire policy assertions’ should apply too to criticisms of traditional economic theory. The world is too complex to become dogmatic about this stuff. My goodness, though, that seems unlikely to deter the pundits, from what I have seen.
Below, as usual, is the ‘long’ version of the column…]
When the sophisticated theories of economics get vulgarized into policy-making tools, that spells trouble for us all.
The column inches devoted to the global financial crisis must be running now into miles, and yet one could be forgiven for concluding that we are little the wiser. When one group of several hundred academic economists opposes the US Treasury’s bank bail-out and another equally eminent group supports it, how is the ordinary person supposed to decide about what is the right response to the financial mess we’re in, or indeed what caused it in the first place?
Yet some things seem clear. A largely unregulated market has clearly failed to deliver the optimal behaviour that conventional theories of economic competition promise it should. Most commentators now acknowledge that new types of financial regulation are needed, and the (for want of a better word) liberal pundits are seizing the chance to denounce the alleged superiority of the free market and to question whether Adam Smith’s invisible hand is anything but a libertarian fantasy.
Behind all this, however, is the question of why the free market hasn’t done what it is supposed to. In the New Statesman, economics Nobel laureate Joseph Stiglitz recently offered an answer: “For over a quarter of a century, we have known that Smith’s conclusions do not hold when there is imperfect information – and all markets, especially financial markets, are characterised by information imperfections.” For this reason, Stiglitz concludes, “The reason the invisible hand often seems invisible is that it is not there.”
Now, some might say that Stiglitz would say this, because analysing the effects of imperfect information is what won him his Nobel in 2001. In short, the traditional ‘neoclassical’ microeconomic models developed in the first half of the twentieth century assumed that all agents have ‘perfect’ information: they all know everything about what is being bought and sold.
This assumption makes the models mathematically tractable, not least because it ensures that all agents are identical. The idea is then that these agents use this information to deduce the actions that will maximize their ‘utility’, often synonymous with wealth or profit.
Under such conditions of perfect competition, the self-interested actions of market agents create an optimally efficient market in which asset prices attain their ‘correct’ value and supply matches demand. This is the principle notoriously summarized by Gordon Gekko in the movie Wall Street: greed is good, because it benefits society. That idea harks back to the French philosopher Bernard Mandeville, who argued semi-humorously in 1705 that ‘private vices’ have ‘public benefits.’
Stiglitz and his co-laureates George Akerlof and Michael Spence showed what can go wrong in this tidy picture when (as in the real world) market agents don’t know everything, and some know more than others. With these ‘asymmetries’ of information, the market may then no longer be ‘efficient’ at all, so that for example poor products can crowd out good ones.
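To see how little it takes for the tidy picture to break down, here is a minimal toy simulation (my own construction, in Python, with entirely made-up numbers; it is not drawn from Stiglitz’s or Akerlof’s papers) of the classic ‘market for lemons’: sellers know the quality of each used car, buyers know only the average quality of cars currently on offer, and the market progressively unravels as owners of good cars withdraw them.

import random

# Toy 'market for lemons': adverse selection under asymmetric information.
# Each seller knows their own car's quality; buyers only know the average
# quality of the cars actually put up for sale, and pay accordingly.
random.seed(1)
qualities = [random.uniform(0, 1000) for _ in range(10000)]   # value of each car to its seller
BUYER_PREMIUM = 1.2    # buyers value any given car at 1.2 x its true quality

price = BUYER_PREMIUM * 500      # buyers' opening offer: the premium times the overall average quality
for step in range(15):
    on_sale = [q for q in qualities if q <= price]    # only cars worth less than the price get offered
    if not on_sale:
        print("Market has collapsed: nothing worth selling at the going price.")
        break
    avg_quality = sum(on_sale) / len(on_sale)
    print(f"step {step:2d}: price {price:7.1f}, cars on offer {len(on_sale):5d}, avg quality {avg_quality:6.1f}")
    new_price = BUYER_PREMIUM * avg_quality    # buyers revise their offer to match the average lemon
    if abs(new_price - price) < 1e-9:
        break
    price = new_price

Every buyer and seller here acts ‘rationally’, yet the good cars are driven out and trade shrinks towards nothing; with perfect information the same market would clear happily.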
Now, as Stiglitz says, this has been known for decades. So why isn’t it heeded? “Many of the problems our economy faces”, says Stiglitz, “are the result of the use of misguided models. Unfortunately, too many [economic policy-makers] took the overly simplistic models of courses in the principles of economics (which typically assume perfect information) and assumed they could use them as a basis for economic policy.”
Economist Steve Cohn, now at Knox College in Galesburg, Illinois, echoes this view about the failures of basic economic education: “More than one million students take principles of economics classes annually in the United States. These courses will be the main contact with formal economic theory for most undergraduates and will influence how they think about economic issues. Only a few percent of all students studying introductory microeconomics will likely use a textbook that seriously challenges the neoclassical paradigm.”
And there lies the problem. When people criticize economics for its reliance on traditional models that, while occasionally applicable in some special cases, are just plain wrong in general, the usual response is that this is a mere caricature of the discipline and that of course economists know all about the shortcomings of those models. Look, they say, at the way people like Stiglitz have been rewarded for pointing them out.
Fine. So why are the traditional models still taught as a meaningful first approximation to economics students who may never encounter the caveats before graduating and becoming financiers and policy advisers? Why has free-market fundamentalism become an unqualified act of faith among many pundits and advisers, particularly in the US, as this year’s Nobel laureate Paul Krugman has explained in his 1994 book Peddling Prosperity? Why are models still used that cannot explain crashes and recessions at all?
Robert Hunter Wade of the London School of Economics agrees that the sophistication of academic economics tends to vanish in the real world, despite what its defenders claim. “Go to the journals, they say, and you find a world of great variety and innovation, where some of the best work is done on issues of market failure. And they are right as far as they go. But one should also sample economics as it is applied by people such as World Bank country economists when they advise the government of country X, and as it is hard-wired into World Bank formulas for evaluating countries' policies and institutions. In this second kind of economics the sophistication of the first kind is stripped away to leave only ‘the fundamentals’.”
As Australian policy adviser Fred Argy has pointed out, such economic fundamentalism based on simplistic models commonly leads to dogmatic policy extremism. “We saw in the 1960s and 1970s how the work of John Maynard Keynes was vulgarised by many of his followers, and used to justify the most extreme forms of government intervention. And in the 1980s and 1990s we saw how monetarism, public choice theory and neo-classical economics have been misused by some to justify simplistic small government policies.”
An example of this misdirected thinking can be seen in the way several columnists have announced smugly that it is wrong to describe the current financial panic as ‘irrational’: it is perfectly ‘rational’, they say, for people to be offloading stock before it becomes valueless. That’s true, but it fails to acknowledge that this is not the kind of rationality described in conventional economic models. Rational herding behaviour is called irrational because it is not what the models predict. In other words, there’s something badly wrong with those models.
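For what it’s worth, here is a minimal sketch (my own toy example, in the spirit of Bikhchandani, Hirshleifer and Welch’s information-cascade model, not anything taken from the columnists) of how perfectly Bayes-rational agents can end up herding: once the actions of a few predecessors outweigh any single private signal, it becomes rational to ignore your own information and copy the crowd.

import random

# Toy information cascade: each agent gets a noisy private signal about whether
# an asset is good, sees what everyone before them did, and acts rationally.
# Once the public evidence inferred from earlier actions outweighs any single
# signal (a margin of 2), the agent's own signal cannot tip the posterior, so they herd.
random.seed(7)
P = 0.7                 # probability that a private signal matches the truth
ASSET_IS_GOOD = False   # the asset is in fact worthless
N_AGENTS = 20

inferred_good = inferred_bad = 0    # signal counts that later agents can deduce from actions
for i in range(N_AGENTS):
    signal_good = (random.random() < P) if ASSET_IS_GOOD else (random.random() > P)
    margin = inferred_good - inferred_bad
    if margin >= 2:
        action, herding = "BUY ", True      # public evidence swamps the private signal
    elif margin <= -2:
        action, herding = "PASS", True
    else:
        action, herding = ("BUY " if signal_good else "PASS"), False
        inferred_good += int(signal_good)   # an informative action reveals the private signal
        inferred_bad += int(not signal_good)
    print(f"agent {i:2d}: private signal {'good' if signal_good else 'bad '} -> {action}"
          + ("   (herding)" if herding else ""))

Depending on the luck of the first couple of signals, the whole crowd can rationally settle on buying a worthless asset: individually sensible, collectively disastrous, and invisible to models built around a single representative agent.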
Scientists are used to the need for approximations and simplifications in teaching. But this doesn’t mean that they regard Lamarckism as a useful approximation to the Darwinism that graduate students will learn, or geocentrism as the best system for Astronomy 101.
Sadly, this often becomes an argument about how backward and unscientific economics is. That is not only unhelpful but untrue: a glance at the Nobels reveals (with a few exceptions) the dramatic leaps in understanding and realism that the discipline has made since its origins in misguided analogies with microscopic physics. Knowledgeable economists and critics of traditional economics are on the same side; they need to unite against the use of vulgarized, introductory or plain incorrect models as instruments of policy. After all, as the British economist Joan Robinson, a pioneer in the understanding of imperfect competition, put it, “the purpose of studying economics is to learn how to avoid being deceived by other economists.”
Tuesday, October 21, 2008
Epidemics, tipping points and phase transitions
I just came across this comment in the FT about the kind of social dynamics I discussed in my book Critical Mass.
It’s nicely put, though the spread of ideas/disease/information in epidemiological models can in fact also be described in terms of phase transitions: they’re a far more general concept than is implied by citing just the freezing transition. I also agree that sociologists have important, indeed crucial, things to offer in this area. But Duncan Watts trained as a physicist.
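To make that point concrete, here is a minimal sketch (my own, not anything from the FT piece or from Watts) of the textbook SIR epidemic model, in which the final size of an outbreak jumps from essentially nothing to a finite fraction of the population as the reproduction number R0 crosses 1 – the same kind of threshold behaviour, formally speaking, as a phase transition.

# Textbook SIR model: the epidemic threshold at R0 = 1 behaves like a phase transition.
# Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I (S, I as population fractions)
# and record the fraction ever infected for a range of R0 = beta/gamma.
def final_size(r0, gamma=1.0, dt=0.01, t_max=400.0):
    beta = r0 * gamma
    s, i, t = 1.0 - 1e-4, 1e-4, 0.0          # start with a tiny seed of infection
    while t < t_max and i > 1e-9:
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i, t = s + ds * dt, i + di * dt, t + dt
    return 1.0 - s                            # fraction of the population ever infected

for r0 in (0.5, 0.8, 0.95, 1.0, 1.05, 1.2, 1.5, 2.0, 3.0):
    print(f"R0 = {r0:4.2f}  ->  final outbreak size = {final_size(r0):.3f}")

Below the threshold the ‘order parameter’ (the outbreak size) is effectively zero; above it, it grows continuously from zero – exactly the structure of a continuous phase transition, whether or not one chooses to use that vocabulary.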
Thursday, October 16, 2008

Fractal calligraphy
Everyone got very excited several years ago when some guys claimed that Jackson Pollock’s drip paintings were fractals (R. P. Taylor et al., Nature 399, 422; 1999). That claim has come under scrutiny, but now it seems in any case that, as with everything else in the world, the Chinese were there first long ago. Yuelin Li of Argonne National Laboratory has found evidence of fractality in the calligraphy of Chinese artists dating back many hundreds of years (paper here). In particular, he describes the fractal analysis of a calligraphic letter by Huai Su, one of the legendary figures of Chinese calligraphy (Li calls him a ‘maniac Buddhist monk’, an image I rather enjoyed). Huai Su’s scroll, which hangs in the Shanghai Museum, says “Bitter bamboo shoots and tea? Excellent! Just rush them [over]. Presented by Huai Su.” (See image above: you’ve got to admit, it beats a text message.)
So what, you might be tempted to say? Isn’t this just a chance consequence of the fragmented nature of brush strokes? Apparently not. Li points out that Su seems to have drawn explicit inspiration from natural fractal objects. A conversation with the calligrapher Yan Zhenqing, recorded in 722 CE, goes as follows:
Zhenqing asked: ‘Do you have your own inspiration?’ Su answered: ‘I often marvel at the spectacular summer clouds and imitate it… I also find the cracks in a wall very natural.’ Zhenqing asked: ‘How about water stains of a leaking house?’ Su rose, grabbed Yan’s hands, and exclaimed: ‘I get it!’
‘This conversation’, says Li, ‘has virtually defined the aesthetic standard of Chinese calligraphy thereafter, and ‘house leaking stains’ and ‘wall cracks’ became a gold measure of the skill of a calligrapher and the quality of his work.’
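For anyone wondering how one puts a number on such things: the standard procedure in studies like Taylor’s and Li’s is box counting – lay grids of progressively smaller boxes over the inked image, count how many boxes contain ink, and read the fractal dimension off the slope of a log-log plot. Here is a minimal sketch (my own, using a synthetic Sierpinski-triangle point set as a stand-in for a scanned brushstroke; it is not Li’s code or data):

import math, random

# Box-counting estimate of a fractal dimension, demonstrated on a synthetic
# Sierpinski triangle built by the 'chaos game' (true dimension log3/log2 ~ 1.585).
random.seed(0)
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
x, y = 0.3, 0.3
points = []
for _ in range(200000):
    vx, vy = random.choice(verts)
    x, y = (x + vx) / 2, (y + vy) / 2
    points.append((x, y))

def box_count(pts, eps):
    # Number of eps-sized grid boxes that contain at least one point.
    return len({(int(px / eps), int(py / eps)) for px, py in pts})

# Least-squares slope of log N(eps) against log(1/eps) estimates the dimension.
epsilons = [2.0**-k for k in range(2, 8)]
xs = [math.log(1.0 / e) for e in epsilons]
ys = [math.log(box_count(points, e)) for e in epsilons]
n = len(xs)
slope = (n * sum(a * b for a, b in zip(xs, ys)) - sum(xs) * sum(ys)) / (n * sum(a * a for a in xs) - sum(xs)**2)
print(f"estimated box-counting dimension: {slope:.2f} (exact value ~1.585)")

For a real calligraphic scroll the point set would simply be the inked pixels of a scanned image.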
Monday, October 06, 2008
The drip, drip, drip of environmental change
[You know how I like to give you added value here, which is to say, the full-blown (who said over-blown?) versions of what I write for Nature before the editors judiciously wield their scalpels. In that spirit, here is my latest Muse column.]
Your starter for ten: which of the following can alter the Earth’s climate?
(1) rain in Tibet
(2) sunspots
(3) the Earth’s magnetic field
(4) iron filings
(5) cosmic rays
(6) insects
The answers? They depend on how big an argument you want to have. All have been proposed as agents of climate change. Some of them now look fairly well established as such; others remain controversial; some have been largely discounted.
The point is that it is awfully hard to say. In every case, the perturbations that the phenomena pose to the global environment look minuscule by themselves, but the problem is that when they act over the entire planet, or over geological timescales, or both, the effects can add up. Or they might not.
This issue goes to the heart of the debate over climate change. It’s not hard to imagine that a 10-km meteorite hitting the planet at a speed of several kilometres per second, as one seems to have done at the end of the Cretaceous period 65 million years ago, might have consequences of global significance. But tiny influences in the geo-, bio-, hydro- and atmospheres that trigger dramatic environmental shifts [see Box] – the dripping taps that eventually flood the building – are not only hard for the general public to grasp. They’re also tough for scientists to evaluate, or even to spot in the first place.
Even now one can find climate sceptics ridiculing the notion that a harmless, invisible gas at a concentration of a few hundred parts per million in the atmosphere can bring about potentially catastrophic changes in climate. It just seems to defy intuitive notions of cause and effect.
Two recent papers now propose new ‘trickle effects’ connected with climate change that are subtle, far from obvious, and hard to assess. Both bear on atmospheric levels of the greenhouse gas carbon dioxide: one suggests that these may shift with changes in the strength of the Earth’s magnetic field [1], the other that they may alter the ambient noisiness of the oceans [2].
Noise? What can a trace gas have to do with that? Peter Brewer and his colleagues at the Monterey Bay Aquarium Research Institute in Moss Landing, California, point out [2] that the transmission of low-frequency sound in seawater has been shown to depend on the water’s pH: at around 1 kHz (a little above a soprano’s range), greater acidity reduces sound absorption. And as atmospheric CO2 increases, more of it is absorbed in the oceans and seawater becomes more acidic through the formation of carbonic acid.
This effect of acidity on sound seems bizarre at first encounter. But it seems unlikely to have anything to do with water per se. Rather, chemical equilibria involving dissolved borate, carbonate and bicarbonate ions are apparently involved: certain groups of these ions appear to have vibrations at acoustic frequencies, causing resonant absorption.
If this sounds vague, sadly that’s how it is. Such ‘explanations’ as exist so far seem almost scandalously sketchy. But the effect itself is well documented, including the pH-dependence that follows from the way acids or alkalis tip the balance of these acid-base processes. Brewer and colleagues use these earlier measurements to calculate how current and future changes in absorbed CO2 in the oceans will alter the sound absorption at different depths. They say that this has probably decreased by more than 12 percent already relative to pre-industrial levels, and that low-frequency sound might travel up to 70 percent further by 2050.
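To get a feel for the numbers, here is a rough back-of-envelope sketch using Ainslie and McColm’s simplified 1998 formula for seawater sound absorption, in which the boric-acid term that dominates around 1 kHz scales as exp((pH − 8)/0.56). The choice of formula and the pH values below are my own illustrative assumptions, not necessarily those used by Brewer’s team.

import math

# Approximate seawater sound absorption (dB/km) after Ainslie & McColm (1998):
# f in kHz, temperature T in deg C, salinity S, depth z in km. The first (boric
# acid) term carries the pH dependence and dominates at around 1 kHz.
def absorption_db_per_km(f, pH, T=10.0, S=35.0, z=0.0):
    fsq = f * f
    f1 = 0.78 * math.sqrt(S / 35.0) * math.exp(T / 26.0)    # boric-acid relaxation frequency
    f2 = 42.0 * math.exp(T / 17.0)                          # magnesium-sulphate relaxation frequency
    boric = 0.106 * f1 * fsq / (fsq + f1 * f1) * math.exp((pH - 8.0) / 0.56)
    mgso4 = 0.52 * (1 + T / 43.0) * (S / 35.0) * f2 * fsq / (fsq + f2 * f2) * math.exp(-z / 6.0)
    water = 0.00049 * fsq * math.exp(-(T / 27.0 + z / 17.0))
    return boric + mgso4 + water

for label, pH in [("pre-industrial surface water (pH ~8.16)", 8.16),
                  ("present day (pH ~8.06)", 8.06),
                  ("illustrative mid-century value (pH ~7.91)", 7.91)]:
    print(f"{label}: {absorption_db_per_km(1.0, pH):.4f} dB/km")

In this crude estimate a drop of about 0.1 pH units already cuts the 1 kHz absorption by roughly 15 percent, at least in the same ballpark as the figure quoted above.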
And indeed, low-frequency ambient noise has been found to be 9 dB higher off the Californian coast than it was in the 1960s, not all of which can be explained by increased human activity. How such changes might affect marine mammals that use long-distance acoustic communication is the question left hanging.
Uptake of atmospheric carbon dioxide by the oceans is also central to the proposal by Alexander Pazur and Michael Winklhofer of the University of Munich [1] that changes in the Earth’s magnetic field could affect climate. They claim that in a magnetic field at 40 percent of its present geomagnetic strength, the solubility of carbon dioxide is 30 percent lower.
They use this to estimate that a mere 1 percent reduction in geomagnetic field strength can release ten times more CO2 than all currently emitted from subsea volcanism. Admittedly, they say, this effect is tiny compared with present inputs from human activities; but it would change the concentration by 1 part per million per decade, and could add up to a non-negligible effect over long enough times.
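As a quick sanity check on ‘tiny compared with present inputs’, here is a rough scale comparison using round numbers of my own (about 2.1 gigatonnes of carbon per ppm of atmospheric CO2, and fossil-fuel emissions of order 8-9 GtC per year in the late 2000s):

# Rough scale comparison for the claimed geomagnetic effect (illustrative figures only).
GTC_PER_PPM = 2.13            # approx. gigatonnes of carbon per ppm of atmospheric CO2
HUMAN_EMISSIONS_GTC_YR = 8.5  # approx. fossil-fuel + cement emissions, late-2000s ballpark

geomagnetic_gtc_yr = 1.0 * GTC_PER_PPM / 10.0   # 1 ppm per decade, expressed per year
print(f"claimed geomagnetic effect: ~{geomagnetic_gtc_yr:.2f} GtC/yr")
print(f"human emissions:            ~{HUMAN_EMISSIONS_GTC_YR:.1f} GtC/yr")
print(f"ratio:                      ~{100 * geomagnetic_gtc_yr / HUMAN_EMISSIONS_GTC_YR:.0f} percent")

A couple of percent of the human flux per year: negligible on a decadal view, though not obviously so over geological stretches of a weakening field.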
This isn’t the first suggested link between climate and geomagnetism. It has been proposed, for example, that growing or shrinking ice sheets could alter the Earth’s rotation rate and thus trigger changes in core circulation that drives the geodynamo. And the geomagnetic field also affects the influx of cosmic rays at the magnetic poles, whose collisions ionize molecules in the atmosphere which can then seed the formation of airborne particles. These in turn might nucleate cloud droplets, changing the Earth’s albedo.
Indeed, once you start to think about it, possible links and interactions of this sort seem endless. How to know which are worth pursuing? The effect claimed by Pazur and Winklhofer does seem a trifle hard to credit, although mercifully they are not suggesting any mysterious magnetically induced changes of ‘water structure’ – a favourite fantasy of those who insist on the powers of magnets to heal bodies or soften water. Rather, they offer the plausible hypothesis that the influence acts via ions adsorbed on the surfaces of tiny bubbles of dissolved gas. But there are good arguments why such effects seem unlikely to be significant at such weak field strengths [3]. Moreover, the researchers measure the solubility changes indirectly, via the effect of tiny bubbles on light scattering – but bubble size and coalescence are themselves sensitive to dissolved salt in complicated ways [4]. In any event, the effect vanishes in pure water.
So the idea needs much more thorough study before one can say much about its validity. But the broader issue is that it is distressingly hard to anticipate these effects – merely to think of them in the first place, let alone to estimate their importance. Climate scientists have been saying pretty much that for decades: feedbacks in the biogeochemical cycles that influence climate are a devil to discern and probe, which is why the job of forecasting future change is so fraught with uncertainty.
And of course every well-motivated proposal of some subtle modifier of global change – such as cosmic rays – tends to be commandeered to spread doubt about whether global warming is caused by humans, or is happening at all, or whether scientists have the slightest notion of what is going on (and therefore whether we can trust their ‘consensus’).
Perhaps this is a good reason to embrace the metaphor of ‘planetary physiology’ proposed by James Lovelock. We are all used to the idea that minute quantities of chemical agents, or small but persistent outside influences, can produce all kinds of surprising, nonlinear and non-intuitive transformations in our bodies. One doesn’t have to buy into the arid debate about whether or not our planet is ‘alive’; maybe we need only reckon that it might as well be.
References
1. Pazur, A. & Winklhofer, M. Geophys. Res. Lett. 35, L16710 (2008).
2. Hester, K. C. et al. Geophys. Res. Lett. 35, L19601 (2008).
3. Kitazawa, K. et al. Physica B 294, 709-714 (2001).
4. Craig, V. S. J., Ninham, B. W. & Pashley, R. M. Nature 364, 317-319 (1993).
Box: Easy to miss?
Iron fertilization
Oceanographer John Martin suggested in the 1980s that atmospheric CO2 might depend on the amount of iron in the oceans (Martin, J. H. Paleoceanography 5, 1–13; 1990). Iron is an essential nutrient for phytoplankton, which absorb and fix carbon in their tissues as they grow, drawing carbon dioxide out of the atmosphere. Martin’s hypothesis was that plankton growth could be stimulated, reducing CO2 levels, by dumping iron into key parts of the world oceans.
But whether the idea will work as a way of mitigating global warming depends on a host of factors, such as whether plankton growth really is limited by the availability of iron and how quickly the fixed carbon gets recycled through the oceans and atmosphere. In the natural climate system, the iron fertilization hypothesis suggests some complex feedbacks: for example, much oceanic iron comes from windborne dust, which might be more mobilized in a drier world.
Cenozoic uplift of the Himalayas
About 40-50 million years ago, the Indian subcontinent began to collide with Asia, pushing up the crust to form the Himalayas and the Tibetan plateau. A period of global cooling began at about the same time. Coincidence? Perhaps not, according to geologists Maureen Raymo and William Ruddiman and their collaborators (Raymo, M. E. & Ruddiman, W. F. Nature 359, 117-122; 1992). The mountain range and high ground may have intensified monsoon rainfall, and the uplift exposed fresh rock to the downpour, where it underwent ‘chemical weathering’, a process that converts silicate minerals to carbonates. This consumes carbon dioxide from the atmosphere, cooling the climate.
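In its most idealized form this weathering step is often written as the ‘Urey reaction’ for a generic calcium silicate, CaSiO3 + CO2 → CaCO3 + SiO2, with the carbonate eventually locked away in marine sediments; that is of course a schematic shorthand rather than the real mineralogy of Himalayan rocks.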
A proof must negotiate many links in the chain of reasoning. Was weathering really more extensive then? And the monsoon more intense? How might the growth of mountain glaciers affect erosion and weathering? How do changes in dissolved minerals washed into the sea interact with CO2-dependent ocean acidity to affect the relevant biogeochemical cycles? The details are still debated.
Plant growth, carbon dioxide and the hydrological cycle
How changes in atmospheric CO2 levels will affect plant growth has been one of the most contentious issues in climate modelling. Will plants grow faster when they have more carbon dioxide available for photosynthesis, thus providing a negative feedback on climate? That’s still unclear. A separate issue has been explored by Ian Woodward at Cambridge University, who reported that plants have fewer stomata – pores that open and close to let in atmospheric CO2 – in their leaves when CO2 levels are greater (Woodward, F. I. Nature 327, 617-618; 1987). They simply don’t need so many portals when the gas is plentiful. The relationship is robust enough for stomatal density of fossil plants to be used as a proxy for ancient CO2 levels.
But stomata are also the leaks through which water vapour escapes from plants in a process called transpiration. This is a vital part of the hydrological cycle, the movement of water between the atmosphere, oceans and ground. So fewer stomata means that plants take up and evaporate less water from the earth, making the local climate less moist and producing knock-on effects such as greater runoff and increased erosion.
Ozone depletion
They sounded so good, didn’t they? Chlorofluorocarbons are gases that seemed chemically inert and therefore unlikely to harm us or the environment when used as the coolants in refrigerators from the early twentieth century. So what if the occasional whiff of CFCs leaked into the atmosphere when fridges were dumped? – the quantities would be tiny.
But their very inertness meant that they could accumulate in the air. And when exposed to harsh ultraviolet rays in the upper atmosphere, the molecules could break apart into reactive chlorine free radicals, which react with and destroy the stratospheric ozone that protects the Earth’s surface from the worst of the Sun’s harmful UV rays. This danger wasn’t seen until 1974, when it was pointed out by chemists Mario Molina and Sherwood Rowland (Molina, M. J. & Rowland, F. S. Nature 249, 810-812; 1974).
Even then, when ozone depletion was first observed in the Antarctic atmosphere in the 1980s, it was put down to instrumental error. Not until 1985 did the observations become impossible to ignore: CFCs were destroying the ozone layer (Farman, J. C., Gardiner, B. G. & Shanklin, J. D. Nature 315, 207-209; 1985). The process was confined to the Antarctic (and later the Arctic) because it required the ice particles of polar stratospheric clouds to keep chlorine in an ‘active’, ozone-destroying form.
CFCs are also potent greenhouse gases; and changes in global climate might alter the distribution and formation of the polar atmospheric vortices and stratospheric clouds on which ozone depletion depends. So the feedbacks between ozone depletion and global warming are subtle and hard to untangle.