Physics, ultimate reality, and an awful lot of money
[A dramatically truncated version of this comment appears in the Diary section of the latest issue of Prospect.]
If you’re a non-believer, it’s easy to mock or even despise efforts to bridge science and religion. But you don’t need to be Richard Dawkins to sense that there’s an imbalance in these often well-meaning initiatives: science has no need of religion in its quest to understand the universe (the relevance to scientific ethics might be more open to debate), whereas religion appears sometimes to crave the intellectual force of science’s rigour. And since it seems hard to imagine how science could ever supply supporting evidence for religion (as opposed to simply unearthing new mysteries), mustn’t any contribution it might make to the logical basis of belief be inevitably negative?
That doesn’t stop people from trying to build bridges, and nor should it. Yet overtures from the religious side are often seen as attempts to sneak doctrine into places where it has no business: witness the controversy over the Royal Society hosting talks and events sponsored by the Templeton Foundation. Philosopher A. C. Grayling recently denounced as scandalous the willingness of the Royal Society to offer a launching pad for a new book exploring the views of one of its Fellows, the Christian minister and physicist John Polkinghorne, on the interactions of science and religion.
The US-based Templeton Foundation has been in the middle of some of the loudest recent controversies about religion and science. Created by ‘global investor and philanthropist’ Sir John Templeton, it professes to ‘serve as a philanthropic catalyst for discovery in areas engaging life’s biggest questions, ranging from explorations into the laws of nature and the universe to questions on the nature of love, gratitude, forgiveness, and creativity.’ For some sceptics, this simply means promoting religion, particularly Christianity, from a seemingly bottomless funding barrel. Templeton himself, a relatively liberal Christian by US standards and a supporter of inter-faith initiatives, once claimed that ‘scientific revelations may be a gold mine for revitalizing religion in the 21st century’. That’s precisely what makes many scientists nervous.
The Templeton Foundation awards an annual prize of £1 million to ‘outstanding individuals who have devoted their talents to those aspects of human experience that, even in an age of astonishing scientific advance, remain beyond the reach of scientific explanation.’ This is the world’s largest annual award given to an individual – bigger than a Nobel. And scientists have been prominent among the recipients, especially in recent years: they include cosmologist John Barrow, physicist Freeman Dyson, physics Nobel laureate Charles H. Townes, physicist Paul Davies – and Polkinghorne. That helps to explain why the Royal Society has previously been ready to host the prize’s ceremonials.
I must declare an interest here, because I have taken part in a meeting funded by the Templeton Foundation. In 2005 it convened a gathering of scientists to consider the question of whether water seems ‘fine-tuned’ to support the existence of life. This was an offshoot of an earlier symposium that investigated the broader question of ‘fine tuning’ in the laws of physics, a topic now very much in vogue thanks to recent discoveries in cosmology. That first meeting considered how the basic constants of nature seem to be finely poised to an absurd degree: just a tiny change would seem to make the universe uninhabitable. (The discovery in the 1990s of the acceleration of the expanding universe, currently attributed to a mysterious dark energy, makes the cosmos seem even more improbable than before.) This is a genuine and deep mystery, and at present there is no convincing explanation for it. The issue of water is different, as we concluded at the 2005 meeting: there is no compelling argument for it being a unique solvent for life, or for it being especially fine-tuned even if it were. More pertinently here, this meeting had first-rate speakers and a sound scientific rationale, and even somewhat wary attendees like me detected no hidden agenda beyond an exploration of the issues. If Templeton money is to be used for events like that, I have no problem with it. And it was rather disturbing, even shameful, to find that at least one reputable university press subsequently shied away from publishing the meeting proceedings (soon to be published by Taylor & Francis) not on any scientific grounds but because of worries about Templeton involvement.
So while I worry about the immodesty of the Templeton Prize, I don’t side with those who consider it basically a bribe to attract good scientists to a disreputable cause. All the same, there is something curious going on. Five of the seven most recent winners have been scientists, and all are listed in the Physics and Cosmology Group of the Center for Theology and the Natural Sciences (CTNS), affiliated to the Graduate Theological Union, an inter-faith centre in Berkeley, California. This includes the latest winner, announced on Monday: French physicist Bernard d’Espagnat, ‘whose explorations of the philosophical implications of quantum physics have’ (according to the prize announcement) ‘cast new light on the definition of reality and the potential limits of knowable science.’ D’Espagnat has suggested ‘the possibility that the things we observe may be tentatively interpreted as signs providing us with some perhaps not entirely misleading glimpses of a higher reality and, therefore, that higher forms of spirituality are fully compatible with what seems to emerge from contemporary physics.’ (See more here and here.) Others might consider this an unnecessary addendum to modern quantum theory, not so far removed from the vague and post hoc analogies of Fritjof Capra’s The Tao of Physics (which was very much a product of its time).
But why this preference for CTNS affiliates? Perhaps it simply means that the people interested in this stuff are a rather small group who are almost bound to get co-opted onto any body with similar interests. Or you might want to view it as an indication that the fastest way to make a million is to join the CTNS’s Physics and Cosmology group. More striking, though, is the fact that all these chaps (I’m afraid so) are physicists of some description. That, it appears, is pretty much the only branch of the natural sciences either willing or able to engage in matters of faith. Of course, American biologists have been given more than enough reason to flee any hint of religiosity; but that alone doesn’t quite seem sufficient to explain this skewed representation of the sciences. I have some ideas about that… but another time.
Wednesday, March 18, 2009
Nature’s Patterns
The first volume (Shapes) of my trilogy on pattern formation, Nature’s Patterns (OUP), is now out. Sort of. In any event, it should be in the shops soon. Nearly all the hiccups with the figures got ironed out in the end (thank you, Chantal, for your patience) – there are one or two things to put right in the reprints/paperback. Sorry, I tried my best. The second and third volumes (Flow and Branches) are not officially available until (I believe) July and September respectively. But if you talk to OUP sweetly enough, you might get lucky. Better still, they should be on sale at talks, such as the one I’m scheduled to give at the Cheltenham Science Festival on 4 June (8 pm). Maybe see you there.
The right honourable Nigel Lawson
At a university talk I gave recently, a member of the department suggested that I might look at Nigel Lawson’s book An Appeal to Reason: A Cool Look at Climate Change. It’s not that Lawson is necessarily right to be sceptical about climate change and the need to mitigate (rather than adapt to) it, he said. It’s simply that you have to admire the way he makes his case, with the tenacity and rhetorical flair characteristic of his lawyer’s training.
And as chance would have it, I soon thereafter came across some pages of Lawson’s 2006 essay from which the book sprang: ‘The Economics and Politics of Climate Change: An Appeal to Reason’, published by the right-wing think-tank the Centre for Policy Studies. (My daughter was drawing on the other side.) And I was reminded why I doubted that there was indeed very much to admire in Lawson’s methodology. There seems nothing admirable in a bunch of lies; anyone can make nonsense sound correct and reasonable if they are prepared to tell enough bare-faced fibs.
For example, Lawson quotes the Met Office’s Hadley Centre for Climate Prediction and Research:
“Although there is considerable year-to-year variability in annual-mean global temperature, an upward trend can be clearly seen; firstly over the period from about 1920-1940, with little change or a small cooling from 1940-1975, followed by a sustained rise over the last three decades since then.”
He goes on to say: “This last part is a trifle disingenuous, since what the graph actually shows is that the sustained rise took place entirely during the last quarter of the last century.” No. The quote from the Hadley Centre says it exactly as it is, and Lawson’s comment is totally consistent with that. There is nothing disingenuous. Indeed, Lawson goes on to say
“The Hadley Centre graph shows that, for the first phase, from 1920 to 1940, the increase was 0.4 degrees centigrade. From 1940 to 1975 there was a cooling of about 0.2 degrees… Finally, since 1975 there has been a further warming of about 0.5 degrees, making a total increase of some 0.7 degrees over the 20th century as a whole (from 1900 to 1920 there was no change).”
Right. And that is what they said. Lawson has cast aspersions on grounds that are transparently specious. Am I meant to admire this?
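Lawson’s own numbers, in fact, add up exactly as the Hadley Centre said they do. A quick sanity check (a minimal sketch in Python, using only the figures quoted above):

```python
# The warming phases Lawson himself cites (degrees centigrade).
phases = {
    "1900-1920": 0.0,   # no change
    "1920-1940": +0.4,  # first warming phase
    "1940-1975": -0.2,  # slight cooling
    "1975-2000": +0.5,  # sustained recent warming
}

net = sum(phases.values())
print(f"Net 20th-century change: {net:+.1f} C")  # +0.7 C, matching his own total
```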
It gets worse, of course. Carbon dioxide, he tells us, is only the second most important greenhouse gas, after water vapour. Correct, if you don’t worry about how one technically defines ‘greenhouse gas’ (many scientists don’t usually regard water vapour that way). And your point is? My point is that we are not directly pumping water vapour into the atmosphere in a way that makes much difference to its atmospheric concentration (although anthropogenic warming will increase evaporation). We are doing that for carbon dioxide. What matters for climate change is not the amounts, but whether or not there’s a steady state. Who is being disingenuous?
“It is the published view of the Met Office that it is likely that more than half the warming of recent decades (say 0.3 degrees centigrade out of the overall 0.5 degrees increase between 1975 and 2000) is attributable to man-made sources of greenhouse gases – principally, although by no means exclusively, carbon dioxide”, says Lawson. “But this is highly uncertain, and reputable climate scientists differ sharply over the subject.”
What he means here is that a handful of climate scientists at professional institutions disagree with just about all the others everywhere in the world in maintaining that the warming is not anthropogenic. ‘Reputable’ scientists differ over almost everything – but when the difference is in the ratio of 1 to 1000, say, who would you trust?
And then: “the recent attempt of the Royal Society, of all bodies, to prevent the funding of climate scientists who do not share its alarmist view of the matter is truly shocking.” No, what is truly shocking is that Lawson is so unashamed at distorting the facts. The Royal Society asked ExxonMobil when it intended to honour its promise to stop funding lobby groups who promote disinformation about climate change. There was no suggestion of stopping any funds to scientists.
“Yet another uncertainty derives from the fact that, while the growth in manmade carbon dioxide emissions, and thus carbon dioxide concentrations in the atmosphere, continued relentlessly during the 20th century, the global mean surface temperature, as I have already remarked, increased in fits and starts, for which there is no adequate explanation.” Sounds pretty dodgy – until you hear that there is a perfectly adequate explanation in terms of the effects of sulphate aerosols. Perhaps Lawson doesn’t believe this – that’s his prerogative (although he’s then obliged to say why). But to pretend that this issue has just been swept under the carpet, and lacks any plausible explanation, is utterly dishonest.
But those mendacious climate scientists are denying that past warming such as the Medieval Warm Period ever happened, don’t you know: “A rather different account of the past was given by the so-called “hockey-stick” chart of global temperatures over the past millennium, which purported to show that the earth’s temperature was constant until the industrialisation of the 20th century. Reproduced in its 2001 Report by the supposedly authoritative Intergovernmental Panel on Climate Change, set up under the auspices of the United Nations to advise governments on what is clearly a global issue, the chart featured prominently in (among other publications) the present Government’s 2003 energy white paper. It has now been comprehensively discredited.” No. It has been largely supported (see here and here). And it was never the crux of any argument about whether 20th century climate warming is real. What’s more, it never showed that ‘the earth’s temperature was constant until the industrialisation of the 20th century’; the Medieval Warm Period and the Little Ice Age are both there. As you said, Mr Lawson, we’re talking here about relatively small changes of fractions of a degree. That, indeed, is the whole point: even such apparently small changes are sufficient to make a difference between a ‘warm period’ and a ‘little ice age’.
Phew. I am now on page 3. Excuse me, but I don’t think I have the stamina to wade through a whole book of this stuff. One’s spirit can only withstand a certain amount of falsehood. Admirable? I don’t think so. Imagine if a politician was caught being as dishonest as this. No, hang on a minute, that can’t be right…
I’m moved to write some of this, however, because in the face of such disinformation it becomes crucial to get the facts straight. The situation is not helped, for example, when the Independent says, as it did last Saturday, “The melting of Arctic sea ice could cause global sea levels to rise by more than a metre by the end of the century.” Perhaps there’s some indirect effect here that I’m not aware of; but to my knowledge, melting sea ice has absolutely no effect on sea level. The ice merely displaces the equivalent volume of water. We need to get this stuff right.
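The underlying argument is just Archimedes’ principle: floating ice already displaces its own weight of water, so melting it adds no net volume. A minimal sketch (round textbook densities; it also ignores the small second-order effect of fresh meltwater mixing into denser salt water):

```python
# Archimedes: a floating body displaces its own weight of water.
RHO_WATER = 1000.0  # kg/m3 (treating sea water as plain water for simplicity)

ice_mass = 1.0e6  # kg of floating sea ice (arbitrary)

# Volume of water the floating ice pushes aside while afloat:
displaced_volume = ice_mass / RHO_WATER

# Volume of water that same mass becomes once melted:
melt_volume = ice_mass / RHO_WATER

print(displaced_volume == melt_volume)  # True: no net change in sea level
```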
Friday, March 13, 2009
There’s more to life than sequences
[This is the pre-edited version of my latest Muse for Nature News.]
Shape might be one of the key factors in the function of mysterious ‘non-coding’ DNA.
Everyone knows what DNA looks like. Its double helix decorates countless articles on genetics, has been celebrated in sculpture, and was even engraved on the Golden Record, our message to the cosmos on board the Voyager spacecraft.
The entwined strands, whose form was deduced in 1953 by James Watson and Francis Crick, are admired as much for their beauty as for the light they shed on the mechanism of inheritance: the complementarity between juxtaposed chemical building blocks on the two strands, held together by weak ‘hydrogen’ bonds like a zipper, immediately suggested to Crick and Watson how information encoded in the sequence of blocks could be transmitted to a new strand assembled on the template of an existing one.
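The copying logic itself is almost trivially simple, which is part of the structure’s appeal. Here is a toy sketch (not, of course, a model of the real enzymatic machinery):

```python
# Watson-Crick complementarity: each base pairs with exactly one partner.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def new_strand(template: str) -> str:
    """Assemble the strand dictated by a template sequence."""
    return "".join(COMPLEMENT[base] for base in template)

template = "ATGCGTA"
copy = new_strand(template)
print(copy)                          # TACGCAT
print(new_strand(copy) == template)  # True: copying the copy restores the original
```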
With the structure of DNA ‘solved’, genetics switched its focus to the sequence of the four constituent units (called nucleotide bases). By using biotechnological methods to deduce this sequence, geneticists claimed to be ‘reading the book of life’, with the implication that all the information needed to build an organism was held within this abstract linear code.
But beauty has a tendency to inhibit critical thinking. There is now increasing evidence that the molecular structure of DNA is not a delightfully ordered epiphenomenon of its function as a digital data bank but a crucial – and mutable – aspect of the way genomes work. A new study in Science [1] underlines that notion by showing that the precise shape of some genomic DNA has been determined by evolution. In other words, genetics is not simply about sequence, but about structure too.
The standard view – indeed, part of biology’s ‘central dogma’ – is that in its sequence of nucleotide bases DNA encodes corresponding sequences of amino-acid units that are strung together to make a protein, with the protein’s compact folded shape (and thus its function) being uniquely determined by that sequence.
This is basically true enough. Yet as the human genome was unpicked nucleotide base by base, it became clear that most of the DNA doesn’t ‘code for’ proteins at all. Fully 98 percent of the human genome is non-coding. So what does it do?
We don’t really know, except to say that it’s clearly not all ‘junk’, as was once suspected – the detritus of evolution, like obsolete files clogging up a computer. Much of the non-coding DNA evidently has a role in cell function, since mutations (changes in nucleotide sequence) in some of these regions have observable (phenotypic) consequences for the organism. We don’t know, however, how the former leads to the latter.
This is the question that Elliott Margulies of the National Institutes of Health in Bethesda, Maryland, Tom Tullius of Boston University, and their coworkers set out to investigate. According to the standard picture, the function of non-coding regions, whatever it is, should be determined by their sequence. Indeed, one way of identifying important non-coding regions is to look for ones that are sensitive to sequence, with the implication that the sequence has been finely tuned by evolution.
But Margulies and colleagues wondered if the shape of non-coding DNA might also be important. As they point out, DNA isn’t simply a uniform double helix: it can be bent or kinked, and may have a helical pitch of varying width, for example. These differences depend on the sequence, but not in any straightforward manner. Two near-identical sequences can adopt quite different shapes, or two very different sequences can have a similar shape.
The researchers used a chemical method to deduce the relationship between sequence and shape. They then searched for shape similarities between analogous non-coding regions in the genomes of 36 different species. Such similarity implies that the shapes have been selected and preserved by evolution – in other words, that shape, rather than sequence per se, is what is important. They found twice as many evolutionarily constrained (and thus functionally important) parts of the non-coding genome as were evident from trans-species correspondences using only sequence data.
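To see why a shape-based comparison can flag regions that a sequence-based one misses, consider a toy version of the logic (the data here are entirely invented; the real study inferred per-position structural profiles from a chemical probe of solvent exposure):

```python
import statistics

# The 'same' non-coding region aligned across three species.
# The sequences differ at most positions...
seqs = ["ATGCA", "ACGTA", "TTGCC"]

# ...but suppose the structural profile (solvent exposure per position,
# arbitrary units) comes out nearly identical in all three species:
shapes = [
    [0.9, 0.2, 0.4, 0.8, 0.1],
    [0.9, 0.3, 0.4, 0.8, 0.1],
    [0.8, 0.2, 0.4, 0.9, 0.1],
]

def sequence_conservation(seqs):
    """Fraction of positions at which all species share the same base."""
    return sum(len(set(col)) == 1 for col in zip(*seqs)) / len(seqs[0])

def shape_conservation(shapes, tol=0.15):
    """Fraction of positions at which the structural values nearly coincide."""
    cols = list(zip(*shapes))
    return sum(statistics.pstdev(col) < tol for col in cols) / len(cols)

print(sequence_conservation(seqs))  # 0.2 -- the sequence looks poorly conserved
print(shape_conservation(shapes))   # 1.0 -- yet the shape is almost invariant
```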
So in these non-coding regions, at least, sequence appears to be important only insofar as it specifies a certain molecular shape and not because of its intrinsic information content – a different sequence with the same shape might do just as well.
That doesn’t answer why shape matters to DNA. But it suggests that we are wrong to imagine that the double helix is the beginning and end of the story.
There are plenty of other good reasons to suspect that is true. For example, DNA can adopt structures quite different from Watson and Crick’s helix, called the B-form. It can, under particular conditions of saltiness or temperature, switch to at least two other double-helical structures, called the A and Z forms. It may also form triple- and quadruple-stranded variants, linked by different types of hydrogen-bonding matches between nucleotides. One such is called Hoogsteen base-pairing.
Biochemist Naoki Sugimoto and colleagues at Konan University in Kobe, Japan, have recently shown that, when DNA in solution is surrounded by large polymer molecules, mimicking the crowded conditions of a real cell, Watson-Crick base pairing seems to be less stable than it is in pure, dilute solution, while Hoogsteen base-pairing, which favours the formation of triple and quadruple helices, becomes more stable [2-4].
The researchers think that this is linked to the way water molecules surround the DNA in a ‘hydration shell’. Hoogsteen pairing demands less water in this shell, and so is promoted when molecular crowding makes water scarce.
Changes to the hydration shell, for example induced by ions, may alter DNA shape in a sequence-dependent manner, perhaps being responsible for the sequence-structure relationships studied by Margulies and his colleagues. After all, says Tullius, the method they use to probe structure is a measure of “the local exposure of the surface of DNA to the solvent.”
The importance of DNA’s water sheath on its structure and function is also revealed in work that uses small synthetic molecules as drugs that bind to DNA and alter its behaviour, perhaps switching certain genes on or off. It is conventionally assumed that these molecules must fit snugly into the screw-like groove of the double helix. But some small molecules seem able to bind and show useful therapeutic activity even without such a fit, apparently because they can exploit water molecules in the hydration shell as ‘bridges’ to the DNA itself [5]. So here there is a subtle and irreducible interplay between sequence, shape and ‘environment’.
Then there are mechanical effects too. Some proteins bend and deform DNA significantly when they dock, making the molecule’s stiffness (and its dependence on sequence) a central factor in that process. And the shape and mechanics of DNA can influence gene function at larger scales. For example, the packaging of DNA and associated proteins into a compact form, called chromatin, in cells can affect whether particular genes are active or not. Special ‘chromatin-remodelling’ enzymes are needed to manipulate its structure and enable processes such as gene expression or DNA repair.
None of this is yet well understood. But it feels reminiscent of the way early work on protein structure in the 1930s and 40s grasped for dimly sensed principles before an understanding of the factors governing shape and function transformed our view of life’s molecular machinery. Are studies like these, then, a hint at some forthcoming insight that will reveal gene sequence to be just one element in the logic of life?
References
1. Parker, S. C. J. et al. Science Express doi:10.1126/science.1169050 (2009).
2. Miyoshi, D., Karimata, H. & Sugimoto, N. J. Am. Chem. Soc. 128, 7957-7963 (2006).
3. Nakano, S. et al. J. Am. Chem. Soc. 126, 14330-14331 (2004).
4. Miyoshi, D. et al. J. Am. Chem. Soc. doi:10.1021/ja805972a (2009).
5. Nguyen, B., Neidle, S. & Wilson, W. D. Acc. Chem. Res. 42, 11-21 (2009).
Wednesday, March 11, 2009
Who should bear the carbon cost of exports?
[This is the pre-edited version of my latest Muse column for Nature News. (So far it seems only to have elicited outraged comment from some chap who rants against ‘Socialist warming alarmists’, which I suppose says it all.)]
China has become the world’s biggest carbon emitter partly because of its exports. So whose responsibility is that?
There was once a town with a toy factory. Everyone loved the toys, but hated the smell and noise of the factory. ‘That factory boss doesn’t care about us’, they grumbled. ‘He’s getting rich from our pockets, but he should be fined for all the muck he creates.’ Then one entrepreneur decided he could make the same toys without the pollution, using windmills and water filters and so forth. So he did; but they cost twice as much, and no one bought them.
Welcome to the world. Right now, our toy factory is in China. And according to an analysis by Dabo Guan of the University of Cambridge and his colleagues, China’s exports have helped to turn the country into the world’s biggest greenhouse-gas-emitting nation [1,2 – papers here and here].
That China now occupies this slot is no surprise; the nation tops the list for most national statistics, simply because it is so big. Its per capita emissions of CO2 are still only about a quarter of those of the USA, and its gasoline consumption per person in 2005 was less than 5 percent of the American figure (but rising fast).
It’s no shocker either that China’s CO2 emissions have surged since it became an economic superpower. In 1981 it was responsible for 8 percent of the global total; in 2002 this reached 14 percent, and by 2007, 21 percent.
But what is most revealing in the new study is that about half of recent emissions increases from China can be attributed to the boom in exports. Their production now accounts for 6 percent of all global CO2 emissions. This invites the question: who is responsible?
Needless to say, China can hardly throw up its hands and say “Don’t blame us – we’re only giving you rich folks what you want.” After all, the revenues from exports are contributing to the remarkable rise in China’s prosperity.
But equally, it would be hypocritical for Western nations to condemn China for the pollution generated in supplying them with the cheap goods that they no longer care to make themselves. Let’s not forget, though, that China imports a lot too, thereby shifting those carbon costs of production somewhere else.
Part of the problem is that China continues to rely on coal for its energy, which provides 70 percent of the total. Nuclear and renewables supply only 7 percent, and while Chinese energy production has become somewhat more efficient, any gains there are vastly overwhelmed by increased demand.
One response to these figures is that they underline the potential value of a globally agreed carbon tax. In theory, this builds the global-warming cost of a product – whether a computer or an airplane flight – into its price. Worries that this enables producers simply to pass on that cost to the consumer might be valid for the production of essentials such as foods. But much of China’s export growth has been in consumer electronics (which have immense ‘embodied energy’) – exports of Chinese-built televisions increased from 21 million in 2002 to 86 million in 2005. Why shouldn’t consumers feel the environmental cost of luxury items? And won’t the hallowed laws of the marketplace ultimately cut sales and profits for manufacturers who simply raise their prices?
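The basic mechanism of such a tax is easy to state in code (a sketch with made-up numbers: the tax rate and the embodied emissions below are purely illustrative):

```python
# Illustrative carbon tax: the climate cost rides along with the retail price.
CARBON_TAX = 50.0  # dollars per tonne of CO2 (hypothetical rate)

def taxed_price(base_price: float, embodied_co2_tonnes: float) -> float:
    """Retail price once the embodied emissions are priced in."""
    return base_price + CARBON_TAX * embodied_co2_tonnes

# A television with, say, 0.4 tonnes of CO2 embodied in its manufacture:
print(taxed_price(300.0, 0.4))  # 320.0 -- the consumer feels the carbon cost
```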
Some environmentalists are wary of carbon taxes because they fail to guarantee explicit emissions limits. But the main alternative, cap-and-trade, seems to have bigger problems. The idea here is that carbon emitters – nations, industrial sectors, even individual factories or plants – are given a carbon allocation but can exceed it by buying credits off others. That’s the scheme currently adopted in the European Union, and preferred by the Obama administration in the USA.
The major drawback is that it makes costs of emissions virtually impossible to predict, and susceptible to outside influences such as weather or other economic variables. The result would be a dangerously volatile carbon market, with prices that could soar or plummet (the latter a dream case for polluters). We hardly need any reminder now of the hazards of such market mechanisms.
Both a carbon tax and cap-and-trade schemes arguably offer a ‘fair’ way of sharing the carbon cost of exports (although there may be no transparent way to set the cap levels in the latter). But surely the Chinese picture reinforces the need for a broader view too, in which there is rational self-interest in international collaboration on and sharing of technologies that reduce emissions and increase efficiency. The issue also brings some urgency to debates about the best reward mechanisms for stimulating innovation [3].
These figures also emphasize the underlying dilemma. As Laura Bodey puts it in Richard Powers’ 1998 novel Gain, as she is dying of cancer possibly caused by proximity to a chemical plant that has given her all kinds of convenient domestic products: “People want everything. That’s their problem.”
References
1. Guan, D., Peters, G. P., Weber, C. L. & Hubacek, K. Geophys. Res. Lett. 36, L04709 (2009).
2. Weber, C. L., Peters, G. P., Guan, D. & Hubacek, K. Energy Policy 36, 3572-3577 (2008).
3. Meloso, D., Copic, J. & Bossaerts, P. Science 323, 1335-1339 (2009).
Wednesday, March 04, 2009
What does it all mean?
[This is the pre-edited version of my latest Muse for Nature News.]
Science depends on clear terms and definitions – but the world doesn’t always oblige.
What’s wrong with this statement: ‘The acceleration of an object is proportional to the force acting on it’? You might think no one could object to this expression of Newton’s second law. But Nobel laureate physicist Frank Wilczek does. This law, he admits, ‘is the soul of classical mechanics.’ But he adds that, ‘like other souls, it is insubstantial’ [1].
Bertrand Russell went further. In 1925 he called for the abolition of the concept of force in physics, and claimed that if people learnt to do without it, this ‘would alter not only their physical imagination, but probably also their morals and politics.’ [2]
That seems an awfully heavy burden for a word that most scientists will use unquestioningly. Wilczek does not go as far as Russell, but he agrees that the concept of ‘force’ acquires meaning only through convention – through the culture of physics – and not because it refers to anything objective. He suspects that only ‘intellectual inertia’ accounts for its continued use.
It’s a disconcerting reminder that scientific terminology, supposed to be so precise and robust, is often much more mutable and ambiguous than we think – which makes it prone to misuse, abuse and confusion [3,4]. But why should that be so?
There are, broadly speaking, several potential problems with words in science. Let’s take each in turn.
Misuse
Some scientific words are simply misapplied, often because their definition is ignored in favour of something less precise. Can’t we just stamp out such transgressions? Not necessarily, for science can’t expect to evade the transformations that any language undergoes through changing conventions of usage. When misuse becomes endemic, we must sometimes accept that a word’s definition has changed de facto. ‘Fertility’ now often connotes birth rate, not just in general culture but among demographers. That is simply not its dictionary meaning, but is it now futile to argue against it? Similarly, it is now routine to speak of protein molecules undergoing phase transitions, which they cannot in the strict sense since phase transitions are only defined in systems that can be extrapolated to infinite size. Here, however, the implication is clear, and inventing a new term is arguably unhelpful.
Perhaps word misuse matters less when it simply alters or broadens meaning – the widespread use of ‘momentarily’ to indicate ‘in a moment’ is wrong and ugly, but it is scarcely disastrous to tolerate it. It’s more problematic when misuse threatens to traduce logic, as for example when the new meaning attached to ‘fertility’ allows the existence of fertile people who have zero fertility.
Everyday words used in science
In 1911 the geologist John W. Gregory, chairman of the British Association for the Advancement of Science, warned of the dangers of appropriating everyday words into science [5]. Worms, elements, rocks – all, he suggested, run risks of securing ‘specious simplicity at the price of subsequent confusion.’ Interestingly, Gregory also worried about the differing uses of ‘metal’ in chemistry and geology; what would he have said, one wonders, about the redefinition later placed on the term by astronomers (any element heavier than helium), which, whatever the historical justification, shows a deplorable lack of self-discipline? Such Humpty Dumpty-style assertions that a familiar word can mean whatever one chooses are more characteristic of the excesses of postmodern philosophy that scientists often lament.
There are hazards in trying to assign new and precise meanings to old and imprecise terms. Experts in nonlinear dynamics can scarcely complain about misuses of ‘chaos’ when it already had several perfectly good meanings before they came along. On the other hand, by either refusing or failing to provide a definition of everyday words that they appropriate – ‘life’ being a prime victim here – scientists risk breeding confusion. In this regard, science can’t win.
Fuzzy boundaries
When scientific words become fashionable, haziness is an exploitable commodity. One begins to suspect there are few areas of science that cannot be portrayed as complexity or nanotechnology. It recently became popular to assert a fractal nature in almost any convoluted shape, until some researchers eventually began to balk at the term being awarded to structures (like ferns) whose self-similarity barely extends beyond a couple of levels of magnification [6].
Heuristic value
The reasons for Wilczek’s scepticism about force are too subtle to describe here, but they don’t leave him calling for its abolition. He points out that it holds meaning because it fits our intuitions – we feel forces and see their effects, even if we don’t strictly need them theoretically. In short, the concept of force is easy to work with: it has heuristic value.
Science is full of concepts that lack sharp definition or even logic but which help us understand the world. Genes are another. The way things are going, it is possible that one day the notion of a gene may create more confusion than enlightenment [7], but at present it doesn’t seem feasible to understand heredity or evolution without their aid – and there’s nothing better yet on offer.
Chemists have recently got themselves into a funk over the concept of oxidation state [8,9]. Some say it is a meaningless measure of an atom’s character; but the fact remains that oxidation states bring into focus a welter of chemical facts, from balancing equations to understanding chemical colour and crystal structure. One could argue that ‘wrong’ ideas that nonetheless systematize observations are harmful only when they refuse to give way to better ones (pace Aristotelian physics and phlogiston), while teaching science is a matter of finding useful (as opposed to ‘true’) hierarchies of knowledge that organize natural phenomena.
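That bookkeeping power is easy to illustrate: in any neutral compound or ion, the assigned oxidation states, weighted by the number of atoms carrying each, must sum to the overall charge. A minimal sketch using the usual textbook rules:

```python
# Oxidation-state bookkeeping: assigned states must sum to the species' charge.
def unknown_state(known, n_unknown=1, total_charge=0):
    """Solve for the oxidation state of the one unassigned element."""
    assigned = sum(count * state for count, state in known)
    return (total_charge - assigned) / n_unknown

# KMnO4: K is +1 and each of the four O atoms is -2 by the usual rules,
# so manganese is forced to be +7 -- i.e. Mn(VII), the famously deep purple state.
print(unknown_state([(1, +1), (4, -2)]))  # 7.0
```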
The world doesn’t fit into boxes
We’ve known that for a long time: race and species are terms guaranteed to make biologists groan. Now astronomers fare little better, as the furore over the meaning of ‘planet’ illustrated [10] – a classic example of the tension between word use sanctioned by definition or by convention.
The same applies to ‘meteorite’. According to one, perfectly logical, definition of a meteorite, it is not possible for a meteorite ever to strike the Earth (since it becomes one only after having done so). Certainly, the common rule of thumb that meteors are extraterrestrial bodies that enter the atmosphere but don’t hit the surface, while meteorites do, is not one that planetary scientists will endorse. There is no apparent consensus about what they will endorse, which seems to be a result of trying to define processes on the basis of the objects they involve.
All of this suggests some possible rules of thumb for anyone contemplating a scientific neologism. Don’t invent a new word without really good reason (for example, don’t use it to patch over ignorance). Don’t neglect to check if one exists already (we don’t want both amphiphilic and amphipathic). Don’t assume you can put an old word to new use. Make the definition transparent, and think carefully about its boundaries. Oh, and try to make it easy to pronounce – not just in Cambridge but in Tokyo too.
References
1. Wilczek, F. Physics Today 57(10), 11-12 (2004).
2. Russell, B. The ABC of Relativity, 5th edn, p.135 (Routledge, London, 1997).
3. Nature 455, 1023-1028 (2008).
4. Parsons, J. & Wand, Y., Nature 455, 1040-1041 (2008).
5. Gregory, J. W. Nature 87, 538-541 (1911).
6. Avnir, D., Biham, O., Lidar, D. & Malcar, O. Science 279, 39-40 (1998).
7. Pearson, H. Nature 441, 398-401 (2006).
8. Raebinger, H., Lany, S. & Zunger, A. Nature 453, 763 (2008).
9. Jansen, M. & Wedig, U. Angew. Chem. Int. Ed. doi:10.1002/anie.200803605.
10. Giles, J. Nature 437, 456-457 (2005).
[This is the pre-edited version of my latest Muse for Nature News.]
Science depends on clear terms and definitions – but the world doesn’t always oblige.
What’s wrong with this statement: ‘The acceleration of an object is proportional to the force acting on it’? You might think no one could object to this expression of Newton’s second law. But Nobel laureate physicist Frank Wilczek does. This law, he admits, ‘is the soul of classical mechanics.’ But he adds that, ‘like other souls, it is insubstantial’ [1].
Bertrand Russell went further. In 1925 he called for the abolition of the concept of force in physics, and claimed that if people learnt to do without it, this ‘would alter not only their physical imagination, but probably also their morals and politics.’ [2]
That seems an awfully heavy burden for a word that most scientists will use unquestioningly. Wilczek does not go as far as Russell, but he agrees that the concept of ‘force’ acquires meaning only through convention – through the culture of physics – and not because it refers to anything objective. He suspects that only ‘intellectual inertia’ accounts for its continued use.
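To see the worry concretely, consider the law on its own (a compressed textbook illustration, not Wilczek’s own argument):

\[ \vec{F} = m\,\vec{a} . \]

With nothing independent said about what \( \vec{F} \) is, this merely defines force as mass times acceleration, and predicts nothing. Empirical content arrives only when a separate force law is supplied – for instance Newtonian gravitation,

\[ \vec{F} = -\frac{G M m}{r^{2}}\,\hat{r} , \]

whereupon the pair together yields testable consequences such as planetary orbits. The ‘force’ in the middle is then bookkeeping that links the two.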
It’s a disconcerting reminder that scientific terminology, supposed to be so precise and robust, is often much more mutable and ambiguous than we think – which makes it prone to misuse, abuse and confusion [3,4]. But why should that be so?
There are, broadly speaking, several potential problems with words in science. Let’s take each in turn.
Misuse
Some scientific words are simply misapplied, often because their definition is ignored in favour of something less precise. Can’t we just stamp out such transgressions? Probably not, for science can’t expect to evade the transformations that any language undergoes through changing conventions of usage. When misuse becomes endemic, we must sometimes accept that a word’s definition has changed de facto. ‘Fertility’ now often connotes birth rate, not just in general culture but among demographers. That is simply not its dictionary meaning, but is it now futile to argue against it? Similarly, it is now routine to speak of protein molecules undergoing phase transitions, which they cannot do in the strict sense, since phase transitions are defined only in the limit of infinite system size. Here, however, the intended meaning is clear, and coining a new term would arguably be unhelpful.
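(The strict sense can be spelled out in a few lines – a standard statistical-mechanics argument, added here for concreteness.) For any finite system, the canonical partition function

\[ Z_N(\beta) = \sum_i e^{-\beta E_i} \]

is a finite sum of analytic functions of the inverse temperature \( \beta \), and is positive, so the free energy \( F_N = -\beta^{-1}\ln Z_N \) is analytic too: nothing mathematically sharp can happen at finite \( N \). A true phase transition is a non-analyticity in the free energy per particle,

\[ f(\beta) = \lim_{N\to\infty} \frac{F_N}{N} , \]

which can arise only in that infinite-size limit – one that a single protein molecule never reaches.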
Perhaps word misuse matters less when it simply alters or broadens meaning – the widespread use of ‘momentarily’ to indicate ‘in a moment’ is wrong and ugly, but it is scarcely disastrous to tolerate it. It’s more problematic when misuse threatens to traduce logic, as for example when the new meaning attached to ‘fertility’ allows the existence of fertile people who have zero fertility.
Everyday words used in science
In 1911 the geologist John W. Gregory, chairman of the British Association for the Advancement of Science, warned of the dangers of appropriating everyday words into science [5]. Worms, elements, rocks – all, he suggested, run the risk of securing ‘specious simplicity at the price of subsequent confusion.’ Interestingly, Gregory also worried about the differing uses of ‘metal’ in chemistry and geology; what would he have said, one wonders, about astronomers’ later redefinition of the term (any element heavier than helium), which, whatever its historical justification, shows a deplorable lack of self-discipline? Such Humpty Dumpty-style assertions that a familiar word can mean whatever one chooses are more characteristic of the excesses of postmodern philosophy that scientists often lament.
There are hazards in trying to assign new and precise meanings to old and imprecise terms. Experts in nonlinear dynamics can scarcely complain about misuses of ‘chaos’ when it already had several perfectly good meanings before they came along. On the other hand, by either refusing or failing to provide a definition of everyday words that they appropriate – ‘life’ being a prime victim here – scientists risk breeding confusion. In this regard, science can’t win.
Fuzzy boundaries
When scientific words become fashionable, haziness is an exploitable commodity. One begins to suspect there are few areas of science that cannot be portrayed as complexity or nanotechnology. It recently became popular to assert a fractal nature in almost any convoluted shape, until some researchers eventually began to balk at the term being awarded to structures (like ferns) whose self-similarity barely extends beyond a couple of levels of magnification [6].
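The fractal complaint [6] is easy to reproduce with a toy calculation. The sketch below (hypothetical code written for this piece, assuming Python with numpy) estimates a box-counting dimension in the usual way, by fitting a straight line to log N(eps) against log(1/eps) – but over only a narrow range of scales, the sort of range on which many empirical ‘fractal’ claims rest:

    import numpy as np

    def box_count(points, eps):
        # Count boxes of side eps, tiling the unit square, that contain any point.
        occupied = set(map(tuple, np.floor(points / eps).astype(int)))
        return len(occupied)

    # Sixty dots sampled from a smooth circle: not a fractal, not even a curve.
    theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
    points = 0.5 + 0.4 * np.column_stack([np.cos(theta), np.sin(theta)])

    # A narrow range of scales, roughly five octaves, as in many empirical claims.
    scales = 2.0 ** -np.arange(2, 8)   # box sides from 1/4 down to 1/128
    counts = [box_count(points, eps) for eps in scales]

    # Slope of log N(eps) versus log(1/eps) is the usual dimension estimate.
    slope, intercept = np.polyfit(np.log(1.0 / scales), np.log(counts), 1)
    print(f"apparent 'fractal dimension': {slope:.2f}")  # non-integer, around 0.5

The fit is as clean as many published ones, yet the non-integer slope is an artefact of the limited range: below the dot spacing the set looks zero-dimensional, above it one-dimensional, and the straight line splits the difference.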
Heuristic value
The reasons for Wilczek’s scepticism about force are too subtle to describe here, but they don’t leave him calling for its abolition. He points out that it holds meaning because it fits our intuitions – we feel forces and see their effects, even if we don’t strictly need them theoretically. In short, the concept of force is easy to work with: it has heuristic value.
Science is full of concepts that lack sharp definition, or even logic, but which help us understand the world. The gene is another. The way things are going, it is possible that the notion of a gene will one day create more confusion than enlightenment [7], but at present it doesn’t seem feasible to understand heredity or evolution without genes – and there’s nothing better yet on offer.
Chemists have recently got themselves into a funk over the concept of oxidation state [8,9]. Some say it is a meaningless measure of an atom’s character; but the fact remains that oxidation states bring into focus a welter of chemical facts, from balancing equations to understanding chemical colour and crystal structure. One could argue that ‘wrong’ ideas that nonetheless systematize observations are harmful only when they refuse to give way to better ones (pace Aristotelian physics and phlogiston), while teaching science is a matter of finding useful (as opposed to ‘true’) hierarchies of knowledge that organize natural phenomena.
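To make that ‘welter’ concrete (my own illustration, not drawn from refs [8,9]): the oxidation state of manganese in the permanganate ion follows from simple bookkeeping, holding oxygen at −2,

\[ x + 4(-2) = -1 \;\Rightarrow\; x = +7 , \]

and the electron count in the familiar half-reaction is then just the drop from +7 to +2:

\[ \mathrm{MnO_4^{-} + 8\,H^{+} + 5\,e^{-} \longrightarrow Mn^{2+} + 4\,H_2O} . \]

Whether a ‘real’ charge of +7 resides on that manganese atom is precisely what is disputed [8,9]; the bookkeeping balances the equation either way.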
The world doesn’t fit into boxes
We’ve known that for a long time: race and species are terms guaranteed to make biologists groan. Now astronomers fare little better, as the furore over the meaning of ‘planet’ illustrated [10] – a classic example of the tension between word use sanctioned by definition or by convention.
The same applies to ‘meteorite’. According to one perfectly logical definition, a meteorite can never strike the Earth, since it becomes a meteorite only after having done so. Certainly, the common rule of thumb – that meteors are extraterrestrial bodies that enter the atmosphere without hitting the surface, while meteorites do hit it – is not one that planetary scientists will endorse. There is no apparent consensus on what they will endorse, which seems to be the result of trying to tie the definition of an object to the process that produces it.
All of this suggests some possible rules of thumb for anyone contemplating a scientific neologism. Don’t invent a new word without really good reason (and not, for example, to paper over ignorance). Don’t neglect to check whether one exists already (we don’t want both amphiphilic and amphipathic). Don’t assume you can put an old word to new use. Make the definition transparent, and think carefully about its boundaries. Oh, and try to make it easy to pronounce – not just in Cambridge but in Tokyo too.
References
1. Wilczek, F. Physics Today 57(10), 11-12 (2004).
2. Russell, B. The ABC of Relativity, 5th edn, p.135 (Routledge, London, 1997).
3. Nature 455, 1023-1028 (2008).
4. Parsons, J. & Wand, Y. Nature 455, 1040-1041 (2008).
5. Gregory, J. W. Nature 87, 538-541 (1911).
6. Avnir, D., Biham, O., Lidar, D. & Malcai, O. Science 279, 39-40 (1998).
7. Pearson, H. Nature 441, 398-401 (2006).
8. Raebiger, H., Lany, S. & Zunger, A. Nature 453, 763-766 (2008).
9. Jansen, M. & Wedig, U. Angew. Chem. Int. Ed. doi:10.1002/anie.200803605.
10. Giles, J. Nature 437, 456-457 (2005).