Wednesday, October 25, 2006

In defence of consensus
If I were to hope for psychological subtlety from soap operas, or historical accuracy from Dan Brown, I’d have only myself to blame for my pain. So I realize that I am scarcely doing myself any favours by allowing myself to be distressed by scientifically illiterate junk in the financial pages of the Daily Telegraph. I know that. Yet there is a small part of me, no doubt immature, that exclaims “But this is a national newspaper – how can it be printing sheer nonsense?”
To wit: Ruth Lea, director of the Centre for Policy Research, on the unreliability of consensus views. These are, apparently, “frequently very wrong indeed.” The target of this extraordinarily silly diatribe is the consensus on the human role in climate change. We are reminded by Lea that Galileo opposed the ‘consensus’ view. Let’s just note in passing that the invocation of Galileo is the surefire signature of the crank, and move on instead to the blindingly obvious point that Galileo’s ‘heresy’ represented the voice of scientific reason, and the consensus he opposed was a politico-religious defence of vested interests. Rather precisely, one might think, the opposite of the situation in the climate-change ‘consensus.’ (The truth about Galileo is actually a little more complicated – see Galileo in Rome by William Shea and Mariano Artigas – but this will do for now.)
In any case, the rejoinder is really very simple. Of course scientific consensus can be wrong – that’s the nature of science. But much more often it is ‘right’ (which is to say, it furnishes the best explanation for the observations with the tools to hand).
As further evidence of the untrustworthiness of consensus, however, Lea regales us with tales of how economists (for God’s sake) have in the past got things wrong en masse – apparently she thinks economics has a claim to the analytical and predictive capacity of natural science. Or perhaps she imagines that consensus-making is an arbitrary affair, a thing that just happens when lots of people get together to debate an issue, and not, as in science, a hard-won conclusion wrested from observation and understanding.
Ah, but you see, the science of global warming has been overturned by a paper “of the utmost scientific significance”, published by the venerable Royal Society. The paper’s author, a Danish scientist named Henrik Svensmark, “has been impeded and persecuted by scientific and government establishments” (they do that, you know) because his findings were “politically inconvenient”. What are these findings of the “utmost significance”? He has shown, according to Lea, that there has been a reduction in low-altitude cloudiness in the twentieth century owing to a reduction in the cosmic-ray flux into the atmosphere, because of a weakening of the shielding provided by the Sun’s magnetic field. Clouds have an overall cooling effect, and so this reduction in cloudiness probably lies behind the rise in global mean temperature.
Now, that sounds important, doesn’t it? Except that of course Svensmark has shown nothing of the sort. He has found that cosmic rays may induce the formation of sulphate droplets in a plastic box containing gases simulating the composition of the atmosphere. That’s an interesting result, demonstrating that cosmic rays might indeed affect cloud formation. It’s certainly worth publishing in the Proceedings of the Royal Society. The next step might be to look for ways of investigating whether the process works in the real atmosphere (and not just a rough lab simulacrum of it). And then whether it does indeed lead to the creation of cloud condensation nuclei (which these sulphate droplets are not yet), and then to clouds. And then to establish whether there is in fact any record of a decline in cosmic-ray flux over the twentieth century. (We can answer that already: it’s been measured for the past 50 years, and there is no such trend.) And then whether there is evidence of changes in low-altitude cloudiness of the sort Svensmark’s idea predicts. And if so, whether it leads to the right predictions of temperature trends in climate models. And then to try to understand why the theory predicts a stronger daytime warming trend, whereas observations show that it’s stronger at night.
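(As an aside on that trend question: the check itself is elementary. Here is a minimal Python sketch – my own illustration, not anything from Svensmark or his critics – of the sort of least-squares trend fit one would apply to a monitored flux record. The numbers below are synthetic placeholders, not real neutron-monitor data.)

    # Illustrative sketch only: fit a linear trend to an annual record and ask
    # whether the slope is distinguishable from zero. The 'flux' values are
    # synthetic (a solar-cycle-like wiggle plus noise), not measurements.
    import math
    import random

    random.seed(0)
    years = list(range(1956, 2006))
    flux = [100 + 5 * math.sin(2 * math.pi * (y - 1956) / 11) + random.gauss(0, 1)
            for y in years]

    # Ordinary least-squares slope and its standard error
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(flux) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, flux)) / sxx
    residuals = [y - (mean_y + slope * (x - mean_x)) for x, y in zip(years, flux)]
    se_slope = math.sqrt(sum(r ** 2 for r in residuals) / (n - 2) / sxx)

    print(f"slope = {slope:.3f} +/- {se_slope:.3f} units per year")
    # A slope that is small compared with its standard error is what
    # 'no such trend' means in practice.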
But that’s all nitpicking, surely, because in Lea’s view this new result “seriously challenges the current pseudo-consensus that global warming is largely caused by manmade carbon emissions.” Like most climate-change sceptics, Lea clearly feels this consensus is pulled out of a hat through vague and handwaving arguments, rather than being supported by painstaking comparisons of modelling and observation, such as the identification of a characteristic anthropogenic spatial fingerprint in the overall warming trend. It is truly pitiful.
“I am no climate scientist”, says Lea. (I take it we could leave out “climate” here.) So why is she commenting on climate science? I am no ballet dancer, which is why, should the opportunity bizarrely present itself for me to unveil my interpretation of Swan Lake before the nation, I will regretfully decline.
Simon Jenkins has recently argued in the Guardian that science should not be compulsory beyond primary-school level. I don’t think we need be too reactionary about his comments, though I disagree with much of them. But when a director of a ‘policy research centre’ shows such astonishing ignorance of scientific thinking, and perhaps worse still, no one on a national newspaper’s editorial or production team can see that this is so (would the equivalent historical ignorance be tolerated, say?), one has to wonder whether increasing scientific illiteracy still further is the right way to go. In fact, the scientific ignorance on display here is only the tip of the iceberg. The real fault is a complete lack of critical thinking. There are few things more dangerous in public life than people educated just far enough to be able to mask that lack with superficially confident and polished words.
But it’s perhaps most surprising of all to see someone in ‘policy research’ fail to understand how a government should use expert opinion. If there is a scientific consensus on this question, what does she want them to do? The opposite? Nothing? A responsible government acts according to the best advice available. If that advice turns out to be wrong (and science, unlike politics, must always admit to that possibility), the government nevertheless did the right thing. If this Policy Research Centre actually has any influence on policy-making, God help us.
Monday, October 23, 2006

Decoding Da Vinci, decoded
I’m hoping that anyone who feels moved to challenge my dismissal of Fibonacci sequences and the Golden Mean in nature, in the Channel 4 TV series Decoding da Vinci, will think first about how much ends up on the cutting-room floor in television studios. I stand by what I said in the programme, but I didn’t suggest to the presenter Dan Rivers that Fibonacci and phi are totally irrelevant in the natural world. Sure, overblown claims are made for them – just about all of what is said in this regard about human proportion is mere numerology (of which my favourite is the claim that the vital statistics of Veronica Lake were Fibonacci numbers). And the role of these numbers in phyllotaxis has been convincingly challenged recently by Todd Cooke, in a paper in the Botanical Journal of the Linnean Society. But even Cooke acknowledges that the spiral patterns of pine cones, sunflower florets and pineapples do seem to have Fibonacci parastichies (that is, counter-rotating spirals come in groups of (3,5), (5,8), (8,13) and so on). That has yet to be fully explained, although it doesn’t seem to be a huge mystery: the explanation surely has something to do with packing effects at the tip of the stem, where new buds form. It’s a little-known fact that Alan Turing was developing his reaction-diffusion theory of pattern formation to explain this aspect of phyllotaxis just before he committed suicide. Jonathan Swinton has unearthed some fascinating material on this.
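For the curious, here is a minimal Python sketch (my own illustration, not anything from Cooke’s paper or the programme) of the arithmetic behind those particular pairs: successive Fibonacci ratios converge on the golden mean phi, and the associated ‘golden angle’ is the ingredient of the simple packing picture gestured at above.

    # A minimal sketch of the numbers involved in Fibonacci phyllotaxis claims.
    import math

    def fibonacci(n):
        """Return the first n Fibonacci numbers."""
        seq = [1, 1]
        while len(seq) < n:
            seq.append(seq[-1] + seq[-2])
        return seq[:n]

    phi = (1 + math.sqrt(5)) / 2                 # the golden mean, ~1.6180
    fib = fibonacci(15)

    # Ratios of successive Fibonacci numbers converge on phi
    for a, b in zip(fib, fib[1:]):
        print(f"{b}/{a} = {b / a:.6f}   (phi = {phi:.6f})")

    # The golden angle (~137.5 degrees) divides the circle in the ratio 1 : phi
    golden_angle = 360.0 * (1 - 1 / phi)
    print(f"golden angle = {golden_angle:.2f} degrees")

    # Golden-angle packing: bud k sits at angle k * golden_angle, radius ~ sqrt(k).
    # The counter-rotating spirals picked out by eye in such a pattern come in
    # consecutive Fibonacci pairs -- (5,8), (8,13) and so on -- as in pine cones.
    points = [(math.sqrt(k) * math.cos(math.radians(k * golden_angle)),
               math.sqrt(k) * math.sin(math.radians(k * golden_angle)))
              for k in range(1, 200)]
    print(f"generated {len(points)} points of a sunflower-style spiral packing")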
So the real story of Fibonacci numbers and phi in phyllotaxis is complicated, and certainly not something that could be squeezed into five minutes of TV. I shall discuss it in depth in my forthcoming, thorough revision of my book The Self-Made Tapestry, which Oxford University Press will publish as a three-volume set (under a title yet to be determined), beginning some time in late 2007.
Thursday, October 19, 2006

Paint it black
I don’t generally post my articles for Nature’s nanozone here, as they are a bit too techie. But this was just such a cute story…
Nanotechnology is older than we thought. The Egyptians were using it four millennia ago to darken their graying locks.
Artisans were making semiconductor quantum dots more than four thousand years ago, a team in France has claimed. Needless to say, the motivation was far removed from that today, when these nanoparticles are of interest for making light-emitting devices and as components of photonic circuits and memories. It seems that the ancient Egyptians and Greeks were instead making nanocrystals to dye their hair black.
Philippe Walter of the Centre for Research and Restoration of the Museums of France in Paris and his colleagues have investigated an ancient recipe for blackening hair using lead compounds. They find that the procedure described in historical sources produces nanoparticles of black lead sulphide (PbS), which are formed deep within the protein-rich matrix of hair [1].
That the chemical technologies of long ago sometimes involved surprisingly sophisticated processes and products is well known [2]. The synthesis of nanoparticles has, for example, been identified in metallic, lustrous glazes used by potters in the Middle Ages [3]. Such practices are remarkable given that ancient craftspeople generally had no real knowledge of chemical principles and had only crude means of transforming natural materials, such as heating, at their disposal.
The nanocrystal hair dye is particularly striking. Walter and colleagues say that these particles, with a size of about 5 nm, are “quite similar to PbS quantum dots synthesized by recent materials science techniques.” Moreover, the method alters the appearance of hair permanently, because of the deep penetration of the nanoparticles, yet without affecting its mechanical properties.
That makes the process an attractive dyeing procedure even today, despite the potential toxicity of lead-based compounds. Walter and colleagues point out that some modern hair darkeners indeed contain lead acetate, which forms lead sulphide in situ on hair fibres. In any event, safety concerns do not seem to have troubled people in ancient times, perhaps because of their short life expectancy – as well as using lead to dye hair, the Egyptians used lead carbonate as a skin whitener, and toxic antimony sulphide for eye shadow (kohl).
The recipe for making the lead-based hair dye is simple. Lead oxide is mixed with slaked lime (calcium hydroxide, which is strongly alkaline) and water to make a paste, which is then rubbed into the hair. A reaction between the lead ions and sulphur from hair keratins (proteins) produces lead sulphide. These proteins have a high sulphur content: they are strongly crosslinked by disulphide bonds formed from cysteine amino acids, which gives hair its resilience and springiness (such bonds are broken in hair-straightening treatments). The researchers found that the alkali seems to be essential for releasing sulphur from cysteine to form PbS.
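Schematically (my own gloss, not a balanced equation from the paper), the chemistry described above amounts to:

\[
\mathrm{PbO} \;\xrightarrow{\ \mathrm{Ca(OH)_2},\ \mathrm{H_2O}\ }\; \mathrm{Pb^{2+}}\ \text{(soluble in the alkaline paste)}, \qquad
\mathrm{Pb^{2+}} \;+\; \mathrm{S^{2-}}\ \text{(freed from cysteine by the alkali)} \;\longrightarrow\; \mathrm{PbS}\ \text{(nanocrystals)}
\]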
The French team dyed blond human hairs black by applying this treatment for three days. They then looked at the distribution of lead within cross-sections of the hairs using X-ray fluorescence spectroscopy, and saw that it was present throughout. X-ray diffraction from treated hairs showed evidence of lead sulphide crystals, which electron microscopy revealed as nanoparticles about 4.8 nm across.
The nanoparticles decorate fibrillar aggregates of proteins within the cortex of hair strands – the inner region, beneath the cuticle of the hair surface. High-resolution microscopy revealed that these particles are highly organized: they seem to be attached to individual microfibrils, which are about 7 nm in diameter and are formed from alpha-helical proteins. Thus the distribution of particles echoes the supramolecular arrangement of the microfibrils, being placed in rows about 8-10 nm apart and aligned with the long axis of the hair strands. So the ancient recipe provides a means not only of making nanocrystals but of organizing them in a roughly regular fashion at the nanoscale – one of the major objectives of modern synthetic methods.
The discovery throws a slightly ironic light on the debate today about the use of nanoparticles in cosmetics [4]. Quite properly, critics point out that the toxicological behaviour of such particles is not yet well understood. It now seems this is a much older issue than anyone suspected.
References
1. Walter, P. et al. Early use of PbS nanotechnology for an ancient hair dyeing formula. Nano Lett. 6, 2215-2219 (2006) [article here]
2. Ball, P. Where is there wisdom to be found in ancient materials technologies? MRS Bull. March 2005, 149-151.
3. Pérez-Arantegui, J. et al. Luster pottery from the thirteenth century to the sixteenth century: a nanostructured thin metallic film. J. Am. Ceram. Soc. 84, 442 (2001) [article here]
4. ‘Nanoscience and nanotechnologies: opportunities and uncertainties.’ Report by the Royal Society/Royal Academy of Engineering (2004). [Available here]
Tuesday, October 17, 2006

A sign of the times?
The ETC Group, longtime campaigners against nanotechnology, have launched a competition for the design of a ‘nano-hazard’ symbol analogous to those used already to denote toxicity, biohazards or radioactive materials. My commentary for Nature’s muse@nature.com on this unhelpful initiative is here.
I worry slightly that the ETC Group is a soft target, in that their pronouncements on nanotechnology rarely make much sense and show a deep lack of understanding of the field (and I say this as a supporter of many environmental causes and a strong believer in the ethical responsibilities of scientists). But I admit that the announcement left me a little riled, filled as it was with a fair degree of silliness and misinformation. For example:
“Nanoparticles are able to move around the body and the environment more readily than larger particles of pollution.” First, we don’t know much about how nanoparticles move around the body or the environment (and yes, that’s a problem in itself). Second, this sentence implies that nanoparticles (here meaning human-made nanoparticles, though that’s not specified) are ‘pollution’ by default, which one simply cannot claim with such generality. Some may be entirely harmless.
“Some designer nanomaterials may come to replace natural products such as cotton, rubber and metals – displacing the livelihoods of some of the poorest and most vulnerable people in the world.” I don’t want to see the livelihoods of poor, vulnerable people threatened. Yet not only is this claim completely contentious, but it offers us the prospect of a group that originated from concerns about soil erosion and land use now suggesting that metals are ‘natural products’ – as though mining has not, since ancient times, been one of the biggest polluters on the planet.
“Nano-enabled technologies also aim to ‘enhance’ human beings and ‘fix’ the disabled, a goal that raises troubling ethical issues and the specter of a new divide between the technologically “improved” and “unimproved.”” Many of these ‘human enhancements’ are silly dreams of Californian fantasists. There’s nothing specific to nanotech in such goals anyway. What nanotech does show some promise of doing is enabling important advances in biomedicine. If that is a ‘fix’, I suspect it is one many people would welcome.
And so on. I was one of those who wrote to the Royal Society, when they were preparing their report on nanotech, urging that they take seriously the social and ethical implications, even if these lay outside the usual remit of what scientists consider in terms of ethics. I feel that is an important obligation, and I was glad to see that the Royal Society/RAE report acknowledges it as such. But sticking ‘Danger: Nano’ stickers on sun creams isn’t the answer.
Friday, October 06, 2006
When it’s time to speak out
[The following is the unedited form of my latest article on muse@nature.com. The newsblog on this story is worth checking out too.]
By confronting ExxonMobil, the Royal Society is not being a censor of science but an advocate for it.
When Bob Ward, former manager of policy communication at the Royal Society in London, wrote a letter to the oil company ExxonMobil taking it to task for funding groups that deny the human role in global warming, it isn’t clear that he knew quite what he was letting himself in for. But with hindsight the result was predictable: once the letter was obtained and published by the British Guardian newspaper, the Royal Society (RS) was denounced from all quarters as having overstepped its role as impartial custodian of science.
Inevitably, Ward’s letter fuels the claims of ‘climate sceptics’ that the scientific community is seeking to impose a consensus and to suppress dissent. But the RS has been denounced by less partisan voices too. David Whitehouse, formerly a science reporter for the BBC, argues that “you tackle bad science with good science”, rather than trying to turn off the money to your opponents. “Is it appropriate”, says Whitehouse, “that [the RS] should be using its authority to judge and censor in this way?”
And Roger Pielke Jr, director of the University of Colorado’s Center for Science and Technology Policy Research, who is a controversialist but far from a climate sceptic, says that “the actions by the Royal Society are inconsistent with the open and free exchange of ideas, as well as the democratic notion of free speech.”
Yes, there is nothing like the scent of scientific censorship to make scientists of all persuasions come over all sanctimonious about free speech.
The problem is that these critics do not seem to understand what the RS (or rather, Bob Ward) actually said, nor the context in which he said it, nor what the RS now stands for.
Ward wrote his letter to Nick Thomas, Director of Corporate Affairs at ExxonMobil’s UK branch Esso. He expressed surprise and disappointment at the way that ExxonMobil’s 2005 Corporate Citizenship Report claimed that the conclusions of the Intergovernmental Panel on Climate Change that recent global warming has a human cause “rely on expert judgement rather than objective, reproducible statistical methods”. Ward’s suggestion that this claim is “inaccurate” is in fact far too polite.
Model uncertainties and natural variability, the report goes on to claim, “make it very difficult to determine objectively the extent to which recent climate changes might be the result of human actions.” But anyone who has followed the course of the scientific debate over the past two decades will know how determinedly the scientists have refrained from pointing the finger at human activities until the evidence allows no reasonable alternative.
Most serious scientists will agree on this much, at least. The crux of the argument, however, is Ward’s alleged insistence that ExxonMobil stop funding climate-change deniers. (He estimates that ExxonMobil provided $2.9 million last year to US organizations “which misinformed the public about climate change.”) Actually, Ward makes no such demand. He points out that he expressed concerns about the company’s support for such lobby groups in a previous meeting with Thomas, who told him that the company intended to stop it. Ward asked in his letter when ExxonMobil plans to make that change.
So there is no demand here, merely a request for information about an action ExxonMobil had said it planned to undertake. Whitehouse and Pielke are simply wrong in what they allege. But was the RS wrong to intervene at all?
First, anyone who is surprised simply hasn’t been paying attention. Under outspoken presidents such as Robert May and Martin Rees, the Royal Society is no longer the remote, patrician and blandly noncommittal body of yore. It means business. In his 2005 Anniversary Address, May criticized “the campaigns waged by those whose belief systems or commercial interests impel them to deny, or even misrepresent, the scientific facts”.
“We must of course recognise there is always a case for hearing alternative, even maverick, views”, he added. “But we need to give sensible calibration to them. The intention of ‘balance’ is admittedly admirable, but this problem of wildly disparate ‘sides’ being presented as if they were two evenly balanced sporting teams is endemic to radio, TV, print media, and even occasional Parliamentary Select Committees.”
In response to his critics, Ward has said that “the Society has spoken out frequently, on many issues and throughout its history, when the scientific evidence is being ignored or misrepresented”. If anything, it hasn’t done that often enough.
Second, Ward rightly ridicules the notion of ExxonMobil as the frail David to the Royal Society’s Goliath. The accusations of “bullying” here are just risible. The RS is no imperious monarch, but a cash-strapped aristocrat who lives in the crumbling family pile and contrives elegantly to hide his impecuniosity. In contrast, the climate sceptics count among their number the most powerful man in the world, who has succeeded in emasculating the only international emissions treaty we have.
And it’s not just the oil industry (and its political allies) that the RS faces. The media are dominated by scientific illiterates like Neil Collins, who writes in the Telegraph newspaper à propos this little spat of his “instinctive leaning towards individuals on the fringe”, that being the habitual raffish pose of the literati. (My instinctive leaning, in contrast, is towards individuals who I think are right.) “Sea level does not appear to be rising”, says Collins (wrong), while “the livelihoods of thousands of scientists depend on our being sufficiently spooked to keep funding the research” (don’t even get me started on this recurrent idiocy). I fear the scientific community does not appreciate the real dangers posed by this kind of expensively educated posturing from high places.
If not, it ought to. In the early 1990s, the then editor of the Sunday Times, Andrew Neil, supported a campaign by his reporter Neville Hodgkinson suggesting that HIV does not cause AIDS.
Like most climate sceptics, Neil and the HIV-deniers did not truly care about having a scientific debate – their agenda was different. To them, the awful thing about the HIV theory was that it placed every sexual libertine at risk. How dare science threaten to spoil our fun? Far better to confine the danger to homosexuals: Hodgkinson implied that AIDS might somehow be the result of gay sex. For a time, the Sunday Times campaign did real damage to AIDS prevention in Africa. But now it is forgotten and the sceptics discredited, while Neil has gone from strength to strength as a media star.
On that occasion, Nature invited accusations of scientific censorship by standing up to the Sunday Times’s programme of misinformation – making me proud to be working for the journal. As I recall, the RS remained aloof from that matter (though May mentions it in his 2005 speech). We should be glad that it is now apparently ready to enter the fray. Challenging powerful groups that distort science for personal, political or commercial reasons is not censorship, it is being an advocate for science in the real world.
Physics gets dirty
[This is my Materials Witness column for the November issue of Nature Materials.]
My copy of The New Physics, published in 1989 by Cambridge University Press, is much thumbed. Now regarded as something of a classic, it provides a peerless overview of key areas of modern physics, written by leading experts who achieve the rare combination of depth and clarity.
It’s reasonable, then, to regard the revised edition, just published as The New Physics for the 21st Century, as something of an authoritative statement on what’s in and what’s out in physics. And so it is striking to see materials, more or less entirely absent from the 1989 book, prominent on the new agenda.
Most noticeably, Robert Cahn of Cambridge University has contributed a chapter called ‘Physics and materials’, which covers topics ranging from dopant distributions in semiconductors to liquid crystal displays, photovoltaics and magnetic storage. In addition, Yoseph Imry of the Weizmann Institute in Israel contributes a chapter on ‘Small-scale structure and nanoscience’, a snapshot of one of the hottest areas of materials science.
All very well, but it raises the question of why materials science was, according to this measure, more or less absent from twentieth-century physics but central to that of the twenty-first. Indeed, one might have thought that the traditional image of materials science as an empirical engineering discipline with a theoretical framework based in classical mechanics looks far from cutting-edge, and would hardly rival the appeal of quantum field theory or cosmology.
Of course, topics such as inflationary theory and quantum gravity are still very much on the menu. But the new book drops topics that might be deemed the epitome of physicists’ reputed delight in abstraction: gone are the chapters on grand unified theories, gauge theories, and the conceptual foundations of quantum theory. Even Stephen Hawking’s contribution on ‘The edge of spacetime’ has been axed (a brave move by the publishers) in favour of down-to-earth biophysics and medical physics.
So what took physics so long to realize that it must acknowledge its material aspects? “Straight physicists alternate between the deep conviction that they could do materials science much better than trained materials scientists (they are apt to regard the latter as fictional) and a somewhat stand-offish refusal to take an interest”, claims Cahn.
One could also say that physics has sometimes tried to transcend material particularities. “There has been the thought that condensed matter and material physics is second-rate dirty, applied stuff”, Imry says. Even though condensed matter is fairly well served in the first edition, it tended to be rather dematerialized, couched in terms of critical points, dimensionality and theories of quantum phase transitions. But it is now clear that universality has its limits – high-temperature superconductors need their own theory, graphene is not like a copper monolayer, nor is poly(phenylene vinylene) like silicon.
“Nanoscience has both universal aspects, which has been much of the focus of modern physics, and variety due to the wealth of real materials”, says Imry. “That’s a part of the beauty of this field!”
Tuesday, September 26, 2006

One small step: NASA’s first date with China
Here’s my latest article for muse@nature.com, pondering the implications of the visit by NASA’s Mike Griffin to China. (There’ll be a few differences due to editing, and this version also has handy links in the text.)
NASA’s visit to China is overdue – the rest of the world got there long ago.
This could be the start of a beautiful friendship. That, at least, is how the Chinese press seems keen to portray the visit this week by NASA’s head Mike Griffin, who is touring Beijing and Shanghai, at the invitation of Chinese president Hu Jintao, to “become acquainted with my counterparts in China and to understand their goals for space exploration.” China Central Television proudly proclaims “China, US to boost space cooperation”, while China Daily reports “China-US space co-op set for lift-off.”
But Griffin himself is more circumspect. “It’s our get-acquainted visit, it’s our exploratory visit and it’s our first date”, he told a press conference, adding “There are differences between our nations on certain key points” – unsurprisingly, for example, the control of missiles. He stressed shortly before the visit that he did not want to “create expectations that would be possibly embarrassing to us or embarrassing to China.”
Griffin’s caution is understandable, since this is after all the first visit by a NASA administrator, and he confessed before the trip that he did not know much about China’s capabilities in space. But why did it take them so long? After all, China has well established joint space projects with Europe, Russia and Brazil, and is one of only three nations to have put people into space.
Rising star
NASA’s Chinese jaunt is not entirely out of the blue. The administrator of the China National Space Administration (CNSA), Sun Laiyan, visited Griffin’s predecessor Sean O’Keefe at the end of 2004 on a similar introductory mission, a year after the first Chinese manned spaceflight. US space scientists were given a wake-up call last April when CNSA’s vice administrator Luo Ge revealed the extent of China’s space plans at the National Space Symposium in Colorado. These included the possibility of a manned moon shot.
And the full reality of Chinese capabilities became evident to US congressman Tom Feeney on a visit to China in January as part of Congress’s China Working Group. He and his colleagues saw the Jiuquan satellite launch centre in Gansu province at first hand. “In the United States, we’re training aerospace engineers how to maintain 20- to 40-year-old technology”, said Feeney. “The Chinese are literally developing new technology on their own.”
There can be no remaining doubt that China is a serious player in space technology, however much it is a latecomer to the party. Griffin admits that “China has clearly made enormous strides in a very short period”. The ‘can-do’ philosophy apparent in China’s domestic industrial and engineering schemes, pursued with a determination that can appear little short of ruthless, will surely be sounding alarms within the US space industry.
All of which makes it strange that a NASA trip to China has been so long in coming.
Enemy at the gate
The reticence must be due in large measure to the fact that China has long been regarded as a rival rather than a collaborator. China’s desires to become involved in the International Space Station (ISS) have previously been stymied by the USA, for example. In 2001 Dana Rohrabacher, chair of the space and aeronautics subcommittee of the House of Representatives, told journalists that he was not interested in Chinese offers to pay for ISS hardware, because of the country’s human-rights record. “The space station’s supposed to stand for something better,” he said, after seeking help from countries including the United Arab Emirates.
The real reasons for a US reluctance to engage with China over space technology must include a considerable dose of Cold War paranoia, especially now that China is emerging as such a strong player. Griffin himself says that Russian involvement with the ISS also initially met with some resistance, although now it’s clear that the space station would have been doomed without it.
The current talks of cooperation do not necessarily signal a lessening of that scepticism, but are possibly boosted by a mixture of realpolitik and economics. Since China is going ahead at full steam with its links to the space programs of Russia and Europe, the US could risk creating a powerful competitor if it doesn’t join in. And preventing US companies from exporting technologies to the most rapidly growing space program in the world threatens to undercut their own competitiveness. In fact, one of the obstacles to such trade is the question of China’s readiness to observe patents and copyrights.
But why has the US position on collaboration with China differed so much from that in Europe? Vincent Sabathier, previously Space Attaché at the French Embassy in the US, says that it comes down to a fundamental difference in attitudes to international relations: the US adopts a ‘realist’ stance based on opposed national interests, while European states have a more liberal approach that favours international dialogue and partnership. “While the US places an emphasis on space power and control, Europe maintains that its focus is on the peaceful use of outer space”, Sabathier says.
Power of partnerships
This has been reflected in Europe-China collaborations on satellite technology, such as the Galileo global-positioning system, intended for civilian use. Some Americans were unhappy that this threatened the hegemony of the US-controlled Global Positioning System, which has a large military component. The close links between the US space and military programs have hindered trade of its space technologies with China because of military export controls, whereas in Europe the issues are largely decoupled (Europe maintains a rather precarious arms embargo on trade with China). “The US’s isolationist policy forces other space-faring nations, such as Europe, Japan, Russia, India and China, to cooperate among themselves”, Sabathier asserts.
Fears about how China plans to use its space capabilities cannot be wholly dismissed as paranoia, however. China’s defence spending has increased in recent years, although it is notoriously cagey about the figures. Some worry that strengthening its military force is partly a move to intimidate Taiwan – an objective that could be bolstered by satellite technology. The fact remains, however, that China’s young space program doesn’t have the military legacy of NASA, fueled by an entire industry of defence-based aerospace. At this moment, it looks as though China’s space ambitions are driven more by national pride – by the wish to be seen as a technological world leader. That claim is becoming increasingly justified. Rather than worrying about losing technical secrets, China’s space collaborators seem now more likely to gain some handy tips.
Friday, September 08, 2006
Latest Lab Report
Here is my Lab Report column for the October issue of Prospect. And while I’m about it, I’d like to mention the excellent comment on Prospect’s web site about the shameful issue of Britain’s stance on the Trident nuclear submarines. Sadly, this kind of clear-headedness doesn’t find a voice in Westminster.
*************************
In-flight chemistry
It is not easy to make TNT, as I discovered by boiling toluene and nitric acid to no great effect during a school lunchtime. Admittedly it is not terribly hard either, if you have the right recipe, equipment and ingredients – the details can be found on the web, and the raw materials at DIY stores – but a little practical experience with chemistry provides some perspective on the notion of concocting an aircraft-busting explosive in the cabin toilet.
So it’s not surprising that some chemists have expressed doubts about the alleged terrorist plot to blow up transatlantic flights. Could two liquids really be combined to make an instant, deadly explosive?
Speculation has it that the plotters were going to mix up triacetone triperoxide (TATP), an explosive allegedly used in the London tube bombings last year. In principle this can be made from hydrogen peroxide (bleach), acetone (paint thinner) and sulphuric acid (drain cleaner). But like so much of chemistry, it’s not that straightforward. The ingredients have to be highly concentrated, so can’t easily be passed off as mineral water or shampoo. The reaction needs to be carried out at low temperature. And even if you succeed in making TATP, it isn’t dangerous until purified and crystallized. In other words, you’d be smuggling into the loo not just highly potent liquids but also a refrigerant and distilling apparatus – and the job might take several hours. Gerry Murray of the Forensic Science Agency of Northern Ireland told Chemistry World magazine that making TATP in-flight would be “extremely difficult.”
Why not just smuggle a ready-made liquid explosive on board? Some media reports suggested that the plotters intended instead to use bottled nitroglycerine. But you’d need a lot of it to do serious damage, and it is so delicate that it could well go off during check-in. The same is true for pure TATP itself (a solid resembling sugar), which is why the unconfirmed suggestion that it was used for the tube bombings has met with some scepticism.
What does this mean for the security measures currently in place? It is hard to understand the obsession with liquids and gels. It’s not clear, for example, that there is any vital component of any ‘mixable’ explosive that would be odourless and pass a ‘swig test’, let alone be feasibly used in flight to brew up a lethal charge. Why are solids not subject to the same scrutiny? Most explosives (including TATP) in any case emit volatile fumes that can be detected at very low concentrations.
When airports instigated the ‘no liquids’ policy in August, they were making an understandable quick response to a poorly known threat. But they now seem to be at risk of perpetuating a myth about how easy it is to do complex chemistry.
Moon crash
Smashing spacecraft into celestial bodies has become something of a craze among space scientists. In 1999 they disposed of the Lunar Prospector craft, at the end of its mission to survey the moon for water ice and magnetic fields, by crashing it into a lunar crater in the hope that the impact would throw up evidence of water visible from telescopes. (It didn’t.) The Deep Impact mission ploughed into the comet Tempel 1 in July last year, revealing a puff of ice hidden below the surface. A rocket stage due to be used to send a new satellite to the moon in 2008 has been proposed for a more massive re-run of the Prospector experiment. And the THOR mission pencilled in for 2011 would send a 100-kg copper projectile crashing into Mars, creating a 50-m-wide crater and possibly ejecting ice, organic compounds and other materials.
The most recent of these kamikaze missions is SMART-1, the European Space Agency’s moon-observing satellite, which ended its career on 3 September by smashing into the lunar Lake of Excellence. Again, the aim was to analyse images of the impact to identify the chemical composition of the debris, using the technique of spectroscopy. SMART-1 had already stayed active for longer than originally expected, and its experimental ion-thrust propulsion system was exhausted, making a lunar crash landing inevitable anyway. This was another case of wringing a last bit of value from a moribund mission. The disposal of a washing-machine-sized probe on the moon is hardly the most heinous act of fly-tipping – but it can’t be long before this trend starts to raise mutters of environmental disapproval.
Perhaps we can clear up the mess when we return to the moon. Lockheed Martin has recently been awarded the NASA contract to build the Orion Crew Exploration Vehicle, the replacement for the beleaguered space shuttle and the basis of a new manned moon shot. Scheduled for 2014 at the latest, Orion will ditch the airplane chic of the shuttle, comprising a single-use tubular rocket with a lunar lander and re-entry capsule in its tip, the latter provided with heat shield and parachutes. Lockheed Martin has presumably been working hard on this design, but cynics might suspect they just stole the idea from that film with Tom Hanks in it.
The Macbeth effect
Shakespeare’s insight into the human psyche is vindicated once again. The impulse to wash after committing an unethical act, immortalized in Lady Macbeth’s “Out, damned spot!”, has been confirmed as a genuine psychological phenomenon. Two social scientists say that ‘cleansing-related words’ were more readily produced in exercises by subjects who had first been asked to recall an unethical deed. These subjects were also more likely to take a proffered antiseptic wipe – and, rather alarmingly, such physical cleansing seemed to expunge their guilt and make them less likely to show philanthropic behaviour afterwards. There is nothing particularly godly about cleanliness, then, which is a sign of a guilty conscience cheaply assuaged.
Sunday, September 03, 2006
Unbelievable fiction
In telling us “how to read a novel”, John Sutherland in the Guardian Review (2 September 2006) shows an admirable willingness to avoid the usual literary snobbery about science fiction, suggesting that among other things it can have a pedagogical value. That’s certainly true of the brand of sci-fi pioneered by the likes of Arthur C. Clarke and Isaac Asimov, which took pride in the accuracy of its science. Often, however, sci-fi writers might appropriate just enough real science to make that aspect of the plot vaguely plausible – which is entirely proper for a work of fiction, but not always the most reliable way to learn about science. Even that, however, can encourage the reader to find out more, as Sutherland says.
Sadly, however, he chooses to use the books of Michael Crichton to illustrate his point. Now, Crichton likes to let it be known that he does his homework, and certainly his use of genetic engineering in Jurassic Park is perfectly reasonable for a sci-fi thriller: that’s to say, he stretches the facts, but not unduly, and one has to be a bit of a pedant to object to his reconstituted T. rexes. But Crichton now seems to have succumbed to the malaise that threatens many smart and successful people: forgetting the limits of that smartness. In Prey, Crichton made entertaining use of the eccentric vision of nanotechnology presented by Eric Drexler (self-replicating rogue nanobots), supplemented with some ideas from swarm intelligence, but one’s heart sank when it became clear at the end of the book that in fact Crichton believed this was what nanotech was really all about. (I admit that I’m being generous about the definition of ‘entertaining’ here – I read the book for professional purposes, you understand, and was naively shocked by what passes for characterisation and dialogue in this airport genre. But that’s just a bit of literary snobbishness of my own.)
The situation is far worse, however, in Crichton’s climate-change thriller State of Fear, which portrays anthropogenic climate change as a massive scam. Crichton wants us to buy into this as a serious point of view – one, you understand, that he has come to himself after examining the scientific literature on the subject.
I’ve written about this elsewhere. But Sutherland’s comments present a new perspective. He seems to accept a worrying degree of ignorance on the part of the reader, such that we are assumed to be totally in the dark about whether Crichton or his ‘critics’ (the entire scientific community, aside from the predictable likes of Bjorn Lomborg, Patrick Michaels, Richard Lindzen and, er, about two or three others) are correct. “No one knows the accuracy of what Crichton knows, or thinks he knows”, says Sutherland. Well, we could do worse than consult the latest report by the Intergovernmental Panel on Climate Change, composed of the world’s top climate scientists, which flatly contradicts Crichton’s claims. Perhaps in the literary world one person’s opinion is as good as another’s, but thankfully science doesn’t work that way. Sutherland’s suggestion that readers of State of Fear will end up knowing more about the subject is wishful thinking: misinformation is the precise opposite of information.
It isn’t clear whether or not he thinks we should be impressed by the fact that Crichton testified in 2005 before a US Senate committee on climate change, but in fact this showed in truly chilling fashion how hard some US politicians find it to distinguish fact from fiction. (That State of Fear was given an award for ‘journalism’ by the American Association of Petroleum Geologists earlier this year was more nakedly cynical.)
Yes, fiction can teach us facts, but not when it is written by authors who have forgotten they are telling a story and have started to believe this makes them experts on their subject. That’s the point at which fiction starts to become dangerous.
Saturday, August 26, 2006
Tyred out
Here’s my Materials Witness column for the September issue of Nature Materials. It springs from a recent broadcast in which I participated on BBC Radio 4’s Material World – I was there to talk about synthetic biology, but the item before me was concerned with the unexpectedly fascinating, and important, topic of tyre disposal. It seemed to me that the issue highlighted the all too common craziness of our manufacturing systems, in which potentially valuable materials are treated as ‘waste’ simply because we have not worked out the infrastructure sensibly. We can’t afford this profligacy, especially with oil-based products. I know that incineration has a bad press, and I can believe that is sometimes deserved; but surely it is better to recover some of this embodied energy rather than to simply dump it in the nearest ditch?
*****
In July it became illegal to dump almost any kind of vehicle tyres in landfill sites in Europe. Dumping of whole tyres has been banned since 2003; the new directive forbids such disposal of shredded tyres too. That is going to leave European states with an awful lot of used tyres to dispose of in other ways. What can be done with them?
This is a difficult question for the motor industry, but also raises a broader issue about the life cycle of industrial materials. The strange thing about tyres is that there are many ways in which they could be a valuable resource, and yet somehow they end up being regarded as toxic waste. Reduced to crumbs, tyre rubber can be incorporated into soft surfacing for sports grounds and playgrounds. Added to asphalt for road surfaces, it makes the roads harder-wearing.
And rubber is of course an energy carrier: a potential fuel. Pyrolysis of tyres generates gas and oil, recovering some of the carbon that went into their making. This process can be made relatively clean – certainly more so than combustion of coal in power stations.
Alternatively, tyres can simply be burnt to create heat: they have 10% more calorific content than coal. At present, the main use of old tyres is as fuel for cement kilns. But the image of burning tyres is deeply unappealing, and there is opposition to this practice from environmental groups, who dispute the claim that it is cleaner than coal. Such concerns make it hard to secure approval for either cement-kiln firing or pyrolysis. And the emissions regulations are strict – rightly so, but this reduces the economic viability. As a result, these uses tend to be capacity-limited.
Tyre retreads have a bad image too – they are seen as second-rate, whereas the truth is that they can perform very well and the environmental benefits of reuse are considerable. Such recycling is also undermined by cheap imports – why buy a second-hand tyre when a new one costs the same?
Unfortunately, other environmental concerns are going to make the problem of tyre disposal even worse. Another European ruling prohibits the use of polycyclic aromatic hydrocarbon oil components in tyre rubber because of their carcinogenicity. It’s a reasonable enough precaution, given that a Swedish study in 2002 found that tyre wear on roads was responsible for a significant amount of the polycyclic aromatics detected in aquatic organisms around Stockholm. But without these ingredients, a tyre’s lifetime is likely to be cut to perhaps just a quarter of its present value. That means more worn-out tyres: the current 42 million tyres discarded in the UK alone could rise to around 100 million as a consequence.
Whether Europe will avoid a used-tyre mountain remains to be seen. But the prospect of an evidently useful, energy-rich material being massively under-exploited seems to say something salutary about the notion that market economics can guarantee efficient materials use. Perhaps it’s time for some incentives?
Sunday, August 06, 2006
Star treatment
Am I indulging in cheap ‘kiss & tell’ by musing on news@nature about my meeting with Madonna? Too late now for that kind of soul-searching, but in any case I figured that (1) this is now ancient history; (2) she’s talked about her interest in ‘neutralizing nuclear waste’ to Rolling Stone; and (3) I’ve no interest in trying to make a famous person sound silly. As far as I’m concerned, it’s great that some people with lots of money will look into ways of investing it philanthropically. But I did feel some obligation to suggest to her that this scheme did not seem like a particularly good investment. After all, part of the reason why she asked me over was to proffer advice (at least, I hope so – I’d no intention of acting simply as the PR officer).
The point I really wanted to make in this article, however, is how perpetually alluring these cultural myths of science are. Once you start to dig into the idea that radioactivity can be ‘neutralized’, it’s astonishing what is out there. My favourite is Brown’s gas, the modern equivalent of a perpetual-motion machine (actually a form of electrolysed water, though heaven forbid that we should suggest it is hydrogen + oxygen). None of this, however, is to deny that radioactive half-lives really can be altered by human means – but by such tiny amounts that there doesn’t seem much future, right now, in that dream of eliminating nuclear waste. So as I say in the article, it seems that for now we will have to learn to live with the stuff. Keith Richards does – apparently he drinks it. In his case it's just a nickname for his favourite cocktail of vodka and orange soda. But as everyone knows, Keith can survive anything.
Wednesday, August 02, 2006

Numerology
… is alive and well, and living somewhere between Chartres cathedral and Wall Street. I am one of the few remaining humans not to have read The Da Vinci Code, but it seems clear from what I have been told that it has given our collective system another potent dose of the Fibonacci virus. This is something that, as the author of a forthcoming book about Chartres, I knew I’d have to grapple with at some point. That point came last week, in the course of a fascinating day in York exploring aspects of Gothic in a summer school for architects. I’m too polite to mention names, but a talk in the evening on ‘sacred geometry’ was, for this naïve physicist, an eye-opener about the pull that numerology continues to exercise. There is plenty of room for healthy arguments about the degree to which the Gothic cathedrals were or weren’t designed according to Platonic geometry, and there will surely be no end to the time-honoured practice of drawing more or less fanciful geometric schemes on the ground plan of Chartres using thick pencil to reveal the builders’ ‘hidden code’. John James is a past master of this art, while Nigel Hiscock is one of the few to make a restrained and well-argued case for it.
But the speaker last week was determined to go well beyond Chartres, by revealing divine geometry in a teleological universe. Most strikingly, he suggested that many of the planetary orbits, when suitably ‘corrected’ to get rid of the inconvenient eccentricity, become circles that can be inscribed or circumscribed on a variety of geometric figures with uncanny accuracy. One such, relating the orbits of the Earth and Venus, is shown in Figure a. The claim was that these ratios are extraordinarily precise. In this case, for example, the orbits were asserted to fit the construction to better than 99% precision.
This seemed to me like a wonderful exercise to set A level students: how might one assess such claims? I decided to do that for myself. The Earth/Venus case is a relatively easy one to test: the ratio of the two ‘circular’ orbits should be equal to the square root of 2, which is approximately 1.414. Now, there is a question of exactly what is the right way to ‘circularize’ an elliptical orbit, but it seems to me that the most reasonable way is to use the average distances of the two planets from the sun – the mean of the semi-major and semi-minor axes of each ellipse. This apparently gives 149,476,000 km for Earth (to 6 s.f.) and 108,209,000 km for Venus. That gives us a ratio of 1.381. Not within 99% of root 2, then – but not bad, only out by about 2.4%. (I’m sure ‘sacred geometers’ will insist there is a ‘better’ way to circularize the orbits, but I think it would be hard to find one that is as neutral as this.)
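For anyone who wants to repeat the check, here is a minimal sketch in Python. It simply takes the two ‘circularized’ radii quoted above as given and compares their ratio with the square root of 2; nothing in it depends on any particular choice of source for the orbital data.

```python
# A quick check of the Earth/Venus 'orbit ratio' claim, using the mean of each
# planet's orbital semi-axes as its 'circularized' radius. The radii below are
# simply the figures quoted in the text; treat them as approximate.
import math

earth_radius = 149_476_000   # km, 'circularized' Earth orbit (as quoted above)
venus_radius = 108_209_000   # km, 'circularized' Venus orbit (as quoted above)

ratio = earth_radius / venus_radius
target = math.sqrt(2)        # the ratio implied by the square-in-circle construction

print(f"measured ratio : {ratio:.4f}")
print(f"sqrt(2)        : {target:.4f}")
print(f"discrepancy    : {abs(target - ratio) / ratio:.1%}")
```

Running this gives a ratio of about 1.381 against 1.414, a discrepancy of roughly 2.4%, as stated above.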
How do we know if this near-coincidence is mere chance or not? Well, a relatively simple test is to consider all the geometric figures of the type shown by the speaker, and see how much of numerical space is spanned by the corresponding ratios – given a leeway of, say, 3% either way, to allow for the fact that (as was explained to me) the real world lacks pure Platonic perfection. So I did this, considering just the inscribed and circumscribed circles for the regular polygons up to the hexagon, along with a couple of others in which these polygons are adorned with equilateral triangles on each side (see Figure b). (I know the latter look a little contrived, but one of them was used in this context in the talk.) I’m sure one can come up with several other ‘geometric’ figures of this kind, but this seemed like a reasonable minimal set. The ratios concerned then cover the space between 1 and 2. With the exception of Mars/Jupiter, all of the planetary orbits produce a ratio within this range when we consider each planet in turn and the next one beyond it.
Now, at the low end of the range (close to 1), one can get more or less any number by using a sufficiently many-sided polygon. For hexagons, the two circles produce a ratio range of 1.12 to 1.19, allowing for 3% variation each way. And in any event, while I don’t know the exact number, it seems highly likely from the dynamics of solar-system formation that one can’t get two orbits too close together – I suspect a lower limit on the radius ratio of something like 1.1.
OK, so adding up all the ranges covered by these figures leaves us with just 32% or so of the range between 1 and 2 not included. In other words, draw two circles at random with a radius ratio of between 1 and 2, and there is a two in three chance that you can fit this ratio to one of these geometric figures with 3% precision. With seven pairs to choose from in the solar system, we’d expect roughly 4-5 of them to ‘fit’.
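Here is a rough sketch of that coverage estimate, using only the inscribed and circumscribed circles of the regular polygons from the triangle to the hexagon (for a regular n-gon the ratio of the two radii is 1/cos(π/n)). The ‘adorned’ figures mentioned above would add further intervals, so this gives a lower bound on the covered fraction rather than reproducing the full calculation.

```python
# Estimate how much of the ratio range [1, 2] is 'covered' by the
# circumscribed/inscribed circle ratios of regular polygons, each given a
# +/-3% leeway. Only the plain polygons (triangle to hexagon) are included;
# the starred figures discussed in the post would add further intervals.
import math

LEEWAY = 0.03
LO, HI = 1.0, 2.0

def circle_ratio(n_sides):
    """Circumscribed-to-inscribed circle radius ratio for a regular n-gon."""
    return 1.0 / math.cos(math.pi / n_sides)

intervals = []
for n in range(3, 7):                       # triangle, square, pentagon, hexagon
    r = circle_ratio(n)
    lo, hi = r * (1 - LEEWAY), r * (1 + LEEWAY)
    intervals.append((max(lo, LO), min(hi, HI)))

# Merge any overlapping intervals and add up the total covered length.
intervals.sort()
merged = []
for lo, hi in intervals:
    if merged and lo <= merged[-1][1]:
        merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
    else:
        merged.append((lo, hi))

covered = sum(hi - lo for lo, hi in merged)
print(f"fraction of [1, 2] covered by these figures alone: {covered / (HI - LO):.0%}")
```

With just these four figures, roughly 29% of the interval is covered; the extra starred figures used in the talk account for the rest of the roughly two-thirds coverage quoted above.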
It took me less than an hour to figure this out using school maths. The speaker last week was hardly lacking in arithmetical skills, but it seems not to have occurred to him to test his ‘coincidences’ in this way. I can only understand that in one way: these numerological arguments are ones he desperately wants to believe, to a degree that blinds him to reason.
That was borne out by other statements about the ‘foolishness’ of science that would have been disproved with the most minimal of checking (such as that scientists discovered in 1987 that crystals with five-fold symmetry are possible, while Islamic artists had known that for centuries). The only explanation I can see is that of judgement being clouded by an anti-science agenda: we rarely question ‘facts’ that fit our preconceptions. I have to confess that I do find it troubling that educated people develop such an antipathy to science, and such a desperation to believe in some cosmic plan that generally turns out to be remarkably banal (whereby God fills nature with cheap number tricks), that they abandon all their critical faculties to embrace it – and to serve it up in a seemingly authoritative manner to people who don’t necessarily have the resources to assess it. I’d like to recommend to them the words of Adelard of Bath, who suggests that the intellectuals of the twelfth century had a rather more astute grip on these matters than we sometimes do today: “I do not detract from God. Everything that is, is from him, and because of him. But [nature] is not confused and without system, and so far as human knowledge has progressed it should be given a hearing. Only when it fails utterly should there be recourse to God.”
Wednesday, July 26, 2006
Genetic revisionism
Is it time for a thorough re-evaluation of how the genome is structured and how it operates? Recent work seems to be hinting that that could be so, as I’ve argued in an article in the August issue of Prospect, reproduced below (with small changes due to editing corrections that somehow got omitted). The idea is further supported by very recent work in Nature from Eran Segal at the Weizmann Institute and Jonathan Widom of Northwestern, which claims to have uncovered a kind of genomic code for DNA packing in nucleosomes. I suspect there is much more to come – this latest work already raises lots of interesting questions. Is it time to start probing the informational basis of the mesoscale structure of chromatin, which is clearly of great importance in transcription and regulation? I hope so. Watch this space.
[From Prospect, August 2006, p.14, with minor amendments]
The latest findings in genetics and molecular biology are revealing the human genome, the so-called “book of life”, to be messy to the point of incomprehensibility. Each copy is filled with annotations that change the meaning, there are some instructions that the book omits, the words overlap, and we don’t have a clue about the grammar.
Or maybe we simply have the wrong metaphor. The genome is no book, and the longer we talk about it in that way, the harder it will be to avoid misconceptions about how genes work.
But it’s not just the notion of the genome as a “list of parts” that now appears under threat. The entire central dogma of genetics—that a gene is a self-contained stretch of a DNA molecule that encodes instructions for making an enzyme, and that genetic inheritance works via DNA—is now under revision. It is not that this picture is wrong, but it is certainly incomplete in ways that are challenging the textbook image of how genes work.
Take, for example, the discovery in May by a team of French researchers that mice can show the physiological expression (the phenotype) of a genetic mutation that produces a spotty tail even if they don’t carry the mutant gene. Minoo Rassoulzadegan’s group in Nice found that the spotty phenotype can be induced by molecules of RNA that are passed from sperm to egg and then affect (in ways that are not understood) the operation of genes in the developing organism.
RNA is not normally regarded as an agent of inheritance. It is the ephemeral intermediate in the translation of genetic information on DNA to protein enzymes. According to the standard model of genetics, a gene on DNA is “transcribed” into an RNA molecule, which is then “translated” into a protein. In the book metaphor, the RNA is like a word that is copied from the pages of the genome and sent to a translator to be converted into the “protein” language.
The RNA transcripts are thrown away once they have been translated—they aren’t supposed to ferry genetic information between generations. Yet it seems that sometimes they can. That is profoundly challenging to current ideas about the role of genes in inheritance. It is not exactly a Lamarckian process (in which characteristics acquired by an organism through its interaction with the environment may be inherited)—but it is not consistent with the usual neo-Darwinian picture.
“Inheritance” via RNA is an example of so-called epigenetic variation: a change in an organism’s phenotype that is not induced by a change in its genome. Rassoulzadegan thinks that this offers nature a way of “trying out” mutations in the Darwinian arena without committing to the irreversible step of altering the genome itself. “It may be a way to induce variations quickly without taking the risk of changing primary DNA sequences,” he says. “Epigenetic variations in the phenotype are reversible events, and probably more flexible.” He also suspects they could be common.
There is even evidence that genetic mutations can be “corrected” in subsequent generations, implying that back-up copies of the original genetic information are kept and passed on between generations. This all paints a more complex picture than the standard neo-Darwinian story of random mutation pruned by natural selection.
Epigenetic influences on the development of organisms have been known for a long time. For example, identical twins with the same genomes don’t necessarily have the same physical characteristics or disposition to genetic disease. It’s clear that the influence of genes may be altered by their environment: a study in 2005 showed that differences in the activity of genes in identical twins become more pronounced with age, as the messages in the genome get progressively more modified. These “books” are constantly being revised and edited.
One way in which this happens is by chemical modification of DNA, such as the attachment of tags that “silence” the genes. These modifications can be strongly influenced by environmental factors such as diet. In effect, such epigenetic alterations constitute a second kind of code that overwrites the primary genetic instructions. An individual’s genetic character is therefore defined not just by his or her genome, but by the way it is epigenetically annotated. So rapid genetic screening will provide only part of the picture for the kind of personalised medicine that has been promised in the wake of the genome project: merely possessing a gene doesn’t mean that it is “used.”
Even the basic idea of a gene as a self-contained unit of biological information is now being contested. It has been known for 30 years that a single gene can encode several different proteins: the genetic information can be reshuffled after transcription into RNA. But the situation is far more complex than that. The molecular machinery that transcribes DNA doesn’t just start at the beginning of a gene and stop at the end: it seems regularly to overrun into another gene, or into regions that don’t seem to encode proteins at all. (A sobering 98-99 per cent of the human genome consists of such non-coding DNA, often described as “junk.”) Thus, many RNA molecules aren’t neat copies of genetic “words,” but contain fragments of others and appear to disregard the distinctions between them. If this looks like sloppy work, that may be because we simply don’t understand how the processes of transcription and translation really operate, and have developed an over-simplistic way of describing them.
This is backed up by observations that some RNA transcripts are composites of “words” from completely different parts of the genome, as though the copyist began writing down one word, then turned several pages and continued with another word entirely. If that’s so, one has to wonder whether the notions of copying, translation and books really have much value at all in describing the way genetics works. Worse, they could be misleading, persuading scientists that they understand more than they really do and tempting them towards incorrect interpretations. Already there are disagreements about precisely what a gene is.
In 2002 US scientists reported that the known protein-encoding regions of the genome account for perhaps less than a tenth of what gets transcribed into RNA. It is hard to imagine that cells would bother with the energetically costly process of making all that RNA unless they needed to. So apparently genetics isn’t all, or perhaps even primarily, about making proteins from genetic instructions. “The concept of a gene may not be as useful as it once was”, admits Thomas Gingeras of the biotech company Affymetrix. He suggests that a gene may be not a piece of DNA but a collective phenomenon involving a whole group of protein-coding and non-coding RNA transcripts. Perhaps these transcripts, not genes in the classical sense, are the fundamental functional units of the genome.
Scientists are constantly trying to express what they discover using concepts that are already familiar—not only when they communicate outside their field, but also when they talk among themselves. This is natural, and probably essential in order to gain a foothold on the slopes of new knowledge. But there’s no guarantee that those footholds are the ones that will lead us in the right direction, or anywhere at all. Relativity and quantum mechanics are still, after 100 years, deemed difficult and mysterious, not because they truly are but because we don’t really possess any good metaphors for them: the quantum world is not a game of billiards, the universe is not an expanding balloon and a light beam is not like a bullet train. Genetics, in contrast, looked accessible, because we thought we knew how to talk about information: that it is held as discrete, self-contained and stable packages of meaning that may be kept in data banks, copied, translated. Meaning is constructed by assembling those entities into linear strings organised by grammatical rules. The delight that accompanied the discovery of the structure of DNA half a century ago was that nature seemed to use this model too.
And indeed, the notion of information stored in the genome, passed on by copying, and read out as proteins, still seems basically sound. But it is looking increasingly doubtful that nature acts like a librarian or a computer programmer. It is quite possible that genetic information is parcelled and manipulated in ways that have no direct analogue in our own storage and retrieval systems. If we cleave to simplistic images of books and libraries, we may be missing the point.
Wednesday, July 12, 2006

Stormy Starry Night
Did Vincent van Gogh have a deep intuition for the forms of turbulence? That's what has been suggested by a recent mathematical analysis of the structure of his paintings. It seems that these display the statistical fingerprint of genuine turbulence – but only when the artist was feeling particularly turbulent himself. Well, maybe – I think that conclusion will have to await a more comprehensive analysis of the paintings. All the same, it is striking that other artists noted for their apparently turbulent canvases, such as Turner and Munch, don't seem to capture this same statistical signature in the correlations between patches of light and shade (I've received an analysis of Turner's stormy Mouth of the Seine, which confirms that this is so). Did van Gogh, then, achieve what Leonardo strove towards in his depictions of flowing water? I'll explore this further in my forthcoming new version of my 1999 book on pattern formation, The Self-Made Tapestry.
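For anyone curious about what such an analysis involves, here is a toy sketch of the basic statistic: the distribution of luminance differences between points a given distance apart in a digitized image, whose change of shape with separation is what gets compared against the statistics of real turbulent flows. This is only an illustration of the general idea, not the published method, and ‘starry_night.jpg’ is a placeholder for whatever scan you have to hand.

```python
# A toy version of the statistic in question: sample luminance differences
# between pixel pairs separated by a distance r, for several values of r, and
# see how the distributions change shape (heavy tails at small r are one of
# the hallmarks of turbulence-like statistics).
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
img = np.asarray(Image.open("starry_night.jpg").convert("L"), dtype=float)

def luminance_differences(image, r, n_samples=200_000):
    """Sample luminance differences between pixel pairs a horizontal distance r apart."""
    h, w = image.shape
    ys = rng.integers(0, h, n_samples)
    xs = rng.integers(0, w - r, n_samples)
    return image[ys, xs + r] - image[ys, xs]

for r in (2, 8, 32, 128):
    d = luminance_differences(img, r)
    kurt = ((d - d.mean()) ** 4).mean() / d.var() ** 2   # large kurtosis = heavy tails
    print(f"separation r={r:4d}px   std={d.std():6.2f}   kurtosis={kurt:5.2f}")
```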
Wednesday, June 21, 2006
Who’s afraid of nanoparticles?
Lots of people, it seems. They have the potential to become the new DDT or dioxins or hormone mimics, the invisible ingredients of our environment and our synthetic products that are suspected of wreaking biochemical havoc. I don’t say the notion is ridiculous, but we need to keep it in proportion. The Royal Society/RAE report on the hazards and ethics of nanotechnology did a good job of giving some perspective on the concerns: we shouldn’t take it for granted that nanoparticles are safe (even if their bulkier counterparts are), but neither are we totally ignorant about exposure to such particles, and it is unlikely that we can make many generalizations about their health risks. Certainly, we need legislation to stop these tiny grains from slipping through current health & safety regulations.
My recent article on the damaging effects that titania nanoparticles apparently have on mouse microglia, the defensive cells of the brain, will probably be welcomed as ammunition by those who want a moratorium on all nanoparticle research. It does not give grounds for that – this research is a long way from establishing actual neurotoxicity – but it does give pause for thought about nanoparticle sun creams. I rather suspect this stuff is not going to be a major hazard, but we can’t be sure of that, and I confess that I’d prefer to avoid them this summer.
For various mundane reasons, some comments by the EPA researchers involved in this work didn’t make it into the article. But they help to put the implications in perspective, and so I’m posting them here:
Responses from Dr. Bellina Veronesi
Question: Apart from sun creams, which consumer products that involve contact with the human body currently use titania nanoparticles?
Answer: Cosmetics, prosthetics (artificial joints, for example)
Question: You mention toothpaste and cosmetics - do you know of specific examples of these?
Answer: Most product labels would probably not note if a given chemical concentration was in the “nano” range. Often times, the titanium oxide is listed in the ingredients, but is not identified as "nano-." More specific information might be considered to be confidential business information (CBI) and not available.
Question: I'm finding it hard to see from the paper exactly how long the production of ROS tended to continue for after the microglia were exposed.
Answer: Over a 120 minute period, which was the extent of our measurements.
Question: It seems that the worry is not about the response per se, but that it is sustained.
Answer: The concern from the neurobiology/neurotoxicology point of view is that a cell type (the microglia), whose job it is to react to offending foreign stimuli in the brain by releasing free radicals (ROS), is doing just that in response to nanosize Titanium dioxide. If those free radicals are not neutralized by anti-oxidants present in the brain (Vitamin C, Vitamin E, super oxide dismutase), they can damage neurons.
But remember, these measurements were made in isolated microglia, so we can't yet say if it is neurotoxic. Rather, the next step would be to examine the consequences of ROS release in a more complex culture system consisting of mixtures of brain cells, including microglia and neurons. Based on those findings, we would then test in animals.
Question: What do we know about how such nanoparticles might get transported around the body? Can you say anything about the chances of them reaching the brain?
Answer: Experts such as Dr. Wolfgang Kreyling (GSF Institute for Inhalation Biology (Munich)) have shown that nanosize particles, such as TiO2, can leave the lungs of exposed animals and distribute to other organs. However, it is still undetermined whether TiO2 can cross the blood brain barrier and enter the brain.
Question: Can you say anything about whether the concentrations you studied might be realistic in terms of exposure levels?
Answer: It is not “good science” to extrapolate in vitro data to whole animal/human response. There are many obligatory steps/test models that must be tested first. Similarly, our study was not designed to assess whether the test concentrations used in the cell culture studies have relevance to those found in consumer products.
Question: How worrying are the results at this point, given that they are not in vivo studies?
Answer: This was a carefully designed study that followed a format prescribed in the nanoparticle scientific literature (Nel et al., Science, 2006). That format entails moving from cell culture to animal testing in a tiered fashion. We are examining further the possibility that TiO2 may be neurotoxic in culture. If these results prove positive, we will adhere to the format and next test in more complex culture models that use neurons or dissociated whole brain. Results of these studies will determine if animal studies should be pursued.
Question: What are the major uncertainties about how the findings might translate to humans, and what are the next steps?
Answer: This study exposed TiO2 to isolated brain cells taken from a mouse. Within the confines of this model, it would be speculative to say what effects would occur in human cells, let alone a human being. Such a prediction requires an extremely lengthy course of testing, involving successively more complicated experimental models. As I noted in my previous answer, we will follow a format that allows for such sequential research.
Question: How do you feel about the fact that titania nanoparticles are currently in use in consumer products? Would you want to use such products yourself?
Answer: Nano-size TiO2 has been in commercial use/multiple routes of human exposure for several years, providing great benefits without incident. Numerous already published studies give TiO2 (nanosize, larger size) a clean bill of health.
The uniqueness of this study is that we are looking at the response of cells with very high resolution, state-of-the-art measurements. Again, this is the initial stage of a very lengthy experimental process the findings of which will provide better insight and guidance related to the use of such products.
Lots of people, it seems. They have the potential to become the new DDT or dioxins or hormone mimics, the invisible ingredients of our environment and our synthetic products that are suspected of wreaking biochemical havoc. I don’t say the notion is ridiculous, but we need to keep it in proportion. The Royal Society/RAE report on the hazards and ethics of nanotechnology did a good job of giving some perspective on the concerns: we shouldn’t take it for granted that nanoparticles are safe (even if their bulkier counterparts are), but neither are we totally ignorant about exposure to such particles, and it is unlikely that we can make many generalizations about their health risks. Certainly, we need legislation to stop these tiny grains from slipping through current health & safety regulations.
My recent article on the damaging effects that titania nanoparticles apparently have on mouse microglia, the defensive cells of the brain, will probably be welcomed as ammunition by those who want a moratorium on all nanoparticle research. It does not give grounds for that – this research is a long way from establishing actual neurotoxocity – but it does give pause for thought about nanoparticle sun creams. I rather suspect this stuff is not going to be a major hazard, but we can’t be sure of that, and I confess that I’ll prefer to avoid them this summer.
For various mundane reasons, some comments by the EPA researchers involved in this work didn’t make it into the article. But they help to put the implications in perspective, and so I’m posting them here:
Responses from Dr. Bellina Veronesi
Question: Apart from sun creams, which consumer products that involve contact with the human body currently use titania nanoparticles?
Answer: Cosmetics, prosthetics (artificial joints, for example)
Question: You mention toothpaste and cosmetics - do you know of specific examples of these?
Answer: Most product labels would probably not note if a given chemical concentration was in the “nano” range. Often times, the titanium oxide is listed in the ingredients, but is not identified as "nano-." More specific information might be considered to be confidential business information (CBI) and not available.
Question: I'm finding it hard to see from the paper exactly how long the production of ROS tended to continue for after the microglia were exposed.
Answer: Over a 120 minute period, which was the extent of our measurements.
Question: It seems that the worry is not about the response per se, but that it is sustained.
Answer: The concern from the neurobiology/neurotoxicology point of view is that a cell type (the microglia), whose job it is to react to offending foreign stimuli in the brain by releasing free radicals (ROS), is doing just that in response to nanosize Titanium dioxide. If those free radicals are not neutralized by anti-oxidants present in the brain (Vitamin C, Vitamin E, super oxide dismutase), they can damage neurons.
But remember, these measurements were made in isolated microglia, so we can't yet say if it is neurotoxic. Rather, the next step would be to examine the consequences of ROS release in a more complex culture system consisting of mixtures of brain cells, including microglia and neurons. Based on those findings, we would then test in animals.
Question: What do we know about how such nanoparticles might get transported around the body? Can you say anything about the chances of them reaching the brain?
Answer: Experts such as Dr. Wolfgang Kreyling (GSF Institute for Inhalation Biology (Munich)) have shown that nanosize particles, such as TiO2, can leave the lungs of exposed animals and distribute to other organs. However, it is still undetermined whether TiO2 can cross the blood brain barrier and enter the brain.
Question: Can you say anything about whether the concentrations you studied might be realistic in terms of exposure levels?
Answer: It is not “good science” to extrapolate in vitro data to whole animal/human response. There are many obligatory steps/test models that must be tested first. Similarly, our study was not designed to assess whether the test concentrations used in the cell culture studies have relevance to those found in consumer products.
Question: How worrying are the results at this point, given that they are not in vivo studies?
Answer: This was a carefully designed study that followed a format prescribed in the nanoparticle scientific literature ((Nel et al., Science 2006) That format entails moving from cell culture to animal testing in a tiered fashion. We are examining further the possibility that TiO2 may be neurotoxic in culture. If these results prove positive, we will adhere to the format and next test in more complex culture models that use neurons or dissociated whole brain to determine. Results of these studies will determine if animal studies should be pursued.
Question: What are the major uncertainties about how the findings might translate to humans, and what are the next steps?
Answer: This study exposed TiO2 to isolated, brain cells taken from a mouse. Within the confines of this model, it would be speculative to say what the effects would occur in human cells, let alone a human being. Such a prediction requires an extremely lengthy course of testing, involving successively more complicated experimental models. As I noted in my previous answer, we will follow a format that allows for such sequential research.
Question: How do you feel about the fact that titania nanoparticles are currently in use in consumer products? Would you want to use such products yourself?
Answer: Nano-size TiO2 has been in commercial use, with multiple routes of human exposure, for several years, providing great benefits without incident. Numerous previously published studies give TiO2 (nanosize and larger) a clean bill of health.
The uniqueness of this study is that we are looking at the response of cells with very high-resolution, state-of-the-art measurements. Again, this is the initial stage of a very lengthy experimental process, the findings of which will provide better insight and guidance related to the use of such products.
Wednesday, June 14, 2006
To boldly go…?
My Nature article on NASA’s manned spaceflight program has drawn some flak, as I suspected it might. That’s good – it is only by compelling people to voice their arguments for such a goal, rather than taking its validity as a self-evident truth, that we can assess them. And how woeful they can seem. Certainly, I find it depressing when people suggest that the only way the human race can survive is to get off the planet as soon as possible. I’m struck also by the difference in attitude between the US and the rest of the world – it really does seem as though the national narrative of frontiers and pioneers in the States shapes so much of public thinking in a way that just doesn’t resonate elsewhere. I don’t want to be too critical of that – it’s surely driven some of the great American achievements of modern times. But I do wonder whether it has led to distortions of history all the way from Columbus’s voyage to the Apollo missions – for example, a failure to appreciate how much commerce and military dominance have played a part in such events. That's certainly brought out in the gobsmacking article by Michael Lembeck mentioned by one of the correspondents - read it and weep. I've responded to that on the Nature site.
Brian Enke makes a new point, however: that manned spaceflight is needed to keep the public on board. I took this up with Brian, and we've had what I think is a productive exchange (certainly more so than some of the stuff posted on the Nature weblog). Here it is:
Dear Brian,
Your comment on the Nature weblog raises an interesting point that I'd not heard put before: that a manned space program is necessary to keep the public on board in funding space science. As a pragmatic expedient, I could accept that. If the only way public funding of space science can be sustained is to provide the public with the justification that ultimately the aim is to get us ‘out there’, then so be it. But that would be a sad state of affairs, and one that I'd find somewhat intellectually dishonest. It may be that some space researchers do indeed feel this is the appropriate end goal, but I suspect that just as many would argue that it is right to fund intellectual inquiry into the universe and our place in it just as it is right to fund any other kind of basic 'big' science, or indeed to fund research on history or linguistics or to give money to the arts. High-energy physics does not need to appeal to any great mission beyond that of finding out about the world. Space science poses equally grand questions, and these ought to be justification enough. Certainly, I find it highly disingenuous to argue (as some have done, though you have not) that manned spaceflight is warranted for economic reasons such as space mining and tourism. I take the point that, in the current climate, science has to be able to 'sell itself', but I'd prefer that we be as honest as possible about why it is being done.
I'm not against manned spaceflight per se. Indeed, I think we need it for good space science: the Hubble repair showed just why it is valuable to be able to put people into low-Earth orbit. And if we could put a person on Mars tomorrow, I'd be hugely excited. I simply don't see the latter as such a priority that it needs so much of NASA's budget channelled in that direction. This is really the point: the science is suffering, and for the sake of a politically motivated program. I agree that it would be wonderful for NASA to be given enough money to do all things well (I can't say I'd consider the money well spent that would go on a manned mission to the moon or Mars, but I'd rather that than have it spent on defence). But this isn't going to happen, it seems. With limited resources, it seems a tragedy to have to devote so much of them to what is basically a PR exercise.
The international scientific community owes NASA a huge debt of gratitude for the fantastic things it has done. But it does seem to me that Mike Griffin is now having to practise some realpolitik that goes against his inclinations, and that seems a shame.
Best wishes,
Phil
Hi Phil -
Thanks for the comments - it's always exciting to have a good dialogue on these issues. Most of the time, people end up stomping around, posturing, blind to any alternative viewpoints, and ultimately frustrated for that very reason.
I was simply pointing out how things are, motivationally, rather than how they "should be" (if anyone could ever possibly agree on that). I hear your words, respect your opinion, and agree with most of it... but I personally don't have a problem with science being MORE than a mere intellectual pursuit. If science can lead to something tangible, so much the better. That's how you tie into real funding... make a business case for the science. Otherwise, we're left over with the dregs - mere stipends that keep a program alive but lacking inspiration and excitement. Off my philosophy box... ;)
Here's a major point for you to consider quite carefully. You refer to "the NASA budget" below, as is common... but what exactly IS "the NASA budget". You can take the stipend view - our government doles out X amount of dollars to create Y number of high-tech jobs, and what comes out of it really doesn't matter. In that sad state of affairs, one science job equals one engineering job, more or less... and one science dollar equals one human exploration dollar. Again, what is accomplished doesn't matter - it's all a line-item in the federal budget - a happy-pill to placate the masses. OR you can take (IMHO) a far more progressive view - the NASA budget is a combination of individual programs, each of which has meaning and stands strongly on its own merits. Sounds much more honest, right? But watch out - here's the trap - in that progressive view, taking a dollar away from human spaceflight doesn't necessarily add a dollar to space science research. They are individual programs - and the budgets must (eventually) be justified individually. Taking a dollar from human spaceflight eliminates a dollar from the federal deficit. That's it, period.
As we all know, in reality, NASA is a combination of the two approaches... and it IS likely that taking a dollar from human spaceflight will (in the short-term) add a dollar, or 50 cents, or a dime, or whatever to space science. This effect is little more than a local optimization within the government bureaucracy. Long term, it's obvious that everyone at NASA loses, as I pointed out in my initial comments.
There's one other trap in the above paradigm.... or rather a juicy angle for us to exploit. If each program truly stands on its own merits, then a dollar of space science funding equals a dollar of hurricane Katrina relief or Iraq whatever-they're-doing-over-there. Better yet, it's a dollar of Medicare fraud. In this budget-view, there is ZERO reason to take money away from human spaceflight. Bob Park is notorious for preaching a flawed, divisive premise - he wants to fund space science at the explicit expense of the human spaceflight community. Winners vs Losers. I'm going to eat your cake because I like cake, and your piece looks really yummy. Why did he decide to pick on human spaceflight? Did a manned rocket land on his cat or something? Why isn't he on a rampage against Medicare - which wastes nearly twice the entire NASA budget every year funding fraudulent claims?
A robust space industry is either a national priority or it isn't. If space is important, we'll have plenty of funds for space science AND human exploration. If not, we won't. So... I'd rather dedicate my personal efforts toward increasing public awareness of the importance of space science AND exploration. That way, we all win.
Cheers,
- Brian
Dear Brian,
You almost persuade me. That's to say, if NASA budgeting really isn't a simplistic zero-sum game, then I can see the case for why I shouldn't be too concerned if the US administration wants to spend billions on a program that doesn't strike me as having much intrinsic value at this point in time. But it does seem clear that increases in spending on manned spaceflight have taken money from science projects, even if not in a directly zero-sum manner. I do understand and accept your point, however, that providing a vision that captivates the public might in the long term mean that the science gains too.
I suppose I also can't suppress some concern that that vision can lead to snowballing rhetoric that some people take seriously, so that we end up with chaps like the one on the Nature weblog who genuinely believe that humanity is doomed if we don't get on and colonize the moon or Mars quickly. That seems a bit unhealthy.
In any event, thanks very much for your considered response - it has given me something to think about.
With best wishes,
Phil
Hi Phil -
I've truly enjoyed our exchange, and I admire your open-mindedness. Perhaps you even want to throw in a mention of my recent science fiction novel, Shadows of Medusa? It turned out to be a fun read, I'm happy to say, and it touches on the theme we've been discussing - that science and human exploration can co-exist (though I take some fictional liberties and digressions, heh heh). If anyone out there wants to order the book, they should go through the website, though... because the full publisher price on Amazon is a total rip-off.
And I agree - there are some reasons for exploring and settling space thrown about occasionally that I don't personally agree with either. Sometimes you have to step back and say, "hmmm... no thanks." But on the other hand, most of the time the people with the opinions strongly believe them, and one has to be extremely cautious and respectful when stepping on people's beliefs. I suppose that's MY belief. :)
In fact, tying that position into the whole government budget and zero-sum game matter... I don't personally believe the government should be spending MY money (there's a charged term you hear all the time) on certain things that I don't see much intrinsic value in. And I reserve the right to complain about it occasionally. But I do have to recognize and accept that for every single government dollar spent, someone out there somewhere feels very strongly that it's being spent well.
So in the end, perhaps "almost" being persuaded by someone is a good thing, if it leads to a healthy dose of tolerance.
Oh, one more parting thought, spurred by what you noted below: "it does seem clear that increases in spending on manned spaceflight have taken money from science projects". Agreed again - it shouldn't be that way in a perfect world, but that's just the way it is. On the other hand, one can find examples of spending in human spaceflight leading directly to increases in science spending. Our LAMP instrument (in my day job, I work at the Southwest Research Institute) is a great example - it will fly on the Lunar Reconnaissance Orbiter, an entire juicy science mission funded/motivated with human exploration dollars. This is beautiful synergy - great science coupled directly with human exploration. Personally, I'd like to see more of that.
Cheers,
- Brian
Wednesday, May 31, 2006
Get lucky
William Perkin did 150 years ago, when he discovered the first aniline dye. (Luck had little to do, however, with the commercial success that he had from it.) In an article in Chemistry World I explore this and other serendipitous discoveries in chemistry. Perkin’s wonderful dye is shown in all its glory on the magazine’s cover, and here are the article and the leader that I wrote for Nature to celebrate the anniversary:
*************
Perkin, the mauve maker
150 years ago this week, a teenager experimenting in his makeshift home laboratory made a discovery that can be said without exaggeration to have launched the modern chemicals industry. William Perkin was an 18-year-old student of August Wilhelm Hofmann at the Royal College of Chemistry in London, where he worked on the chemical synthesis of natural products. In one of the classic cases of serendipity for which chemistry is renowned, the young Perkin chanced upon his famous ‘aniline mauve’ dye while attempting to synthesize something else entirely: quinine, the only known cure for malaria.
As a student of Justus von Liebig, Hofmann made a name for himself by showing that the basic compound called aniline that could be obtained from coal tar was the same as that which could be distilled from raw indigo. Coal tar was the residue of gas production, and the interest in finding uses for this substance led to the discovery of many other aromatic compounds. At his parents’ home in Shadwell, east London, Perkin tried to make quinine from an aniline derivative by oxidation, based only on the similarity of their chemical formulae (the molecular structures are quite different). The reaction produced only a reddish sludge; but when the inquisitive Perkin tried it with aniline instead, he got a black precipitate which dissolved in methylated spirits to give a purple solution. Textiles and dyeing being big business at that time, Perkin was astute enough to test the coloured compound on silk, which it dyed richly.
Boldly, Perkin resigned from the college that autumn and persuaded his father and brother to set up a small factory with him in Harrow to manufacture the dye, called mauve after the French for ‘mallow’. The Perkins and others (including Hofmann) soon discovered a whole rainbow of aniline dyes, and by the mid-1860s aniline dye companies already included the nascent giants of today’s chemicals industry.
(From Nature 440, p.429; 23 March 2006)
***************
A colourful past
The 150th anniversary of William Perkin’s synthesis of aniline mauve dye (see page 429) is more than just an excuse to retell a favourite story from chemistry’s history. It’s true enough that there is still plenty in that story to delight in – Perkin’s extraordinary youth and good fortune, the audacity of his gamble in setting up business to mass-produce the dye, and the chromatic riches that so quickly flowed from the unpromising black residue of coal gas production. As a study in entrepreneurship it could hardly be bettered, for all that Perkin himself was a rather shy and retiring man.
But perhaps the most telling aspect of the story is the relationship that it engendered between pure and applied science. The demand for new, brighter and more colourfast synthetic dyes, along with means of mordanting them to fabrics, stimulated manufacturing companies to set up their own research divisions, and cemented the growing interactions between industry and academia.
Traditionally, dye-making was a practical craft, a combination of trial-and-error experimentation and the rote repetition of time-honoured recipes. This is not to say that the more ‘scholarly’ sciences failed sometimes to benefit from such empiricism – an interest in colour production led Robert Boyle to propose colour-change acidity indicators, for instance. But the idea that chemicals production required real chemical expertise did not surface until the eighteenth century, when the complexities of mordanting and multi-colour fabric printing began to seem beyond the ken of recipe-followers.
That was when the Scottish chemist William Cullen announced that if the mason wants cement, the dyer a dye and the bleacher a bleach, “it is the chemical philosopher who must supply these.” Making inorganic pigments preoccupied some of the greatest chemists of the early nineteenth century, among them Nicolas-Louis Vauquelin, Louis-Jacques Thénard and Humphry Davy. Perkin’s mauve was, however, an organic compound, and thus, in the mid-nineteenth century, rather more mysterious than metal salts. While the drive to understand the molecular structure of carbon compounds during this time is typically presented now as a challenge for pure chemistry, it owed as much to the profits that might ensue if the molecular secrets of organic colour were unlocked.
August Hofmann, Perkin’s one-time mentor, articulated the ambition in 1863: “Chemistry may ultimately teach us systematically to build up colouring molecules, the particular tint of which we may predict with the same certainty with which we at present anticipate the boiling point.” Both the need to understand molecular structure and the demand for synthetic methods were sharpened by chemists’ attempts to synthesize alizarin (the natural colourant of madder) and indigo. When Carl Graebe and Carl Liebermann found a route to the former in 1868, they quickly sold the rights to the Badische dye company, soon to become BASF. One of those who found a better route in 1869 was Ferdinand Riese, who was already working for Hoechst. (Another was Perkin.) These and other dye companies, including Bayer, Ciba and Geigy, had already seen the value of having highly skilled chemists on their payroll – something that was even more evident when they began to branch into pharmaceuticals in the early twentieth century. Then, at least, there was no doubt that good business needs good scientists.
(From Nature 440, p.384; 23 March 2006)

Platinum sales
We all know that platinum is a precious metal, but paying close to $3 million for a few grams of it seems excessive. Yet that is what a private art collector has just done. The inflated value of the metal in this case stems from how it is arranged: as tiny black particles scattered over a sheet of gummed paper so as to portray an image of the moon rising over a pond on Long Island in 1904. This is, in other words, a photograph, defined in platinum rather than silver. It was taken by the American photographer Edward Steichen, and in February it sold at Sotheby’s of New York for $2,928,000 – a record-breaking figure for a photo.
At the same sale, a photo of Georgia O’Keeffe’s hands recorded in a palladium print by Alfred Stieglitz in 1918 went for nearly $1.5 million (the story is told by Mike Ware here). Evidently these platinum-group images have become collectors' items.
The platinotype process was developed (excuse the pun) in the nineteenth century to address some of the shortcomings of silver prints. In particular, while silver salts have the high photosensitivity needed to record an image ‘instantly’, the metal fades to brown over time because of its conversion to sulphide by reaction with atmospheric sulphur gases. That frustrated John Herschel, one of the early pioneers of photography, who confessed in 1839 that ‘I was on the point of abandoning the use of silver in the enquiry altogether and having recourse to Gold or Platina’.
Herschel did go on to create a kind of gold photography, called chrysotype. But it wasn’t until the 1870s that a reliable method for making platinum prints was devised. The technique was created by the Englishman William Willis, and it became the preferred method for high-quality photography for the next 30 years. A solution of iron oxalate and chloroplatinate was spread onto a sheet coated with gum arabic or starch and exposed to light through the photographic negative. As the platinum was photochemically reduced to the finely divided metal, the image appeared in a velvety black that, because of platinum’s inertness, did not fade or discolour. ‘The tones of the pictures thus produced are most excellent, and the latter possess a charm and brilliancy we have never seen in a silver print’, said the British Journal of Photography approvingly.
To enrich his moonlit scene, Steichen added a chromium salt to the medium, which is trapped in the gum as a greenish pigment, giving a tinted, painterly image reminiscent of the night scenes painted by James McNeill Whistler. Steichen and Stieglitz helped to secure the recognition of photography as a serious art form in the USA.
Stieglitz’s use of palladium rather than platinum in 1918 reflects the demise of the platinotype. The metal was used to catalyse the manufacture of high explosives in World War I, and so it could not be spared for so frivolous a purpose: making shells was more important than making art.
(This article will appear in the July issue of Nature Materials.)