Friday, November 24, 2006
Is there such a thing as a 'safe technology'?
[This is the pre-edited text of my latest muse for Nature, which relates to a paper published in the 16 November issue on health and safety issues in nanotechnology.]
Discussions about the risk of emerging technologies must acknowledge that their major impacts have rarely been spotted in advance.
In today's issue of Nature, an international team of scientists presents a five-point scheme for "the safe handling of nanotechnology"[1]. "If the global research community can rise to the challenges we have set", they say, "then we can surely look forward to the advent of safe nanotechnologies".
The five targets that the team sets for addressing potential health risks of nanotechnologies are excellent ones, involving the assessment of toxicities, prediction of impacts on the environment, and establishment of a general strategy for risk-focused research. In particular, the goals are aimed at determining the risks of engineered nanoparticles – how they might enter and move in the environment, to what extent humans might be exposed, and what the consequences of that exposure would be. We need to know all these things with some urgency.
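In outline this is the standard risk-assessment decomposition: risk as a function of hazard (toxicity) and exposure, summed over the routes by which a material reaches people. Here is a minimal sketch of that bookkeeping, with entirely made-up illustrative numbers – nothing below comes from the Nature paper:

```python
# Minimal sketch of hazard-times-exposure risk bookkeeping.
# All values are illustrative placeholders, not data from Maynard et al.
exposure_routes = {        # assumed dose reaching a person, arbitrary units/day
    "inhalation": 0.5,
    "dermal": 0.1,
    "ingestion": 0.05,
}
toxicity = 2.0             # assumed harm per unit dose (hypothetical)

# Aggregate risk: toxicity-weighted sum of exposures over all routes.
risk = sum(toxicity * dose for dose in exposure_routes.values())
print(f"Aggregate risk score: {risk:.2f}")
```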
But what is a "safe technology"? If 'safe' means only that no one is harmed by exposure to a technology's products, then manufacturing nuclear warheads would be "safe" so long as no human was exposed to dangerous levels of radiation in the process that leads from centrifuge to silo.
To be fair, no one denies that a technology's 'safety' depends on how it is used. The proposals for mapping nanotech's risks are clearly aimed at a very specific aspect of the overall equation, concerned only with the fundamental issues of whether (and how much) exposure to nanotechnological products is bad for our health. But this highlights the curious circumstance that new technologies now seem required to undergo a risk assessment at their inception, ideally in parallel with public consultation and engagement to decide what should and shouldn't be permitted.
There is no harm in that. And there's plenty of scope for being creative about it. Some of the broader ethical issues associated with nanotech, for example, are being explored in the US through a series of public seminars organized by the public-education company ICAN Productions. Funded by the US National Science Foundation, ICAN is creating three one-hour seminars in which participants, including scientists, business leaders and members of the public, explore scenarios that illuminate plausible impacts of nanotech on daily life. The results will be presented on US television by Oregon Public Broadcasting in spring of 2007.
Yet history must leave us with little confidence that either research programs or public debates will anticipate all, or even the major, social impacts of a new technology. We smile now at the belief that road safety could be addressed by having every automobile preceded by a man waving a red flag. In those early days, the pollution caused by cars was barely on the agenda, while the notion that this might affect global climate would have seemed positively bizarre.
Of course, it is something of a cliché now to say that neither the internal combustion engine nor smoking would ever have been permitted had we known then what we know now about their dangers. But the point is that we never do – it is hard to identify any important technology whose biggest risks were clear in advance.
And even when some risks are foreseen, scientists generally lose the ability to do anything about them once the technology reacts with society. Nuclear proliferation was forecast and feared by many of the Manhattan Project physicists, but politicians and generals treated their proposals for avoiding it with contempt (give away secrets to the Russians, indeed!). It took no deep understanding of evolution to foresee the emergence of antibiotic-resistant bacteria, but that didn't prevent profligate over-prescription of the drugs. The dangers of global warming have been known since at least the 1980s, and… well, say no more.
In the case of nanotechnology, there have been discussions of, for example, its likelihood of increasing the gap between rich and poor nations, its impacts on surveillance and privacy, and the social effects of nanotech-enhanced longevity. These are all noble attempts to look beyond the pure science, but it's not at all clear that they will turn out to be the most relevant issues.
Part of the impetus for aiming to address the 'risks' of nanotech so early in the game comes from a fear that potentially valuable applications could be derailed by a public backlash like the one that led to the rejection of genetically modified organisms in Europe – a backlash that stemmed partly (though by no means wholly) from a general lack of information or understanding about the technology, as well as from an arrogant assumption of consumer acquiescence.
The GMO experience has sensitized scientists to the need for early public engagement, and again that is surely a good thing. It's also encouraging to find scientists and even industries hurrying along governments to do more to support research into safety issues, and to draft regulations.
What they must avoid, however, is giving the impression that emerging technologies are like toys that can be 'made safe' before being handed to a separate entity called society to play with as it will. Technologies are one of the key drivers of social change, for better or worse. They simply do not exist in isolation from the society that generates them. Not only can we not foresee all their consequences, but some of those consequences aren't present even in principle until culture, sociology, economics and politics (not to mention faith) enter the arena.
Some technologies are no doubt intrinsically 'safer' or 'riskier' than others. But the more powerful they are, the less able we are to distinguish which is which, or to predict how that will play out in practice. Let's by all means look for obvious dangers at the outset – but scientists must also look for ways to become more engaged in the shaping of a technology as it unfolds, while dismantling the now-pervasive notion that all innovations must come with a 'risk-free' label.
Reference
1. Maynard, A. et al. Nature 444, 267-269 (2006).
Monday, November 20, 2006
Hooke: what came next?
I went to a nice talk by Lisa Jardine at the (peripatetic) Royal Institution last week, on the newly discovered notes of Robert Hooke. Lisa and her students have been studying this portfolio of notes taken by Hooke in his capacity as secretary of the Royal Society since they were rescued from auction and returned to the Royal Society earlier this year (see my earlier blog entry in May). She says that they have completely transformed her view of Hooke since writing his biography (The Curious Life of Robert Hooke, HarperCollins) in 2003. One of the hazards of being a historian, she pointed out, is that you can never be sure what may come to light and revise all your opinions, which have previously been presented to the world with such blithe authority. Well, that happens in science too, of course.
Lisa is now convinced that Hooke himself, not Newton, was his worst enemy: he was a terrible record keeper, and never finished anything. For all his protestations of priority over Huygens in regard to the invention of the spring-balance pocket watch, it seems that he may have sunk his claim himself. The new notes include a page taken from the private notes of Henry Oldenburg, written in 1670, in which Oldenburg relates how Hooke presented such a watch (Huygens’ version was patented in 1674), and then leaves a space for a description of the mechanism, apparently for Hooke himself to fill in the details. Hooke seems to have done this sketchily in pencil, but his words got worn away and he never amended them more permanently. So Oldenburg got no further in trying to transcribe them than a few lines before apparently giving up and crossing the whole lot out. But worst of all, Hooke seems to have filched the page from Oldenburg’s papers after the latter’s death, in the course of preparing his priority claim in obsessive detail – and then promptly left it buried in his own notes, where it remained until now. So when Oldenburg’s papers were later checked to assess Hooke’s claim, there was no sign of this page!
I’m also interested to hear that the Hooke pages are shortly to be ‘conserved’ – which means that the book will be taken apart and each page placed inside plastic (Lisa says the pages are already literally falling apart beneath their fingers). So I’ll be one of the few ever to have touched the originals, and to have seen the book in its pristine form. Phew.
Wednesday, November 15, 2006
Economists as storytellers
Economist blogger Dave Iverson has written to me about my “tease” (his words, nice choice) in the Financial Times about neoclassical economics. Dave has previously commented in a way that I found insightful and fair on the exchanges and debates in the blogosphere, particularly those on Mark Thoma’s and Dave Altig’s sites. His latest post is another useful contribution, and here it is:
“Philip Ball's Financial Times critique of economics, titled 'Baroque Fantasies of a Peculiar Science', caused quite a stir recently in the economics blogs (particularly here, here, here and here). But last week the bickering subsided, with Dave Altig (macroblog) and Philip Ball seeming to have reached an accord. At one point Altig said, "If you want, call economics an attempt to construct coherent stories about social phenomenon..." Sounds about right to me. We economists are indeed storytellers. Following this discussion, it seems clear that economists need to be much more open and honest about our assumptions and the linkages, such as they are (and often are not), to the real world of policy and action. No argument from me on that score. I've been arguing similarly for years.
“For more critique, see Steve Cohn's August 2002 paper 'Telling Other Stories: Heterodox Critiques of Neoclassical Micro Principles Texts', wherein Cohn attacks the "'rhetoric' of neoclassical theory, …critiquing many of the stories told, the metaphors used, the analogies drawn, and the framing language deployed."
“In addition, there have been many book-form critiques arguing that economists, particularly neoclassical economists, have overdriven their headlights in much the same way that Ball argues. Here are six of my favorites (arranged by date of publication):
J. de V. Graaff. Theoretical Welfare Economics. 1957
Guy Routh. The Origin of Economic Ideas. 1975
Mark Blaug. The Methodology of Economics: Or How Economists Explain. 1980
Robert L. Heilbroner. Behind the Veil of Economics: Essays in the Worldly Philosophy. 1988
Mark Sagoff. The Economy of the Earth: Philosophy, Law and the Environment. 1988
Andrew Bard Schmookler. The Illusion of Choice: How the Market Economy Controls Our Destiny. 1993”
The Cohn paper is excellent – it says pretty much all of what I said in the FT article and much more, and in more depth, and frankly more persuasively. I particularly liked this, in relation to Paul Ormerod’s FT critique of how the textbooks tell the same old neoclassical story, despite what some of the practitioners are now doing to the contrary:
“We shouldn’t allow neoclassical economists to “run away” from their textbooks. The tracts educate well over a million students a year and lay the groundwork for much of educated opinion about economic issues. They should be defended or abandoned. In critiquing principles texts we should quote from the books themselves and if charged with attacking straw men, ask who is to blame: the textbook authors who built these scarecrows, or the photographers who took their picture?”
In any event, I offer the Cohn paper to those who say I’ve misrepresented the field (or have misused the word ‘neoclassical’). And I do so partly because Cohn seems to me to be very fair, acknowledging (in a way that I admit I could have done more explicitly) some of the ways in which modern economics has moved beyond the simplistic picture. This seems to me to be about dialogue rather than attack – which is absolutely what I’d like to see.
Tuesday, November 14, 2006
Was life inevitable?
Here’s the unexpurgated version of my latest story for news@nature. There’s a lot of really interesting back story here, which I hope to return to at some point. This is some of the most interesting “origin of life” work I’ve seen for some time.
Life may be the ultimate in planetary stress relief, a new theory claims
The appearance of life on Earth seems to face so many obstacles that scientists often feel forced to regard it almost as miraculous. Now two scientists working at the Santa Fe Institute in New Mexico suggest that, on the contrary, it may have been inevitable.
They argue that life was the necessary consequence of the build-up of available energy on the early Earth, thanks to purely geological processes. They regard it as directly analogous to the way lightning relieves the build-up of electrical charge in thunderclouds.
In other words, say Harold Morowitz and Eric Smith in a preprint posted on the Santa Fe Institute archive [1], the geological environment "forced life into existence".
This view, the researchers say, implies not only that life had to emerge on the Earth, but that the same would happen on any similar planet. And they hope that ultimately it will be possible to predict the first steps in the origin of life based on the laws of physics and chemistry alone.
Their proposal is "instructive and inspiring", says Michael Russell, a specialist in the origin of life at the California Institute of Technology in Pasadena.
Morowitz and Smith admit that they don't yet have the theoretical tools to clinch their arguments, or to show what form this "inevitable life" must take. But they argue that it is likely to have used the same chemical processes that now drive our own metabolism – but in reverse.
They say that the young Earth would have been accumulating energy from geological processes much as a dam accumulates gravitational potential energy by piling up water. Sooner or later, something had to give.
One source of such energy would have been energy-rich compounds called polyphosphates, generated in volcanic processes. These are 'battery molecules', analogous to the compound ATP, the ubiquitous source of metabolic energy in living cells.
Another source would have been hydrogen molecules, which are likely to have been abundant in the early atmosphere even though they are almost absent today. Hydrogen would have been generated, for example, by reactions between seawater and dissolved iron.
Energy-releasing reactions between hydrogen and carbon dioxide (a volcanic gas) in the atmosphere can produce complex organic molecules, the precursors of living systems.
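To illustrate the kind of chemistry being invoked – these are representative textbook reactions consistent with the description above, not equations drawn from the Morowitz–Smith preprint – hydrogen can be generated by oxidation of ferrous iron minerals, and hydrogen can in turn reduce carbon dioxide to a simple organic product:

```latex
% Illustrative textbook reactions, not taken from the preprint.
% Hydrogen generation by oxidation of ferrous iron (Schikorr reaction):
3\,\mathrm{Fe(OH)_2} \;\rightarrow\; \mathrm{Fe_3O_4} + \mathrm{H_2} + 2\,\mathrm{H_2O}
% Net reduction of CO2 by hydrogen to an organic product (acetic acid):
2\,\mathrm{CO_2} + 4\,\mathrm{H_2} \;\rightarrow\; \mathrm{CH_3COOH} + 2\,\mathrm{H_2O}
```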
In our own metabolism, a series of biochemical reactions called the citric-acid cycle breaks down organic compounds from food into carbon dioxide. Morowitz and Smith say that the energy reservoirs of the young Earth could have driven a citric-acid cycle in reverse, spawning the building blocks of life while relaxing the 'energy pressure' of the environment. Eventually these processes would have become encapsulated in cells, making the 'energy-conducting' flows more efficient.
Life, Russell agrees, is "a chemical system that drains and dissipates chemical energy." He has used similar ideas to argue that "life would emerge using the same pathways on any sunny, wet rocky planet" [2,3]. But he believes that the most likely place for it to occur was at miniature subsea volcanoes called hydrothermal vents, where the ingredients and conditions are just right for energy-harnessing chemical machinery to develop [4].
The biochemical processes of living organisms are highly organized. Scientists have long puzzled over how these 'ordered' systems can come spontaneously into being, when the Second Law of Thermodynamics suggests that the universe as a whole tends to generate increasing disorder.
The answer, broadly speaking, is that local clumps of order come at the expense of increasing the disorder in their environment. But Morowitz and Smith suggest a rationale for why such concentrations of order should happen in the first place. They draw on the idea, proposed in the 1980s by Rod Swenson of the University of Connecticut, that ordered states are much better 'lightning conductors' for discharging excess energy.
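The bookkeeping behind 'local order at the expense of the environment' can be stated in one line of standard thermodynamics – a textbook relation, nothing specific to the preprint:

```latex
% Second law for system plus surroundings:
\Delta S_{\mathrm{total}} = \Delta S_{\mathrm{sys}} + \Delta S_{\mathrm{env}} \;\geq\; 0,
% so a local ordering (\Delta S_{\mathrm{sys}} < 0) is permitted whenever the
% entropy exported to the environment compensates for it:
\Delta S_{\mathrm{env}} \;\geq\; \lvert \Delta S_{\mathrm{sys}} \rvert .
```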
Thus, they say, despite several major extinctions throughout geological time, when most of life on Earth was obliterated, life itself was never in danger of disappearing – because an Earth with life is always more stable than one without. They call this 'condensation' of life from the energy-rich environment a "collapse to life", which in their view is as inevitable as the appearance of snowflakes in cold, moist air.
References
1. Morowitz, H. & Smith, E. Santa Fe Institute Working Paper (2006).
2. Russell, M. J. & Hall, A. J. in Hiscox, J. A. (ed.) The Search for Life on Mars, 26-36 (British Interplanetary Society, 1999).
3. Russell, M. J. et al. in Ikan, R. (ed.) Natural and Laboratory-Simulated Thermal Geochemical Processes, 325-388 (Kluwer, Dordrecht, 2003).
4. Martin, W. & Russell, M. J. Phil. Trans. Roy. Soc. B online publication doi:10.1098/rstb.2006.1881 (2006).
Friday, November 10, 2006
No offence?
Well well, I hadn’t anticipated that I was lighting a fuse with my FT article on economics. There has been more follow-up in the FT itself – Paul Ormerod wrote a very nice Comment which was partly something of a response to some of the letters claiming that economics ain’t like that any more. Paul’s point is that yes, perhaps academic economics has moved on in many ways (I should have been more explicit about that myself), but the stuff that students are taught is still very much rooted in the old tradition. And these are people who graduate and then presumably go into business and politics believing that that is what economics is about – which is precisely my concern. This squares with what Robert Hunter Wade, a professor at LSE, says in his letter to the FT about how the simplistic picture of market efficiency is what tends to filter down to policy makers. All this leaves me thinking that it’s precisely for this reason that the simple picture of rational maximizers, equilibrium and market efficiency is perhaps a rather dangerous place to start from – sure, academic economists often (even generally) then move beyond it, but not everyone who draws on economic theory has learnt it beyond graduate level.
Much of the discussion prompted by my article has taken place in the blogs, however. Some of those I’ve spotted are here and here and here and here. A lot of the debate seems to focus on how stupid and misinformed my article was (although I can’t help thinking that there wouldn’t be quite so much discussion if it was that easy). I decided to take up the challenge on Dave Altig’s blog, which has been an instructive experience. At first I was taken aback by the aggression of the discourse, which was something I’ve just not experienced in the natural sciences. I don’t know if this is something specific to the economics world, or to the blogosphere generally, but it was not a pleasant discovery. However, I’m very grateful that Dave Altig has made some very gracious and polite comments that have cooled the tone and facilitated a far more constructive exchange. I was at fault here too, taking initially a more gung-ho tone than I needed to. (I think I was probably riled by some comments I received separately from an assistant professor at the University of Pennsylvania, which had a character I’d not experienced since the school playground.) It seems also that my FT piece was misread by some as being more insulting to economists than I’d intended – if that’s the impression I gave, then I regret it. I do think one sometimes needs to be provocative in order to spark a discussion, but I’d hoped to do that without seeming to jeer or ridicule.
I can’t possibly summarize all the blogging discussion; it’s there if you’re interested. But the discussion on Dave Altig’s site has been very useful for me, helping me to sharpen what it is I want to say while pointing to some issues that I need to go away and consider. His post of 9 November gives particularly valuable food for thought; thank you Dave.
Tuesday, November 07, 2006
When you can't do it all with mirrors
[This is the unedited text of my recent article for muse@nature.com.]
A new proposal and costing for a technofix to global warming shows that there are probably better ways to spend the money
The leading economist Nicholas Stern has just handed us, in advance, the bill for the impacts of climate change: close to $4 trillion by the end of this century [1].
And with perfect timing, astronomer Roger Angel of the University of Arizona has delivered the equivalent of a builder's estimate for patching up the problem using a cosmic sunshade [2]. It will set us back by… well, let's make it a nice round figure of $4 trillion by the end of the century.
Both figures can be criticized – after all, when costs add up to a significant fraction of global GDP, no one can expect them to be very accurate. But this happy conflux of estimates puts some perspective on the hope that global warming can be addressed with high-tech mega-engineering projects.
From a pragmatic point of view, the sunshade solution looks like a bad bargain. If a builder told you that the cost of fixing a problem with your roof was likely to be about the same as the cost of not fixing it, except that the fix was untested and might not work at all, and in any event you know the work is likely to run over budget and probably over schedule – well, what would you do?
One could argue, however, that in this case the 'problem' involves the potential suffering of millions of people, who could be killed by disease or flood or drought, displaced from their homes, or caught up in conflict as a result of climate change – in which case you might conclude that investing in a risky technofix can be justified on humanitarian grounds.
But Stern's report, commissioned by the UK government and hailed by many other economists as the most definitive study of its sort to date, doesn't just tot up the costs of inaction over climate change. It makes some estimate of the likely costs of tackling it using existing approaches and technologies – and the answer looks cheaper and a whole lot more attainable than Angel's sunshade.
That doesn't mean Angel's proposal is without value. On the contrary, it performs the service of showing just what would be involved in pursuing one of the favourite ideas of those who believe technofixes could save us from rising world temperatures.
A space shade that reduces the amount of sunlight reaching the Earth has been debated for decades. Many of these schemes invoke a screen that would be unfolded or assembled in space, like a gigantic sail. But as James Early of the Lawrence Livermore National Laboratory in California pointed out in 1989 [3], a sail is precisely what it would be: radiation pressure would push against the sunshade, and it would therefore need to be kept actively in position.
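To get a feel for why station-keeping matters, here is a back-of-envelope estimate of the radiation-pressure force on a shade of this scale; the numbers are my own illustrative assumptions, not figures from Angel's paper:

```python
# Back-of-envelope radiation-pressure force on a sunshade.
# Illustrative assumptions, not numbers from Angel's paper.
S = 1361.0      # solar irradiance near 1 AU, W/m^2
c = 3.0e8       # speed of light, m/s
area = 5.0e12   # assumed shade area: 5 million km^2, in m^2

# Fully absorbing sheet: F = S * A / c (a perfect mirror would double this).
force = S * area / c
print(f"Radiation-pressure force: {force:.1e} N")  # roughly 2e7 N
```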
Angel has found inventive ways of coping with all the challenges while keeping costs down. To minimize radiation pressure, the screen would deflect sunlight through only a small angle, just enough to miss the Earth. To keep it in line between the Earth and Sun, it would be placed at the so-called Lagrange point L1, a point in space 1.5 million km away that orbits the Sun with the same 1-year period as our planet.
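The quoted 1.5 million km is easy to check with the standard Hill-radius approximation for the Sun–Earth L1 point – a sketch using textbook values:

```python
# Approximate Sun-Earth L1 distance: r ~ a * (m / (3 * M))**(1/3),
# using textbook values for the masses and the orbital radius.
a = 1.496e8          # Earth-Sun distance, km
m_earth = 5.972e24   # kg
m_sun = 1.989e30     # kg

r_l1 = a * (m_earth / (3.0 * m_sun)) ** (1.0 / 3.0)
print(f"L1 distance from Earth: {r_l1:.2e} km")  # ~1.5e6 km
```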
The size of the screen would be mind-boggling: about 4-6 million square km, around half the area of China. But to avoid complicated space-assembly problems, and to simplify the launching and increase the screen's versatility, Angel proposes that it should consist of a vast swarm of 1-m disks, made from lightweight, microscopically perforated and laminated films of ceramics. Each of these 'flyers' is manoeuvrable thanks to tiny solar sails placed on tabs at the rim, powered by solar cells.
As usual, science fiction got there first. In a short story by Brenda Cooper and Larry Niven published in 2001, an alien species wipes out another by deploying a fleet of tiny mirrors around their planet, plunging it into an ice age [4] – a reminder, perhaps, that we'd better not overdo the shadowing.
Angel's flyers would be launched in stacks, like piles of Brobdingnagian dinner plates, packaged into canisters and fired into space from electromagnetic guns more than a kilometre long. Twenty such cannons would fire 1-ton payloads every five minutes for ten years. Once in space, the flyers make their way to the Lagrange point using fuel-efficient ion thrusters, where they spread out into a cloud as wide as the Earth and 100,000 km long.
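Putting the figures from the last two paragraphs together gives a rough consistency check – trillions of flyers, and some 20 million tons of launched payload. Only the numbers quoted above are used; the text does not give the flyer count per payload, so this tallies mass rather than flyers per launch:

```python
import math

# Rough consistency check using only figures quoted in the text.
shade_area_m2 = 5.0e12             # ~5 million km^2 (mid-range of 4-6 million)
disk_area_m2 = math.pi * 0.5 ** 2  # one 1-m-diameter flyer

n_flyers = shade_area_m2 / disk_area_m2
print(f"Flyers needed: {n_flyers:.1e}")  # ~6e12, i.e. trillions

# 20 guns, each firing a 1-ton payload every five minutes, for ten years:
launches_per_gun = 10 * 365.25 * 24 * 60 / 5
total_mass_tons = 20 * launches_per_gun * 1.0
print(f"Total launched mass: {total_mass_tons:.1e} tons")  # ~2e7 tons
```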
And the bill, please? Estimating the costs of materials and launch facility, launch energy, and manufacturing, Angel says it could be done for less than $5 trillion.
All this sounds a long way from the sober accounting of the Stern report. But if you take the report seriously – and as a former chief economist of the World Bank, Stern apparently has the right credentials, although his conclusions have proved predictably controversial – it is similarly mind-boggling.
For example, Stern says that the impacts of climate change could end up costing the world up to 20% of its annual GDP. He compares the effect to that of the world wars or the Great Depression. The "radical change in the physical geography of the world" that climate change would produce, he says, "must lead to major changes in the human geography – where people live and how they live their lives".
Mitigating this potential crisis would require equally drastic measures. Stern does not consider technofixes like the space sunshade, but dwells instead on the far less sexy measure of reducing greenhouse-gas emissions. Gordon Brown, the UK's Chancellor of the Exchequer, who commissioned the report, has called for cuts of 30% by 2020 and 60% by 2050.
Stern's solutions involve energy-saving and improvements in energy efficiency, stopping deforestation, and switching to non-fossil-fuel energy sources. That will work only if the effort is international, he says (which is one reason why sceptics have scoffed), and it will incur a substantial cost: 1% of global GDP over the next 50 years, an amount that Stern calls "significant but manageable", and which squares with some previous estimates.
Whether the targets can be reached by putting solar cells on roofs, turning out lights, banning SUVs and building wind farms, or whether this will require more substantial measures such as new nuclear power stations, extensive carbon capture and sequestration, and fierce taxation of air travel, is a question that environmentalists, industrialists and politicians will continue to debate, no doubt as dogmatically as ever.
But as well as sketching an essay in ingenuity, Angel has done us the great favour of showing that there is probably never going to be the option of conducting business as usual under the shelter of a gigantic technofix.
References
1. http://www.hm-treasury.gov.uk/independent_reviews/stern_review_economics_climate_change/stern_review_report.cfm
2. Angel, R. Proc. Natl Acad. Sci. USA in press (2006) [doi:10.1073/pnas.0608163103]
3. Early, J. T. J. Brit. Interplanet. Soc. 42, 567-569 (1989)
4. Cooper, B. & Niven, L. "Ice and Mirrors", in Asimov's Science Fiction, February 2001.
Friday, November 03, 2006
More on the dismal science
I have drawn some inevitable flak for my criticisms of economic theory in the Financial Times. That’s no more than I expected. Here’s the article; the comments and my responses follow.
Baroque fantasies of a peculiar science
Published in the Financial Times, October 29 2006
It is easy to mock economic theory. Any fool can see that the world of neoclassical economics, which dominates the academic field today, is a gross caricature in which every trader or company acts in the same self-interested way – rational, cool, omniscient. The theory has not foreseen a single stock market crash and has evidently failed to make the world any fairer or more pleasant.
The usual defence is that you have to start somewhere. But mainstream economists no longer consider their core theory to be a “start”. The tenets are so firmly embedded that economists who think it is time to move beyond them are cold-shouldered. It is a rigid dogma. To challenge these ideas is to invite blank stares of incomprehension – you might as well be telling a physicist that gravity does not exist.
That is disturbing because these things matter. Neoclassical idiocies persuaded many economists that market forces would create a robust post-Soviet economy in Russia (corrupt gangster economies do not exist in neoclassical theory). Neoclassical ideas favouring unfettered market forces may determine whether Britain adopts the euro, how we run our schools, hospitals and welfare system. If mainstream economic theory is fundamentally flawed, we are no better than doctors diagnosing with astrology.
Neoclassical economics asserts two things. First, in a free market, competition establishes a price equilibrium that is perfectly efficient: demand equals supply and no resources are squandered. Second, in equilibrium no one can be made better off without making someone else worse off.
The conclusions are a snug fit with rightwing convictions. So it is tempting to infer that the dominance of neoclassical theory has political origins. But while it has justified many rightwing policies, the truth goes deeper. Economics arose in the 18th century in a climate of Newtonian mechanistic science, with its belief in forces in balance. And the foundations of neoclassical theory were laid when scientists were exploring the notion of thermodynamic equilibrium. Economics borrowed wrong ideas from physics, and is now reluctant to give them up.
This error does not make neoclassical economic theory simple. Far from it. It is one of the most mathematically complicated subjects among the “sciences”, as difficult as quantum physics. That is part of the problem: it is such an elaborate contrivance that there is too much at stake to abandon it.
It is almost impossible to talk about economics today without endorsing its myths. Take the business cycle: there is no business cycle in any meaningful sense. In every other scientific discipline, a cycle is something that repeats periodically. Yet there is no absolute evidence for periodicity in economic fluctuations. Prices sometimes rise and sometimes fall. That is not a cycle; it is noise. Yet talk of cycles has led economists to hallucinate all kinds of fictitious oscillations in economic markets. Meanwhile, the Nobel-winning neoclassical theory of the so-called business cycle “explains” it by blaming events outside the market. This salvages the precious idea of equilibrium, and thus of market efficiency. Analysts talk of market “corrections”, as though there is some ideal state that it is trying to attain. But in reality the market is intrinsically prone to leap and lurch.
One can go through economic theory systematically demolishing all the cherished principles that students learn: the Phillips curve relating unemployment and inflation, the efficient market hypothesis, even the classic X-shaped intersections of supply and demand curves. Paul Ormerod, author of The Death of Economics, argues that one of the most limiting assumptions of neoclassical theory is that agent behaviour is fixed: people in markets pursue a single goal regardless of what others do. The only way one person can influence another’s choices is via the indirect effect of trading on prices. Yet it is abundantly clear that herding – irrational, copycat buying and selling – provokes market fluctuations.
There are ways of dealing with the variety and irrationality of real agents in economic theory. But not in mainstream economics journals, because the models defy neoclassical assumptions.
There is no other “science” in such a peculiar state. A demonstrably false conceptual core is sustained by inertia alone. This core, “the Citadel”, remains impregnable while its adherents fashion an increasingly baroque fantasy. As Alan Kirman, a progressive economist, said: “No amount of attention to the walls will prevent the Citadel from being empty.”
So there you have it. Now the critics, published in the 1 November FT and online:
Letter 1: Did this sceptic ever take a course in the one science that calls itself dismal?
Sir, Philip Ball ("Baroque fantasies of a most peculiar science", October 30) quarrels with what he calls neoclassical economics. Perhaps his scarce argument may be better allocated against a competing end: might he notice that physics attempts to describe, explain and predict the action of matter in space, motion and time?
Economic theory establishes a baseline description of human behaviour, while always positing that when humans act, considerable complexity results. Perhaps Mr Ball never took a second course in the only science that, for its challenges, calls itself dismal.
Chris Robling,
Chicago, IL 60602, US
Do you understand this? I don’t. Yes, that’s what physics does. And your point is?
‘Economic theory establishes a baseline’: well yes, except that it doesn’t, because it manifestly doesn’t describe the way people act even to first order. But the real criticism is that neoclassical economics isn’t consistent even on its own terms – if you swallow its assumptions, the conclusions don’t follow. Steve Keen’s book Debunking Economics shows why.
“A second course”? Is this some kind of American euphemism? Sorry, too strange.
By the way, I suspect most people use the phrase ‘the dismal science’ without knowing what Carlyle was implying (or even that it was Carlyle who implied it). Look it up – it’s interesting. He considered economics dismal not because it was shoddy, but because it dealt with unpalatable truths about human nature. The article in which he used the phrase was, after all, about “the nigger question”.
Letter 2: A critic paints another unrecognisable portrait of economics
Sir, Philip Ball says it is easy to mock economic theory ("Baroque fantasies of a most peculiar science", October 30). It is even easier to mock a caricature of economics, which is what he does, resorting to the tired cliché that we economists think we are doing mechanical physics. Once true, perhaps, but certainly not recently.
Like so many critics of economics, he paints an unrecognisable portrait of the subject. Economists do indeed use models that assume perfect competition and identical agents with unchanging behaviour, but only when it is useful to do so. At other times, we make other assumptions, including those used in the kind of models Mr Ball wrote about in his interesting book Critical Mass.
Economics is distinctive in using the concept of equilibrium – a state in which no individual consumer or business has an incentive to change behaviour – as a powerful analytical tool. It is so useful that evolutionary biologists, for example, use it all the time too.
I do of course have criticisms of my own subject. In particular, the typical undergraduate syllabus lags far behind all the remarkable developments of the past decade or two, such as information economics and behavioural economics. But the baroque citadel is Mr Ball's own fantasy; we economists moved out of it long ago, as a proper look at the mainstream journals (or a list of the Nobel winners) will show.
Diane Coyle,
Enlightenment Economics,
London W13 8PE, UK
Paul Ormerod tells me I would actually get on well with Diane. I think he’s right; I’ll probably get on with anyone who plugs my book. But I think I mostly agree with Diane anyway, except that I do wonder whether ‘we economists’ refers to a more select bunch than she appreciates. It is precisely those economist I mentioned in Critical Mass who are typically marginalized by the mainstream. I used ‘neoclassical’ so much in my article that I was worried by the repetition, precisely to make it clear that that is what I was criticizing, not the interesting ideas that get put forward outside of it. I understand that agent-based modelers have become so fed up with being excluded because their models violate neoclassical dogma that they have been forced to start their own journal.
The ‘citadel’ is not my term, nor my fantasy – it is the expression used by Alan Kirman, one of the pioneers of economic agent-based approaches. Ask Paul Ormerod. Ask Paul Krugman, for that matter. If they all feel this way, surely there’s a reason?
Letter 3: More risk finance fiascos on the way
Sir, I want to congratulate Philip Ball for his insightful and long-overdue comment about the domination of the economic profession by frustrated mathematicians and physics lecturers (October 30). He is incorrect in one regard, however, when he comments that "there is no other 'science' in such a peculiar state".
In fact, the worlds of finance and risk management have embraced the same nonsensical application of quantitative methods that he describes so well operating in the world of economics. This "derivative" view of credit risk and other important issues of global capital finance has badly damaged the ability of investors to perceive risk and gives managers an unreasonable view of the risks that they do accept. Witness the latest fiasco involving the hedge fund formerly called Amaranth. And there will be more examples very shortly.
Christopher Whalen,
Managing Director,
Institutional Risk Analytics,
Hawthorne, CA 90250, US
Well, precisely. I talk briefly about derivatives and risk management in Critical Mass, simply to say that you’re not going to do very well forecasting risk if you insist on thinking that market noise is Gaussian.
Letter 4: Economists are busy dealing with the impact of 'real agents' in the economy
Sir, Contrary to what Philip Ball believes, many economists are already busy "dealing with the variety and irrationality of real agents" ("Baroque fantasies of a most peculiar science", October 30). These economists include several Nobel prize-winners: Herbert A. Simon, Daniel Kahneman, Vernon L. Smith and Thomas C. Schelling. In fact, the Nobel prize this year was awarded to Edmund Phelps for challenging the Phillips curve trade-offs by "taking into account problems of information in the economy".
Mocking economic theory is easy but doing so by perpetuating "rigid dogma" about economics and economists is pure hearsay. A survey of recent literature in mainstream economic journals or textbooks should enlighten this misconceived view.
Chee Kian Leong,
639798 Singapore
OK, so it’s basically the same point as Diane Coyle’s. But the question of the Nobels is curious. (Needless to say, Simon and Schelling loom large in Critical Mass.) I’ve talked with others about the strange fact that economics Nobels often (though by no means always) go to contributions that lie outside the mainstream, and thus outside of neoclassical dogma. (One could add Stiglitz and Sen to the list, for example.) This speaks of impeccable taste (or nearly so) on the part of the Nobel committee. But it is puzzling – nothing like it happens in the other ‘sciences’.
The bottom line is: do you believe in neoclassical general equilibrium theory, with its efficient market hypothesis, its exogenous shocks, its aggregate price curves and all the rest? If not, do you think it is right that this is what students learn and come to believe about the way the economy works? And that papers which question the theory’s fundamental principles should be excluded from much of the literature?
I have drawn some inevitable flak for my criticisms of economic theory in the Financial Times. That’s no more than I expected. Here’s the article; the comments and my responses follow.
Baroque fantasies of a peculiar science
Published in the Financial Times, October 29 2006
It is easy to mock economic theory. Any fool can see that the world of neoclassical economics, which dominates the academic field today, is a gross caricature in which every trader or company acts in the same self-interested way – rational, cool, omniscient. The theory has not foreseen a single stock market crash and has evidently failed to make the world any fairer or more pleasant.
The usual defence is that you have to start somewhere. But mainstream economists no longer consider their core theory to be a “start”. The tenets are so firmly embedded that economists who think it is time to move beyond them are cold-shouldered. It is a rigid dogma. To challenge these ideas is to invite blank stares of incomprehension – you might as well be telling a physicist that gravity does not exist.
That is disturbing because these things matter. Neoclassical idiocies persuaded many economists that market forces would create a robust post-Soviet economy in Russia (corrupt gangster economies do not exist in neoclassical theory). Neoclassical ideas favouring unfettered market forces may determine whether Britain adopts the euro, how we run our schools, hospitals and welfare system. If mainstream economic theory is fundamentally flawed, we are no better than doctors diagnosing with astrology.
Neoclassical economics asserts two things. First, in a free market, competition establishes a price equilibrium that is perfectly efficient: demand equals supply and no resources are squandered. Second, in equilibrium no one can be made better off without making someone else worse off.
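To make the first assertion concrete, here is a minimal sketch with invented linear supply and demand curves (the numbers are mine, purely for illustration):

```python
# A minimal sketch of the textbook equilibrium (all numbers invented for
# illustration). Demand falls with price, supply rises; the market-clearing
# price is where the two curves cross.
a, b = 100.0, 2.0    # hypothetical demand curve: Qd = a - b*p
c, d = 10.0, 1.0     # hypothetical supply curve: Qs = c + d*p

# Setting Qd = Qs gives a - b*p = c + d*p, hence:
p_star = (a - c) / (b + d)
q_star = a - b * p_star

print(f"equilibrium price {p_star:.2f}, quantity {q_star:.2f}")
# At this point demand equals supply and, in the theory, no resources are
# squandered. The argument here is about whether real markets ever sit there.
```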
The conclusions are a snug fit with rightwing convictions. So it is tempting to infer that the dominance of neoclassical theory has political origins. But while it has justified many rightwing policies, the truth goes deeper. Economics arose in the 18th century in a climate of Newtonian mechanistic science, with its belief in forces in balance. And the foundations of neoclassical theory were laid when scientists were exploring the notion of thermodynamic equilibrium. Economics borrowed wrong ideas from physics, and is now reluctant to give them up.
This error does not make neoclassical economic theory simple. Far from it. It is one of the most mathematically complicated subjects among the “sciences”, as difficult as quantum physics. That is part of the problem: it is such an elaborate contrivance that there is too much at stake to abandon it.
It is almost impossible to talk about economics today without endorsing its myths. Take the business cycle: there is no business cycle in any meaningful sense. In every other scientific discipline, a cycle is something that repeats periodically. Yet there is no absolute evidence for periodicity in economic fluctuations. Prices sometimes rise and sometimes fall. That is not a cycle; it is noise. Yet talk of cycles has led economists to hallucinate all kinds of fictitious oscillations in economic markets. Meanwhile, the Nobel-winning neoclassical theory of the so-called business cycle “explains” it by blaming events outside the market. This salvages the precious idea of equilibrium, and thus of market efficiency. Analysts talk of market “corrections”, as though there is some ideal state that it is trying to attain. But in reality the market is intrinsically prone to leap and lurch.
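The point about noise masquerading as cycles is easy to demonstrate. A minimal sketch (synthetic data only, not any actual market series): a random walk contains no periodicity by construction, yet a naive spectral analysis of any single stretch of it will happily report a 'dominant period'.

```python
import numpy as np

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=512))   # a random walk: no cycle built in

# Detrend, then take a naive periodogram
t = np.arange(512)
detrended = walk - np.polyval(np.polyfit(t, walk, 1), t)
power = np.abs(np.fft.rfft(detrended)) ** 2
peak = np.argmax(power[1:]) + 1          # skip the zero-frequency bin

print(f"apparent 'dominant period': {512 / peak:.0f} steps")
# Re-run with different seeds and the 'period' jumps around arbitrarily:
# the peak is an artefact of noise, not evidence of a genuine cycle.
```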
One can go through economic theory systematically demolishing all the cherished principles that students learn: the Phillips curve relating unemployment and inflation, the efficient market hypothesis, even the classic X-shaped intersections of supply and demand curves. Paul Ormerod, author of The Death of Economics, argues that one of the most limiting assumptions of neoclassical theory is that agent behaviour is fixed: people in markets pursue a single goal regardless of what others do. The only way one person can influence another’s choices is via the indirect effect of trading on prices. Yet it is abundantly clear that herding – irrational, copycat buying and selling – provokes market fluctuations.
There are ways of dealing with the variety and irrationality of real agents in economic theory. But not in mainstream economics journals, because the models defy neoclassical assumptions.
There is no other “science” in such a peculiar state. A demonstrably false conceptual core is sustained by inertia alone. This core, “the Citadel”, remains impregnable while its adherents fashion an increasingly baroque fantasy. As Alan Kirman, a progressive economist, said: “No amount of attention to the walls will prevent the Citadel from being empty.”
So there you have it. Now the critics, published in the 1 November FT and online:
Letter 1: Did this sceptic ever take a course in the one science that calls itself dismal?
Sir, Philip Ball ("Baroque fantasies of a most peculiar science", October 30) quarrels with what he calls neoclassical economics. Perhaps his scarce argument may be better allocated against a competing end: might he notice that physics attempts to describe, explain and predict the action of matter in space, motion and time?
Economic theory establishes a baseline description of human behaviour, while always positing that when humans act, considerable complexity results. Perhaps Mr Ball never took a second course in the only science that, for its challenges, calls itself dismal.
Chris Robling,
Chicago, IL 60602, US
Do you understand this? I don’t. Yes, that’s what physics does. And your point is?
‘Economic theory establishes a baseline’: well yes, except that it doesn’t, because it manifestly doesn’t describe the way people act even to first order. But the real criticism is that neoclassical economics isn’t consistent even on its own terms – if you swallow its assumptions, the conclusions don’t follow. Steve Keen’s book Debunking Economics shows why.
“A second course”? Is this some kind of American euphemism? Sorry, too strange.
By the way, I suspect most people use the phrase ‘the dismal science’ without knowing what Carlyle was implying (or even that it was Carlyle who implied it). Look it up – it’s interesting. He considered economics dismal not because it was shoddy, but because it dealt with unpalatable truths about human nature. The article in which he used the phrase was, after all, about “the nigger question”.
Letter 2: A critic paints another unrecognisable portrait of economics
Sir, Philip Ball says it is easy to mock economic theory ("Baroque fantasies of a most peculiar science", October 30). It is even easier to mock a caricature of economics, which is what he does, resorting to the tired cliché that we economists think we are doing mechanical physics. Once true, perhaps, but certainly not recently.
Like so many critics of economics, he paints an unrecognisable portrait of the subject. Economists do indeed use models that assume perfect competition and identical agents with unchanging behaviour, but only when it is useful to do so. At other times, we make other assumptions, including those used in the kind of models Mr Ball wrote about in his interesting book Critical Mass.
Economics is distinctive in using the concept of equilibrium - a state in which no individual consumer or business has an incentive to change behaviour - as a powerful analytical tool. It is so useful that evolutionary biologists, for example, use it all the time too.
I do of course have criticisms of my own subject. In particular, the typical undergraduate syllabus lags far behind all the remarkable developments of the past decade or two, such as information economics and behavioural economics. But the baroque citadel is Mr Ball's own fantasy; we economists moved out of it long ago, as a proper look at the mainstream journals (or a list of the Nobel winners) will show.
Diane Coyle,
Enlightenment Economics,
London W13 8PE, UK
Paul Ormerod tells me I would actually get on well with Diane. I think he’s right; I’ll probably get on with anyone who plugs my book. But I think I mostly agree with Diane anyway, except that I do wonder whether ‘we economists’ refers to a more select bunch than she appreciates. It is precisely those economists I mentioned in Critical Mass who are typically marginalized by the mainstream. I used ‘neoclassical’ in my article so often that I worried about the repetition, precisely to make it clear that this was my target, not the interesting ideas put forward outside it. I understand that agent-based modelers have become so fed up with being excluded because their models violate neoclassical dogma that they have been forced to start their own journal.
The ‘citadel’ is not my term, nor my fantasy – it is the expression used by Alan Kirman, one of the pioneers of economic agent-based approaches. Ask Paul Ormerod. Ask Paul Krugman, for that matter. If they all feel this way, surely there’s a reason?
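For anyone curious what Kirman's approach looks like, his celebrated 'ants' recruitment model is easy to state: agents hold one of two opinions and mostly just copy each other, with rare independent switches. A minimal sketch (my own parameter choices, for illustration only):

```python
import random

# Kirman-style 'ants' recruitment model: N agents each hold opinion 0 or 1.
# Each step one random agent either switches independently (prob eps) or
# copies another randomly chosen agent. No news, no fundamentals: herding only.
N, eps, steps = 100, 0.002, 200_000
state = [random.randint(0, 1) for _ in range(N)]

samples = []
for t in range(steps):
    i = random.randrange(N)
    if random.random() < eps:
        state[i] = 1 - state[i]                 # idiosyncratic switch
    else:
        state[i] = state[random.randrange(N)]   # imitate a random peer
    if t % 200 == 0:
        samples.append(sum(state) / N)

extreme = sum(1 for f in samples if f < 0.2 or f > 0.8) / len(samples)
print(f"fraction of time spent near the extremes: {extreme:.2f}")
# The population herds to one opinion, lingers, then lurches to the other --
# large aggregate swings from identical, individually trivial agents.
```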
Letter 3: More risk finance fiascos on the way
Sir, I want to congratulate Philip Ball for his insightful and long-overdue comment about the domination of the economic profession by frustrated mathematicians and physics lecturers (October 30). He is incorrect in one regard, however, when he comments that "there is no other 'science' in such a peculiar state".
In fact, the worlds of finance and risk management have embraced the same nonsensical application of quantitative methods that he describes so well operating in the world of economics. This "derivative" view of credit risk and other important issues of global capital finance has badly damaged the ability of investors to perceive risk and gives managers an unreasonable view of the risks that they do accept. Witness the latest fiasco involving the hedge fund formerly called Amaranth. And there will be more examples very shortly.
Christopher Whalen,
Managing Director,
Institutional Risk Analytics,
Hawthorne, CA 90250, US
Well, precisely. I talk briefly about derivatives and risk management in Critical Mass, simply to say that you’re not going to do very well forecasting risk if you insist on thinking that market noise is Gaussian.
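A minimal sketch of what that means in practice (synthetic draws, not real market data): compare how often a Gaussian and a fat-tailed Student-t distribution, rescaled to the same variance, produce five-standard-deviation moves.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000

gauss = rng.standard_normal(n)
fat = rng.standard_t(df=3, size=n)
fat /= fat.std()                 # rescale to unit variance, like the Gaussian

for name, x in (("Gaussian", gauss), ("Student-t, df=3", fat)):
    print(f"{name:16s} P(|move| > 5 sigma) ~ {np.mean(np.abs(x) > 5):.1e}")
# The fat-tailed series delivers 5-sigma 'crashes' orders of magnitude more
# often; a Gaussian risk model treats them as once-in-millennia events.
```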
Letter 4: Economists are busy dealing with the impact of 'real agents' in the economy
Sir, Contrary to what Philip Ball believes, many economists are already busy "dealing with the variety and irrationality of real agents" ("Baroque fantasies of a most peculiar science", October 30). These economists include several Nobel prize-winners: Herbert A. Simon, Daniel Kahneman, Vernon L. Smith and Thomas C. Schelling. In fact, the Nobel prize this year was awarded to Edmund Phelps for challenging the Phillips curve trade-offs by "taking into account problems of information in the economy".
Mocking economic theory is easy but doing so by perpetuating "rigid dogma" about economics and economists is pure hearsay. A survey of recent literature in mainstream economic journals or textbooks should enlighten this misconceived view.
Chee Kian Leong,
639798 Singapore
OK, so it’s basically the same point as Diane Coyle’s. But the question of the Nobels is curious. (Needless to say, Simon and Schelling loom large in Critical Mass.) I’ve talked with others about the strange fact that economics Nobels often (though by no means always) go to contributions that lie outside the mainstream, and thus outside of neoclassical dogma. (One could add Stiglitz and Sen to the list, for example.) This speaks of impeccable taste (or nearly so) on the part of the Nobel committee. But it is puzzling – nothing like it happens in the other ‘sciences’.
The bottom line is: do you believe in neoclassical general equilibrium theory, with its efficient market hypothesis, its exogenous shocks, its aggregate price curves and all the rest? If not, do you think it is right that this is what students learn and come to believe about the way the economy works? And that papers which question the theory’s fundamental principles should be excluded from much of the literature?
Wednesday, October 25, 2006
In defence of consensus
If I were to hope for psychological subtlety from soap operas, or historical accuracy from Dan Brown, I’d have only myself to blame for my pain. So I realize that I am scarcely doing myself any favours by allowing myself to be distressed by scientifically illiterate junk in the financial pages of the Daily Telegraph. I know that. Yet there is a small part of me, no doubt immature, that exclaims “But this is a national newspaper – how can it be printing sheer nonsense?”
To wit: Ruth Lea, director of the Centre for Policy Research, on the unreliability of consensus views. These are, apparently, “frequently very wrong indeed.” The target of this extraordinarily silly diatribe is the consensus on the human role in climate change. We are reminded by Lea that Galileo opposed the ‘consensus’ view. Let’s just note in passing that the invocation of Galileo is the surefire signature of the crank, and move on instead to the blindingly obvious point that Galileo’s ‘heresy’ represented the voice of scientific reason, and the consensus he opposed was a politico-religious defence of vested interests. Rather precisely, one might think, the opposite of the situation in the climate-change ‘consensus.’ (The truth about Galileo is actually a little more complicated – see Galileo in Rome by William Shea and Mariano Artigas – but this will do for now.)
In any case, the rejoinder is really very simple. Of course scientific consensus can be wrong – that’s the nature of science. But much more often it is ‘right’ (which is to say, it furnishes the best explanation for the observations with the tools to hand).
As further evidence of the untrustworthiness of consensus, however, Lea regales us with tales of how economists (for God’s sake) have in the past got things wrong en masse – apparently she thinks economics has a claim to the analytical and predictive capacity of natural science. Or perhaps she imagines that consensus-making is an arbitrary affair, a thing that just happens when lots of people get together to debate an issue, and not, as in science, a hard-won conclusion wrested from observation and understanding.
Ah, but you see, the science of global warming has been overturned by a paper “of the utmost scientific significance”, published by the venerable Royal Society. The paper’s author, a Danish scientist named Henrik Svensmark, “has been impeded and persecuted by scientific and government establishments” (they do that, you know) because his findings were “politically inconvenient”. What are these findings of the “utmost significance”? He has shown, according to Lea, that there has been a reduction in low-altitude cloudiness in the twentieth century owing to a reduction in the cosmic-ray flux into the atmosphere, because of a strengthening of the shielding provided by the Sun’s magnetic field. Clouds have an overall cooling effect, and so this reduction in cloudiness probably lies behind the rise in global mean temperature.
Now, that sounds important, doesn’t it? Except that of course Svensmark has shown nothing of the sort. He has found that cosmic rays may induce the formation of sulphate droplets in a plastic box containing gases simulating the composition of the atmosphere. That’s an interesting result, demonstrating that cosmic rays might indeed affect cloud formation. It’s certainly worth publishing in the Proceedings of the Royal Society. The next step might be to look for ways of investigating whether the process works in the real atmosphere (and not just a rough lab simulacrum of it). And then whether it does indeed lead to the creation of cloud condensation nuclei (which these sulphate droplets are not yet), and then to clouds. And then to establish whether there is in fact any record of decreasing cosmic-ray flux over the twentieth century. (We can answer that already: it’s been measured for the past 50 years, and there is no such trend.) And then whether there is evidence of changes in low-altitude cloudiness of the sort Svensmark’s idea predicts. And if so, whether it leads to the right predictions of temperature trends in climate models. And then to try to understand why the theory predicts a stronger daytime warming trend, whereas observations show that it’s stronger at night.
But that’s all nitpicking, surely, because in Lea’s view this new result “seriously challenges the current pseudo-consensus that global warming is largely caused by manmade carbon emissions.” Like most climate-change sceptics, Lea clearly feels this consensus is pulled out of a hat through vague and handwaving arguments, rather than being supported by painstaking comparisons of modelling and observation, such as the identification of a characteristic anthropogenic spatial fingerprint in the overall warming trend. It is truly pitiful.
“I am no climate scientist”, says Lea. (I take it we could leave out “climate” here.) So why is she commenting on climate science? I am no ballet dancer, which is why, should the opportunity bizarrely present itself for me to unveil my interpretation of Swan Lake before the nation, I will regretfully decline.
Simon Jenkins has recently argued in the Guardian that science should not be compulsory beyond primary-school level. I don’t think we need be too reactionary about his comments, though I disagree with much of them. But when a director of a ‘policy research centre’ shows such astonishing ignorance of scientific thinking, and perhaps worse still, no one on a national newspaper’s editorial or production team can see that this is so (would the equivalent historical ignorance be tolerated, say?), one has to wonder whether increasing scientific illiteracy still further is the right way to go. In fact, the scientific ignorance on display here is only the tip of the iceberg. The real fault is a complete lack of critical thinking. There are few things more dangerous in public life than people educated just far enough to be able to mask that lack with superficially confident and polished words.
But it’s perhaps most surprising of all to see someone in ‘policy research’ fail to understand how a government should use expert opinion. If there is a scientific consensus on this question, what does she want them to do? The opposite? Nothing? A responsible government acts according to the best advice available. If that advice turns out to be wrong (and science, unlike politics, must always admit to that possibility), the government nevertheless did the right thing. If this Policy Research Centre actually has any influence on policy-making, God help us.
Monday, October 23, 2006

Decoding Da Vinci, decoded
I’m hoping that anyone who feels moved to challenge my dismissal of Fibonacci sequences and the Golden Mean in nature, in the Channel 4 TV series Decoding da Vinci, will think first about how much ends up on the cutting-room floor in television studios. I stand by what I said in the programme, but I didn’t suggest to the presenter Dan Rivers that Fibonacci and phi are totally irrelevant in the natural world. Sure, overblown claims are made for them – just about all of what is said in this regard about human proportion is mere numerology (of which my favourite is the claim that the vital statistics of Veronica Lake were Fibonacci numbers). And the role of these numbers in phyllotaxis has been convincingly challenged recently by Todd Cooke, in a paper in the Botanical Journal of the Linnean Society. But even Cooke acknowledges that the spiral patterns of pine cones, sunflower florets and pineapples do seem to have Fibonacci parastichies (that is, counter-rotating spirals come in groups of (3,5), (5,8), (8,13) and so on). That has yet to be fully explained, although it doesn’t seem to be a huge mystery: the explanation surely has something to do with packing effects at the tip of the stem, where new buds form. It’s a little-known fact that Alan Turing was developing his reaction-diffusion theory of pattern formation to explain this aspect of phyllotaxis just before he committed suicide. Jonathan Swinton has unearthed some fascinating material on this.
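For those who want to see the parastichies emerge for themselves, here is a minimal sketch of the standard golden-angle lattice (Vogel's construction; a geometric idealization, nothing to do with Cooke's botanical argument): place successive 'florets' at the golden angle with a square-root radial law, and each floret's nearest older neighbour sits a Fibonacci number of places back along the generative spiral.

```python
import math
from collections import Counter

# Vogel's spiral lattice: floret n at angle n * golden_angle, radius sqrt(n).
golden_angle = math.pi * (3 - math.sqrt(5))          # about 137.5 degrees
pts = [(math.sqrt(n) * math.cos(n * golden_angle),
        math.sqrt(n) * math.sin(n * golden_angle)) for n in range(1, 500)]

# For each floret, find its nearest older floret; the index differences
# cluster on Fibonacci numbers (5, 8, 13, 21, ...), i.e. the parastichies.
diffs = Counter()
for i in range(100, len(pts)):
    j = min(range(i), key=lambda k: (pts[i][0] - pts[k][0]) ** 2
                                  + (pts[i][1] - pts[k][1]) ** 2)
    diffs[i - j] += 1

print(diffs.most_common(5))   # dominated by consecutive Fibonacci numbers
```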
So the real story of Fibonacci numbers and phi in phyllotaxis is complicated, and certainly not something that could be squeezed into five minutes of TV. I shall discuss it in depth in my forthcoming, thorough revision of my book The Self-Made Tapestry, which Oxford University Press will publish as a three-volume set (under a title yet to be determined), beginning some time in late 2007.
Thursday, October 19, 2006

Paint it black
I don’t generally tend to post my articles for Nature’s nanozone here, as they are a bit too techie. But this was just such a cute story…
Nanotechnology is older than we thought. The Egyptians were using it four millennia ago to darken their graying locks.
Artisans were making semiconductor quantum dots more than four thousand years ago, a team in France has claimed. Needless to say, the motivation was far removed from that today, when these nanoparticles are of interest for making light-emitting devices and as components of photonic circuits and memories. It seems that the ancient Egyptians and Greeks were instead making nanocrystals to dye their hair black.
Philippe Walter of the Centre for Research and Restoration of the Museums of France in Paris and his colleagues have investigated an ancient recipe for blackening hair using lead compounds. They find that the procedure described in historical sources produces nanoparticles of black lead sulphide (PbS), which are formed deep within the protein-rich matrix of hair [1].
That the chemical technologies of long ago sometimes involved surprisingly sophisticated processes and products is well known [2]. The synthesis of nanoparticles has, for example, been identified in metallic, lustrous glazes used by potters in the Middle Ages [3]. Such practices are remarkable given that ancient craftspeople generally had no real knowledge of chemical principles and had only crude means of transforming natural materials, such as heating, at their disposal.
The nanocrystal hair dye is particularly striking. Walter and colleagues say that these particles, with a size of about 5 nm, are “quite similar to PbS quantum dots synthesized by recent materials science techniques.” Moreover, the method alters the appearance of hair permanently, because of the deep penetration of the nanoparticles, yet without affecting its mechanical properties.
That makes the process an attractive dyeing procedure even today, despite the potential toxicity of lead-based compounds. Walter and colleagues point out that some modern hair darkeners indeed contain lead acetate, which forms lead sulphide in situ on hair fibres. In any event, safety concerns do not seem to have troubled people in ancient times, perhaps because of their short life expectancy – as well as using lead to dye hair, the Egyptians used lead carbonate as a skin whitener, and toxic antimony sulphide for eye shadow (kohl).
The recipe for making the lead-based hair dye is simple. Lead oxide is mixed with slaked lime (calcium hydroxide, which is strongly alkaline) and water to make a paste, which is then rubbed into the hair. A reaction between the lead ions and sulphur from hair keratins (proteins) produces lead sulphide. These proteins have a high sulphur content: they are strongly crosslinked by disulphide bonds formed from cysteine amino acids, which gives hair its resilience and springiness (such bonds are broken in hair-straightening treatments). The researchers found that the alkali seems to be essential for releasing sulphur from cysteine to form PbS.
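Schematically (my own summary of the chemistry as described above, not equations taken from the paper):

```latex
% Schematic of the dyeing chemistry: alkali liberates sulphide from cysteine
% residues, and Pb(II) from the dissolved lead oxide captures it as PbS.
\begin{align*}
  \text{keratin--cysteine--SH} &\;\xrightarrow{\;\mathrm{OH}^{-}\;}\; \mathrm{S}^{2-} \;+\; \text{degradation products}\\
  \mathrm{Pb}^{2+} \;+\; \mathrm{S}^{2-} &\;\longrightarrow\; \mathrm{PbS}\!\downarrow \quad (\text{black nanocrystals})
\end{align*}
```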
The French team dyed blond human hairs black by applying this treatment for three days. They then looked at the distribution of lead within cross-sections of the hairs using X-ray fluorescence spectroscopy, and saw that it was present throughout. X-ray diffraction from treated hairs showed evidence of lead sulphide crystals, which electron microscopy revealed as nanoparticles about 4.8 nm across.
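The size figure here comes from microscopy, but for completeness: the standard way to read a crystallite size out of X-ray peak broadening is the Scherrer relation (not necessarily the method Walter's team used):

```latex
% Scherrer relation: crystallite size from diffraction-peak broadening.
% tau = mean crystallite size, K = shape factor (typically ~0.9),
% lambda = X-ray wavelength, beta = peak width (FWHM, in radians),
% theta = Bragg angle.
\tau \;=\; \frac{K\,\lambda}{\beta\,\cos\theta}
```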
The nanoparticles decorate fibrillar aggregates of proteins within the cortex of hair strands – the inner region, beneath the cuticle of the hair surface. High-resolution microscopy revealed that these particles are highly organized: they seem to be attached to individual microfibrils, which are about 7 nm in diameter and are formed from alpha-helical proteins. Thus the distribution of particles echoes the supramolecular arrangement of the microfibrils, being placed in rows about 8-10 nm apart and aligned with the long axis of the hair strands. So the ancient recipe provides a means not only of making nanocrystals but of organizing them in a roughly regular fashion at the nanoscale – one of the major objectives of modern synthetic methods.
The discovery throws a slightly ironic light on the debate today about the use of nanoparticles in cosmetics [4]. Quite properly, critics point out that the toxicological behaviour of such particles is not yet well understood. It now seems this is a much older issue than anyone suspected.
References
1. Walter, P. et al. Early use of PbS nanotechnology for an ancient hair dyeing formula. Nano Lett. 6, 2215-2219 (2006) [article here]
2. Ball, P. Where is there wisdom to be found in ancient materials technologies? MRS Bull. March 2005, 149-151.
3. Pérez-Arantegui, J. et al. Luster pottery from the thirteenth century to the sixteenth century: a nanostructured thin metallic film. J. Am. Ceram. Soc. 84, 442 (2001) [article here]
4. ‘Nanoscience and nanotechnologies: opportunities and uncertainties.’ Report by the Royal Society/Royal Academy of Engineering (2004). [Available here]
Tuesday, October 17, 2006

A sign of the times?
The ETC Group, erstwhile campaigners against nanotechnology, have launched a competition for the design of a ‘nano-hazard’ symbol analogous to those used already to denote toxicity, biohazards or radioactive materials. My commentary for Nature’s muse@nature.com on this unhelpful initiative is here.
I worry slightly that the ETC Group is a soft target, in that their pronouncements on nanotechnology rarely make much sense and show a deep lack of understanding of the field (and I say this as a supporter of many environmental causes and a strong believer in the ethical responsibilities of scientists). But I admit that the announcement left me a little riled, filled as it was with a fair degree of silliness and misinformation. For example:
“Nanoparticles are able to move around the body and the environment more readily than larger particles of pollution.” First, we don’t know much about how nanoparticles move around the body or the environment (and yes, that’s a problem in itself). Second, this sentence implies that nanoparticles (here meaning human-made nanoparticles, though that’s not specified) are ‘pollution’ by default, which one simply cannot claim with such generality. Some may be entirely harmless.
“Some designer nanomaterials may come to replace natural products such as cotton, rubber and metals – displacing the livelihoods of some of the poorest and most vulnerable people in the world.” I don’t want to see the livelihoods of poor, vulnerable people threatened. Yet not only is this claim completely contentious, but it offers us the prospect of a group that originated from concerns about soil erosion and land use now suggesting that metals are ‘natural products’ – as though mining has not, since ancient times, been one of the biggest polluters on the planet.
“Nano-enabled technologies also aim to ‘enhance’ human beings and ‘fix’ the disabled, a goal that raises troubling ethical issues and the specter of a new divide between the technologically “improved” and “unimproved.”” Many of these ‘human enhancements’ are silly dreams of Californian fantasists. There’s nothing specific to nanotech in such goals anyway. What nanotech does show some promise of doing is enabling important advances in biomedicine. If that is a ‘fix’, I suspect it is one many people would welcome.
And so on. I was one of those who wrote to the Royal Society, when they were preparing their report on nanotech, urging that they take seriously the social and ethical implications, even if these lay outside the usual remit of what scientists consider in terms of ethics. I feel that is an important obligation, and I was glad to see that the Royal Society/RAE report acknowledges it as such. But sticking ‘Danger: Nano’ stickers on sun creams isn’t the answer.
Friday, October 06, 2006
When it’s time to speak out
[The following is the unedited form of my latest article on muse@nature.com. The newsblog on this story is worth checking out too.]
By confronting ExxonMobil, the Royal Society is not being a censor of science but an advocate for it.
When Bob Ward, former manager of policy communication at the Royal Society in London, wrote a letter to the oil company ExxonMobil taking it to task for funding groups that deny the human role in global warming, it isn’t clear he knew quite what he was letting himself in for. But with hindsight the result was predictable: once the letter was obtained and published by the British Guardian newspaper, the Royal Society (RS) was denounced from all quarters as having overstepped its role as an impartial custodian of science.
Inevitably, Ward’s letter fuels the claims of ‘climate sceptics’ that the scientific community is seeking to impose a consensus and to suppress dissent. But the RS has been denounced by less partisan voices too. David Whitehouse, formerly a science reporter for the BBC, argues that “you tackle bad science with good science”, rather than trying to turn off the money to your opponents. “Is it appropriate”, says Whitehouse, “that [the RS] should be using its authority to judge and censor in this way?”
And Roger Pielke Jr, director of the University of Colorado’s Center for Science and Technology Policy Research, who is a controversialist but far from a climate sceptic, says that “the actions by the Royal Society are inconsistent with the open and free exchange of ideas, as well as the democratic notion of free speech.”
Yes, there is nothing like the scent of scientific censorship to make scientists of all persuasions come over all sanctimonious about free speech.
The problem is that these critics do not seem to understand what the RS (or rather, Bob Ward) actually said, nor the context in which he said it, nor what the RS now stands for.
Ward wrote his letter to Nick Thomas, Director of Corporate Affairs at ExxonMobil’s UK branch Esso. He expressed surprise and disappointment at the way that ExxonMobil’s 2005 Corporate Citizenship Report claimed that the conclusions of the Intergovernmental Panel on Climate Change that recent global warming has a human cause “rely on expert judgement rather than objective, reproducible statistical methods”. Ward’s suggestion that this claim is “inaccurate” is in fact far too polite.
Model uncertainties and natural variability, the report goes on to claim, “make it very difficult to determine objectively the extent to which recent climate changes might be the result of human actions.” But anyone who has followed the course of the scientific debate over the past two decades will know how determinedly the scientists have refrained from pointing the finger at human activities until the evidence allows no reasonable alternative.
Most serious scientists will agree on this much, at least. The crux of the argument, however, is Ward’s alleged insistence that ExxonMobil stop funding climate-change deniers. (He estimates that ExxonMobil provided $2.9 million last year to US organizations “which misinformed the public about climate change.”) Actually, Ward makes no such demand. He points out that he expressed concerns about the company’s support for such lobby groups in a previous meeting with Thomas, who told him that the company intended to stop it. Ward asked in his letter when ExxonMobil plans to make that change.
So there is no demand here, merely a request for information about an action ExxonMobil had said it planned to undertake. Whitehouse and Pielke are simply wrong in what they allege. But was the RS wrong to intervene at all?
First, anyone who is surprised simply hasn’t been paying attention. Under outspoken presidents such as Robert May and Martin Rees, the Royal Society is no longer the remote, patrician and blandly noncommittal body of yore. It means business. In his 2005 Anniversary Address, May criticized “the campaigns waged by those whose belief systems or commercial interests impel them to deny, or even misrepresent, the scientific facts”.
“We must of course recognise there is always a case for hearing alternative, even maverick, views”, he added. “But we need to give sensible calibration to them. The intention of ‘balance’ is admittedly admirable, but this problem of wildly disparate ‘sides’ being presented as if they were two evenly balanced sporting teams is endemic to radio, TV, print media, and even occasional Parliamentary Select Committees.”
In response to his critics, Ward has said that “the Society has spoken out frequently, on many issues and throughout its history, when the scientific evidence is being ignored or misrepresented”. If anything, it hasn’t done that often enough.
Second, Ward rightly ridicules the notion of ExxonMobil as the frail David to the Royal Society’s Goliath. The accusations of “bullying” here are just risible. The RS is no imperious monarch, but a cash-strapped aristocrat who lives in the crumbling family pile and contrives elegantly to hide his impecuniosity. In contrast, the climate sceptics count among their number the most powerful man in the world, who has succeeded in emasculating the only international emissions treaty we have.
And it’s not just the oil industry (and its political allies) that the RS faces. The media are dominated by scientific illiterates like Neil Collins, who writes in the Telegraph newspaper, à propos this little spat, of his “instinctive leaning towards individuals on the fringe”, that being the habitual raffish pose of the literati. (My instinctive leaning, in contrast, is towards individuals who I think are right.) “Sea level does not appear to be rising”, says Collins (wrong), while “the livelihoods of thousands of scientists depend on our being sufficiently spooked to keep funding the research” (don’t even get me started on this recurrent idiocy). I fear the scientific community does not appreciate the real dangers posed by this kind of expensively educated posturing from high places.
If not, it ought to. In the early 1990s, the then editor of the Sunday Times, Andrew Neil, supported a campaign by his reporter Neville Hodgkinson suggesting that HIV does not cause AIDS.
Like most climate sceptics, Neil and the HIV-deniers did not truly care about having a scientific debate – their agenda was different. To them, the awful thing about the HIV theory was that it placed every sexual libertine at risk. How dare science threaten to spoil our fun? Far better to confine the danger to homosexuals: Hodgkinson implied that AIDS might somehow be the result of gay sex. For a time, the Sunday Times campaign did real damage to AIDS prevention in Africa. But now it is forgotten and the sceptics discredited, while Neil has gone from strength to strength as a media star.
On that occasion, Nature invited accusations of scientific censorship by standing up to the Sunday Times’s programme of misinformation – making me proud to be working for the journal. As I recall, the RS remained aloof from that matter (though May mentions it in his 2005 speech). We should be glad that it is now apparently ready to enter the fray. Challenging powerful groups that distort science for personal, political or commercial reasons is not censorship, it is being an advocate for science in the real world.
Physics gets dirty
[This is my Materials Witness column for the November issue of Nature Materials.]
My copy of The New Physics, published in 1989 by Cambridge University Press, is much thumbed. Now regarded as something of a classic, it provides a peerless overview of key areas of modern physics, written by leading experts who achieve the rare combination of depth and clarity.
It’s reasonable, then, to regard the revised edition, just published as The New Physics for the 21st Century, as something of an authoritative statement on what’s in and what’s out in physics. And so it is striking to see materials, more or less entirely absent from the 1989 book, prominent on the new agenda.
Most notably, Robert Cahn of Cambridge University has contributed a chapter called ‘Physics and materials’, which covers topics ranging from dopant distributions in semiconductors to liquid crystal displays, photovoltaics and magnetic storage. In addition, Yoseph Imry of the Weizmann Institute in Israel contributes a chapter on ‘Small-scale structure and nanoscience’, a snapshot of one of the hottest areas of materials science.
All very well, but it raises the question of why materials science was, by this measure, more or less absent from twentieth-century physics yet is central to that of the twenty-first. One might have thought that the traditional image of materials science, as an empirical engineering discipline with a theoretical framework rooted in classical mechanics, was far from cutting-edge, and would hardly rival the appeal of quantum field theory or cosmology.
Of course, topics such as inflationary theory and quantum gravity are still very much on the menu. But the new book drops topics that might be deemed the epitome of physicists’ reputed delight in abstraction: gone are the chapters on grand unified theories, gauge theories, and the conceptual foundations of quantum theory. Even Stephen Hawking’s contribution on ‘The edge of spacetime’ has been axed (a brave move by the publishers) in favour of down-to-earth biophysics and medical physics.
So what took physics so long to realize that it must acknowledge its material aspects? “Straight physicists alternate between the deep conviction that they could do materials science much better than trained materials scientists (they are apt to regard the latter as fictional) and a somewhat stand-offish refusal to take an interest”, claims Cahn.
One could also say that physics has sometimes tried to transcend material particularities. “There has been the thought that condensed matter and material physics is second-rate dirty, applied stuff”, Imry says. Even though condensed matter is fairly well served in the first edition, it tended to be rather dematerialized, couched in terms of critical points, dimensionality and theories of quantum phase transitions. But it is now clear that universality has its limits – high-temperature superconductors need their own theory, graphene is not like a copper monolayer, nor poly(phenylene vinylene) like silicon.
“Nanoscience has both universal aspects, which have been much of the focus of modern physics, and variety due to the wealth of real materials”, says Imry. “That’s a part of the beauty of this field!”
Tuesday, September 26, 2006

One small step: NASA’s first date with China
Here’s my latest article for muse@nature.com, pondering the implications of the visit by NASA’s Mike Griffin to China. (There’ll be a few differences due to editing, and this version also has handy links in the text.)
NASA’s visit to China is overdue – the rest of the world got there long ago.
This could be the start of a beautiful friendship. That, at least, is how the Chinese press seems keen to portray the visit this week by NASA’s head Mike Griffin, who is touring Beijing and Shanghai, at the invitation of Chinese president Hu Jintao, to “become acquainted with my counterparts in China and to understand their goals for space exploration.” China Central Television proudly proclaims “China, US to boost space cooperation”, while China Daily reports “China-US space co-op set for lift-off.”
But Griffin himself is more circumspect. “It’s our get-acquainted visit, it’s our exploratory visit and it’s our first date”, he told a press conference, adding “There are differences between our nations on certain key points” – unsurprisingly, for example, the control of missiles. He stressed shortly before the visit that he did not want to “create expectations that would be possibly embarrassing to us or embarrassing to China.”
Griffin’s caution is understandable, since this is after all the first visit by a NASA administrator, and he confessed before the trip that he did not know much about China’s capabilities in space. But why did it take them so long? After all, China has well established joint space projects with Europe, Russia and Brazil, and is one of only three nations to have put people into space.
Rising star
NASA’s Chinese jaunt is not entirely out of the blue. The administrator of the China National Space Administration (CNSA), Sun Laiyan, visited Griffin’s predecessor Sean O’Keefe at the end of 2004 on a similar introductory mission, a year after the first Chinese manned spaceflight. US space scientists were given a wake-up call last April when CNSA’s vice administrator Luo Ge revealed the extent of China’s space plans at the National Space Symposium in Colorado. These included the possibility of a manned moon shot.
And the full reality of Chinese capabilities became evident to US congressman Tom Feeney on a visit to China in January as part of Congress’s China Working Group. He and his colleagues saw the Jiuquan satellite launch centre in Gansu province at first hand. “In the United States, we’re training aerospace engineers how to maintain 20- to 40-year-old technology”, said Feeney. “The Chinese are literally developing new technology on their own.”
There can be no remaining doubt that China is a serious player in space technology, latecomer to the party though it is. Griffin admits that “China has clearly made enormous strides in a very short period”. The ‘can-do’ philosophy apparent in China’s domestic industrial and engineering schemes, pursued with a determination that can appear little short of ruthless, will surely be sounding alarms within the US space industry.
All of which makes it strange that a NASA trip to China has been so long in coming.
Enemy at the gate
The reticence must be due in large measure to the fact that China has long been regarded as a rival rather than a collaborator. China’s desire to become involved in the International Space Station (ISS) has previously been stymied by the USA, for example. In 2001 Dana Rohrabacher, chair of the space and aeronautics subcommittee of the House of Representatives, told journalists that he was not interested in Chinese offers to pay for ISS hardware, because of the country’s human-rights record. “The space station’s supposed to stand for something better,” he said, after seeking help from countries including the United Arab Emirates.
The real reasons for a US reluctance to engage with China over space technology must include a considerable dose of Cold War paranoia, especially now that China is emerging as such a strong player. Griffin himself says that Russian involvement with the ISS also initially met with some resistance, although now it’s clear that the space station would have been doomed without it.
The current talks of cooperation do not necessarily signal a lessening of that scepticism, but are possibly boosted by a mixture of realpolitik and economics. Since China is going ahead at full steam with its links to the space programs of Russia and Europe, the US could risk creating a powerful competitor if it doesn’t join in. And preventing US companies from exporting technologies to the most rapidly growing space program in the world threatens to undercut their own competitiveness. In fact, one of the obstacles to such trade is the question of China’s readiness to observe patents and copyrights.
But why has the US position on collaboration with China differed so much from that in Europe? Vincent Sabathier, previously Space Attaché at the French Embassy in the US, says that it comes down to a fundamental difference in attitudes to international relations: the US adopts a ‘realist’ stance based on opposed national interests, while European states have a more liberal approach that favours international dialogue and partnership. “While the US places an emphasis on space power and control, Europe maintains that its focus is on the peaceful use of outer space”, Sabathier says.
Power of partnerships
This has been reflected in Europe-China collaborations on satellite technology, such as the Galileo global-positioning system, intended for civilian use. Some Americans were unhappy that this threatened the hegemony of the US-controlled Global Positioning System, which has a large military component. The close links between the US space and military programs have hindered trade of its space technologies with China because of military export controls, whereas in Europe the issues are largely decoupled (Europe maintains a rather precarious arms embargo on trade with China). “The US’s isolationist policy forces other space-faring nations, such as Europe, Japan, Russia, India and China, to cooperate among themselves”, Sabathier asserts.
Fears about how China plans to use its space capabilities cannot be wholly dismissed as paranoia, however. China’s defence spending has increased in recent years, although it is notoriously cagey about the figures. Some worry that strengthening its military force is partly a move to intimidate Taiwan – an objective that could be bolstered by satellite technology. The fact remains, however, that China’s young space program doesn’t have the military legacy of NASA, fuelled by an entire industry of defence-based aerospace. At this moment, it looks as though China’s space ambitions are driven more by national pride – by the wish to be seen as a technological world leader. That claim is becoming increasingly justified. Rather than worrying about losing technical secrets, China’s space collaborators seem now more likely to gain some handy tips.
Friday, September 08, 2006
Latest Lab Report
Here is my Lab Report column for the October issue of Prospect. And while I’m about it, I’d like to mention the excellent comment on Prospect’s web site about the shameful issue of Britain’s stance on the Trident nuclear submarines. Sadly, this kind of clear-headedness doesn’t find a voice in Westminster.
*************************
In-flight chemistry
It is not easy to make TNT, as I discovered by boiling toluene and nitric acid to no great effect during a school lunchtime. Admittedly it is not terribly hard either, if you have the right recipe, equipment and ingredients – the details can be found on the web, and the raw materials at DIY stores – but a little practical experience with chemistry provides some perspective on the notion of concocting an aircraft-busting explosive in the cabin toilet.
So it’s not surprising that some chemists have expressed doubts about the alleged terrorist plot to blow up transatlantic flights. Could two liquids really be combined to make an instant, deadly explosive?
Speculation has it that the plotters were going to mix up triacetone triperoxide (TATP), an explosive allegedly used in the London tube bombings last year. In principle this can be made from hydrogen peroxide (bleach), acetone (paint thinner) and sulphuric acid (drain cleaner). But like so much of chemistry, it’s not that straightforward. The ingredients have to be highly concentrated, so can’t easily be passed off as mineral water or shampoo. The reaction needs to be carried out at low temperature. And even if you succeed in making TATP, it isn’t dangerous until purified and crystallized. In other words, you’d be smuggling into the loo not just highly potent liquids but also a refrigerant and distilling apparatus – and the job might take several hours. Gerry Murray of the Forensic Science Agency of Northern Ireland told Chemistry World magazine that making TATP in-flight would be “extremely difficult.”
Why not just smuggle a ready-made liquid explosive on board? Some media reports suggested that the plotters intended instead to use bottled nitroglycerine. But you’d need a lot of it to do serious damage, and it is so shock-sensitive that it could well go off during check-in. The same is true of pure TATP itself (a solid resembling sugar), which is why the unconfirmed suggestion that it was used for the tube bombings has met with some scepticism.
What does this mean for the security measures currently in place? It is hard to understand the obsession with liquids and gels. It’s not clear, for example, that there is any vital component of any ‘mixable’ explosive that would be odourless and pass a ‘swig test’, let alone be feasibly used in flight to brew up a lethal charge. Why are solids not subject to the same scrutiny? Most explosives (including TATP) in any case emit volatile fumes that can be detected at very low concentrations.
When airports instigated the ‘no liquids’ policy in August, they were making an understandable quick response to a poorly known threat. But they now seem to be at risk of perpetuating a myth about how easy it is to do complex chemistry.
Moon crash
Smashing spacecraft into celestial bodies has become something of a craze among space scientists. In 1999 they disposed of the Lunar Prospector craft, at the end of its mission to survey the moon for water ice and magnetic fields, by crashing it into a lunar crater in the hope that the impact would throw up evidence of water visible from telescopes. (It didn’t.) The Deep Impact mission ploughed into the comet Tempel 1 in July 2005, revealing a puff of ice hidden below the surface. The spent stage of a rocket due to send a new satellite to the moon in 2008 has been proposed for a more massive re-run of the Prospector experiment. And the THOR mission pencilled in for 2011 would send a 100-kg copper projectile crashing into Mars, creating a 50-m wide crater and possibly ejecting ice, organic compounds and other materials.
The most recent of these kamikaze missions is SMART-1, the European Space Agency’s moon-observing satellite, which ended its career on 3 September by smashing into the lunar Lake of Excellence. Again, the aim was to analyse images of the impact to identify the chemical composition of the debris, using the technique of spectroscopy. SMART-1 had already stayed active for longer than originally expected, and its experimental ion-thrust propulsion system was exhausted, making a lunar crash landing inevitable anyway. This was another case of wringing a last bit of value from a moribund mission. The disposal of a washing-machine-sized probe on the moon is hardly the most heinous act of fly-tipping – but it can’t be long before this trend starts to raise mutters of environmental disapproval.
Perhaps we can clear up the mess when we return to the moon. Lockheed Martin has recently been awarded the NASA contract to build the Orion Crew Exploration Vehicle, the replacement for the beleaguered space shuttle and the basis of a new manned moon shot. Scheduled for 2014 at the latest, Orion will ditch the airplane chic of the shuttle, comprising a single-use tubular rocket with a lunar lander and re-entry capsule in its tip, the latter provided with heat shield and parachutes. Lockheed Martin has presumably been working hard on this design, but cynics might suspect they just stole the idea from that film with Tom Hanks in it.
The Macbeth effect
Shakespeare’s insight into the human psyche is vindicated once again. The impulse to wash after committing an unethical act, immortalized in Lady Macbeth’s “Out, damned spot!”, has been confirmed as a genuine psychological phenomenon. Two social scientists say that ‘cleansing-related words’ were more readily produced in exercises by subjects who had first been asked to recall an unethical deed. These subjects were also more likely to take a proffered antiseptic wipe – and, rather alarmingly, such physical cleansing seemed to expunge their guilt and make them less likely to show philanthropic behaviour afterwards. There is nothing particularly godly about cleanliness, then: it may instead be the sign of a guilty conscience cheaply assuaged.
Sunday, September 03, 2006
Unbelievable fiction
In telling us “how to read a novel”, John Sutherland in the Guardian Review (2 September 2006) shows an admirable willingness to avoid the usual literary snobbery about science fiction, suggesting that among other things it can have a pedagogical value. That’s certainly true of the brand of sci-fi pioneered by the likes of Arthur C. Clarke and Isaac Asimov, which took pride in the accuracy of its science. Often, however, sci-fi writers might appropriate just enough real science to make that aspect of the plot vaguely plausible – which is entirely proper for a work of fiction, but not always the most reliable way to learn about science. Even that, however, can encourage the reader to find out more, as Sutherland says.
Sadly, however, he chooses to use the books of Michael Crichton to illustrate his point. Now, Crichton likes to let it be known that he does his homework, and certainly his use of genetic engineering in Jurassic Park is perfectly reasonable for a sci-fi thriller: that’s to say, he stretches the facts, but not unduly, and one has to be a bit of a pedant to object to his reconstituted T. rexes. But Crichton now seems to have succumbed to the malaise that threatens many smart and successful people: forgetting the limitations of that smartness. In Prey, Crichton made entertaining use of the eccentric vision of nanotechnology presented by Eric Drexler (self-replicating rogue nanobots), supplemented with some ideas from swarm intelligence, but one’s heart sank when it became clear at the end of the book that in fact Crichton believed this was what nanotech was really all about. (I admit that I’m being generous about the definition of ‘entertaining’ here – I read the book for professional purposes, you understand, and was naively shocked by what passes for characterisation and dialogue in this airport genre. But that’s just a bit of literary snobbishness of my own.)
The situation is far worse, however, in Crichton’s climate-change thriller State of Fear, which portrays anthropogenic climate change as a massive scam. Crichton wants us to buy into this as a serious point of view – one, you understand, that he has come to himself after examining the scientific literature on the subject.
I’ve written about this elsewhere. But Sutherland’s comments present a new perspective. He seems to accept a worrying degree of ignorance on the part of the reader, such that we are assumed to be totally in the dark about whether Crichton or his ‘critics’ (the entire scientific community, aside from the predictable likes of Bjorn Lomborg, Patrick Michaels, Richard Lindzen and, er, about two or three others) are correct. “No one knows the accuracy of what Crichton knows, or thinks he knows”, says Sutherland. Well, we could do worse than consult the latest report by the Intergovernmental Panel on Climate Change, composed of the world’s top climate scientists, which flatly contradicts Crichton’s claims. Perhaps in the literary world one person’s opinion is as good as another’s, but thankfully science doesn’t work that way. Sutherland’s suggestion that readers of State of Fear will end up knowing more about the subject is wishful thinking: misinformation is the precise opposite of information.
It isn’t clear whether or not he thinks we should be impressed by the fact that Crichton testified in 2005 before a US Senate committee on climate change, but in fact this showed in truly chilling fashion how hard some US politicians find it to distinguish fact from fiction. (That State of Fear was given an award for ‘journalism’ by the American Association of Petroleum Geologists earlier this year was more nakedly cynical.)
Yes, fiction can teach us facts, but not when it is written by authors who have forgotten they are telling a story and have started to believe this makes them experts on their subject. That’s the point at which fiction starts to become dangerous.
Saturday, August 26, 2006
Tyred out
Here’s my Materials Witness column for the September issue of Nature Materials. It springs from a recent broadcast in which I participated on BBC Radio 4’s Material World – I was there to talk about synthetic biology, but the item before me was concerned with the unexpectedly fascinating, and important, topic of tyre disposal. It seemed to me that the issue highlighted the all too common craziness of our manufacturing systems, in which potentially valuable materials are treated as ‘waste’ simply because we have not worked out the infrastructure sensibly. We can’t afford this profligacy, especially with oil-based products. I know that incineration has a bad press, and I can believe that is sometimes deserved; but surely it is better to recover some of this embodied energy rather than to simply dump it in the nearest ditch?
*****
In July it became illegal to dump almost any kind of vehicle tyre in landfill sites in Europe. Dumping of whole tyres has been banned since 2003; the new directive forbids such disposal of shredded tyres too. That is going to leave European states with an awful lot of used tyres to dispose of in other ways. What can be done with them?
This is a difficult question for the motor industry, but also raises a broader issue about the life cycle of industrial materials. The strange thing about tyres is that there are many ways in which they could be a valuable resource, and yet somehow they end up being regarded as toxic waste. Reduced to crumbs, tyre rubber can be incorporated into soft surfacing for sports grounds and playgrounds. Added to asphalt for road surfaces, it makes the roads harder-wearing.
And rubber is of course an energy carrier: a potential fuel. Pyrolysis of tyres generates gas and oil, recovering some of the carbon that went into their making. This process can be made relatively clean – certainly more so than combustion of coal in power stations.
Alternatively, tyres can simply be burnt to create heat: they have 10% more calorific content than coal. At present, the main use of old tyres is as fuel for cement kilns. But the image of burning tyres sounds deeply unappealing, and there is opposition to this practice from environmental groups, who dispute the claim that it is cleaner than coal. Such concerns make it hard to secure approval for either cement-kiln firing or pyrolysis. And the emissions regulations are strict – rightly so, but this reduces the economic viability. As a result, these uses tend to be capacity-limited.
Tyre retreads have a bad image too – they are seen as second-rate, whereas the truth is that they can perform very well and the environmental benefits of reuse are considerable. Such recycling is also undermined by cheap imports – why buy a retread when a new tyre costs the same?
Unfortunately, other environmental concerns are going to make the problem of tyre disposal even worse. Another European ruling prohibits the use of polycyclic aromatic hydrocarbon oil components in tyre rubber because of their carcinogenicity. It’s a reasonable enough precaution, given that a Swedish study in 2002 found that tyre wear on roads was responsible for a significant amount of the polycyclic aromatics detected in aquatic organisms around Stockholm. But without these ingredients, a tyre’s lifetime is likely to be cut to perhaps just a quarter of its present value. That means more worn-out tyres: the 42 million tyres currently discarded each year in the UK alone could rise to around 100 million as a consequence.
Whether Europe will avoid a used-tyre mountain remains to be seen. But the prospect of an evidently useful, energy-rich material being massively under-exploited seems to say something salutary about the notion that market economics can guarantee efficient materials use. Perhaps it’s time for some incentives?
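Incidentally, it’s worth putting a rough number on that embodied energy. Here’s a back-of-envelope sketch in Python – the average tyre mass (about 8 kg) and coal’s calorific value (about 27 MJ per kg) are my own assumed round numbers; only the 42-million figure and the ‘10% more than coal’ claim come from the column above:

# Rough estimate of the energy embodied in the UK's annually discarded tyres.
# Assumptions (mine, not from the column): an average tyre weighs ~8 kg,
# and coal yields ~27 MJ/kg. From the column: 42 million tyres discarded
# per year in the UK, and tyre rubber holding ~10% more energy than coal.

TYRES_PER_YEAR = 42e6                    # UK discards per year (from the column)
TYRE_MASS_KG = 8.0                       # assumed average tyre mass
COAL_MJ_PER_KG = 27.0                    # assumed calorific value of coal
TYRE_MJ_PER_KG = 1.1 * COAL_MJ_PER_KG    # "10% more than coal"

total_mass_kg = TYRES_PER_YEAR * TYRE_MASS_KG
energy_mj = total_mass_kg * TYRE_MJ_PER_KG
coal_equivalent_kg = energy_mj / COAL_MJ_PER_KG

print(f"Rubber discarded: {total_mass_kg / 1e6:.0f} kilotonnes per year")
print(f"Embodied energy:  {energy_mj / 1e9:.0f} petajoules per year")
print(f"Coal equivalent:  {coal_equivalent_kg / 1e6:.0f} kilotonnes of coal")

On these assumptions the answer comes out at around ten petajoules a year – very roughly what a small power station puts out in a year. Not a national energy strategy, but a strange resource to treat as waste.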