There’s no place like home
… but that won’t stop us looking for it in our search for extraterrestrials.
[This is the pre-edited version of my latest Muse column for Nature news. Am I foolish to imagine there might be people out there who appreciate the differences? Don't answer that.]
In searching the skies for other worlds, are we perhaps just like the English tourists waddling down the Costa del Sol, our eyes lighting up when we see “The Red Lion” pub with the Union Jack in the windows and Watneys Red Barrel on tap? Gazing out into the unutterably vast, unnervingly strange depths of the cosmos, are we not really just hankering for somewhere that looks like home?
It isn’t just a longing for the familiar that has stirred up excitement about the discovery of what looks like a scaled-down version of our own solar system surrounding a distant star [1]. But neither, I think, is that impulse absent.
There’s sound reasoning in looking for ‘Earth-like’ extrasolar planets, because the one thing we can say for sure about Earth-like conditions is that they are capable of supporting life. And it is entirely understandable that extraterrestrial life should be the pot of gold at the end of this particular rainbow.
Yet I doubt that the cold logic of this argument is all there is behind our fascination with Earth-likeness. Science-fiction writers and movie makers have sometimes had fun inventing worlds very different to our own, peopled (can we say that?) with denizens of corresponding weirdness. But that, on the whole, is the exception. That the Klingons and Romulans looked strangely like Californians with bad hangovers was not simply a matter of budget constraints. Edgar Rice Burroughs’ Mars was apparently situated somewhere in the Sahara, populated by extras from the Arabian Nights. In Jeanette Winterson’s new novel The Stone Gods, a moribund, degenerate Earth (here called Orbus) rejoices in the discovery of a pristine Blue Planet in the equivalent of the Cretaceous period, because it offers somewhere to escape to (and might that, in these times haunted by environmental change, nuclear proliferation and fears of planet-searing impacts, already be a part of our own reverie?). Most fictional aliens have been very obviously distorted or enhanced versions of ourselves, both physically and mentally, because in the end our stories, right back to those of Valhalla, Olympus and the seven Hindu heavens, have been more about exploring the human condition than genuinely imagining something outside it.
This solipsism is understandable but deep-rooted, and we shouldn’t imagine that astrobiology and extrasolar planetary prospecting are free from it. However, the claim by the discoverers of the new ‘mini-solar system’ that “solar system analogs may be common” around other stars certainly amounts to more than saying “hey, you can get a decent cup of coffee in this god-forsaken place”. It shows that our theories of the formation and evolution of planetary systems are not parochial, and offers some support for the suspicion that previous methods of planet detection bias our findings towards oddballs such as ‘hot Jupiters’. The fact that the relatively new technique used in this case – gravitational microlensing – has so quickly turned up a ‘solar system analog’ is an encouraging sign that indeed our own neighbourhood is not an anomaly.
The desire – it is more than an unspoken expectation – to find a place that looks like home is nevertheless a persistent bugbear of astrobiology. A conference organized in 2003 to address the question “Can life exist without water?” had as part of its agenda the issue of whether non-aqueous biochemistries could be imagined [2]. But in the event, the participants did not feel comfortable in straying beyond our atmosphere, and so the debate became that of whether proteins can function in the dry or in other solvents, rather than whether other solvents can support the evolution of a non-protein equivalent of enzymes. Attempts to re-imagine biology in, say, liquid methane or ammonia, have been rare [3]. An even more fundamental question, which I have never seen addressed anywhere, is whether evolution has to be Darwinian. It would be a daunting challenge to think of any better way to achieve ‘design’ and function blindly, but there is no proof that Darwin has a monopoly on such matters. Do we even need evolution? Are we absolutely sure that some kind of spontaneous self-organization can’t create life-like complexity, without the need for replication, say?
Maybe these questions are too big to be truly scientific in this form. Better, then, to break bits off them. Marcelo Gleiser and his coworkers at Dartmouth College in New Hampshire have done that in a recent preprint [4], asking whether a ‘replica Earth’ would share our left-handed proteins and right-handed nucleic acids. The handedness here refers to the mirror-image shapes of the biomolecular building blocks. The two mirror-image forms are called enantiomers, and are distinguishable by the fact that they rotate the plane of polarized light to the left or the right.
In principle, all our biochemistry could be reversed by mirror reflection of these shapes, and we’d never notice. So the question is why one set of enantiomers was preferred over the other. One possibility is that it was purely random – once the choice is made, it is fixed, because building blocks of the ‘wrong’ chirality don’t ‘fit’ when constructing organisms. Other explanations, however, suggest that life’s hand was biased at the outset, perhaps by the intrinsic left-handedness in the laws of fundamental physics, or because there was an excess of left-handed amino acids that fell to Earth on meteorites and seeded the first life (that is simply deferring the question, however).
Gleiser and his coworkers argue that these ideas may all be irrelevant. They say that sufficiently strong and long-lasting environmental disturbances can reset the handedness, if it is propagated in the prebiotic environment by an autocatalytic process in which an enantiomer acts as a catalyst for its own production while blocking the reaction that produces its mirror image. Such a self-amplifying process was proposed in 1953 by the physicist Charles Frank, and was demonstrated experimentally in the 1990s by the Japanese chemist Kenso Soai.
The US researchers show that an initially random mixture of enantiomers in such a system quickly develops patchiness, with big blobs of each enantiomer accumulating like oil separating from vinegar in an unstirred salad dressing. Chance initial variations will lead to one or other enantiomer eventually dominating. But an environmental disruption, like the planet-sterilizing giant impacts suffered by the early Earth, can shake the salad dressing, breaking up the blobs. When the process begins again, the new dominant enantiomer that emerges may be different from the one before, even if there was a small excess of the other at the outset. As a result, they say, the origin of life’s handedness “is enmeshed with Earth’s environmental history” – and is therefore purely contingent.
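The amplify-then-reset mechanism can be caricatured in a few lines of code. This is a minimal, well-mixed toy (it omits the spatial patchiness that Gleiser’s team simulates), using a standard reduced form of Frank-style kinetics in which the enantiomeric excess η = (L − D)/(L + D) obeys dη/dt = kη(1 − η²); the rate constant and the size of the post-‘shake’ bias are illustrative assumptions, not values from the preprint.

```python
# Frank-style chiral amplification, reduced to a single equation for the
# enantiomeric excess eta = (L - D) / (L + D):
#   d(eta)/dt = k * eta * (1 - eta**2)
# Any small initial bias grows until one enantiomer dominates (eta -> +/-1).
# k, dt and the bias sizes below are illustrative, not fitted to anything.

def amplify(eta0, k=1.0, dt=0.01, steps=1000):
    """Euler-integrate the excess forward from an initial bias eta0."""
    eta = eta0
    for _ in range(steps):
        eta += k * eta * (1.0 - eta * eta) * dt
    return eta

# A slight left-handed surplus is amplified to near-total dominance...
before_shake = amplify(+0.02)   # ends close to +1

# ...but an environmental disruption that re-racemizes the mixture, leaving a
# tiny surplus of the other enantiomer, hands victory to the mirror image.
after_shake = amplify(-0.02)    # ends close to -1
```

The outcome is decided entirely by the sign of the tiny bias left after the last big ‘shake’ — which is the sense in which, on this picture, life’s handedness is contingent on environmental history.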
Other researchers I have spoken to question whether the scheme Gleiser’s team has considered – autocatalytic spreading in an unstirred solvent – has much relevance to ‘warm little ponds’ on the turbulent young Earth, and whether the notion of resetting by shaking isn’t obvious in any case in a process like Frank’s in which chance variations get amplified. But of course, in an astrobiological context the more fundamental issue is whether there is the slightest reason to think that alien life will use amino acids and DNA, so that a comparison of handedness will be possible.
That doesn’t mean these questions aren’t worth pursuing (they remain relevant to life on Earth, at the very least). But it’s another illustration of our tendency to frame the questions parochially. In the quest for life elsewhere, whether searching for new planets or considering the molecular parameters of potential living systems, we are in some ways more akin to historians than to scientists: our data is a unique narrative, and our thinking is likely to stay trapped within it.
References
1. Gaudi, B. S. et al. Science 319, 927-930 (2008).
2. Phil. Trans. R. Soc. Lond. Ser. B special issue, 359 (no. 1448) (2004).
3. Benner, S. A. et al. Curr. Opin. Chem. Biol. 8, 672 (2004).
4. Gleiser, M. et al. http://arxiv.org/abs/0802.1446
Friday, February 15, 2008
Saturday, February 09, 2008
The hazards of saying what you mean
It’s true, the archbishop of Canterbury talking about sharia law doesn’t have much to do with science. Perhaps I’m partly just pissed off and depressed. But there is also a tenuous link insofar as this sorry affair raises the question of how much you must pander to public ignorance in talking about complex matters. Now, the archbishop does not have a way with words, it must be said. You’ve got to dig pretty deep to get at what he’s saying. One might argue that someone in his position should be a more adept communicator, although I’m not sure I or anyone else could name an archbishop who has ever wrapped his messages in gorgeous prose. But to what extent does a public figure have an obligation to explain that “when I say X, I don’t mean the common view of X based on prejudice and ignorance, but the actual meaning of X”?
You know which X I mean.
I simply don’t know whether Rowan Williams’ suggestion about conferring legality on common cultural practices of decision-making that currently have no legal basis is a good one, or a practical one. I can see a good deal of logic in the proposal that, if these practices are already being widely used, that use might be made more effective, better supported and better regulated by giving such systems more formal recognition. But it’s not clear that providing a choice between alternative systems of legal proceeding is workable, even if this need not exactly amount to multiple systems of law coexisting. My own prejudice is to worry that some such systems might involve disparities that Western societies would feel uncomfortable about, and that making their adoption ‘voluntary’ does not necessarily mean that everyone involved will be free to exercise that choice free of coercion. But I call this a prejudice because I do not know the facts in any depth. It is certainly troubling that some Islamic leaders have suggested there is no real desire in their communities for the kind of structure Williams has proposed.
Yet when Ruth Gledhill in the Times shows us pictures and videos of Islamist extremists, we’re entitled to conclude that there is more to her stance than disagreements of this kind. Oh, don’t be mealy-mouthed, boy: she is simply whipping up anti-Muslim hysteria. The scenes she shows have nothing to do with what Rowan Williams spoke about – but hey, let’s not forget how nutty these people are.
Well, so far so predictable. Don’t even think of looking at the Sun here or the Daily Mail. I said don’t. What is most disheartening from the point of view of a communicator, however, is the craven, complicit response in some parts of the ‘liberal’ press. In the Guardian, Andrew Brown says “it is all very well for the archbishop to explain that he does not want the term ‘sharia’ to refer to criminal punishments, but for most people that’s what the word means: something atavistic, misogynistic, cruel and foreign.” Let me rephrase that: “it is all very well for the archbishop to explain precisely what he means, but most people would prefer to remain ignorant and bigoted.”
And again: “It’s no use being an elitist if you don’t understand the [media] constraints under which an elite must operate.” Or put another way: “It’s no use being a grown-up if you don’t understand that the media demands you be immature and populist.”
And again: “there are certain things which may very well be true, and urgent and important, but which no archbishop can possibly say.” Read that as: “there are certain things which may very well be true, and urgent and important, but which as a supposed moral figurehead in society you had better keep quiet about.”
And again: “Even within his church, there is an enormous reservoir of ill-will towards Islam today, as it was part of his job to know.” Or rather, “he should realise that it’s important not to say anything that smacks of tolerance for other faiths, because that will incite all the Christian bigots.” (And it has: what do you really think synod member Alison Ruoff means when she says of Williams that “he does not stand up for the church”?)
What a dismaying and cynical take on the possibility of subtle and nuanced debate in our culture, and on the possibility of saying what you mean rather than making sure you don’t say what foolish or manipulative people will want to believe or pretend you meant. Madeleine Bunting’s article in the Guardian is, on the other hand, a sane and thoughtful analysis. But the general take on the matter in liberal circles seems to be that the archbishop needs a spin doctor. That’s what these bloody people have just spent ten years complaining about in government.
Listen, I’m an atheist, it makes no difference to me if the Church of England (created to save us from dastardly foreign meddling, you understand – Ruth Gledhill says so) wants to kick out the most humane and intelligent archie they’ve had for yonks. But if that happens because they capitulate to mass hysteria and an insistence that everyone now plays by the media’s rules, it’ll be an even sadder affair than it is already.
Friday, February 08, 2008
Waste not, want not
[This is my latest Muse column for Nature News.]
We will now go to any extent to scavenge every last joule of energy from our environment.
As conventional energy reserves dwindle, and the environmental costs of using them take on an apocalyptic complexion, we seem to be developing the mentality of energy paupers, cherishing every penny we can scavenge and considering no source of income too lowly to pursue.
And that’s surely a good thing – it’s a shame, in fact, that it hasn’t happened sooner. While we’ve gorged on the low-hanging fruit of energy production, relishing the bounty of coal and oil that nature brewed up in the Carboniferous, this “spend spend spend” mentality was never going to see us financially secure in our dotage. It’s a curious, almost perverse fact – one feels there should be a thermodynamic explanation, though I can’t quite see it – that the most concentrated energy sources are also the most polluting, in one way or another.
Solar, wind, wave, geothermal: all these ‘clean’ energy resources are vast when integrated over the planet, but frustratingly meagre on the scales human engineering can access. Nature provides a little focusing for hydroelectric power, collecting runoff into narrow, energetic channels – but only if, like the Swiss, you’re lucky enough to have vast mountains on your doorstep.
So we now find ourselves scrambling to claw up as much of this highly dispersed green energy as we can. One of the latest wheezes uses piezoelectric plastic sheets to generate electricity from the impact of raindrops – in effect, a kind of solar cell re-imagined for rotten weather. Other efforts seek to capture the energy of vibrating machinery and bridges. Every joule, it now seems, is sacred.
That applies not just for megawatt applications but for microgeneration too. The motivations for harnessing low levels of ‘ambient’ energy at the scale of individual people are not always the same as those that apply to powering cities, but they overlap – and they are both informed by the same ethic of sustainability and of making the most of what is out there.
That’s true of a new scheme to harvest energy from human motion [1]. Researchers in Canada and the US have made a device that can be mounted on the human knee joint to mop up energy released by the body each time you swing your leg during walking. More specifically, the device can be programmed to do this only during the ‘braking’ part of the cycle, where you’re using muscle energy to slow the lower leg down. Just as in the regenerative braking of hybrid vehicles, this minimizes the extra fuel expended in sustaining motion.
While advances in materials have helped to make such systems lightweight and resilient, this new example shows that conceptual advances have played a role too. We now recognize that the movements of humans and other large animals are partly ‘passive’: rather than every motion being driven by energy-consuming motors, as they typically are in robotics, energy can be stored in flexing tissues and then released in another part of the cycle. Or better still, gravity alone may move freely hinging joints, so that some parts of the cycle seem to derive energy ‘for free’ (more precisely, the overall energy cost of the cycle is lower than it would be if every movement were actively driven).
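To get a rough feel for the scale of such ‘regenerative walking’, here is a back-of-envelope estimate. Every number in it is an illustrative assumption of mine, not a figure from the Donelan paper: the braking work absorbed per knee per stride, the walking cadence and the conversion efficiency all vary with gait and with the device.

```python
# Back-of-envelope power estimate for a knee-mounted harvester operating only
# during the 'braking' phase of the stride. All inputs are assumed values.

braking_work_per_stride_J = 5.0   # assumed negative work captured at one knee
strides_per_second = 0.9          # assumed walking cadence
conversion_efficiency = 0.6       # assumed mechanical-to-electrical efficiency
knees = 2                         # one device per leg

power_W = (braking_work_per_stride_J * strides_per_second
           * conversion_efficiency * knees)
print(f"~{power_W:.1f} W of electrical power while walking")
```

With these assumptions the harvest comes out at around 5 watts — the order of magnitude of a small hand-held gadget’s draw — and the answer scales linearly with whichever input you care to change.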
There’s more behind these efforts than simply a desire to throw away the batteries of your MP3 player as you hike along (though that’s an option). If you have a pacemaker or an implanted drug-delivery pump, you won’t relish the need for surgery every time the battery runs out. Drawing power from the body rather than from the slow discharge of an electrochemical dam seems an eminently sensible way to solve that.
The idea goes way back; cyclists will recognize the same principle at work in the dynamos that power lights from the spinning of the wheels. They’ll also recognize the problems: a bad dynamo leaves you feeling as though you’re constantly cycling uphill, squeaking as you go. What’s more, you stop at the traffic lights on a dark night, and your visibility plummets (although capacitive ‘stand-light’ facilities can now address this). And in the rain, when you most want to be seen, the damned thing starts slipping. The disparity between the evident common sense of bicycle dynamos and the rather low incidence of their use suggests that even this old and apparently straightforward energy-harvesting technology struggles to find the right balance between cost, convenience and reliability.
Cycle dynamos do, however, also illustrate one of the encouraging aspects of ambient energy scavenging: advances in electronic engineering have allowed the power consumption of many hand-held devices to drop dramatically, reducing the demands on the power source. LED bike lights need less power than old-fashioned incandescent bulbs, and a dynamo will keep them glowing brightly even if you cycle at walking pace.
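The arithmetic behind that claim is simple. Taking the common 6 V / 3 W rating of a bottle dynamo as given, and treating the bulb and LED wattages below as representative assumptions rather than measurements:

```python
# Rough power budget for dynamo lighting. The dynamo rating is the common
# standard; the lamp figures are assumed representative values.

dynamo_output_W = 3.0   # typical rated output of a bottle dynamo (6 V x 0.5 A)
incandescent_W = 2.4    # classic filament headlamp bulb rating
led_W = 0.7             # assumed draw of a modern LED front light

print(dynamo_output_W / incandescent_W)  # headroom with a filament bulb: ~1.25x
print(dynamo_output_W / led_W)           # headroom with an LED: over 4x
```

The filament bulb consumes nearly the dynamo’s whole output, which is why old dynamo lights dimmed to nothing at low speed; the LED leaves several-fold headroom, so it stays bright even at a crawl.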
Ultra-low power consumption is now crucial to some implantable medical technologies, and is arguably the key enabling factor in the development of wireless continuous-monitoring devices: ‘digital plasters’ that can be perpetually broadcasting your heartbeat and other physiological parameters to a remote alarm system while you go about your business at home [2].
In fact, a reduction in power requirements can open up entirely new potential avenues of energy scavenging. It would have been hard, in days of power-hungry electronics, to have found much use for the very low levels of electricity that can be drawn from seafloor sludge by ‘microbial batteries’, electrochemical devices that simply plug into the mud and suck up energy from the electrical gradients created by the metabolic activity of bacteria [3]. These systems can drive remote-monitoring systems in marine environments, and might even find domestic uses when engineered into waste-water systems [4].
And what could work for bacteria might work for your own cells too. Ultimately we get our metabolic energy from the chemical reaction of oxygen and glucose – basically, burning up sugar in a controlled way, mediated by enzymes. Some researchers hope to tap into that process by wiring up the relevant enzymes to electrodes and sucking off the electrons involved in the reaction, producing electrical power [5]. They’ve shown that the idea works in grapes; apes are another matter.
Such devices go beyond the harvesting of biomechanical energy. They promise to cut out the inefficiencies of muscle action, which tends to squander around three-quarters of the available metabolic energy, and simply tap straight into the powerhouses of the cell. It’s almost scary, this idea of plugging into your own body – the kind of image you might expect in a David Cronenberg movie.
These examples show that harnessing ‘people power’ and global energy generation do share some common ground. Dispersed energy sources like tidal and geothermal offer the same kinds of low-grade energy, in motion and heat gradients say, as we find in biological systems. Exploiting this on a large scale is much more constrained by economics; but there’s every reason to believe that the two fields can learn from each other.
And who knows – once you’ve felt how much energy is needed to keep your television on standby, you might be more inclined to switch it off.
References
1. Donelan, J. M. et al. Science 319, 807-810 (2008).
2. Toumazou, C. & Cass, T. Phil. Trans. R. Soc. Lond. B Biol. Sci. 362, 1321–1328 (2007).
3. Mano, N. & Heller, A. J. Am. Chem. Soc. 125, 6588-6594 (2003).
4. Logan, B. E. & Regan, J. M. Environ. Sci. Technol. 40, 5172-5180 (2006).
5. Logan, B. E. Wat. Sci. Technol. 52, 31-37 (2005).
[This is my latest Muse column for Nature News.]
We will now go to any extent to scavenge every last joule of energy from our environment.
As conventional energy reserves dwindle, and the environmental costs of using them take on an apocalyptic complexion, we seem to be developing the mentality of energy paupers, cherishing every penny we can scavenge and considering no source of income too lowly to forgo.
And that’s surely a good thing – it’s a shame, in fact, that it hasn’t happened sooner. While we’ve gorged on the low-hanging fruit of energy production, relishing the bounty of coal and oil that nature brewed up in the Carboniferous, this “spend spend spend” mentality was never going to see us financially secure in our dotage. It’s a curious, almost perverse fact – one feels there should be a thermodynamic explanation, though I can’t quite see it – that the most concentrated energy sources are also the most polluting, in one way or another.
Solar, wind, wave, geothermal: all these ‘clean’ energy resources are vast when integrated over the planet, but frustratingly meagre on the scales human engineering can access. Nature provides a little focusing for hydroelectric power, collecting runoff into narrow, energetic channels – but only if, like the Swiss, you’re lucky enough to have vast mountains on your doorstep.
So we now find ourselves scrambling to claw up as much of this highly dispersed green energy as we can. One of the latest wheezes uses piezoelectric plastic sheets to generate electricity from the impact of raindrops – in effect, a kind of solar cell re-imagined for rotten weather. Other efforts seek to capture the energy of vibrating machinery and bridges. Every joule, it now seems, is sacred.
That applies not just for megawatt applications but for microgeneration too. The motivations for harnessing low levels of ‘ambient’ energy at the scale of individual people are not always the same as those that apply to powering cities, but they overlap – and they are both informed by the same ethic of sustainability and of making the most of what is out there.
That’s true of a new scheme to harvest energy from human motion [1]. Researchers in Canada and the US have made a device that can be mounted on the human knee joint to mop up energy released by the body each time you swing your leg during walking. More specifically, the device can be programmed to do this only during the ‘braking’ part of the cycle, where you’re using muscle energy to slow the lower leg down. Just as in the regenerative braking of hybrid vehicles, this minimizes the extra fuel expended in sustaining motion.
While advances in materials have helped to make such systems lightweight and resilient, this new example shows that conceptual advances have played a role too. We now recognize that the movements of humans and other large animals are partly ‘passive’: rather than every motion being driven by energy-consuming motors, as they typically are in robotics, energy can be stored in flexing tissues and then released in another part of the cycle. Or better still, gravity alone may move freely hinging joints, so that some parts of the cycle seem to derive energy ‘for free’ (more precisely, the overall efficiency of the cycle is lower than it would be if actively driven throughout).
There’s more behind these efforts than simply a desire to throw away the batteries of your MP3 player as you hike along (though that’s an option). If you have a pacemaker or an implanted drug-delivery pump, you won’t relish the need for surgery every time the battery runs out. Drawing power from the body rather than from the slow discharge of an electrochemical dam seems an eminently sensible way to solve that.
The idea goes way back; cyclists will recognize the same principle at work in the dynamos that power lights from the spinning of the wheels. They’ll also recognize the problems: a bad dynamo leaves you feeling as though you’re constantly cycling uphill, squeaking as you go. What’s more, you stop at the traffic lights on a dark night, and your visibility plummets (although capacitive ‘stand-light’ facilities can now address this). And in the rain, when you most want to be seen, the damned thing starts slipping. The disparity between the evident common sense of bicycle dynamos and the rather low incidence of their use suggests that even this old and apparently straightforward energy-harvesting technology struggles to find the right balance between cost, convenience and reliability.
Cycle dynamos do, however, also illustrate one of the encouraging aspects of ambient energy scavenging: advances in electronic engineering have allowed the power consumption of many hand-held devices to drop dramatically, reducing the demands on the power source. LED bike lights need less power than old-fashioned incandescent bulbs, and a dynamo will keep them glowing brightly even if you cycle at walking pace.
Ultra-low power consumption is now crucial to some implantable medical technologies, and is arguably the key enabling factor in the development of wireless continuous-monitoring devices: ‘digital plasters’ that can be perpetually broadcasting your heartbeat and other physiological parameters to a remote alarm system while you go about your business at home [2].
In fact, a reduction in power requirements can open up entirely new potential avenues of energy scavenging. It would have been hard, in days of power-hungry electronics, to have found much use for the very low levels of electricity that can be drawn from seafloor sludge by ‘microbial batteries’, electrochemical devices that simply plug into the mud and suck up energy from the electrical gradients created by the metabolic activity of bacteria [3]. These systems can drive remote-monitoring systems in marine environments, and might even find domestic uses when engineered into waste-water systems [4].
And what could work for bacteria might work for your own cells too. Ultimately we get our metabolic energy from the chemical reaction of oxygen and glucose – basically, burning up sugar in a controlled way, mediated by enzymes. Some researchers hope to tap into that process by wiring up the relevant enzymes to electrodes and sucking off the electrons involved in the reaction, producing electrical power [5]. They’ve shown that the idea works in grapes; apes are another matter.
Such devices go beyond the harvesting of biomechanical energy. They promise to cut out the inefficiencies of muscle action, which tends to squander around three-quarters of the available metabolic energy, and simply tap straight into the powerhouses of the cell. It’s almost scary, this idea of plugging into your own body – the kind of image you might expect in a David Cronenberg movie.
These examples show that harnessing ‘people power’ and global energy generation do share some common ground. Dispersed energy sources like tidal and geothermal offer the same kinds of low-grade energy, in motion and heat gradients say, as we find in biological systems. Exploiting this on a large scale is much more constrained by economics, but there’s every reason to believe that the two fields can learn from each other.
And who knows – once you’ve felt how much energy is needed to keep your television on standby, you might be more inclined to switch it off.
References
1. Donelan, J. M. et al. Science 319, 807–810 (2008).
2. Toumazou, C. & Cass, T. Phil. Trans. R. Soc. Lond. B Biol. Sci. 362, 1321–1328 (2007).
3. Mano, N. & Heller, A. J. Am. Chem. Soc. 125, 6588–6594 (2003).
4. Logan, B. E. & Regan, J. M. Environ. Sci. Technol. 40, 5172–5180 (2006).
5. Logan, B. E. Wat. Sci. Technol. 52, 31–37 (2005).
Friday, February 01, 2008
Risky business
[My latest Muse column for Nature online news…]
Managing risk in financial markets requires a better understanding of their complex dynamics. But it’s already clear that unfettered greed makes matters worse.
It seems to be sheer coincidence that the multi-billion dollar losses at the French bank Société Générale (SocGen), caused by the illegal dealings of rogue trader Jérôme Kerviel, come at a time of imminent global economic depression. But the conjunction has provoked discussion about whether such localized shocks to the financial market can trigger worldwide (‘systemic’) economic crises.
If so, what can be done to prevent it? Some have called for more regulation, particularly of the murky business that economists call derivatives trading and the rest of us would recognize as institutionalized gambling. “If our laws are not extended to control the new kinds of super-powerful, super-complex, and potentially super-risky investment vehicles, they will one day cause a financial disaster of global-systemic proportions”, wrote John Lanchester in the British Guardian newspaper [1]. But how well do we understand what we’d be regulating?
The French affair is in a sense timely, because ‘systemic risk’ in the financial system has become a hot topic, as witnessed by a recent report by the Federal Reserve Bank of New York (FRBNY) and the US National Academy of Sciences [2]. Worries about systemic risk are indeed largely motivated by the link to global recessions, like the one currently looming. This concern was articulated after the Great Depression of the 1930s by the British economist John Maynard Keynes, who wanted to understand how the global economy can switch from a healthy to a depressed state, both of which seemed to be stable ‘equilibrium’ states to the extent that they stick around for a while.
That terminology implies that there is some common ground with the natural sciences. In physics, a change in the global state of a system from one equilibrium configuration to another is called a phase transition, and some economists use such terms and concepts borrowed from physics to talk about market dynamics.
The analogy is potentially misleading, however, because the financial system, and the global economy generally, is never in equilibrium. Money is constantly in motion, and it’s widely recognized that instabilities such as market crashes depend in sensitive but ill-understood ways on feedbacks within the system that can act to amplify small disturbances and that enforce perpetual change. Other terms from ‘economese’, such as liquidity (the ability to exchange assets for cash), reveal an intuition of that dynamism, and indeed Keynes himself tried to develop a model of economics that relied on analogies with hydrodynamics.
Just as equilibrium spells death for living things, so the financial market is in trouble when money stops flowing. It’s when people stop investing, cutting off the bank loans that business needs to thrive, that a crisis looms. Banks themselves stay in business only if the money keeps coming in; when customers lose confidence and withdraw their cash – a ‘run on the bank’ like that witnessed recently at the UK’s Northern Rock – banks can no longer lend, have to call in existing loans at a loss, and face ruin. That prospect sets up a feedback: the more customers withdraw their money, the more others feel compelled to do so – and if the bank wasn’t in real danger of collapse at the outset, it soon is. The French government is trying to avoid that situation at SocGen, since the collapse of a bank has knock-on consequences that could wreak havoc throughout a nation’s economy, or even beyond.
In other words, bank runs may lead, as with Northern Rock, to the ironic spectacle of gung-ho advocates of the free market appealing for, or even demanding, state intervention to bail them out. As Will Hutton puts it in the Observer newspaper, “financiers have organised themselves so that actual or potential losses are picked up by somebody else – if not their clients, then the state – while profits are kept to themselves” [3]. Even measures such as deposit insurance, introduced in the US after the bank runs of the 1930s, which ensures that depositors won’t lose their money even if the bank fails, arguably exacerbate the situation by encouraging banks to take more risks, secure in the knowledge that their customers are unlikely to lose their nerve and desert them.
Economists’ attempts to understand runaway feedbacks in situations like bank runs draw on another area of the natural sciences: epidemiology. They speak of ‘contagion’: the spread of behaviours from one agent, or one part of the market, to another, like the spread of disease in a population. In bank runs, contagion can even spread to other banks: one run leads people to fear others, and this then becomes a self-fulfilling prophecy.
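The self-fulfilling character of that contagion can be caricatured with a simple threshold model, in the spirit of Granovetter-style cascade models. (This sketch is my own illustration, not a model from the column or the FRBNY report.) Each depositor joins the run once the fraction of depositors already withdrawn exceeds their personal panic threshold:

```python
def bank_run(thresholds, shock):
    """Final fraction of depositors who withdraw, given each depositor's
    panic threshold and an initial shock (the fraction withdrawn at the
    start). A depositor joins the run once the withdrawn fraction
    exceeds their threshold; rounds repeat until no one new panics."""
    n = len(thresholds)
    withdrawn = shock
    while True:
        panicked = sum(1 for t in thresholds if t < withdrawn) / n
        if panicked <= withdrawn:   # cascade has stopped
            return withdrawn
        withdrawn = panicked        # more panic triggers another round

n = 1000
# Nervous depositors, with panic thresholds spread evenly from zero:
# a 5% shock cascades into a full run.
nervous = [0.9 * i / n for i in range(n)]
# Confident depositors, none of whom panics below 10% withdrawn:
# the same 5% shock fizzles out immediately.
confident = [0.1 + 0.9 * i / n for i in range(n)]
```

Here `bank_run(nervous, 0.05)` cascades all the way to 1.0 (everyone withdraws), while `bank_run(confident, 0.05)` returns 0.05 unchanged: the same shock yields a full run or a non-event depending only on the distribution of expectations – which is why confidence-shifting measures such as deposit insurance matter so much.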
Regardless of whether the current shaky economy might be toppled by a SocGen scandal, it is clear that the financial market is in general potentially susceptible to systemic failure caused by specific, local events. The terrorist attacks on the World Trade Centre on 11 September 2001 demonstrated that, albeit in a most unusual way – for the ‘shock’ here was not a market event as such but physical destruction of its ‘hardware’. Disruption of trading activity in banks in downtown Manhattan in effect caused a bottleneck in the flow of money that had serious knock-on consequences, leading to a precipitous drop in the global financial market.
The FRBNY report [2] is a promising sign that economists seeking to understand risk are open to the ideas and tools of the natural sciences that deal with phase transitions, feedbacks and other complex nonlinear dynamics. But the bugbear of all these efforts is that ultimately the matter hinges on human behaviour. Your propensity to catch a virus is indifferent to whether you feel optimistic or pessimistic about your chances of that; but with contagion in the economy, expectations are crucial.
This is where conventional economic models run into problems. Most of the tools used in financial markets, such as how to price assets and derivatives and how to deal with risk in portfolio management, rely on the assumption that market traders respond rationally and identically on the basis of complete information about the market. This leads to mathematical models that can be solved, but it doesn’t much resemble what real agents do. For one thing, different people reach different conclusions on the basis of the same data. They tend to be overconfident, to be biased towards information that confirms their preconceptions, to have poor intuition about probabilities of rare events, and to indulge in wishful thinking [4]. The field of behavioural finance, which garnered a Nobel prize for Daniel Kahneman in 2002, shows the beginnings of an acknowledgement of these complexities in decision-making – but they haven’t yet had much impact on the tools widely used to calculate and manage risk.
One can’t blame the vulnerability of the financial market on the inability of economists to model it. These poor folks are faced with a challenge of such magnitude that those working on ‘complex systems’ in the natural sciences have it easy by comparison. Yet economic models that make unrealistic assumptions about human decision-making can’t help but suggest that we need to look elsewhere to fix the weak spots. Perhaps no one can be expected to anticipate the wild, not to mention illegal, behaviour of SocGen’s Kerviel or of those who brought low the US power company Enron in 2001. But these examples are arguably only at the extreme end of a scale that is inherently biased towards high-risk activity by the very rules of engagement. State support of failing banks is just one example of the way that finance is geared to risky strategies: hedge fund managers, for example, get a hefty cut of their profits on top of a basic salary, but others pay for the losses [3]. The FRBNY’s vice president John Kambhu and his colleagues have pointed out that hedge funds (themselves a means of passing on risk) operate in a way that makes risk particularly severe and hard to manage [5].
That’s why, if understanding the financial market demands a better grasp of decision-making, with all its attendant irrationalities, it may be that managing the market to reduce risk and offer more secure public benefit requires more constraint, more checks and balances, to be put on that decision-making. We’re talking about regulation.
Free-market advocates firmly reject such ‘meddling’ on the basis that it cripples Adam Smith’s ‘invisible hand’ that guides the economy. But that hand is shaky, prone to wild gestures and sudden seizures, because it is no longer the collective hand of Smith’s sober bakers and pin-makers but that of rapacious profiteers creaming absurd wealth from deals in imaginary and incredible goods.
One suggestion is that banks and other financial institutions be required to make public how they are managing risk – basically, they should share currently proprietary information about expectations and strategies. This could reduce the instability caused by each party trying to second-guess the others, and being forced to respond reactively to them. It might reduce opportunities to make high-risk killings, but the payoff would be to smooth away systemic crises of confidence. (Interestingly, the same proposal of transparency was made by nuclear scientists to Western governments after the development of the US atomic bomb, in the hope of avoiding the risks of an arms race.)
It’s true that too much regulation could be damaging, limiting the ability of the complex financial system to adapt spontaneously to absorb shocks. All the more reason to strive for a theoretical understanding of the processes involved. But experience alone tells us that it is time to move beyond Gordon Gekko’s infamous credo ‘greed is good’. One might argue that ‘a bit of greed is necessary’, but too much is liable to bend and rupture the pipes of the economy. As Hutton says [3], “We need the financiers to serve business and the economy rather than be its master.”
References
[1] Lanchester, J. ‘Dicing with disaster’, Guardian 26 January 2008.
[2] FRBNY Economic Policy Review special issue, ‘New directions for understanding systemic risk’, 13(2) (2007).
[3] Hutton, W. ‘This reckless greed of the few harms the future of the many’, Observer 27 January 2008.
[4] Anderson, J. V. in Encyclopedia of Complexity and Systems Science (Springer, in press, 2008).
[5] Kambhu, J. et al. FRBNY Economic Policy Review 13(3), 1-18 (2008).
Saturday, January 26, 2008
No option
There is an excellent article in today’s Guardian by the author John Lanchester, who turns out to have a surprisingly (but after all, why not?) thorough understanding of the derivatives market. Lanchester’s piece is motivated by the extraordinary losses chalked up by rogue trader Jérôme Kerviel of the French bank Société Générale. Kerviel’s exploits seem to be provoking the predictable shock-horror about the kind of person entrusted with the world’s finances (as though the last 20 years had never happened). I suspect it was Lanchester’s intention to leave it unstated, but one can’t read his piece without a mounting sense that the derivatives market is one of humankind’s more deranged inventions. To bemoan that is not in itself terribly productive, since it is not clear how one legislates against the situation where one person bets an insane amount of (someone else's) money on an event of which he (not she, on the whole) has not the slightest real idea of the outcome, and another person says ‘you’re on!’. All the same, it is hard to quibble with Lanchester’s conclusion that “If our laws are not extended to control the new kinds of super-powerful, super-complex, and potentially super-risky investment vehicles, they will one day cause a financial disaster of global-systemic proportions.”
All this makes me appreciate that, while I have been a small voice among many to have criticized the conventional models of economics, in fact economists are only the poor chaps trying to make sense of the lunacy that is the economy. Which brings me to Fischer Black and Myron Scholes, who, Lanchester explains, published a paper in 1973 that gave a formula for how to price derivatives (specifically, options). What Lanchester doesn’t mention is that this Nobel-winning work made the assumption that the volatility of the market – the fluctuations in prices – follows the form dictated by a normal or Gaussian distribution. The problem is that it doesn’t. This is what I said about that in my book Critical Mass:
“Options are supposed to be relatively tame derivatives—thanks to the Black-Scholes model, which has been described as ‘the most successful theory not only in finance but in all of economics’. Black and Scholes considered the question of strategy: what is the best price for the buyer, and how can both the buyer and the writer minimize the risks? It was assumed that the buyer would be given a ‘risk discount’ that reflects the uncertainty in the stock price covered by the option he or she takes out. Scholes and Black proposed that these premiums are already inherent in the stock price, since riskier stock sells for relatively less than its expected future value than does safer stock.
Based on this idea, the two went on to devise a formula for calculating the ‘fair price’ of an option. The theory was a gift to the trader, who had only to plug in appropriate numbers and get out the figure he or she should pay.
But there was just one element of the model that could not be readily specified: the market volatility, or how the market fluctuates. To calculate this, Black and Scholes assumed that the fluctuations were gaussian.
Not only do we know that this is not true, but it means that the Black-Scholes formula can produce nonsensical results: it suggests that option-writing can be conducted in a risk-free manner. This is a potentially disastrous message, imbuing a false sense of confidence that can lead to huge losses. The shortcoming arises from the erroneous assumption about market variability, showing that it matters very much in practical terms exactly how the fluctuations should be described.
The drawbacks of the Scholes-Black theory are known to economists, but they have failed to ameliorate them. Many extensions and modifications of the model have been proposed, yet none of them guarantees to remove the risks. It has been estimated that the deficiencies of such models account for up to 40 percent of the 1997 losses in derivatives trading, and it appears that in some cases traders’ rules of thumb do better than mathematically sophisticated models.”
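To see exactly where the Gaussian assumption enters, here is a minimal sketch (my own illustration, not code from the book or column) of the standard Black-Scholes formula for pricing a European call option. Every input except the volatility is directly observable; sigma is precisely where the model presumes normally distributed log-price fluctuations:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Cumulative distribution function of the standard normal,
    # computed via the error function (no external libraries needed).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes 'fair price' of a European call option.

    S: current stock price; K: strike price; T: time to expiry (years);
    r: risk-free interest rate; sigma: annualized volatility.
    The Gaussian assumption lives in sigma: the model treats log-price
    fluctuations as normally distributed, which real markets violate
    (their distributions have fat tails).
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money one-year call: stock at 100, strike 100,
# 5% risk-free rate, 20% volatility.
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)  # ≈ 10.45
```

This is the sense in which the trader “had only to plug in appropriate numbers”. But because real market fluctuations are fatter-tailed than the Gaussian, a sigma estimated from typical trading days systematically understates the probability of extreme moves – which is how a tidy formula can radiate false confidence.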
Just a little reminder that, say what you will about the ‘econophysicists’ who are among those working on this issue, there are some rather important lacunae remaining in economic theory.
Thursday, January 24, 2008
Scratchbuilt genomes
[Here’s the pre-edited version of my latest story for Nature’s online news. I discuss this work also in the BBC World Service’s Science in Action programme this week.]
By announcing the first chemical synthesis of a complete bacterial genome [1], scientists in the US have shown that the stage is now set for the creation of the first artificial organisms – something that looks likely to be achieved within the next year.
The genome of the pathogenic bacterium Mycoplasma genitalium, made in the laboratory by Hamilton Smith and his colleagues at the J. Craig Venter Institute in Rockville, Maryland, is more than ten times longer than any stretch of genetic material previously created by chemical means.
The complete genome of M. genitalium contains 582,970 of the fundamental building blocks of DNA, called nucleotide bases. Each of these was stitched in place by commercial DNA-synthesis companies according to the Venter Institute’s specifications, to make 101 separate segments of the genome. The scientists then used biotechnological methods to combine these fragments into a single genome within cells of E. coli bacteria and yeast.
M. genitalium has the smallest genome of any organism that can grow and replicate independently. (Viruses have smaller genomes, some of which have been synthesized before, but they cannot replicate on their own.) Its DNA contains the instructions for making just 485 proteins, which orchestrate the cells’ functions.
This genetic concision makes M. genitalium a candidate for the basis of a ‘minimal organism’, which would be stripped down further to contain the bare minimum of genes needed to survive. The Venter Institute team, which includes the institute’s founder, genomics pioneer Craig Venter, believe that around 100 of the bacterium’s genes could be jettisoned – but they don’t know which 100 these are.
The way to test that would be to make versions of the M. genitalium genome that lack some genes, and see whether it still provides a viable ‘operating system’ for the organism. Such an approach would also require a method for replacing a cell’s existing genome with a new, redesigned one. But Venter and his colleagues have already achieved such a ‘gene transplant’, which they reported last year between two bacteria closely related to M. genitalium [2].
Their current synthesis of the entire M. genitalium genome thus provides the other part of the puzzle. Chemical synthesis of DNA involves sequentially adding one of the four nucleotide bases to a growing chain in a specified sequence. The Venter Institute team farmed out this task to the companies Blue Heron Technology, DNA2.0 and GENEART.
But it is beyond the capabilities of the current techniques to join up all half a million or so bases in a single, continuous process. That was why the researchers ordered 101 fragments or ‘cassettes’, each of about 5000-7000 bases and with overlapping sequences that enabled them to be stuck together by enzymes.
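The overlap-joining principle can be caricatured in a few lines of Python. This is only a toy greedy assembler working on an invented 24-base ‘genome’, not the enzymatic procedure or the real cassette design:

```python
def best_overlap(a, b, min_len=5):
    """Length of the longest suffix of a that equals a prefix of b
    (at least min_len bases), or 0 if there is none."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def assemble(fragments, min_len=5):
    """Greedily merge the pair of fragments with the longest overlap
    until a single sequence remains."""
    frags = list(fragments)
    while len(frags) > 1:
        best = None  # (overlap length, i, j)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    k = best_overlap(a, b, min_len)
                    if k and (best is None or k > best[0]):
                        best = (k, i, j)
        if best is None:
            raise ValueError("no overlap found")
        k, i, j = best
        merged = frags[i] + frags[j][k:]
        frags = [f for idx, f in enumerate(frags) if idx not in (i, j)]
        frags.append(merged)
    return frags[0]

# A made-up 'genome' split into three fragments sharing 5-base overlaps.
genome = "ATGGCATTACGGTACCGTTAGCAA"
fragments = [genome[7:19], genome[14:24], genome[0:12]]
rebuilt = assemble(fragments)
```

In reality the joining was done by enzymes and cellular recombination machinery, at vastly greater lengths; the point here is only how overlapping ends specify a unique assembly order.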
To distinguish the synthetic DNA from the genomes of ‘wild’ M. genitalium, Smith and colleagues included ‘watermark’ sequences: stretches of DNA carrying a kind of barcode that designates its artificiality. These watermarks must be inserted at sites in the genome known to be able to tolerate such additions without their genetic function being impaired.
The researchers made one further change to the natural genome: they altered one gene in a way that was known to render M. genitalium unable to stick to mammalian cells. This ensured that cells carrying the artificial genome could not act as pathogens.
Using DNA-linking enzymes within E. coli cells, the researchers stitched the cassettes together into strands that each contained a quarter of the total genome. But, for reasons that they don’t yet understand, the final assembly of these quarter-genomes into a single circular strand didn’t run smoothly in the bacteria. So the quarter-genomes were transferred to cells of brewers’ yeast, in which the last steps of the assembly were carried out.
Smith and colleagues then extracted these synthetic genomes from the yeast cells, and used enzymes to chew up the yeast’s own DNA. They read out the sequences of the remaining DNA to check that these matched those of wild M. genitalium (apart from the deliberate modifications such as watermarks).
The ultimate evidence that the synthetic genomes are authentic copies, however, will be to show that cells can be ‘booted up’ when loaded with this genetic material. “This is the next step and we are working on it”, says Smith.
Advances in DNA synthesis might ultimately make this laborious stitching of fragments unnecessary, but Dorene Farnham, director of sales and marketing at Blue Heron in Bothell, Washington, stresses that that’s not a foregone conclusion. “The difficulty is not about length”, she says. “There are many other factors that go into getting these synthetic genes to survive in cells.”
Venter’s team hopes that a stripped-down version of the M. genitalium genome might serve as a general-purpose chassis to which all sorts of useful designer functions might be added – for example, genes that turn the bacteria into biological factories for making carbon-based ‘green’ fuels or hydrogen when fed with nutrients.
The next step towards that goal is to build potential minimal genomes from scratch, transplant them into Mycoplasma, and see if they will keep the cells alive. “We plan to start removing putative ‘non-essential’ genes and test whether we get viable transplants”, says Smith.
References
1. Gibson, D. G. et al. Science Express doi:10.1126/science.1151721 (2008).
2. Lartigue, C. et al. Science 317, 632 (2007).
Tuesday, January 22, 2008

Differences in the shower
[This is how my latest article for Nature’s Muse column started out. Check out also a couple of interesting papers in the latest issue of Phys. Rev. E: a study of how ‘spies’ affect the minority game, and a look at the value of diversity in promoting cooperation in the spatial Prisoner’s Dilemma.]
A company sets out to hire a 20-person team to solve a tricky problem, and has a thousand applicants to choose from. So they set them all a test related to the problem in question. Should they then pick the 20 people who do best? That sounds like a no-brainer, but there are situations in which it would be better to hire 20 of the applicants at random.
This scenario was presented four years ago by social scientists Lu Hong and Scott Page of the University of Michigan [1] as an illustration of the value of diversity in human groups. It shows that many different minds are sometimes more effective than many ‘expert’ minds. The drawback of having a team composed of the ‘best’ problem-solvers is that they are likely all to think in the same way, and so are less likely to come up with versatile, flexible solutions. “Diversity”, said Hong and Page, “trumps ability.”
Page believes that studies like this, which present mathematical models of decision-making, show that initiatives to encourage cultural diversity in social, academic and institutional settings are not just exercises in politically correct posturing. To Page, they are ways of making the most of the social capital that human difference offers.
There are evolutionary analogues to this. Genetic diversity in a population confers robustness in the face of a changing environment, whereas a group of almost identical ‘optimally adapted’ organisms can come to grief when the wind shifts. Similarly, sexual reproduction provides healthy variety in our own genomes, while in ecology monocultures are notoriously fragile in the face of new threats.
But it’s possible to overplay the diversity card. Expert opinion, literary and artistic canons, and indeed the whole concept of ‘excellence’ have become fashionable whipping boys to the extent that some, particularly in the humanities, worry about standards and judgement vanishing in a deluge of relativist mediocrity. Of course it is important to recognize that diversity does not have to mean ‘anything goes’ (a range of artistic styles does not preclude discrimination of good from bad within each of them) – but that’s often what sceptics of the value of ‘diversity’ fear.
This is why models like that of Hong and Page bring some valuable precision to the questions of what diversity is and why and when it matters. That issue now receives a further dose of enlightenment from a study that looks, at face value, to be absurdly whimsical.
Economist Christina Matzke and physicist Damien Challet have devised a mathematical model of (as they put it) “taking a shower in youth hostels” [2]. Among the risks of budget travel, few are more hazardous than this. If you try to have a shower at the same time as everyone else, it’s a devil of a job adjusting the taps to get the right water temperature.
The problem, say Matzke and Challet, is that in the primitive plumbing systems of typical hostels, one person changing their shower temperature settings alters the balance of hot and cold water for everyone else too. They in turn try to retune the settings to their own comfort, with the result that the shower temperatures fluctuate wildly between scalding and freezing. Under what conditions, they ask, can everyone find a mutually acceptable compromise, rather than all furiously altering their shower controls while cursing the other guests?
So far, so amusing. But is this really such a (excuse me) burning issue? Challet’s previous work provides some kind of answer to that. Several years ago, he and physicist Yi-Cheng Zhang devised the so-called minority game as a model for human decision-making [3]. They took their lead from economist Brian Arthur, who was in the habit of frequenting a bar called El Farol in the town of Santa Fe where he worked [4]. The bar hosted an Irish music night on Thursdays which was often so popular that the place would be too crowded for comfort.
Noting this, some El Farol clients began staying away on Irish nights. That was great for those who did turn up – but once word got round that things were more comfortable, overcrowding resumed. In other words, attendance would fluctuate wildly, and the aim was to go only on those nights when you figured others would stay away.
But how do you know which nights those are? You don’t, of course. Human nature, however, prompts us to think we can guess. Maybe low attendance one week means high attendance the next? Or if it’s been busy three weeks in a row, the next is sure to be quiet? The fact is that there’s no ‘best’ strategy – it depends on what strategies others use.
The point of the El Farol problem, which Challet and Zhang generalized, is to be in the minority: to stay away when most others go, and vice versa. The reason why this is not a trivial issue is that the minority game serves as a proxy for many social situations, from lane-changing in heavy traffic to choosing your holiday destination. It is especially relevant in economics: in a buyer’s market, for example, it pays to be a seller. It’s unlikely that anyone decided whether or not to go to El Farol by plotting graphs and statistics, but market traders certainly do so, hoping to tease out trends that will enable them to make the best decisions. Each has a preferred strategy.
The maths of the minority game looks at how such strategies affect one another, how they evolve and how the ‘agents’ playing the game learn from experience. I once played it in an interactive lecture in which push-button voting devices were distributed to the audience, who were asked to decide in each round whether to be in group A or group B. (The one person who succeeded in being in the minority in all of several rounds said that his strategy was to switch his vote from one group to the other “one round later than it seemed common sense to do so.”)
So what about the role of diversity? Challet’s work showed that the more mixed the strategies of decision-making are, the more reliably the game settles down to the optimal average size of the majority and minority groups. In other words, attendance at El Farol doesn’t in that case fluctuate so much from one week to the next, and is usually close to capacity.
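For readers who want to tinker, a bare-bones version of the game is easy to simulate. The Python below is my own minimal rendering with illustrative parameters, not Challet and Zhang’s full analysis: each agent holds a couple of random lookup-table strategies keyed on the recent history of outcomes, and plays whichever strategy has predicted the minority side best so far.

```python
import random

def minority_game(n_agents=101, memory=3, n_strategies=2, rounds=500, seed=0):
    """Minimal minority game: agents on the less popular side win each round."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    # A strategy maps each possible recent history (encoded as an integer
    # in 0..n_hist-1) to a choice of side, 0 or 1.
    agents = [[[rng.randrange(2) for _ in range(n_hist)]
               for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = 0
    minority_sizes = []
    for _ in range(rounds):
        # Each agent plays its currently best-scoring strategy.
        choices = [agents[a][max(range(n_strategies),
                                 key=lambda s, a=a: scores[a][s])][history]
                   for a in range(n_agents)]
        ones = sum(choices)
        winning_side = 1 if 2 * ones < n_agents else 0
        minority_sizes.append(min(ones, n_agents - ones))
        # Reward every strategy that would have chosen the minority side.
        for a in range(n_agents):
            for s in range(n_strategies):
                if agents[a][s][history] == winning_side:
                    scores[a][s] += 1
        history = (2 * history + winning_side) % n_hist
    return minority_sizes

sizes = minority_game()
```

With 101 agents the minority can never exceed 50; what the full analysis tracks is how close the average minority size gets to that ceiling, and how its fluctuations shrink as the pool of strategies becomes more diverse.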
The Shower Temperature Problem is very different, because in principle the ideal situation, where everyone gets closest to their preferred temperature, happens when they all set their taps in the same way – that is, they all use the same strategy. However, this solution is unstable – the slightest deviation, caused by one person trying to tweak the shower settings to get a bit closer to the ideal, sets off wild oscillations in temperature as others respond.
In contrast, when there is a diversity of strategies – agents use a range of tap settings in an attempt to hit the desired water temperature – then these oscillations are suppressed and the system converges more reliably to an acceptable temperature for all. But there’s a price paid for that stability. While overall the water temperature doesn’t fluctuate strongly, individuals may find they have to settle for a temperature further from the ideal value than they would in the case of identical shower settings.
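The flavour of the shower problem can be captured in a toy simulation. The sketch below is my own highly simplified caricature, not Matzke and Challet’s actual equations: the showers draw on a fixed hot-water supply, so one guest’s tap setting shifts everyone’s temperature, and each guest repeatedly nudges their own setting toward a target, at their own characteristic speed.

```python
import random

T_COLD, T_HOT, SUPPLY, TARGET = 10.0, 60.0, 2.0, 38.0  # illustrative values

def temperatures(settings):
    """Mixed temperature at each shower. Hot flow is throttled when total
    demand exceeds the shared supply: this couples the guests together."""
    scale = min(1.0, SUPPLY / sum(settings))
    temps = []
    for f in settings:
        hot, cold = f * scale, 1.0 - f
        temps.append((hot * T_HOT + cold * T_COLD) / (hot + cold))
    return temps

def simulate(steps, rounds=200, seed=1):
    """Run the tap-adjustment dynamics; 'steps' gives each guest's
    adjustment speed, so unequal entries mean diverse strategies."""
    rng = random.Random(seed)
    settings = [rng.uniform(0.3, 0.7) for _ in steps]
    for _ in range(rounds):
        temps = temperatures(settings)
        for i, step in enumerate(steps):
            # Nudge the hot-tap fraction toward the target temperature.
            settings[i] += step * (TARGET - temps[i]) / (T_HOT - T_COLD)
            settings[i] = min(0.95, max(0.05, settings[i]))
    return temperatures(settings)

# Five guests with diverse adjustment speeds (invented numbers).
final = simulate(steps=[0.05, 0.1, 0.2, 0.4, 0.8])
```

Playing with the step sizes shows the trade-off described above: identical, aggressive adjusters chase each other up and down, while a mixed population settles, though not everyone lands equally close to the target.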
This problem is representative of any in which many agents try to obtain equal amounts of some fixed quantity that is not necessarily abundant enough to satisfy them all completely – factories or homes competing for energy in a power grid, perhaps. But more generally, the model of Matzke and Challet shows how diversity in decision-making may fundamentally alter the collective outcome. That may sound obvious, but don’t count on it. Conventional economic models have for decades stubbornly insisted on making all their agents identical. They are ‘representative’ – one size fits all – and they follow a single ‘optimal’ strategy that maximizes their gains.
There’s a good reason for this assumption: the models are very hard to solve otherwise. But there’s little point in having a tractable model if it doesn’t come close to describing reality. The static view of a ‘representative’ agent leads to the prediction of an ‘equilibrium’ economy, rather like the equilibrium shower system of Matzke and Challet’s homogeneous agents. Anyone contemplating the current world economy knows all too well what a myth this equilibrium is – and how real-world behaviour is sure to depend on the complex mix of beliefs that economic agents hold about the future and how to deal with it.
More generally, the Shower Temperature Problem offers another example of how difference and diversity can improve the outcome of group decisions. Encouraging diversity is not then about being liberal or tolerant (although it tends to require both) but about being rational. Perhaps the deeper challenge for human societies, and the one that underpins current debates about multiculturalism, is how to cope with differences not in problem-solving strategies but in the question of what the problems are and what the desired solutions should be.
References
1. Hong, L. & Page, S. E. Proc. Natl Acad. Sci. USA 101, 16385 (2004).
2. Matzke, C. & Challet, D. preprint http://www.arxiv.org/abs/0801.1573 (2008).
3. Challet, D. & Zhang, Y.-C. Physica A 246, 407 (1997).
4. Arthur, B. W. Am. Econ. Assoc. Papers & Proc. 84, 406 (1994).
Wednesday, January 16, 2008
Groups, glaciation and the pox
[This is the pre-edited version of my Lab Report column for the February issue of Prospect.]
Blaming America for the woes of the world is an old European habit. Barely three decades after Columbus’s crew returned from the New World, a Spanish doctor claimed they brought back the new disease that was haunting Europe: syphilis, so named in the 1530s by the Italian Girolamo Fracastoro. All social strata were afflicted: kings, cardinals and popes suffered alongside soldiers, although sexual promiscuity was so common that the venereal nature of the disease took time to emerge. Treatments were fierce and of limited value: inhalations of mercury vapour had side-effects as bad as the symptoms, while only the rich could afford medicines made from guaiac wood imported from the West Indies.
But it became fashionable during the twentieth century to doubt the New World origin of syphilis: perhaps the disease was a dormant European one that acquired new virulence during the Renaissance? Certainly, the bacterial spirochete Treponema pallidum (subspecies pallidum) that causes syphilis is closely related to other ‘treponemal’ pathogens, such as that which causes yaws in hot, humid regions like the Congo and Indonesia. Most of these diseases leave marks on the skeleton and so can be identified in human remains. They are seen widely in New World populations dating back thousands of years, but reported cases of syphilis-like lesions in Old World remains before Columbus have been ambiguous.
Now a team of scientists in Atlanta, Georgia, has analysed the genetics of many different strains of treponemal bacteria to construct an evolutionary tree that not only identifies how venereal syphilis emerged but shows where in the world its nearest genetic relatives are found. This kind of ‘molecular phylogenetics’, which builds family trees not from a traditional comparison of morphologies but by comparing gene sequences, has revolutionized palaeontology, and it works as well for viruses and bacteria as it does for hominids and dinosaurs. The upshot is that T. pallidum subsp. pallidum is more closely related to a New World subspecies than it is to Old World strains. In other words, it looks as though the syphilis spirochete indeed mutated from an American progenitor. That doesn’t quite imply that Columbus’s sailors brought syphilis back with them, however – it’s also possible that they carried a non-venereal form that quickly mutated into the sexually transmitted disease on its arrival. Given that syphilis was reported within two years of Columbus’s landing in Spain, that would have been a quick change.
****
Having helped to bury the notion of group selection in the 1970s, Harvard biologist E. O. Wilson is now attempting to resurrect it. He has a tough job on his hands; most evolutionary biologists have firmly rejected this explanation for altruism, and Richard Dawkins has called Wilson’s new support for group selection a ‘weird infatuation’ that is ‘unfortunate in a biologist who is so justly influential.’
The argument is all about why we are (occasionally) nice to one another, rather than battling, red in tooth and claw, for limited resources. The old view of group selection said simply that survival prospects may improve if organisms act collectively rather than individually. Human altruism, with its framework of moral and social imperatives, is murky territory for such questions, but cooperation is common enough in the wild, particularly in eusocial insects such as ants and bees. Since the mid-twentieth century such behaviour has been explained not by vague group selection but via kin selection: by helping those genetically related to us, we propagate our genes. It is summed up in the famous formulation of J. B. S. Haldane that he would lay down his life for two brothers or eight cousins – a statement of the average genetic overlaps that make the sacrifice worthwhile. Game theory now offers versions of altruism that don’t demand kinship – cooperation of non-relatives can also be to mutual benefit – but kin selection remains the dominant explanation for eusociality.
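Haldane’s quip encodes what later became Hamilton’s rule: an act of self-sacrifice can be favoured by selection when relatedness times benefit exceeds cost, r × b > c. A quick check of the arithmetic, in a few lines of Python:

```python
def altruism_pays(relatedness, benefit, cost):
    """Hamilton's rule: an altruistic act is favoured when r * b > c,
    where r is genetic relatedness to the beneficiaries, b the benefit
    to them, and c the cost to the altruist."""
    return relatedness * benefit > cost

# Haldane's arithmetic: average relatedness is 1/2 to a brother and 1/8
# to a first cousin, so saving exactly two brothers or eight cousins only
# breaks even against the cost of one's own life (r * b = 1 = c).
two_brothers = altruism_pays(0.5, 2, 1)      # break-even: not favoured
three_brothers = altruism_pays(0.5, 3, 1)    # now the sacrifice 'pays'
nine_cousins = altruism_pays(0.125, 9, 1)
```

The dispute Wilson has reopened is not about this arithmetic but about whether, in real populations, the measured values of r, b and c actually satisfy the inequality often enough to explain eusociality.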
That was the position advocated by Wilson in his 1975 book Sociobiology. In a forthcoming book The Superorganism, and a recent paper, he now reverses this claim and says that kin selection may not be all that important. What matters, he says, is that a population possess genes that predispose the organisms to flexible behavioural choices, permitting a switch from competitive to cooperative action in ‘one single leap’ when the circumstances make it potentially beneficial.
Wilson cites a lack of direct, quantitative evidence for kin selection, although others have disputed that criticism. In the end the devil is in the details – specifically in the maths of how much genetic common ground a group needs to make self-sacrifice pay – and it’s not clear that either camp yet has the numbers to make an airtight case.
****
The discovery of ice sheets half the size of today’s Antarctic ice cap during the ‘super-greenhouse’ climate of the Turonian stage, 93.5-89.3 million years ago, seems to imply that we need not fret about polar melting today. With atmospheric greenhouse gas levels 3-10 times higher than now, ocean temperatures around 5 °C warmer, and crocodiles swimming in the Arctic, the Turonian sounds like the IPCC’s worst nightmare. But it’s not at all straightforward to extrapolate between then and now. More intense circulation of water in the atmosphere could have left thick glaciers on the high mountains and plateaus of Antarctica even in those torrid times. In any event, a rather particular set of climatic circumstances seems to have been at play – the glaciation did not persist throughout the warm Cretaceous period. And it is always important to remember that, with climate, where you end up tends to depend on where you started from.
Friday, January 11, 2008
In praise of wrong ideas
[This is my latest column for Chemistry World, and explains what I got up to on Monday night. I’m not sure when the series is being broadcast – this was the first to be recorded. It’s an odd format, and I’m not entirely sure it works, or at least, not yet. Along with Jonathan Miller, my fellow guest was mathematician Marcus du Sautoy. Jonathan chose to submit the Nottingham Alabasters (look it up – interesting stuff), and Marcus the odd symmetry group called the Monster.]
I can’t say that I’d expected to find myself defending phlogiston, let alone in front of a comedy audience. But I wasn’t angling for laughs. I was aiming to secure a place for phlogiston in the ‘Museum of Curiosities’, an institution that exists only in the ethereal realm of BBC’s Radio 4. In a forthcoming series of the same name, panellists submit an item of their choice to the museum, explaining why it deserves a place. The show will have laughs – the curator is the British comedian Bill Bailey – but actually it needn’t. The real aim is to spark discussion of the issues that each guest’s choice raises. For phlogiston, there are plenty of those.
What struck me most during the recording was how strongly the old historiographic image of phlogiston still seems to hold sway. In 1930 the chemical popularizer Bernard Jaffe wrote that phlogiston, which he attributed to the alchemist Johann Becher, ‘nearly destroyed the progress of chemistry’, while in 1957 the science historian John Read called it a ‘theory of unreason.’ Many of us doubtless encountered phlogiston in derisive terms during our education, which is perhaps why it is forgivable that the programme’s producers wanted to know of ‘other scientific theories from the past that look silly today’. But even the esteemed science communicator, the medical doctor Jonathan Miller (who was one of my co-panellists), spoke of the ‘drivel’ of the alchemists and suggested that natural philosophers of earlier times got things like this wrong because they ‘didn’t think smartly enough’.
I feel this isn’t the right way to think about phlogiston. Yes, it had serious problems even from the outset, but that was true of just about any fundamental chemical theory of the time, Lavoisier’s oxygen included. Phlogiston also had a lot going for it, not least because it unified a wealth of observations and phenomena. Arguably it was the first overarching chemical theory with a recognizably modern character, even if the debts to ancient and alchemical theories of the elements remained clear.
Phlogiston was in fact named in 1718 by Georg Stahl, professor of medicine at the University of Halle, who derived it from the Greek phlogistos, to set on fire. But Stahl took the notion from Becher’s terra pinguis or fatty earth, one of three types of ‘earth’ that Becher designated as ‘principles’ responsible for mineral formation. Becher’s ‘earths’ were themselves a restatement of the alchemical principles sulphur, mercury and salt proposed as the components of all things by Paracelsus. Terra pinguis was the principle of combustibility – it was abundant in oily or sulphurous substances.
The idea, then, was that phlogiston made things burn. When wood or coal was ignited, its phlogiston was lost to the air, which was why its mass decreased. Combustion ceased when the air was saturated with phlogiston. One key problem, noted but not explained by Stahl, was that metals don’t lose but gain weight when combusted. This is often a source of modern scorn, for it led later scientists to contorted explanations such as that phlogiston buoyed up heavier substances, or (sometimes) had negative weight. Those claims prompted Lavoisier ultimately to denounce phlogiston as a ‘veritable Proteus’ that ‘adapts itself to all the explanations for which it may be required.’ But actually it was not always clear whether metals did gain weight when burnt, for the powerful lenses used for heating them could sublimate the oxides.
In any event, phlogiston explained not only combustion but also acidity, respiration, chemical reactivity, and the growth and properties of plants. As Oliver Morton points out in his new book Eating the Sun (Fourth Estate), the Scottish geologist James Hutton invoked a ‘phlogiston cycle’ analogous to the carbon and energy cycles of modern earth scientists, in which phlogiston was a kind of fixed sunlight taken up by plants, some of which is buried in the deep earth as coal and which creates a ‘constant fire in the mineral regions’ that powers volcanism.
So phlogiston was an astonishingly fertile idea. The problem was not that it was plain wrong, but that it was so nearly right – it was the mirror image of the oxygen theory – that it could not easily be discredited. And indeed, that didn’t happen as cleanly and abruptly as implied in conventional accounts of the Chemical Revolution – as Hasok Chang at University College London has explained, phlogistonists persisted well into the nineteenth century, and even eminent figures such as Humphry Davy were sceptical of Lavoisier.
This is one of the reasons I chose phlogiston for the museum – it reminds us of our ahistorical tendency to clean up science history in retrospect, and to divide people facilely into progressives and conservatives. It also shows that the opposite of a good idea can also be a good idea. And it reminds us that science is not about being right but being a little less wrong. I’m sure that one day the dark matter and dark energy of cosmologists will look like phlogiston does now: not silly ideas, but ones that we needed until something better came along.
Thursday, December 20, 2007
Wise words from the Vatican?
[I’m no fan of the pope. And what I don’t say below (because it would simply be cut out as irrelevant) is that his message for World Peace Day includes some typically hateful homophobic stuff in regard to families. AIDS-related contraception and stem-cell research are just two of the areas in which the papacy has put twisted dogma before human well-being. But I feel we should always be ready to give credit where it is due. And so here, in my latest Muse article for Nature News, I try to do so.]
When Cardinal Joseph Ratzinger became Pope Benedict XVI in 2005, many both inside and outside the Christian world feared that the Catholic church was set on a course of hardline conservatism. But in two recent addresses, Benedict XVI shows intriguing signs that he is keen to engage with the technological age, and that he has in some ways a surprisingly thoughtful position on the dialogue between faith and reason.
In his second Encyclical Letter, released on 30 November, the pope tackles the question of how Christian thought should respond to technological change. And in a message for World Peace Day on 1 January 2008, he considers the immense challenges posed by climate change.
Let’s take the latter first, since it is in some ways more straightforward. Benedict XVI’s comments on the environment have already been interpreted in some quarters as “a surprise attack on climate change prophets of doom” who are motivated by “dubious ideology.” According to the British newspaper the Daily Mail, the pope “suggested that fears over man-made emissions melting the ice caps and causing a wave of unprecedented disasters were nothing more than scare-mongering.”
Now, non-British readers may not be aware that the Daily Mail is itself a stalwart bastion of “dubious ideology”, but this claim plumbs new depths even by the newspaper’s impressive standards of distortion and fabrication. Here’s what the pope actually said: “Humanity today is rightly concerned about the ecological balance of tomorrow. It is important for assessments in this regard to be carried out prudently, in dialogue with experts and people of wisdom, uninhibited by ideological pressure to draw hasty conclusions, and above all with the aim of reaching agreement on a model of sustainable development capable of ensuring the well-being of all while respecting environmental balances.”
Hands up those who disagree with this proposition. I thought not. When you consider that the idea that human activities might affect climate has been around for over a century, and the possibility that this might now be occurring has received serious study for more than two decades – during which time the climate science community has resolutely resisted pressing any alarm buttons until they could draw as informed a conclusion as possible – you might just begin to doubt it is they, and their current consensus that human-induced climate change seems real, who are in the pope’s sights when he talks of “hasty conclusions”. Might the charge be levelled, on the contrary, at those who pounce on every new suggestion that there are other factors in climate, such as solar fluctuations, as evidence of a global scientific conspiracy to pin the blame on humanity? I leave you to judge.
The pope’s statement is simply the one that any reasonable person would make. He calls for investment in “sufficient resources in the search for alternative sources of energy and for greater energy efficiency”, for technologically advanced countries to “reassess the high levels of consumption due to the present model of development”, and for humankind not to “selfishly consider nature to be at the complete disposal of our own interests.” Doesn’t that just sound a little like the environmentalists whom the pope is said by some to be lambasting? Admittedly, one might ask whether the Judaeo-Christian notion of human stewardship of the earth has contributed to our current sense of entitlement over its resources; but that’s another debate.
So far, then, good on Benedict XVI. And there’s more: “One must acknowledge with regret the growing number of states engaged in the arms race: even some developing nations allot a significant proportion of their scant domestic product to the purchase of weapons. The responsibility for this baneful commerce is not limited: the countries of the industrially developed world profit immensely from the sale of arms… it is truly necessary for all persons of good will to come together to reach concrete agreements aimed at an effective demilitarization, especially in the area of nuclear arms.” Goodness me, it’s almost enough to make me consider going to Christmas Mass.
The Encyclical Letter, meanwhile (entitled “On Christian Hope”), bites into some more meaty and difficult pies. On one level, its message might sound rather prosaic, however valid: science cannot provide society with a moral compass. The pope is particularly critical of Francis Bacon’s vision of a technological utopia: he and his followers “were wrong to believe that man would be redeemed through science.” Even committed technophiles ought to find that unobjectionable.
Without doubt, Benedict XVI says, progress (for which we might here read science) “offers new possibilities for good, but it also opens up appalling possibilities for evil.” He cites social philosopher Theodor Adorno’s remark that one view of ‘progress’ leads us from the sling to the atom bomb.
More generally, the pope argues that there can be no ready-made prescription for utopia: “Anyone who promises the better world that is guaranteed to last for ever is making a false promise.” Of course, one can see what is coming next: “it is not science that redeems man: man is redeemed by love” – which the pope believes may come only through faith in God. Only with that last step, however, does he enter into his own closed system of reference, in which our own moral lack can be filled only from a divine source.
More interesting is the accompanying remark that “in the field of ethical awareness and moral decision-making… decisions can never simply be made for us in advance by others… in fundamental decisions, every person and every generation is a new beginning.” Now, like most spiritual statements this one is open to interpretation, but surely one way of reading it is to conclude that, when technologies such as stem cell science throw up new ethical questions, we won’t find the answers already written down in any book. The papacy has not been noted for its enlightened attitude to that particular issue, but we might draw a small bit of encouragement from the suggestion that such developments require fresh thinking rather than a knee-jerk response based on outmoded dogma.
Most surprising of all (though I don’t claim to have my finger on the pulse of theological fashion) is the pope’s apparent assertion that the ‘eternal life’ promised biblically is not to be taken literally. He seems concerned, and with good reason, that many people now regard this as a threat rather than a promise: “do we really want this – to live eternally?” he asks. In this regard, Benedict XVI seems to possess rather more wisdom than the rich people who look forward to resurrection of their frozen heads. ‘Eternal life’, he says, is merely a metaphor for an authentic and happy life lived on earth.
True, this then makes no acknowledgement of how badly generations of earlier churchmen have misled their flock. And it seems strange that a pope who believes this interpretation can at the same time be so evidently fond of St Paul and St Augustine, who between them made earthly life a deservedly miserable existence endured by sinners, and of the Cistercian leader Bernard of Clairvaux, who in consequence pronounced that “We are wounded as soon as we come into this world, while we live in it, and when we leave it; from the soles of our feet to the top of our heads, nothing is healthy in us.”
Perhaps this is one of the many subtle points of theology I don’t understand. All the same, the suggestion that we’d better look for our happiness on an earth managed responsibly, rather than deferring it to some heavenly eternity, gives me a little hope that faith and reason are not set on inevitably divergent paths.
Friday, December 14, 2007

Can Aladdin’s carpet fly?
[Here’s a seasonal news story I just wrote for Nature, which will appear (in edited form) in the last issue of the year. I gather, incidentally, that the original text of the ‘Arabian Nights’ doesn’t specify that the carpet flies as such, but only that anyone who sits on it is transported instantly to other lands.]
A team of scientists in the US and France has the perfect offering for the pantomime season: instructions for making a flying carpet.
The magical device may owe more to Walt Disney than to The Arabian Nights, but it is not pure fantasy, according to Lakshminarayanan Mahadevan of Harvard University, Mederic Argentina of the University of Nice, and Jan Skotheim of the Rockefeller University in New York. They have studied the aerodynamics of a flexible, rippling sheet moving through a fluid, and find that it should be possible to make one that will stay aloft in air, propelled by actively powered undulations much as a marine ray swims through water [1].
No such carpet is going to ferry humans around, though. The researchers say that, to stay afloat in air, a sheet would need to be typically about 10 cm long, 0.1 mm thick, and vibrate at about 10 Hz with an amplitude of about 0.25 mm. Making a heavier carpet ‘fly’ is not absolutely forbidden by physics, but it would require such a powerful engine to drive vibrations that the researchers say “our computations and scaling laws suggest it will remain in the magical, mystical and virtual realm.”
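A rough sense of why scale matters so much comes from the sheet's weight per unit area, which sets the lift pressure the undulations must generate. This is only back-of-envelope arithmetic, not the authors' full aerodynamic calculation, and the density is an assumption (roughly paper-like):

```python
# Back-of-envelope check (not the paper's calculation): the pressure that
# the rippling motion must sustain underneath the sheet is at least the
# sheet's weight per unit area, rho * t * g.

g = 9.81            # gravitational acceleration, m/s^2
density = 800.0     # kg/m^3 -- assumed, roughly paper-like material
thickness = 1e-4    # m, i.e. the 0.1 mm quoted in the article

weight_per_area = density * thickness * g  # required lift pressure, Pa
print(f"Lift pressure needed: {weight_per_area:.2f} Pa")
```

Even for so thin a sheet this comes to the better part of a pascal; scale the thickness up to carpet proportions and the required pressure, and hence the driving power, grows accordingly, which is the intuition behind the researchers' pessimism about passenger-carrying versions.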
The key to a magic carpet is to create uplift as the ripples push against the viscous fluid. If the sheet is close to a horizontal surface, like a piece of foil settling down onto the floor, then such movements can create a high pressure in the gap between the sheet and the floor. “As waves propagate along the flexible foil, they generate a fluid flow that leads to a pressure that lifts the foil, roughly balancing its weight”, Mahadevan explains.
But as well as lifting it, ripples can drive the foil forward – as any respectable magic carpet would require. “If the waves propagate from one edge”, says Mahadevan, “this causes the foil to tilt ever so slightly and then move in one direction, towards the edge that is slightly higher. Fluid is then squeezed from this end to the other, causing the sheet to progress like a submarine ray.”
To generate a big thrust and thus a high speed, the carpet has to undulate in big ripples, comparable to the carpet’s total size. This makes for a very bumpy ride. “If you want a smooth ride, you can generate a lot of small ripples”, says Mahadevan. “But you’ll be slower.” He points out that this is not so different from any other mode of transport, where speed tends to induce bumpiness while moving more smoothly means moving slower.
“It’s cute, it’s charming”, says physicist Tom Witten at the University of Chicago. He adds that the result is not very surprising, but says “the main interest is that someone would think to pose this problem.”
Could artificial flying mini-carpets really be made? Spontaneous undulating motions have already been demonstrated in ‘smart’ polymers suspended in fluids, which can be made to swell or shrink in response to external signals. In September, a team also at Harvard University described flexible sheets of plastic coated with cultured rat muscle cells that flex in response to electrical signals and could exhibit swimming movements [2]. “In air, it should be possible to make moving sheets – a kind of micro hovercraft – with very light materials, or with very powerful engines”, says Mahadevan.
Mahadevan has developed something of a speciality in looking for unusual effects from everyday physics – his previous papers have included a study of the ‘Cheerios effect’, where small floating rings (like the breakfast cereal) stick together through surface tension, and an analysis of the sawtooth shape made by ripping open envelopes with a finger.
“I think the most interesting questions are the ones that everyone has wondered about, usually idly”, he says. “I think that is what it means to be an applied mathematician – it is our responsibility to build mathematical tools and models to help explain and rationalize what we all see.”
References
1. Argentina, M. et al., Phys. Rev. Lett. 99, 224503 (2007).
2. Feinberg, A. W. et al., Science 317, 1366-1370 (2007).
Thursday, December 13, 2007
Surfers and stem cells
[This is the pre-edited version of my Lab Report column for the January issue of Prospect.]
Just when you thought that the Dancing Wu Li Masters and the Tao of Physics had finally been left in the 1970s, along comes a surfer living on the Hawaiian island of Maui who claims to have a simple theory of everything which shows that the universe is an ‘exceptionally beautiful shape’. Garrett Lisi has a physics PhD but no university affiliation, and lists his three most important things as physics, love and surfing – “and no, those aren’t in order.”
But Lisi is no semi-mystic drawing charming but ultimately unedifying analogies. He is being taken seriously by the theoretical physics community, and has been invited to the high-powered Perimeter Institute in Waterloo, Canada, where leading physicist Lee Smolin has called his work “fabulous.”
One rather fabulous thing is that it is almost comprehensible, at least by the standards of modern fundamental physics. Lisi himself admits that, in comparison to string theory, the main contender for a theory of everything, he uses only “baby mathematics.” That’s not to say it’s easy, though.
A theory of everything must unify the theory of general relativity, which describes gravity and the structure of spacetime on large scales, with quantum theory, which describes how fundamental particles behave at the subatomic scale. To put it another way, gravity must be mixed into the so-called Standard Model of particle physics, which explains the interactions between all known fundamental particles – quarks, electrons, photons and so forth.
Physicists typically attempt unification by using symmetry. To put it crudely, suppose there are two particles that look the same except that they spin in opposite directions. These can be ‘unified’ into a single particle by appreciating that they can be interconverted by reflection in a mirror – a symmetry operation.
The idea is that the proliferation of particles and forces in today’s universe happened in a series of ‘symmetry-breaking’ steps, just as lowering a square’s symmetry to rectangular creates two distinct pairs of sides from four identical ones. This is already known to be true of some forces and particles, but not all of them.
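The square-to-rectangle analogy can be made concrete by simply counting symmetry operations. Here is a toy sketch of my own (not from the column): a square has eight rigid symmetries, a rectangle only four, so stretching the square ‘breaks’ half of them and turns four identical sides into two distinct pairs.

```python
# Count the rigid symmetries (rotations and reflections about the centre)
# that map a shape onto itself, for a square versus a stretched square.

# Vertices of a unit square and of a 2x1 rectangle, centred on the origin
square    = {(1, 1), (1, -1), (-1, -1), (-1, 1)}
rectangle = {(2, 1), (2, -1), (-2, -1), (-2, 1)}

# The eight candidate operations: rotations by multiples of 90 degrees,
# plus reflections in the two axes and the two diagonals
ops = [
    lambda p: ( p[0],  p[1]),   # identity
    lambda p: (-p[1],  p[0]),   # rotate 90
    lambda p: (-p[0], -p[1]),   # rotate 180
    lambda p: ( p[1], -p[0]),   # rotate 270
    lambda p: ( p[0], -p[1]),   # reflect in x-axis
    lambda p: (-p[0],  p[1]),   # reflect in y-axis
    lambda p: ( p[1],  p[0]),   # reflect in diagonal y = x
    lambda p: (-p[1], -p[0]),   # reflect in diagonal y = -x
]

def symmetries(shape):
    """Number of operations that leave the shape unchanged."""
    return sum(1 for op in ops if {op(p) for p in shape} == shape)

print(symmetries(square))     # 8: the full symmetry group of the square
print(symmetries(rectangle))  # 4: the diagonal reflections and quarter-turns are 'broken'
```

Lowering the symmetry does not destroy information so much as hide it: the rectangle’s two pairs of sides are the remnants of the square’s four interchangeable ones, just as today’s distinct particles and forces would be remnants of a more symmetric primordial state.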
Lisi claims that the primordial symmetry is a pattern called E8, known to mathematicians for over a century but fully understood only recently; it is rather like a multi-dimensional polyhedron with 248 ‘corners’. He has shown that all the known particles, plus descriptions of gravity, can be mapped onto the corners of E8. So a bit of it looks like the Standard Model, while a bit looks like gravity and spacetime. Twenty of the ‘corners’ remain empty, corresponding to hypothetical particles not yet known: the E8 model thus predicts their existence. It’s rather like the way nineteenth-century chemists found a pattern that brought coherence and order to the chemical elements – the periodic table – while noting that it had gaps, predicting elements that were later found.
Is E8 really the answer to everything? Physicists are reserving judgement, for Lisi’s paper, which is not yet peer-reviewed or published, is just a sketch – not a theory, and barely even a model. Mathematical physicist Peter Woit is unsure about the whole approach, saying that playing with symmetry just defers the question of what breaks it to make the world we know. But the trick worked before in the early 1960s, when Murray Gell-Mann predicted a new particle by mapping a group of known ones onto a symmetry group called SU(3).
Lisi’s surfer-dude persona is fun, but so what, really? The real point is that his suggestion invigorates a field that, wandering in the thickets of string theory, sorely needs it.
*****
Stem-cell researchers in Shoukhrat Mitalipov’s team at the Oregon Health and Science University might be forgiven a little chagrin. No sooner had they reported the breakthrough that had eluded the field for years than they were trumped by two reports that seemed to offer an even more attractive way of making human stem cells. Having sung the praises of Mitalipov’s achievement, Ian Wilmut, the University of Edinburgh cloning pioneer who created Dolly the sheep, announced that he was ditching that approach in favour of the new one.
Stem cells are the all-purpose cells present in the very early stages of embryo growth that can develop into just about any type of specialized tissue cells. The ‘traditional’ strategy for making them with DNA matched to the eventual recipient involves stripping the genetic material from an unfertilized egg and replacing it with donor DNA, and then prompting the egg to grow into a blastocyst, the initial stage of an embryo, from which stem cells can be extracted. This is called somatic cell nuclear transfer (SCNT), and is the method used in animal cloning. It works for sheep, dogs and mice, but there had previously been no success for humans or other primates.
On 14 November last year, Mitalipov and colleagues reported stem cells made by SCNT from rhesus macaques that could develop into other cell types. But a week later, teams based at the universities of Kyoto and Wisconsin-Madison independently reported the creation of human stem cells from ordinary skin cells, by treating them with proteins that reprogrammed them. In effect, the proteins switch the gene circuits from a ‘skin cell’ to a ‘stem cell’ setting. This reversal of normal developmental pathways is extraordinary.
The two teams used different cocktails of proteins to do the reprogramming – the Wisconsin team managed to avoid an agent that carries a cancer risk – showing that there is some scope for optimising the mix. Best of all, the method avoids the creation and destruction of embryos that has dogged the ethics of stem-cell research. But Mitalipov insists that starting with eggs is still best, and he has now started collaborating with a team in Newcastle licensed to work with human embryos. After years of frustrating effort, suddenly all options seem open.
Wednesday, December 12, 2007
Money for old rope
… except without the money. At no extra work to myself, I appear in a couple of recent books:
The Public Image of Chemistry, eds J. Schummer, B. Bensaude-Vincent & B. Van Tiggelen (World Scientific, 2007). This is a kind of proceedings volume of a conference of (almost) the same name in 2004, supplemented by contributions from a session at the 5th International Conference on the History of Chemistry in 2005. There’s lots of interesting stuff in it. It contains my paper ‘Chemistry and Power in Recent American Fiction’, which was published previously in the journal Hyle.
Futures from Nature, edited by my friend Henry Gee and published by Tor in January 2008. This is a collection of 100 of the short sci-fi stories published in Nature in recent years, and includes a contribution (I won’t say a short story, more of a pastiche) by one Theo von Hohenheim, who sounds vaguely familiar. Buy it here.
And while I’m at it, I recorded today a review of the year in science for the BBC World Service’s Science in Action. Don’t know when it is being broadcast… but before the year is out, clearly.
And while I’m still at it, I have a piece in the latest issue of Seed on why RNA is the new DNA...
Sunday, December 09, 2007
We’re only after your money
There is a very sour little piece in this Saturday’s Guardian from Wendy Cope on copyright. I should acknowledge a few things first:
1. Cope is right to say that a poem is much more likely to get copied (either digitally or on paper) and downloaded than an entire book – in that sense, poets are especially vulnerable to copyright violations.
2. It’s mostly damned hard making a living as a writer, and perhaps especially so as a poet, so some sensitivity to potential earnings lost seems reasonable.
But it seems rather sad to see a writer of any sort so bitterly possessive about their words. To read Cope’s piece, one might imagine that she sits scribbling away resentfully, thinking each time she finishes a poem, ‘Now, get out there and earn your keep, you little sod.’ Now, to be honest, my rather limited experience of Cope’s work tallies rather well with the notion that bitterness is one of her prime motivations, but this piece seemed so jealous of every last penny potentially denied her that one wonders why she doesn’t just throw in the towel and become a plumber. Indeed, it seems to me that she doesn’t even truly understand why people read or buy poetry. Why, if anyone genuinely loved her poems, would they be content to download a few from the web – and then what? File the printouts? Poetry lovers must be among the most bookish people in the world – they surely relish having the books on their shelves, rather than just scanning their eyes briefly over a piece of downloaded text and then binning it.
‘You want to read my poems? Then buy the book’, is Cope’s crabby refrain. Does she pull her volumes off the shelves of public libraries, I wonder? What is particularly dispiriting about this little rant is that it gives no sense of writing being about wanting to share with people ideas, images, thoughts and stories – and recognizing that this will never happen solely through the medium of books sold – but that it is instead about creating ‘word product’ that you buggers must pay for.
No source of income is so minor or incidental that its possible loss is not begrudged. Other people reading your poems at festivals is no good, because you might not get your little commission for it. (You get paid just for standing up and reading out old words? What the hell are you complaining about?) Another thing I find odd, although perhaps it just shows that things work differently in the poetry world, is that Cope is so covetous of every last book sale because of its financial rewards. In non-fiction at least, if you’re the kind of writer who gets a substantial part of your income from royalties, as opposed to pocketing a modest advance that might with great luck be paid off in ten years’ time, then you must be selling so many books that you shouldn’t need the supplement of £1.20 for a book sale that comes from someone’s refusal to copy one of your poems and give it to a friend.
But what caps it all – and indeed reveals the pathology of Cope’s obsession – is her anger and regret that all those possible royalties are going to be lost when you’re dead. “I sometimes feel a bit annoyed by the prospect of people making money out of my poems when I’m too dead to spend it”, she moans. Well personally, Wendy, if someone keeps my words alive when I’m not, I’ll be over the bloody moon, and I don’t give a damn what they make from doing so.
Thursday, December 06, 2007
Beyond recycling
[This is my Materials Witness column for the January 2008 issue of Nature Materials.]
It is surely ironic that global warming and environmental degradation now pose serious risks at a time when industry and technology are cleaner than at any other stage of the Industrial Revolution. Admittedly, that may not be globally true, but in principle we can manufacture products and generate energy more efficiently and with less pollution than ever before. So why the problem?
Partly, the answer is obvious: cleaner technologies struggle to keep pace with increased industrial activity as populations and economies grow. And green methodologies are typically costly, so aren’t universally available. But the equation is still more complex than that. For example, cars can be more fuel-efficient, less polluting and cheaper. But consumers who save money on fuel tend to spend it elsewhere: they drive more, say, or they spend it on holiday air flights. And cheap cars mean more cars. There is an ‘environmental rebound effect’ to such savings, counteracting the gains.
This is just one way in which ‘green’ manufacturing – using fewer materials and environmentally friendly processing, recycling wastes, and making products themselves recyclable or biodegradable – may fall short of its goal of making the world cleaner. All of these things are surely valuable, indeed essential, in making economic growth sustainable. But the problem goes beyond how things are made, to the issue of how they are used. We need to look not just at production, but at consumption.
One of the initiatives here is the so-called Product-Service System (PSS): a combination of product design and manufacture with the supply of related consumer services that has the potential to give consumers greater utility while reducing the ecological footprint. That might sound like marketing jargon, but it’s a tangible concept of proven value, enacted for example in formalized car-sharing schemes, leasing of temporary furnished office space, biological pest management services, and polystyrene recycling. It’s not mere philanthropy either: there’s a profit incentive too.
One of the key benefits of a PSS approach is that it might offer a way of simply making less stuff. You don’t need to be an eco-warrior to be shocked at the senseless excesses of current manufacturing. A splendid example of an alternative model is offered by a team in Sweden, who have outlined plans for a baby-pram leasing and remanufacturing scheme (O. Mont et al., J. Cleaner Prod. 14, 1509; 2006). Since baby prams generally last for much longer than they are needed (per child), why not lease one instead of buying it? If the infrastructure exists for repairing minor wear and tear, every customer gets an ‘as new’ product, and no prams end up on the waste tip in a near-pristine state.
Developing countries are often adept at informal schemes like this already: little gets thrown away there. But if implemented all the way from the product design stage, it is much more than recycling. What remains is to break our current cult of ‘product ownership’. Prams seem as good a place to start as any.
Thursday, November 29, 2007
Why 'Never Let Me Go' isn't really a 'science novel'
I have just finished reading Kazuo Ishiguro’s Never Let Me Go. What a strange book. First, there’s the tone – purposely amateurish writing (there can’t be any doubt, given his earlier books, that this is intentional), which creates an odd sense of flatness. As the Telegraph’s reviewer put it, “There is no aesthetic thrill to be had from the sentences – except that of a writer getting the desired dreary effect exactly right.” It’s a testament to Ishiguro that his control of this voice never slips, and that the story remains compelling in spite of the deliberately clumsy prose. That’s probably a far harder trick to pull off than it seems. Second, there are the trademark bits of childlike quasi-surrealism, where he develops an idea that seems utterly implausible yet is presented so deadpan that you start to think “Is he serious about this?” – for instance, Tommy’s theory about the ‘art gallery’. This sort of dreamlike riffing was put to wonderful effect in The Unconsoled, which was a dream world from start to finish. It jarred a little at the end of When We Were Orphans, because it didn’t quite fit with the rest of the book – but was still strangely compelling. Here it seems to be an expression of the enforced naivety of the characters, but is disorientating when it becomes so utterly a part of the world that Kathy H depicts.
But my biggest concern is that the plot just doesn’t seem plausible enough to create a strong critique of cloning and related biotechnologies. Is that even the intention? I’m still unsure, as were several reviewers. The situation of the donor children is so unethical and so deeply at odds with any current ethical perspectives on cloning and reproductive technologies that one can’t really imagine how a world could have got this way. After all, in other respects it seems to be a world just like ours. It is not even set in some dystopian future, but has a feeling of being more like the 1980s. The ‘normal’ humans aren’t cold-hearted dysfunctionals – they seem pretty much like ordinary people, except that they seem to accept this donor business largely without question – whereas nothing like this would be tolerated or even contemplated for an instant today. It feels as though Ishiguro just hasn’t worked hard enough to make an alternative reality that can support the terrible scenario he portrays. As a result, whatever broader point he is making loses its force. What we are left with is a well-told tale of friendship and tragedy experienced by sympathetic characters put in a situation that couldn’t arise under the social conditions presented. I enjoyed the book, but I can’t see how it can add much to the cloning debate. Perhaps, as one reviewer suggested, this is all just an allegory about mortality – in which case it works rather well, but is somewhat perverse.
I’ve just taken a look at M John Harrison’s review in the Guardian, which puts these same points extremely well:
“Inevitably, it being set in an alternate Britain, in an alternate 1990s, this novel will be described as science fiction. But there's no science here. How are the clones kept alive once they've begun "donating"? Who can afford this kind of medicine, in a society the author depicts as no richer, indeed perhaps less rich, than ours?
Ishiguro's refusal to consider questions such as these forces his story into a pure rhetorical space. You read by pawing constantly at the text, turning it over in your hands, looking for some vital seam or row of rivets. Precisely how naturalistic is it supposed to be? Precisely how parabolic? Receiving no answer, you're thrown back on the obvious explanation: the novel is about its own moral position on cloning. But that position has been visited before (one thinks immediately of Michael Marshall Smith's savage 1996 offering, Spares). There's nothing new here; there's nothing all that startling; and there certainly isn't anything to argue with. Who on earth could be "for" the exploitation of human beings in this way?
Ishiguro's contribution to the cloning debate turns out to be sleight of hand, eye candy, cover for his pathological need to be subtle… This extraordinary and, in the end, rather frighteningly clever novel isn't about cloning, or being a clone, at all. It's about why we don't explode, why we don't just wake up one day and go sobbing and crying down the street, kicking everything to pieces out of the raw, infuriating, completely personal sense of our lives never having been what they could have been.”
I have just finished reading Kazuo Ishiguro’s Never Let Me Go. What a strange book. First, there’s the tone – purposely amateurish writing (there can’t be any doubt, given his earlier books, that this is intentional), which creates an odd sense of flatness. As the Telegraph’s reviewer put it, “There is no aesthetic thrill to be had from the sentences – except that of a writer getting the desired dreary effect exactly right.” It’s a testament to Ishiguro that his control of this voice never slips, and that the story remains compelling in spite of the deliberately clumsy prose. That’s probably a far harder trick to pull off than it seems. Second, there are the trademark bits of childlike quasi-surrealism, where he develops an idea that seems utterly implausible yet is presented so deadpan that you start to think “Is he serious about this?” – for instance, Tommy’s theory about the ‘art gallery’. This sort of dreamlike riffing was put to wonderful effect in The Unconsoled, which was a dream world from start to finish. It jarred a little at the end of When We Were Orphans, because it didn’t quite fit with the rest of the book – but was still strangely compelling. Here it seems to be an expression of the enforced naivety of the characters, but is disorientating when it becomes so utterly a part of the world that Kathy H depicts.
But my biggest concern is that the plot just doesn’t seem plausible enough to create a strong critique of cloning and related biotechnologies. Is that even the intention? I’m still unsure, as were several reviewers. The situation of the donor children is so unethical and so deeply at odds with any current ethical perspectives on cloning and reproductive technologies that one can’t really imagine how a world could have got this way. After all, in other respects it seems to be a world just like ours. It is not even set in some dystopian future, but has a feeling of being more like the 1980s. The ‘normal’ humans aren’t cold-hearted dysfunctionals – they seem pretty much like ordinary people, except that they accept this donor business largely without question – whereas nothing like this would be tolerated or even contemplated for an instant today. It feels as though Ishiguro just hasn’t worked hard enough to make an alternative reality that can support the terrible scenario he portrays. As a result, whatever broader point he is making loses its force. What we are left with is a well-told tale of friendship and tragedy experienced by sympathetic characters put in a situation that couldn’t arise under the social conditions presented. I enjoyed the book, but I can’t see how it can add much to the cloning debate. Perhaps, as one reviewer suggested, this is all just an allegory about mortality – in which case it works rather well, but is somewhat perverse.
I’ve just taken a look at M John Harrison’s review in the Guardian, which puts these same points extremely well:
“Inevitably, it being set in an alternate Britain, in an alternate 1990s, this novel will be described as science fiction. But there's no science here. How are the clones kept alive once they've begun "donating"? Who can afford this kind of medicine, in a society the author depicts as no richer, indeed perhaps less rich, than ours?
Ishiguro's refusal to consider questions such as these forces his story into a pure rhetorical space. You read by pawing constantly at the text, turning it over in your hands, looking for some vital seam or row of rivets. Precisely how naturalistic is it supposed to be? Precisely how parabolic? Receiving no answer, you're thrown back on the obvious explanation: the novel is about its own moral position on cloning. But that position has been visited before (one thinks immediately of Michael Marshall Smith's savage 1996 offering, Spares). There's nothing new here; there's nothing all that startling; and there certainly isn't anything to argue with. Who on earth could be "for" the exploitation of human beings in this way?
Ishiguro's contribution to the cloning debate turns out to be sleight of hand, eye candy, cover for his pathological need to be subtle… This extraordinary and, in the end, rather frighteningly clever novel isn't about cloning, or being a clone, at all. It's about why we don't explode, why we don't just wake up one day and go sobbing and crying down the street, kicking everything to pieces out of the raw, infuriating, completely personal sense of our lives never having been what they could have been.”
Monday, November 26, 2007
Listen out
Let me now be rather less coy about media appearances. This Wednesday night at 9 pm I am presenting Frontiers on BBC Radio 4, looking at digital medicine. This meant that I got to strap a ‘digital plaster’ to my chest which relayed my heartbeat to a remote monitor through a wireless link. I am apparently alive and well.

Salt-free Paxo
No one can reasonably expect Jeremy Paxman to have a fluent knowledge of all the subjects on which he has to ask sometimes remarkably different questions on University Challenge. But if the topic is chemistry, you’d better get it word-perfect, because he’s got no latitude for interpretation. Tonight’s round had a moment that went something like this:
Paxman: “Which hydrated ferrous salt was once known as green vitriol?”
Hapless student: “Iron sulphate.”
Paxman: “No, it’s just sulphate.”
I’ve seen precisely the same thing happen before. Why does no one pick Paxo up on it? The fact is, contestants are advised that they can press their button to challenge if they think their answer was unfairly dismissed; the offending portion of the filming then gets snipped out. But I suspect no one ever does this – it’s just too intimidating to say to Paxo “I think you’ve got that wrong.”