On hobbits and Merlin
[This is my latest Lab Report column for Prospect.]
In a hole in the ground there lived a hobbit. But the rest of this story is not fit for children, mired in accusations of grave-robbing and incompetence. The ‘hobbits’ in question, some just three feet tall, have allegedly been found in caves on islands of the Palauan archipelago in Micronesia. Or rather, their bones have, dating to around 1400 years ago. The discoverers, Lee Berger of the University of the Witwatersrand in South Africa and his colleagues, think they shed new light on the diminutive Homo floresiensis remains discovered in Indonesia in 2003, which are widely believed to be a new species that lived until 13,000 years ago. If relatively recent humans can be this small, that belief could be undermined. Berger thinks that the smallness of H. floresiensis might be dwarfism caused by a restricted diet and lack of predators on a small island.
But others say Berger’s team are misrepresenting their find. Some claim the bones could be those of individuals no smaller than ‘pygmy’ groups still living in the Philippines, or even of children, and so are nothing to get excited about. And the new species status of H. floresiensis does not rest on size alone, but on detailed anatomical analysis.
On top of these criticisms, Berger’s team faces accusations of cultural insensitivity for prodding around in caves that locals regard as sacred burial places. To make matters worse, Berger’s work was partly funded by the National Geographic Society, which made a film about the study that was released shortly before Berger’s paper appeared in the online journal PLoS One (where peer review focuses on methodology, not conclusions). To other scientists, this seems suspiciously like grandstanding that undermines normal academic channels, although Berger insists he knew nothing of the film’s timing. “This looks like a classic example of what can go wrong when science and the review process are driven by popular media”, palaeoanthropologist Tim White told Nature.
*****
As well as sabre-rattling, the Bush administration has a softer strategy for dealing with nuclear ‘rogue states’. It has set up a club for suitably vetted nations called the Global Nuclear Energy Partnership (GNEP), in which trustworthy members with “secure, advanced nuclear capabilities” provide nuclear fuel to, and deal with the waste from, other nations that agree to use nuclear power for peaceful purposes only. In effect, it’s a kind of ‘nuclear aid’ scheme with strings attached: we give you the fuel, and we clean up for you, if you use it the way we tell you to. So members share information on reactor design but not on reprocessing of spent fuel, which can be used to extract military-grade fissile material. Everyone’s waste will be shipped to a select band of reprocessing states, including China, Russia, France, Japan, Australia and the US itself.
For all its obvious hierarchy, the GNEP is not without merit. The claim is that it will promote non-proliferation of nuclear arms, and it makes sense for the burden of generating energy without fossil fuels to be shared internationally. But one might worry about the prospect of large amounts of nuclear waste being shipped around the planet. Even more troublingly, many nuclear advocates think the current technology is not up to the task. John Deutch of the Massachusetts Institute of Technology, a specialist in nuclear energy and security, calls GNEP “hugely expensive, hugely misdirected and hugely out of sync with the needs of the industry and the nation.” The US Department of Energy’s plan to build a massive reprocessing facility without initial pilot projects has been called “a recipe for disaster” by the Federation of American Scientists, which adds that “GNEP has the potential to become the greatest technological debacle in US history.” It accuses the DoE of selling the idea as a green-sounding ‘recycling’ scheme. Nonetheless, in February the UK signed up as the GNEP’s 21st member, while contemplating the estimated £30 bn bill for cleaning up its own reprocessing facility at Sellafield.
*****
Having come to expect all news to be bad, British astronomers saw a ray of hope in late February when the decision of the Science and Technology Facilities Council (STFC) to withdraw from the Gemini project was reversed. Gemini’s two telescopes in Chile and Hawaii offer peerless views of the entire sky at visible and infrared wavelengths, and the previous decision of the STFC was seen as devastating. But now it’s business as usual, as the STFC has announced that the e-MERLIN project is threatened with closure even before it is up and running. This is an upgrade of MERLIN, a system that sends the signals of six radio telescopes around Britain by radio link-up to Jodrell Bank, near Manchester. In e-MERLIN the radio links are being replaced with optical cables, making the process faster and able to handle more data. It will boost the sensitivity of the observations by a factor of 30, revealing things that just can’t be seen at present – for example, how disks of dust around stars evolve into planetary systems.
e-MERLIN is now nearly complete, but the STFC is considering whether to pull its funding in 2009. That would not only axe jobs at Jodrell Bank and in Manchester’s astronomy department, second in size only to Cambridge’s, but would also harm Britain’s impressive international standing in radio astronomy. With more than ten other projects on the STFC’s endangered list, everyone is now asking where the next blow will fall. There are no obvious duds on the list, yet something has to give if the STFC is to make up its £80 million deficit. But it is the opaque and high-handed way the decisions are being taken that is creating such fury and low morale.
Tuesday, March 25, 2008
Monday, March 17, 2008
More burning water
[Here is my latest Crucible column for Chemistry World (April). I’m not sure if I’m one of the “unscientific critics who did not delve into the facts first” mentioned in the Roy et al. paper. If so, I’m not sure which of the ‘facts’ mentioned in my earlier article is wrong. Nonetheless, this is an intriguing result; I leave you to judge the implications.]
Take a test tube of sea water and hit it with radio waves. Then light a match – and watch it burn. Flickering over the mouth of the tube is a yellow-white flame, presumably due to the combustion of hydrogen.
When John Kanzius, an engineer in Erie, Pennsylvania, did this last year, the local TV networks were all over him. ‘He may have found a way to solve the world’s energy problems,’ they said. The clips duly found their way onto YouTube, and soon the whole world knew about this apparent new source of ‘clean fuel’.
I wrote then in Nature that Kanzius’s claims ‘must stand or fall on the basis of careful experiment’. Now, it seems, those experiments have begun. Rustum Roy, a materials scientist at Pennsylvania State University with a long and distinguished career in the microwave processing of materials, has collaborated with Kanzius to investigate the effect. The pair, along with Roy’s colleague Manju Rao, have just published a paper describing their findings in Materials Research Innovations[1], a journal that advertises itself as ‘especially suited for the publication of results which are so new, so unexpected, that they are likely to be rejected by tradition-bound journals’.
Materials Research Innovations, of which Roy is editor-in-chief, practises what it calls ‘super peer review’, which ‘is based on reviewing the authors, not the particular piece of work… the author (at least one) shall have published in the open, often peer-reviewed literature, a large body of work… The only other criterion is that the work be “new”, “a step-function advance”, etc.’
I’m not complaining if Roy’s paper has had an easy ride, however. On the contrary, given the wide interest that Kanzius’s work elicited, it’s very handy to see the results of a methodical study without the long delays that such efforts are often likely to incur from other, more cautious journals under the standard peer-review model. Of course a review system like this is open to abuse (aren’t they all?), but the new paper suggests there is a useful function for MRI’s approach.
Mystery gas
The experimental details in the paper are simple and to the point. Put an aqueous solution of as little as 1 percent sodium chloride in a Pyrex test tube; expose it to a 300 Watt radio frequency field at 13.56 MHz; and ignite the gas that comes from the tube. Note that the inflammable gas was not collected and analysed, but simply burnt.
The effect may sound surprising, but it is not unprecedented. In 1982, a team of chemists at Western Illinois University reported the room-temperature decomposition of water vapour into hydrogen peroxide and hydrogen using radio frequency waves with around 60 percent yield [2]. They too used precisely the same frequency of 13.56 MHz – no coincidence really, since this is a common frequency for radio frequency generators. And in 1993 a Russian team reported the apparent dissociation of water into hydrogen and hydroxyl radicals using microwaves [3]. Neither paper is cited by Roy et al.
Free lunch
If water can indeed be split this way, it is intrinsically interesting. That it seems to require the presence of salt is puzzling, and offers a foothold for further exploration of what’s happening.
But of course the story neither begins nor ends there. The TV reports make it plain what was in the air: energy for free. None of them thought to ask what the energy balance actually was, and Kanzius apparently did not offer it. Roy et al. now stress that Kanzius never claimed he could get out more energy than was put in; but given the direction the reports were taking, it seems not unreasonable to have expected an explicit denial of that.
Still, we have such a denial now (in effect), so that should put an end to the breathless talk of solving the energy crisis.
The real question now is whether this process is any more energy-efficient than standard electrolysis (which has the added advantage of automatically separating the two product gases). If not, it remains unclear how useful the radio frequency process will be, no matter how intriguing. Sadly, the present paper is silent on that matter too.
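To see why that silence matters, here is a rough back-of-the-envelope sketch – my own numbers and assumptions, nothing from the paper – of the best the process could possibly do. Even if every joule of the 300 W radio-frequency input went into splitting water, burning the resulting hydrogen could only ever return that same energy.

```python
# A hedged upper-bound estimate (my own illustrative figures, not data from Roy et al.):
# how much hydrogen 300 W of radio-frequency power could yield at 100% efficiency,
# and why the overall energy balance can never be better than break-even.

DELTA_H_SPLIT = 285.8e3   # J per mol of H2: enthalpy needed to split liquid water
RF_POWER = 300.0          # W, the radio-frequency power quoted in the paper
MOLAR_VOLUME = 22.4       # litres per mol of ideal gas at standard conditions

mol_h2_per_second = RF_POWER / DELTA_H_SPLIT        # assumes every joule splits water
litres_h2_per_hour = mol_h2_per_second * 3600 * MOLAR_VOLUME

print(f"Ideal upper bound: about {litres_h2_per_hour:.0f} litres of H2 per hour")
# Burning that hydrogen returns at most the same 285.8 kJ/mol that went into making it,
# so even a perfect process merely breaks even; a real one falls short.
```

On those idealized assumptions the yield is a few tens of litres of hydrogen an hour, and the ledger can never show a profit; the only open question is how far below break-even the real process sits.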
There seems scant reason, then, for all the media excitement. But this episode is a reminder of the power of visual images – here, a flame dancing over an apparently untouched tube of water, a seductive sight to a culture anxious about its energy resources. It’s a reminder too of the force of water’s mythology, for this is a substance that has throughout history been lauded as a saviour and source of miracles.
References
1. R. Roy et al., Mat. Res. Innov. 2008, 12, 3.
2. S. Roychowdhury et al., Plasma Chem. Plasma Process. 1982, 2, 157.
3. V. L. Vaks et al., Radiophys. Quantum Electr. 1994, 37, 85.
Wednesday, March 05, 2008
Enough theory
One of the side-effects of James Wood’s widely reviewed book How Fiction Works (Jonathan Cape) is that it has renewed talk in the literary pages of theory. Er, which theory is that, the ingénue asks? Oh, do keep up, the postmodernist replies. You know, theory.
Why is this ridiculous affectation so universally indulged? Why do we not simply laugh when Terry Eagleton writes a book called After Theory (and he is not the first)? Now yes, it is true that we are now living in an age which postdates quantum theory, and Darwinian theory, and chaos theory, and, hell, Derjaguin-Landau-Verwey-Overbeek theory. But these people are not talking about theories as such. To them, there is only one theory, indeed only ‘theory’.
All right, we are talking here about literary theory, or if you like, cultural theory. This is not, as you might imagine, a theory about how literature works, or how culture works. It is a particular approach to thinking about literature, or culture. It is a point of view. It is in some respects quite an interesting point of view. In other respects, it is not terribly interested in the business of writing, which is what literature has (I hope you’ll agree) tended to be about. In any event, it became in the 1980s such a hegemonic point of view that it dropped all adjectives and just became ‘theory’, and even in general publications like this one, literary critics no longer felt obliged even to tell us what it says. Sometimes one feels that is just as well. But when critics now talk of theory, they generally tend to mean something clustered around post-modernism and post-structuralism. You can expect a Marxist tint. You can expect mention of hermeneutics. You had better expect to be confused. Most of all, you can expect solipsism of extravagant proportions.
Eagleton’s review of Wood in the latest Prospect is a good example. It makes a few telling points, but on the whole speaks condescendingly of Wood’s ‘A-levelish approach’, pretending to be a little sad that Wood’s determination to read the text carefully is ‘passé’. Eagleton doesn’t quite tell us what is wrong with Wood’s book, but assumes we will know exactly what he means, because are we too not adepts of ‘theory’? It bemoans the absence of any reference to Finnegans Wake, which (this is no value judgement) is about as relevant to the question of ‘how fiction works’ as is Catherine Cookson. I am no literary critic, and I’ve no idea if Wood’s book is any good, but I know a rubbish review when I see one.
In any event, ‘theory’ is all very much in line with ‘theory’s’ goals. It takes a word, like ‘theory’, and scoffs at our pretensions to know what it means. It appropriates language. This doesn’t seem a terribly helpful thing in a group of people who are meant to be experts on words. It is a little like declaring that henceforth, ‘breakfast’ will no longer mean the generic first meal of the day, but the croissant and coffee consumed by Derrida in his favourite Left Bank café.
Monday, March 03, 2008

Can a ‘green city’ in the Middle East live up to its claims?
[Here’s my latest piece for Nature’s Muse column.]
The United Arab Emirates has little cause to boast of green credentials, but that shouldn’t make us cynical about its new eco-city.
When Israel’s first prime minister David Ben-Gurion proclaimed his ambition to “make the desert bloom”, he unwittingly foreshadowed one of the enduring sources of controversy and tension in this beleaguered region of the Middle East. His comment has been interpreted by some as a signal of the centrality of water to political power in a parched land – and without doubt, Israel’s armed conflicts with its neighbours have been fought in part over control of water resources.
But Ben-Gurion’s remark also prompts the question of what it really means to make a desert bloom. To critics, one of those meanings involves an inappropriate transposition of a temperate lifestyle to a water-short land. Wasn’t the ‘desert’, which for centuries supported grain, fruit and olive groves, already ‘blooming’ in the most suitable way? Does ‘blooming’ entail golf courses and verdant public parks sucking up precious water?
In other words, there’s something of a collision of imagery in talk of ‘going green’ in an arid climate, where literal greenness imposes a huge burden on resources. That’s now highlighted as plans to create an ambitious ‘green city’ near Abu Dhabi in the United Arab Emirates (UAE) get underway.
Masdar City is slated to cost $22 bn, and the government of the UAE hopes that by 2018 it will be home to around 15,000 people, and a workplace for 50,000. Yet it will have no cars, will run on solar energy, and will produce no carbon emissions or other waste.
Concerns have been raised about whether this will just be an oasis for the rich, with all the incongruous trappings of luxury evident elsewhere in the UAE, where the wealthy can play golf on lush greens and even ski on immense indoor slopes covered with artificial snow.
Others have dismissed Masdar City as a figleaf to hide the energy profligacy of the UAE, where the carbon footprint per capita is the highest in the world, over five times the global average, and greenhouse gas emissions per capita are exceeded only by Qatar and Kuwait. Cynics might ask whether a little patch of clean energy will do much to alter that.
These are fair questions, but it would be a shame if Masdar City was discredited on this basis alone. Like it or not, we need to take greenness wherever we can find it. We do not need to be naïve about the motives for it, but neither does it help to be too snooty. There is some pragmatic truth in the satirical poem ‘The Grumbling Hive’ published in 1705 by the Dutch-born physician Bernard Mandeville, who argued that private vices can have public benefits: that good may sometimes come from dubious intentions.
One might make the same accusations of a cosmetic function for China’s plans to build a zero-emission city, Dongtan, near Shanghai (although China is more worried about environmental issues than is sometimes acknowledged, recognizing them as a potential constraint on economic growth). One might also point out that the US government’s new-found enthusiasm for clean energy is motivated more by concerns about its energy security than by an acceptance of the reality of global warming. But if these things lead to useful innovations that can be applied elsewhere, we would be foolish to turn up our noses at them.
It’s not just energy that is at issue here; water is an equally critical aspect of environmental sensitivity and sustainability in the baking Middle Eastern climate. Here there can be little question that necessity has been the mother of invention, making several Middle Eastern countries world leaders in water technology. Israel has been criticized in the past for its irresponsible (not to mention inequitable) use of the region’s aquifers, and the ecosystem of the Sea of Galilee has certainly suffered badly from water practices. But Israel has in other ways become a pioneer in wise water-use schemes, particularly desalination and sewage farming. The latter reduces the strain on water systems relative to the way that some other, less water-stressed countries moisten their crops with water fit to drink.
It would be good to think that there has been some recognition here that even in purely economic terms it is better to find technological solutions to water scarcity than to fight wars over it. The cost of a single F-16 jet fighter is comparable to that of the massive Ashkelon desalination plant in Israel, which produces over 300,000 cubic metres of water a day.
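To put that figure in perspective, here is a crude calculation of my own (using an assumed domestic consumption figure, not anything from the column or the plant’s specifications):

```python
# A back-of-envelope sketch with assumed numbers: how many people's domestic needs
# the Ashkelon plant's quoted output could in principle cover.

plant_output_m3_per_day = 300_000     # cubic metres per day, as quoted above
litres_per_person_per_day = 150       # an assumed figure for domestic consumption

people_supplied = plant_output_m3_per_day * 1000 / litres_per_person_per_day
print(f"Roughly {people_supplied / 1e6:.0f} million people's domestic water")
```

That is domestic water for roughly two million people – from a facility costing about as much as a single warplane.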
Desalination is a major source of fresh water in the UAE too. The Jebel Ali Desalination Plant, 35 km southwest of Dubai, generates an awesome 300 million cubic metres a year. For Masdar City, on the sea-fringed outskirts of Abu Dhabi, desalination is the obvious solution to the water demands of a small population cluster, and the current plans state with almost blithe confidence that this is where the water will come from. That doesn’t seem unfeasible, however. And there is now a wealth of water-resource know-how to draw on from experience elsewhere in the region, such as intelligent use of grey-water recycling.
One of the most attractive aspects of the planned design, however – which will engage the services of British architect Norman Foster, renowned for such feats as the energy-efficient ‘Gherkin’ tower in London – is that it plans to draw on old architectural wisdom as well as new. Without cars (transport will be provided by magnetic light rail), the streets will be narrow like those of older Middle Eastern towns, offering shade for pedestrians. It has long been recognized that some traditional forms of Middle Eastern architecture offer comforts in an energy-efficient manner, for example providing ‘natural’ air conditioning driven simply by convective circulation. It would be good to see such knowledge revived, and indeed Foster has talked of “working with nature, working with the elements and learning from traditional models.”
It seems unlikely that anyone is going to be blindly seduced by the promises of Masdar City – part of the support for the project offered by the World Wildlife Fund seems to involve monitoring progress to ensure that the good intentions are met. Yet we can hope that the lessons it will surely teach can be applied elsewhere.
Saturday, March 01, 2008

Heart of Steel
Birth of an Idea
Chemical art with heart
[Here’s my latest Crucible column for Chemistry World, which appears in the March issue.]
Several years ago I attempted to launch a project that would use the methods of chemical synthesis as a means of sculpture, creating a genuine plastic art at the molecular scale. I shelved it when I saw that it was unrealistic to expect chemists to think like artists: they generally inherit an aesthetic that owes more to Platonic conceptions of beauty than to anything the art world has tended (now or ever) to employ.
But the experience brought me in contact with several people who seek to integrate the molecular sciences with the visual arts. One of them is Julian Voss-Andreae, a former physicist who now works as a sculptor in Portland, Oregon. Despite his background, much of Voss-Andreae’s work is inspired by molecular structures; his latest piece is a metre-and-a-half-tall sculpture of an ion channel, commissioned by Roderick MacKinnon of Rockefeller University in New York, who shared a Nobel prize for elucidating its structure. It has the elegance and textures of twentieth-century modernism: with its bare, dark metal and bright wire, supported on a base of warm, finely joined wood, it wouldn’t have looked out of place at the recent Louise Bourgeois exhibition at London’s Tate Modern gallery. The title, Birth of an Idea, alludes to the role of ion channels in creating the electrical impulses of our nerve cells.
I find it hard to imagine that sculptures like these could be made by anyone who did not have a deep understanding of what molecules are and what they do. Iconic images of DNA’s double helix are commonplace now (the Cold Spring Harbor Laboratory on Long Island has two), but do little more than express delight at the graceful spiral-staircase shape (while implicitly failing to acknowledge that this is crucially dependent on the surrounding solvent). Voss-Andreae’s molecular sculptures have more to say than that. His Heart of Steel (2005), placed at an intersection in the city of Lake Oswego in Oregon, is a steel model of the structure of haemoglobin, with a red glass sphere at its centre. The twisting polypeptide chains echo those depicted in physical models made in the early days of protein crystallography, photos of which would appear in research papers in lieu of the fancy computer graphics we see today. But Heart of Steel engages with the chemistry of the molecule too, because the steel structure, left exposed to the elements, has gradually (and intentionally) corroded until its coils have become rust-red, a recapitulation of the iron-based redness of our own blood cells. Blood and iron indeed, as Bismarck said of the German Empire.
It’s no surprise that Voss-Andreae is sensitive to such nuances. As a graduate student at the University of Vienna, he was one of the team led by Anton Zeilinger that conducted a ground-breaking experiment in quantum mechanics in 1999. The researchers showed that even molecules as big as C60 can reveal their fundamentally quantum nature under the right conditions: a beam of them passed through a diffraction grating will exhibit the purely wavelike property of interference. A subsequent experiment on C70 showed how interactions with the environment (a background gas of different densities) will gradually wash away the quantumness thanks to the process of decoherence, which is now recognized as the way the classical world emerges from the quantum.
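It is worth pausing on just how delicate that experiment is. A minimal sketch – using an assumed but typical beam velocity rather than the published figures – gives the de Broglie wavelength that sets the spacing of the interference fringes:

```python
# A hedged estimate of the de Broglie wavelength of a C60 molecule in a beam.
# The velocity is an assumed round number, not the value from the Vienna experiment.

PLANCK = 6.626e-34                    # J s
ATOMIC_MASS_UNIT = 1.6605e-27         # kg
mass_c60 = 720 * ATOMIC_MASS_UNIT     # sixty carbon atoms of mass ~12 u
velocity = 200.0                      # m/s, an assumed typical beam velocity

wavelength = PLANCK / (mass_c60 * velocity)   # de Broglie relation: lambda = h / (m v)
print(f"de Broglie wavelength ~ {wavelength * 1e12:.1f} picometres")
```

A wavelength of a few picometres – a couple of hundred times smaller than the buckyball itself – is part of what made those interference fringes such a triumph of matter-wave interferometry.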
Such experiences evidently inform Voss-Andreae’s Quantum Man (2006), a figure 2.5 m tall made from thin, parallel steel sheets that looks ‘classically’ solid when seen from one angle but almost disappears into a vague haze seen from another. C60 itself has featured in more than one of Voss-Andreae’s sculptures: the football cage, 9 m high, sits among trees in Tryon Creek State Park in Oregon.
Among Voss-Andreae’s latest projects is a sculpture based on foam. “My obsession with buckyballs seems to be due to their bubble-like geometry, which got me started on this new project”, he says. His aim is to produce a foam network that is ‘adapted’ to a particular boundary shape, such as the human body. This involves more than simply ‘carving’ a block of foam to the desired contours (as was done, for example, in making the spectacular swimming stadium for the Beijing Olympics), because, he says, “the cellular structure ‘talks’ with the boundary.” Voss-Andreae is attacking the problem both mathematically and experimentally, casting a resin in the gaps between an artificial foam of water-filled balloons. Eventually he hopes to cast the resulting structure in bronze.
I admit that I am not usually a fan of attempts to turn molecular shapes into ‘art’; all too often this draws on the chemist’s rather particular concept of beauty, and to put it bluntly, a pretty picture does not equate with art. But Voss-Andreae’s work is different, because it looks to convey some of the underlying scientific principles of the subject matter even to viewers who know nothing about them. That’s what good ‘sciart’ does: rather than seeking to ‘educate’, it presents some of the textures of science in a way that nudges the mind and enlivens the senses.
Friday, February 22, 2008
Engineering for the better?
[This is the pre-edited version of my latest Muse column for Nature News.]
Many of the grand technological challenges of the century ahead are inseparable from their sociopolitical context.
At the meeting of the American Association for the Advancement of Science in Boston last week, a team of people selected by the US National Academy of Engineering identified 14 ‘grand challenges for engineering’ that would help make the world “a more sustainable, safe, healthy, and joyous – in other words, better – place.”
It’s heartening to see engineers, long dismissed as the lumpen, dirty-handed serfs labouring at the foot of science’s lofty citadel, asserting in this manner their subject’s centrality to our future course. Without rehearsing again the debates about the murky boundaries between pure and applied science, or science and technology, it’s rather easy to see that technologists have altered human culture in ways that scientists never have. Plato, Galileo, Darwin and Einstein have reshaped our minds, but there is hardly an action we can take in the industrialized world that does not feel the influence of engineering.
This, indeed, is why one can argue that a moral, ethical and generally humanistic sensitivity is needed in engineering even more than it is in the abstract natural sciences. It is by the same token the reason why engineering is a political as well as a technological activity: whether they are making dams or databases, engineers are both moving and being moved by the sociopolitical landscape.
This is abundantly clear in the Grand Challenges project. The vision it outlines is, by and large, a valuable and praiseworthy one. It recognizes explicitly that “the most difficult challenge of all will be to disperse the fruits of engineering widely around the globe, to rich and poor alike.” Its objectives, the statement says, are “goals for all the world’s people.”
Yet some of the problems identified arguably say more about the current state of mind of Western culture than about what engineering can do or what goals are most urgent. Two of the challenges are concerned with security – or what the committee calls vulnerability – and two focus on the personalization of services – health and education – that have traditionally been seen as generalized ‘one size fits all’ affairs. There are good arguments why it is worthwhile recognizing individual differences – not all medicines have the same effect on everyone (in either good or bad ways), and not everyone learns in the same way. But there is surely a broader political dimension to the notion that we seem now to demand greater tailoring of public services to our personal needs, and greater protection from ‘outsiders’.
What is particularly striking is how ‘vulnerability’ and security are here no longer discussed in terms of warfare (one of the principal engines of technological innovation since ancient times) but in terms of attacks on society by nefarious, faceless aggressors such as nuclear and cyber terrorists. These are real threats, but presenting them this way, as engineering challenges, makes for a very odd perspective.
For example, let us say (for the sake of argument) that there exists a country where guns can be readily bought at the corner store. How can we make the law-abiding citizen safe from firearms falling into the hands of homicidal madmen? The answers proposed here are, in effect, to develop technologies for making the stores more secure, for keeping track of where the guns are, for cleaning up after a massacre, and for finding out who did it. To which one might be tempted to add another humble suggestion: what if the shops did not sell guns?
To put it bluntly, discussing nuclear security without any mention of nuclear non-proliferation agreements and efforts towards disarmament is nonsensical. In one sense, perhaps it is understandably difficult for a committee on engineering to suggest that part of the solution to a problem might lie with not making things. Cynics might also suspect a degree of political expediency at work, but I think it is more reasonable to say that questions of this nature don’t really fall into the hands of engineers at all but are contingent on the political climate. To put it another way, I suspect the most stimulating lists of ways to make the world better won’t just include things that everyone can reasonably deem desirable, but things that some will not.
The limited boundaries of the debate are the central shortcoming of an exercise like this. It was made clear from the outset that all these topics are being considered purely from an engineering point of view, but one can hardly read the list without feeling that it is really attempting to enumerate all the big challenges facing humankind that have some degree of technical content. The solutions, and perhaps even the choices, are then bound to disappoint, because just about any challenge of this sort does not depend on technology alone, or even primarily.
Take health, for example. Most of the diseases in the world (and AIDS is now only a partial exception) are ones we know already how to prevent, cure or keep at bay. Technology can play a part in making such treatments cheaper or more widely available (or, in cases of waterborne diseases, say, not necessary in the first place) – but in the immediate future, health informatics and personalized medicine are hardly the key requirements. Economics, development and diet are likely to have a much bigger effect on global health than cutting-edge medical science.
None of this is to deny the value of the Grand Challenges project. But it highlights the fact that one of the most important goals is to integrate science and technology with other social and cultural forces. This is a point made by philosopher of science Nicholas Maxwell in his 1984 book From Knowledge to Wisdom (a new edition of which has just been published by Pentire Press).
To blame science for the ills of the world is to miss the point, says Maxwell. “What we urgently need to do - given the unprecedented powers bequeathed to us by science - is to learn how to tackle our immense, intractable problems of living in rather more intelligent, humane, cooperatively rational ways than we do at present… We need a new kind of academic inquiry that gives intellectual priority to our problems of living - to clarifying what our problems are, and to proposing and critically assessing the possible solutions.”
He proposes that, to this end, the natural sciences should include three domains of discussion: not just evidence and theory, but aims, “this last category covering discussion of metaphysics, values and politics.” There is certainly much to challenge in Maxwell’s position. Trofim Lysenko’s fatefully distorted genetics in the Stalinist Soviet Union, for example, had ‘values and politics’; and the hazards of excessively goal-driven research are well-known in this age of political and economic short-termism.
Maxwell tackles such criticisms in his book, but his wider point – that science and technology should not just be cognisant of social and ethical factors but better integrated with them – is important. The Grand Challenges committee is full of wise and humane technologists. Next time, it would be interesting to include some who are the former but not the latter.
Sunday, February 17, 2008
Ye Gods
Yes, as the previous entry shows, I am reading Jeanette Winterson’s The Stone Gods. Among the most trivial of the issues it makes me ponder is what kind of fool gave the name to silicone polymers. You don’t exactly have to be a linguist to see where that was going to lead. The excuse that there was some chemical rationale for it (the ‘-one’ suffix was chosen by analogy to ketones, with which silicones were mistakenly thought to be homologous) is no excuse at all. After all, chemistry is replete with antiquated names in which a terminal ‘e’ became something of a matter of taste, including alizarine and indeed proteine. So we are now saddled with endless confusions of silicone with silicon – with the particularly unfortunate (or is it?) consequence in Winterson’s case that her robot Spike is implied to have a brain made of the same stuff as the brainless Pink’s breasts.
But for some reason I find myself forgiving just about anything in Jeanette Winterson. Partly this is because her passion for words is so ingenuous and valuable, and partly it may be because my instinct for false modesty is so grotesquely over-developed that I can only gaze in awed admiration at someone who will unhesitatingly nominate their own latest book as the year’s best. But I must also guiltily confess that it is because we are so clearly both on The Same Side on just about every issue (how could it be otherwise for someone who cites Tove Jansson among her influences?). It is deplorable, I know, that I would be all smug and gloating if the science errors in The Stone Gods had come from someone like Michael Crichton. But of course Crichton preens about the ‘accuracy’ of his research (sufficiently to fool admittedly gullible US politicians), whereas it is really missing the point of Winterson to get all het up about her use of light-years as a unit of time.
Ah, but all the same – where were the editors? Is this the fate of famous authors – that no one deems it necessary to fact-check you any more? True, it is only the sci-fi nerd who will worry that Winterson’s spacecraft can zip about at ‘light speed’ (which we can understand, with poetic licence, as near-light-speed) without the slightest sign of any time dilation. And she never really pretends to be imagining a real future (she says she hates science fiction, although I assume with a narrow definition), so there’s no point in scoffing at the notion that blogs and iPods have somehow survived into the age of interstellar travel. But listen, you don’t need to be a scientist to sense something wrong with this:
“In space it is difficult to tell what is the right way up; space is curved, stars and planets are globes. There is no right way up. The Ship itself is tilting at a forty-five degree angle, but it is the instruments that tell me so, not my body looking out of the window.”
Um, and the instruments are measuring with respect to what? This is actually a rather lovely demonstration of the trap of our earthbound intuitions – which brings me back to the piece below. Oh ignore me, Jeanette (as if you needed telling).
Friday, February 15, 2008
There’s no place like home
… but that won’t stop us looking for it in our search for extraterrestrials.
[This is the pre-edited version of my latest Muse column for Nature news. Am I foolish to imagine there might be people out there who appreciate the differences? Don't answer that.]
In searching the skies for other worlds, are we perhaps just like the English tourists waddling down the Costa del Sol, our eyes lighting up when we see “The Red Lion” pub with the Union Jack in the windows and Watneys Red Barrel on tap? Gazing out into the unutterably vast, unnervingly strange depths of the cosmos, are we not really just hankering for somewhere that looks like home?
It isn’t just a longing for the familiar that has stirred up excitement about the discovery of what looks like a scaled-down version of our own solar system surrounding a distant star [1]. But neither, I think, is that impulse absent.
There’s sound reasoning in looking for ‘Earth-like’ extrasolar planets, because a world like our own is the only kind of place we can say for sure is capable of supporting life. And it is entirely understandable that extraterrestrial life should be the pot of gold at the end of this particular rainbow.
Yet I doubt that the cold logic of this argument is all there is behind our fascination with Earth-likeness. Science-fiction writers and movie makers have sometimes had fun inventing worlds very different to our own, peopled (can we say that?) with denizens of corresponding weirdness. But that, on the whole, is the exception. That the Klingons and Romulans looked strangely like Californians with bad hangovers was not simply a matter of budget constraints. Edgar Rice Burroughs’ Mars was apparently situated somewhere in the Sahara, populated by extras from the Arabian Nights. In Jeanette Winterson’s new novel The Stone Gods, a moribund, degenerate Earth (here called Orbus) rejoices in the discovery of a pristine Blue Planet in the equivalent of the Cretaceous period, because it offers somewhere to escape to (and might that, in these times haunted by environmental change, nuclear proliferation and fears of planet-searing impacts, already be a part of our own reverie?). Most fictional aliens have been very obviously distorted or enhanced versions of ourselves, both physically and mentally, because in the end our stories, right back to those of Valhalla, Olympus and the seven Hindu heavens, have been more about exploring the human condition than genuinely imagining something outside it.
This solipsism is understandable but deep-rooted, and we shouldn’t imagine that astrobiology and extrasolar planetary prospecting are free from it. However, the claim by the discoverers of the new ‘mini-solar system’ that “solar system analogs may be common” around other stars certainly amounts to more than saying “hey, you can get a decent cup of coffee in this god-forsaken place”. It shows that our theories of the formation and evolution of planetary systems are not parochial, and offers some support for the suspicion that previous methods of planet detection bias our findings towards oddballs such as ‘hot Jupiters’. The fact that the relatively new technique used in this case – gravitational microlensing – has so quickly turned up a ‘solar system analog’ is an encouraging sign that indeed our own neighbourhood is not an anomaly.
The desire – it is more than an unspoken expectation – to find a place that looks like home is nevertheless a persistent bugbear of astrobiology. A conference organized in 2003 to address the question “Can life exist without water?” had as part of its agenda the issue of whether non-aqueous biochemistries could be imagined [2]. But in the event, the participants did not feel comfortable in straying beyond our atmosphere, and so the debate became that of whether proteins can function in the dry or in other solvents, rather than whether other solvents can support the evolution of a non-protein equivalent of enzymes. Attempts to re-imagine biology in, say, liquid methane or ammonia, have been rare [3]. An even more fundamental question, which I have never seen addressed anywhere, is whether evolution has to be Darwinian. It would be a daunting challenge to think of any better way to achieve ‘design’ and function blindly, but there is no proof that Darwin has a monopoly on such matters. Do we even need evolution? Are we absolutely sure that some kind of spontaneous self-organization can’t create life-like complexity, without the need for replication, say?
Maybe these questions are too big to be truly scientific in this form. Better, then, to break bits off them. Marcelo Gleiser and his coworkers at Dartmouth College in New Hampshire have done that in a recent preprint [4], asking whether a ‘replica Earth’ would share our left-handed proteins and right-handed nucleic acids. The handedness here refers to the mirror-image shapes of the biomolecular building blocks. The two mirror-image forms are called enantiomers, and are distinguishable by the fact that they rotate the plane of polarized light to the left or the right.
In principle, all our biochemistry could be reversed by mirror reflection of these shapes, and we’d never notice. So the question is why one set of enantiomers was preferred over the other. One possibility is that it was purely random – once the choice is made, it is fixed, because building blocks of the ‘wrong’ chirality don’t ‘fit’ when constructing organisms. Other explanations, however, suggest that life’s hand was biased at the outset, perhaps by the intrinsic left-handedness in the laws of fundamental physics, or because there was an excess of left-handed amino acids that fell to Earth on meteorites and seeded the first life (that is simply deferring the question, however).
Gleiser and his coworkers argue that these ideas may all be irrelevant. They say that environmental disturbances strong and long enough can reset the handedness, if this is propagated in the prebiotic environment by an autocatalytic process in which an enantiomer acts as a catalyst to create more of itself while blocking the chemical reaction that leads to the other enantiomer. Such a self-amplifying process was proposed in 1953 by physicist Charles Frank, and was demonstrated experimentally in the 1990s by Japanese chemist Kenso Soai.
The US researchers show that an initially random mixture of enantiomers in such a system quickly develops patchiness, with big blobs of each enantiomer accumulating like oil separating from vinegar in an unstirred salad dressing. Chance initial variations will lead to one or other enantiomer eventually dominating. But an environmental disruption, like the planet-sterilizing giant impacts suffered by the early Earth, can shake the salad dressing, breaking up the blobs. When the process begins again, the new dominant enantiomer that emerges may be different from the one before, even if there was a small excess of the other at the outset. As a result, they say, the origin of life’s handedness “is enmeshed with Earth’s environmental history” – and is therefore purely contingent.
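For anyone who wants to see how readily such a process tips one way or the other, here is a toy sketch of my own – a well-mixed stand-in for Frank-style amplification, with the spatial ‘blobs’ left out, and emphatically not the Gleiser team’s actual calculation. The variable is the chiral excess, running from -1 (all one hand) to +1 (all the other); each ‘epoch’ starts from the near-racemic mixture left behind by a shake:

```python
import random

rnd = random.Random(42)

def settle(excess, steps=4000, dt=0.01, noise=0.05):
    """Amplify a small chiral excess towards +1 or -1:
    d(excess)/dt = excess - excess**3, plus a little chemical noise.
    A toy stand-in for autocatalytic amplification, not the published model."""
    for _ in range(steps):
        excess += dt * (excess - excess ** 3) + noise * dt ** 0.5 * rnd.gauss(0, 1)
        excess = max(-1.0, min(1.0, excess))
    return excess

for epoch in range(5):
    start = 0.01 * rnd.gauss(0, 1)   # near-racemic mixture left after a 'shake'
    final = settle(start)
    print(f"epoch {epoch}: initial excess {start:+.4f} settles at {final:+.2f}")
```

Run it and the sign of the outcome changes from epoch to epoch with the chance bias of each restart – which is the essence of the contingency argument.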
Other researchers I have spoken to question whether the scheme Gleiser’s team has considered – autocatalytic spreading in an unstirred solvent – has much relevance to ‘warm little ponds’ on the turbulent young Earth, and whether the notion of resetting by shaking isn’t obvious in any case in a process like Frank’s in which chance variations get amplified. But of course, in an astrobiological context the more fundamental issue is whether there is the slightest reason to think that alien life will use amino acids and DNA, so that a comparison of handedness will be possible.
That doesn’t mean these questions aren’t worth pursuing (they remain relevant to life on Earth, at the very least). But it’s another illustration of our tendency to frame the questions parochially. In the quest for life elsewhere, whether searching for new planets or considering the molecular parameters of potential living systems, we are in some ways more akin to historians than to scientists: our data is a unique narrative, and our thinking is likely to stay trapped within it.
References
1. Gaudi, B. S. et al. Science 319, 927-930 (2008).
2. Phil. Trans. R. Soc. Lond. Ser. B special issue, 359 (no. 1448) (2004).
3. Benner, S. A. et al. Curr. Opin. Chem. Biol. 8, 672 (2004).
4. Gleiser, M. et al. http://arxiv.org/abs/0802.1446
Saturday, February 09, 2008
The hazards of saying what you mean
It’s true, the archbishop of Canterbury talking about sharia law doesn’t have much to do with science. Perhaps I’m partly just pissed off and depressed. But there is also a tenuous link insofar as this sorry affair raises the question of how much you must pander to public ignorance in talking about complex matters. Now, the archbishop does not have a way with words, it must be said. You’ve got to dig pretty deep to get at what he’s saying. One might argue that someone in his position should be a more adept communicator, although I’m not sure I or anyone else could name an archbishop who has ever wrapped his messages in gorgeous prose. But to what extent does a public figure have an obligation to explain that “when I say X, I don’t mean the common view of X based on prejudice and ignorance, but the actual meaning of X”?
You know which X I mean.
I simply don’t know whether what Rowan Williams suggests about the possibility of conferring legality on common cultural practices of decision-making that have no legal basis at present is a good idea, or a practical one. I can see a good deal of logic in the proposal that, if these practices are already being widely used, that use might be made more effective, better supported and better regulated if such systems are given more formal recognition. But it’s not clear that providing a choice between alternative systems of legal proceeding is a workable arrangement, even if this need not exactly amount to multiple systems of law coexisting. My own prejudice is to worry that some such systems might have disparities that traditional Western societies would feel uncomfortable about, and that making their adoption ‘voluntary’ does not necessarily mean that everyone involved will be free to exercise that choice free of coercion. But I call this a prejudice because I do not know the facts in any depth. It is certainly troubling that some Islamic leaders have suggested there is no real desire in their communities for the kind of structure Williams has proposed.
Yet when Ruth Gledhill in the Times shows us pictures and videos of Islamist extremists, we’re entitled to conclude that there is more to her stance than disagreements of this kind. Oh, don’t be mealy-mouthed, boy: she is simply whipping up anti-Muslim hysteria. The scenes she shows have nothing to do with what Rowan Williams spoke about – but hey, let’s not forget how nutty these people are.
Well, so far so predictable. Don’t even think of looking at the Sun here or the Daily Mail. I said don’t. What is most disheartening from the point of view of a communicator, however, is the craven, complicit response in some parts of the ‘liberal’ press. In the Guardian, Andrew Brown says “it is all very well for the archbishop to explain that he does not want the term ‘sharia’ to refer to criminal punishments, but for most people that’s what the word means: something atavistic, misogynistic, cruel and foreign.” Let me rephrase that: “it is all very well for the archbishop to explain precisely what he means, but most people would prefer to remain ignorant and bigoted.”
And again: “It’s no use being an elitist if you don’t understand the [media] constraints under which an elite must operate.” Or put another way: “It’s no use being a grown-up if you don’t understand that the media demands you be immature and populist.”
And again: “there are certain things which may very well be true, and urgent and important, but which no archbishop can possibly say.” Read that as: “there are certain things which may very well be true, and urgent and important, but which as a supposed moral figurehead in society you had better keep quiet about.”
And again: “Even within his church, there is an enormous reservoir of ill-will towards Islam today, as it was part of his job to know.” Or rather, “he should realise that it’s important not to say anything that smacks of tolerance for other faiths, because that will incite all the Christian bigots.” (And it has: what do you really think synod member Alison Ruoff means when she says of Williams that “he does not stand up for the church”?)
What a dismaying and cynical take on the possibility of subtle and nuanced debate in our culture, and on the possibility of saying what you mean rather than making sure you don’t say what foolish or manipulative people will want to believe or pretend you meant. Madeleine Bunting’s article in the Guardian is, on the other hand, a sane and thoughtful analysis. But the general take on the matter in liberal circles seems to be that the archbishop needs a spin doctor. That’s what these bloody people have just spent ten years complaining about in government.
Listen, I’m an atheist, it makes no difference to me if the Church of England (created to save us from dastardly foreign meddling, you understand – Ruth Gledhill says so) wants to kick out the most humane and intelligent archie they’ve had for yonks. But if that happens because they capitulate to mass hysteria and an insistence that everyone now plays by the media’s rules, it’ll be an even sadder affair than it is already.
Friday, February 08, 2008
Waste not, want not
[This is my latest Muse column for Nature News.]
We will now go to any extent to scavenge every last joule of energy from our environment.
As conventional energy reserves dwindle, and the environmental costs of using them take on an apocalyptic complexion, we seem to be developing the mentality of energy paupers, cherishing every penny we can scavenge and considering no source of income too lowly to be worth collecting.
And that’s surely a good thing – it’s a shame, in fact, that it hasn’t happened sooner. While we’ve gorged on the low-hanging fruit of energy production, relishing the bounty of coal and oil that nature brewed up in the Carboniferous, this “spend spend spend” mentality was never going to see us financially secure in our dotage. It’s a curious, almost perverse fact – one feels there should be a thermodynamic explanation, though I can’t quite see it – that the most concentrated energy sources are also the most polluting, in one way or another.
Solar, wind, wave, geothermal: all these ‘clean’ energy resources are vast when integrated over the planet, but frustratingly meagre on the scales human engineering can access. Nature provides a little focusing for hydroelectric power, collecting runoff into narrow, energetic channels – but only if, like the Swiss, you’re lucky enough to have vast mountains on your doorstep.
So we now find ourselves scrambling to claw up as much of this highly dispersed green energy as we can. One of the latest wheezes uses piezoelectric plastic sheets to generate electricity from the impact of raindrops – in effect, a kind of solar cell re-imagined for rotten weather. Other efforts seek to capture the energy of vibrating machinery and bridges. Every joule, it now seems, is sacred.
That applies not just for megawatt applications but for microgeneration too. The motivations for harnessing low levels of ‘ambient’ energy at the scale of individual people are not always the same as those that apply to powering cities, but they overlap – and they are both informed by the same ethic of sustainability and of making the most of what is out there.
That’s true of a new scheme to harvest energy from human motion [1]. Researchers in Canada and the US have made a device that can be mounted on the human knee joint to mop up energy released by the body each time you swing your leg during walking. More specifically, the device can be programmed to do this only during the ‘braking’ part of the cycle, where you’re using muscle energy to slow the lower leg down. Just as in the regenerative braking of hybrid vehicles, this minimizes the extra fuel expended in sustaining motion.
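To get a feel for the sort of numbers involved, here is a back-of-envelope sketch. Every figure in it is an assumption of mine for illustration – the cadence, the energy dissipated in braking the lower leg, the conversion efficiency – not data from the paper:

```python
# Rough, illustrative estimate of what a knee-mounted harvester might deliver.
# All of these figures are placeholder assumptions, not measurements.
steps_per_second = 0.9          # assumed walking cadence for one leg (Hz)
energy_braked_per_step = 15.0   # assumed energy dissipated slowing the lower leg (J)
conversion_efficiency = 0.3     # assumed fraction of that captured as electricity

power = steps_per_second * energy_braked_per_step * conversion_efficiency
print(f"harvested power per leg: about {power:.1f} W")
# A few watts on these made-up numbers: plenty for small electronics, nothing like a kettle.
```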
While advances in materials have helped to make such systems lightweight and resilient, this new example shows that conceptual advances have played a role too. We now recognize that the movements of humans and other large animals are partly ‘passive’: rather than every motion being driven by energy-consuming motors, as they typically are in robotics, energy can be stored in flexing tissues and then released in another part of the cycle. Or better still, gravity alone may move freely hinging joints, so that some parts of the cycle seem to derive energy ‘for free’ (more precisely, the overall energy cost of the cycle is lower than it would be if every movement were actively driven).
There’s more behind these efforts than simply a desire to throw away the batteries of your MP3 player as you hike along (though that’s an option). If you have a pacemaker or an implanted drug-delivery pump, you won’t relish the need for surgery every time the battery runs out. Drawing power from the body rather than from the slow discharge of an electrochemical dam seems an eminently sensible way to solve that.
The idea goes way back; cyclists will recognize the same principle at work in the dynamos that power lights from the spinning of the wheels. They’ll also recognize the problems: a bad dynamo leaves you feeling as though you’re constantly cycling uphill, squeaking as you go. What’s more, you stop at the traffic lights on a dark night, and your visibility plummets (although capacitive ‘stand-light’ facilities can now address this). And in the rain, when you most want to be seen, the damned thing starts slipping. The disparity between the evident common sense of bicycle dynamos and the rather low incidence of their use suggests that even this old and apparently straightforward energy-harvesting technology struggles to find the right balance between cost, convenience and reliability.
Cycle dynamos do, however, also illustrate one of the encouraging aspects of ambient energy scavenging: advances in electronic engineering have allowed the power consumption of many hand-held devices to drop dramatically, reducing the demands on the power source. LED bike lights need less power than old-fashioned incandescent bulbs, and a dynamo will keep them glowing brightly even if you cycle at walking pace.
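A crude way of putting numbers on that – the wattages below are illustrative guesses of mine, not manufacturers’ specifications:

```python
# Compare lamp demand with what a dynamo might supply at a crawl.
available_at_walking_pace = 1.0             # assumed watts from the dynamo at very low speed
lamps = {"incandescent": 2.4, "LED": 0.7}   # assumed power draw in watts

for name, demand in lamps.items():
    verdict = "stays bright" if demand <= available_at_walking_pace else "goes dim"
    print(f"{name}: needs {demand} W of the {available_at_walking_pace} W available -> {verdict}")
```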
Ultra-low power consumption is now crucial to some implantable medical technologies, and is arguably the key enabling factor in the development of wireless continuous-monitoring devices: ‘digital plasters’ that can be perpetually broadcasting your heartbeat and other physiological parameters to a remote alarm system while you go about your business at home [2].
In fact, a reduction in power requirements can open up entirely new potential avenues of energy scavenging. It would have been hard, in days of power-hungry electronics, to have found much use for the very low levels of electricity that can be drawn from seafloor sludge by ‘microbial batteries’, electrochemical devices that simply plug into the mud and suck up energy from the electrical gradients created by the metabolic activity of bacteria [3]. These systems can drive remote-monitoring systems in marine environments, and might even find domestic uses when engineered into waste-water systems [4].
And what could work for bacteria might work for your own cells too. Ultimately we get our metabolic energy from the chemical reaction of oxygen and glucose – basically, burning up sugar in a controlled way, mediated by enzymes. Some researchers hope to tap into that process by wiring up the relevant enzymes to electrodes and sucking off the electrons involved in the reaction, producing electrical power [5]. They’ve shown that the idea works in grapes; apes are another matter.
Such devices go beyond the harvesting of biomechanical energy. They promise to cut out the inefficiencies of muscle action, which tends to squander around three-quarters of the available metabolic energy, and simply tap straight into the powerhouses of the cell. It’s almost scary, this idea of plugging into your own body – the kind of image you might expect in a David Cronenberg movie.
These examples show that harnessing ‘people power’ and global energy generation do share some common ground. Dispersed energy sources like tidal and geothermal offer the same kinds of low-grade energy, in motion and heat gradients say, as we find in biological systems. Exploiting this on a large scale is much more constrained by economics; but there’s every reason to believe that the two fields can learn from each other.
And who knows – once you’ve felt how much energy is needed to keep your television on standby, you might be more inclined to switch it off.
References
1. Donelan, J. M. et al. Science 319, 807-810 (2008).
2. Toumazou, C. & Cass, T. Phil. Trans. R. Soc. Lond. B Biol. Sci. 362, 1321–1328 (2007).
3. Mano, N. & Heller, A. J. Am. Chem. Soc. 125, 6588-6594 (2003).
4. Logan, B. E. & Regan, J. M. Environ. Sci. Technol. 40, 5172-5180 (2006).
5. Logan, B. E. Wat. Sci. Technol. 52, 31-37 (2005).
Friday, February 01, 2008
Risky business
[My latest Muse column for Nature online news…]
Managing risk in financial markets requires a better understanding of their complex dynamics. But it’s already clear that unfettered greed makes matters worse.
It seems to be sheer coincidence that the multi-billion dollar losses at the French bank Société Générale (SocGen), caused by the illegal dealings of rogue trader Jérôme Kerviel, come at a time of imminent global economic depression. But the conjunction has provoked discussion about whether such localized shocks to the financial market can trigger worldwide (‘systemic’) economic crises.
If so, what can be done to prevent it? Some have called for more regulation, particularly of the murky business that economists call derivatives trading and the rest of us would recognize as institutionalized gambling. “If our laws are not extended to control the new kinds of super-powerful, super-complex, and potentially super-risky investment vehicles, they will one day cause a financial disaster of global-systemic proportions”, wrote John Lanchester in the British Guardian newspaper [1]. But how well do we understand what we’d be regulating?
The French affair is in a sense timely, because ‘systemic risk’ in the financial system has become a hot topic, as witnessed by a recent report by the Federal Reserve Bank of New York (FRBNY) and the US National Academy of Sciences [2]. Worries about systemic risk are indeed largely motivated by the link to global recessions, like the one currently looming. This concern was articulated after the Great Depression of the 1930s by the British economist John Maynard Keynes, who wanted to understand how the global economy can switch from a healthy to a depressed state, both of which seemed to be stable ‘equilibrium’ states to the extent that they stick around for a while.
That terminology implies that there is some common ground with the natural sciences. In physics, a change in the global state of a system from one equilibrium configuration to another is called a phase transition, and some economists use such terms and concepts borrowed from physics to talk about market dynamics.
The analogy is potentially misleading, however, because the financial system, and the global economy generally, is never in equilibrium. Money is constantly in motion, and it’s widely recognized that instabilities such as market crashes depend in sensitive but ill-understood ways on feedbacks within the system that can act to amplify small disturbances and which enforce perpetual change. Other terms from ‘economese’, such as liquidity (the ability to exchange assets for cash), reveal an intuition of that dynamism, and indeed Keynes himself tried to develop a model of economics that relied on analogies with hydrodynamics.
Just as equilibrium spells death for living things, so the financial market is in trouble when money stops flowing. It’s when people stop investing, cutting off the bank loans that business needs to thrive, that a crisis looms. Banks themselves stay in business only if the money keeps coming in; when customers lose confidence and withdraw their cash – a ‘run on the bank’ like that witnessed recently at the UK’s Northern Rock – banks can no longer lend, have to call in existing loans at a loss, and face ruin. That prospect sets up a feedback: the more customers withdraw their money, the more others feel compelled to do so – and if the bank wasn’t in real danger of collapse at the outset, it soon is. The French government is trying to avoid that situation at SocGen, since the collapse of a bank has knock-on consequences that could wreak havoc throughout a nation’s economy, or even beyond.
In other words, bank runs may lead, as with Northern Rock, to the ironic spectacle of gung-ho advocates of the free market appealing for, or even demanding, state intervention to bail them out. As Will Hutton puts it in the Observer newspaper, “financiers have organised themselves so that actual or potential losses are picked up by somebody else - if not their clients, then the state - while profits are kept to themselves” [3]. Even measures such as deposit insurance, introduced in the US after the bank runs of the 1930s, which ensures that depositors won’t lose their money even if the bank fails, arguably exacerbate the situation by encouraging banks to take more risks, secure in the knowledge that their customers are unlikely to lose their nerve and desert them.
Economists’ attempts to understand runaway feedbacks in situations like bank runs draw on another area of the natural sciences: epidemiology. They speak of ‘contagion’: the spread of behaviours from one agent, or one part of the market, to another, like the spread of disease in a population. In bank runs, contagion can even spread to other banks: one run leads people to fear others, and this then becomes a self-fulfilling prophecy.
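The flavour of that feedback is easy to capture in a toy model. This is my own sketch of the contagion idea, not anything taken from the FRBNY report: each depositor withdraws once the fraction of others who have already done so exceeds a personal nervousness threshold.

```python
import random

rng = random.Random(7)

n = 1000
thresholds = [rng.random() * 0.5 for _ in range(n)]   # assumed: a fairly jumpy crowd
withdrawn = [t < 0.02 for t in thresholds]            # a small initial scare

while True:
    fraction_out = sum(withdrawn) / n
    updated = [w or (t < fraction_out) for w, t in zip(withdrawn, thresholds)]
    if updated == withdrawn:
        break
    withdrawn = updated

print(f"the run ends with {100 * sum(withdrawn) / n:.0f}% of deposits withdrawn")
# With a calmer crowd (thresholds spread up to 1.0 rather than 0.5), the same
# initial scare stalls at a few per cent instead of engulfing the whole bank.
```

The point is not the particular numbers but the shape of the dynamics: a small scare either fizzles out or engulfs everything, depending on how nervy the crowd is.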
Regardless of whether the current shaky economy might be toppled by a SocGen scandal, it is clear that the financial market is in general potentially susceptible to systemic failure caused by specific, local events. The terrorist attacks on the World Trade Centre on 11 September 2001 demonstrated that, albeit in a most unusual way – for the ‘shock’ here was not a market event as such but physical destruction of its ‘hardware’. Disruption of trading activity in banks in downtown Manhattan in effect caused a bottleneck in the flow of money that had serious knock-on consequences, leading to a precipitous drop in the global financial market.
The FRBNY report [2] is a promising sign that economists seeking to understand risk are open to the ideas and tools of the natural sciences that deal with phase transitions, feedbacks and other complex nonlinear dynamics. But the bugbear of all these efforts is that ultimately the matter hinges on human behaviour. Your propensity to catch a virus is indifferent to whether you feel optimistic or pessimistic about your chances of catching it; but with contagion in the economy, expectations are crucial.
This is where conventional economic models run into problems. Most of the tools used in financial markets, such as how to price assets and derivatives and how to deal with risk in portfolio management, rely on the assumption that market traders respond rationally and identically on the basis of complete information about the market. This leads to mathematical models that can be solved, but it doesn’t much resemble what real agents do. For one thing, different people reach different conclusions on the basis of the same data. They tend to be overconfident, to be biased towards information that confirms their preconceptions, to have poor intuition about probabilities of rare events, and to indulge in wishful thinking [4]. The field of behavioural finance, which garnered a Nobel prize for Daniel Kahneman in 2002, shows the beginnings of an acknowledgement of these complexities in decision-making – but they haven’t yet had much impact on the tools widely used to calculate and manage risk.
One can’t blame the vulnerability of the financial market on the inability of economists to model it. These poor folks are faced with a challenge of such magnitude that those working on ‘complex systems’ in the natural sciences have it easy by comparison. Yet economic models that make unrealistic assumptions about human decision-making can’t help but suggest that we need to look elsewhere to fix the weak spots. Perhaps no one can be expected to anticipate the wild, not to mention illegal, behaviour of SocGen’s Kerviel or of those who brought low the US energy company Enron in 2001. But these examples are arguably only at the extreme end of a scale that is inherently biased towards high-risk activity by the very rules of engagement. State support of failing banks is just one example of the way that finance is geared to risky strategies: hedge fund managers, for example, get a hefty cut of their profits on top of a basic salary, but others pay for the losses [3]. The FRBNY’s vice president John Kambhu and his colleagues have pointed out that hedge funds (themselves a means of passing on risk) operate in a way that makes risk particularly severe and hard to manage [5].
That’s why, if understanding the financial market demands a better grasp of decision-making, with all its attendant irrationalities, it may be that managing the market to reduce risk and offer more secure public benefit requires more constraint, more checks and balances, to be put on that decision-making. We’re talking about regulation.
Free-market advocates firmly reject such ‘meddling’ on the basis that it cripples Adam Smith’s ‘invisible hand’ that guides the economy. But that hand is shaky, prone to wild gestures and sudden seizures, because it is no longer the collective hand of Smith’s sober bakers and pin-makers but that of rapacious profiteers creaming absurd wealth from deals in imaginary and incredible goods.
One suggestion is that banks and other financial institutions be required to make public how they are managing risk – basically, they should share currently proprietary information about expectations and strategies. This could reduce the instability caused by each party trying to second-guess the others, and being forced to respond reactively to them. It might reduce opportunities to make high-risk killings, but the payoff would be to smooth away systemic crises of confidence. (Interestingly, the same proposal of transparency was made by nuclear scientists to Western governments after the development of the US atomic bomb, in the hope of avoiding the risks of an arms race.)
It’s true that too much regulation could be damaging, limiting the ability of the complex financial system to adapt spontaneously to absorb shocks. All the more reason to strive for a theoretical understanding of the processes involved. But experience alone tells us that it is time to move beyond Gordon Gekko’s infamous credo ‘greed is good’. One might argue that ‘a bit of greed is necessary’, but too much is liable to bend and rupture the pipes of the economy. As Hutton says [3], “We need the financiers to serve business and the economy rather than be its master.”
References
[1] Lanchester, J. ‘Dicing with disaster’, Guardian 26 January 2008.
[2] FRBNY Economic Policy Review special issue, ‘New directions for understanding systemic risk’, 13(2) (2007).
[3] Hutton, W. ‘This reckless greed of the few harms the future of the many’, Observer 27 January 2008.
[4] Anderson, J. V. in Encyclopedia of Complexity and Systems Science (Springer, in press, 2008).
[5] Kambhu, J. et al. FRBNY Economic Policy Review 13(3), 1-18 (2008).
[My latest Muse column for Nature online news…]
Managing risk in financial markets requires better understanding of their complex dynamics. But it’s already clear that unfettered greed makes matter worse.
It seems to be sheer coincidence that the multi-billion dollar losses at the French bank Société Générale (SocGen), caused by the illegal dealings of rogue trader Jérôme Kerviel, comes at a time of imminent global economic depression. But the conjunction has provoked discussion about whether such localized shocks to the financial market can trigger worldwide (‘systemic’) economic crises.
If so, what can be done to prevent it? Some have called for more regulation, particularly of the murky business that economists call derivatives trading and the rest of us would recognize as institutionalized gambling. “If our laws are not extended to control the new kinds of super-powerful, super-complex, and potentially super-risky investment vehicles, they will one day cause a financial disaster of global-systemic proportions”, wrote John Lanchester in the British Guardian newspaper [1]. But how well do we understand what we’d be regulating?
The French affair is in a sense timely, because ‘systemic risk’ in the financial system has become a hot topic, as witnessed by a recent report by the Federal Reserve Bank of New York (FRBNY) and the US National Academy of Sciences [2]. Worries about systemic risk are indeed largely motivated by the link to global recessions, like the one currently looming. This concern was articulated after the Great Depression of the 1930s by the British economist John Maynard Keynes, who wanted to understand how the global economy can switch from a healthy to a depressed state, both of which seemed to be stable ‘equilibrium’ states to the extent that they stick around for a while.
That terminology implies that there is some common ground with the natural sciences. In physics, a change in the global state of a system from one equilibrium configuration to another is called a phase transition, and some economists use such terms and concepts borrowed from physics to talk about market dynamics.
The analogy is potentially misleading, however, because the financial system, and the global economy generally, is never in equilibrium. Money is constantly in motion, and it’s widely recognized that instabilities such as market crashes depend in a sensitive but ill-understood ways on feedbacks within the system that can act to amplify small disturbances and which enforce perpetual change. Other terms from ‘economese’, such as liquidity (the ability to exchange assets for cash), reveal an intuition of that dynamism, and indeed Keynes himself tried to develop a model of economics that relied on analogies with hydrodynamics.
Just as equilibrium spells death for living things, so the financial market is in trouble when money stops flowing. It’s when people stop investing, cutting off the bank loans that business needs to thrive, that a crisis looms. Banks themselves stay in business only if the money keeps comes in; when customers lose confidence and withdraw their cash – a ‘run on the bank’ like that witnessed recently at the UK’s Northern Rock – banks can no longer lend, have to call in existing loans at a loss, and face ruin. That prospect sets up a feedback: the more customers withdraw their money, the more others feel compelled to do so – and if the bank wasn’t in real danger of collapse at the outset, it soon is. The French government is trying to avoid that situation at SocGen, since the collapse of a bank has knock-on consequences that could wreak havoc throughout a nation’s economy, or even beyond.
In other words, bank runs may lead, as with Northern Rock, to the ironic spectacle of gung-ho advocates of the free market appealing, or even demanding, state intervention to bail them out. As Will Hutton puts it in the Observer newspaper, “financiers have organised themselves so that actual or potential losses are picked up by somebody else - if not their clients, then the state - while profits are kept to themselves” [3]. Even measures such as deposit insurance, introduced in the US after the bank runs of the 1930s, which ensures that depositors won’t lose their money even if the bank fails, arguably exacerbate the situation by encouraging banks to take more risks, secure in the knowledge that their customers are unlikely to lose their nerve and desert them.
Economists’ attempts to understand runaway feedbacks in situations like bank runs draw on another area of the natural sciences: epidemiology. They speak of ‘contagion’: the spread of behaviours from one agent, or one part of the market, to another, like the spread of disease in a population. In bank runs, contagion can even spread to other banks: one run leads people to fear others, and this then becomes a self-fulfilling prophecy.
Regardless of whether the current shaky economy might be toppled by a SocGen scandal, it is clear that the financial market is in general potentially susceptible to systemic failure caused by specific, local events. The terrorist attacks on the World Trade Centre on 11 September 2001 demonstrated that, albeit in a most unusual way – for the ‘shock’ here was not a market event as such but physical destruction of its ‘hardware’. Disruption of trading activity in banks in downtown Manhattan in effect caused a bottleneck in the flow of money that had serious knock-on consequences, leading to a precipitous drop in the global financial market.
The FRBNY report [2] is a promising sign that economists seeking to understand risk are open to the ideas and tools of the natural sciences that deal with phase transitions, feedbacks and other complex nonlinear dynamics. But the bugbear of all these efforts is that ultimately the matter hinges on human behaviour. Your propensity to catch a virus is indifferent to whether you feel optimistic or pessimistic about your chances of that; but with contagion in the economy, expectations are crucial.
This is where conventional economic models run into problems. Most of the tools used in financial markets, such as how to price assets and derivatives and how to deal with risk in portfolio management, rely on the assumption that market traders respond rationally and identically on the basis of complete information about the market. This leads to mathematical models that can be solved, but it doesn’t much resemble what real agents do. For one thing, different people reach different conclusions on the basis of the same data. They tend to be overconfident, to be biased towards information that confirms their preconceptions, to have poor intuition about probabilities of rare events, and to indulge in wishful thinking [4]. The field of behavioural finance, which garnered a Nobel prize for Daniel Kahneman in 2002, shows the beginnings of an acknowledgement of these complexities in decision-making – but they haven’t yet had much impact on the tools widely used to calculate and manage risk.
One can’t blame the vulnerability of the financial market on the inability of economists to model it. These poor folks are faced with a challenge of such magnitude that those working on ‘complex systems’ in the natural sciences have it easy by comparison. Yet so long as economic models make unrealistic assumptions about human decision-making, we will need to look elsewhere to fix the weak spots. Perhaps no one can be expected to anticipate the wild, not to mention illegal, behaviour of SocGen’s Kerviel or of those who brought low the US energy company Enron in 2001. But these examples are arguably only at the extreme end of a scale that is inherently biased towards high-risk activity by the very rules of engagement. State support of failing banks is just one example of the way that finance is geared to risky strategies: hedge fund managers, for example, get a hefty cut of their profits on top of a basic salary, but others pay for the losses [3]. The FRBNY’s vice president John Kambhu and his colleagues have pointed out that hedge funds (themselves a means of passing on risk) operate in a way that makes risk particularly severe and hard to manage [5].
That’s why, if understanding the financial market demands a better grasp of decision-making, with all its attendant irrationalities, it may be that managing the market to reduce risk and offer more secure public benefit requires more constraint, more checks and balances, to be put on that decision-making. We’re talking about regulation.
Free-market advocates firmly reject such ‘meddling’ on the basis that it cripples Adam Smith’s ‘invisible hand’ that guides the economy. But that hand is shaky, prone to wild gestures and sudden seizures, because it is no longer the collective hand of Smith’s sober bakers and pin-makers but that of rapacious profiteers creaming absurd wealth from deals in imaginary and incredible goods.
One suggestion is that banks and other financial institutions be required to make public how they are managing risk – basically, they should share currently proprietary information about expectations and strategies. This could reduce the instability caused by each party trying to second-guess the others and being forced to respond reactively to them. It might reduce opportunities to make high-risk killings, but the payoff would be to smooth away systemic crises of confidence. (Interestingly, the same proposal of transparency was made by nuclear scientists to Western governments after the development of the US atomic bomb, in the hope of avoiding the risks of an arms race.)
It’s true that too much regulation could be damaging, limiting the ability of the complex financial system to adapt spontaneously to absorb shocks. All the more reason to strive for a theoretical understanding of the processes involved. But experience alone tells us that it is time to move beyond Gordon Gekko’s infamous credo ‘greed is good’. One might argue that ‘a bit of greed is necessary’, but too much is liable to bend and rupture the pipes of the economy. As Hutton says [3], “We need the financiers to serve business and the economy rather than be its master.”
References
[1] Lanchester, J. ‘Dicing with disaster’, Guardian 26 January 2008.
[2] FRBNY Economic Policy Review special issue, ‘New directions for understanding systemic risk’, 13(2) (2007).
[3] Hutton, W. ‘This reckless greed of the few harms the future of the many’, Observer 27 January 2008.
[4] Anderson, J. V. in Encyclopedia of Complexity and Systems Science (Springer, in press, 2008).
[5] Kambhu, J. et al. FRBNY Economic Policy Review 13(3), 1-18 (2008).
Saturday, January 26, 2008
No option
There is an excellent article in today’s Guardian by the author John Lanchester, who turns out to have a surprisingly (but after all, why not?) thorough understanding of the derivatives market. Lanchester’s piece is motivated by the extraordinary losses chalked up by rogue trader Jérôme Kerviel of the French bank Société Générale. Kerviel’s exploits seem to be provoking the predictable shock-horror about the kind of person entrusted with the world’s finances (as though the last 20 years had never happened). I suspect it was Lanchester’s intention to leave it unstated, but one can’t read his piece without a mounting sense that the derivatives market is one of humankind’s more deranged inventions. To bemoan that is not in itself terribly productive, since it is not clear how one legislates against the situation where one person bets an insane amount of (someone else's) money on an event of which he (not she, on the whole) has not the slightest real idea of the outcome, and another person says ‘you’re on!’. All the same, it is hard to quibble with Lanchester’s conclusion that “If our laws are not extended to control the new kinds of super-powerful, super-complex, and potentially super-risky investment vehicles, they will one day cause a financial disaster of global-systemic proportions.”
All this makes me appreciate that, while I have been a small voice among many to have criticized the conventional models of economics, in fact economists are only the poor chaps trying to make sense of the lunacy that is the economy. Which brings me to Fischer Black and Myron Scholes, who, Lanchester explains, published a paper in 1973 that gave a formula for how to price derivatives (specifically, options). What Lanchester doesn’t mention is that this Nobel-winning work made the assumption that the volatility of the market – the fluctuations in prices – follows the form dictated by a normal or Gaussian distribution. The problem is that it doesn’t. This is what I said about that in my book Critical Mass:
“Options are supposed to be relatively tame derivatives—thanks to the Black-Scholes model, which has been described as ‘the most successful theory not only in finance but in all of economics’. Black and Scholes considered the question of strategy: what is the best price for the buyer, and how can both the buyer and the writer minimize the risks? It was assumed that the buyer would be given a ‘risk discount’ that reflects the uncertainty in the stock price covered by the option he or she takes out. Scholes and Black proposed that these premiums are already inherent in the stock price, since riskier stock sells for relatively less than its expected future value than does safer stock.
Based on this idea, the two went on to devise a formula for calculating the ‘fair price’ of an option. The theory was a gift to the trader, who had only to plug in appropriate numbers and get out the figure he or she should pay.
But there was just one element of the model that could not be readily specified: the market volatility, or how the market fluctuates. To calculate this, Black and Scholes assumed that the fluctuations were gaussian.
Not only do we know that this is not true, but it means that the Black-Scholes formula can produce nonsensical results: it suggests that option-writing can be conducted in a risk-free manner. This is a potentially disastrous message, imbuing a false sense of confidence that can lead to huge losses. The shortcoming arises from the erroneous assumption about market variability, showing that it matters very much in practical terms exactly how the fluctuations should be described.
The drawbacks of the Scholes-Black theory are known to economists, but they have failed to ameliorate them. Many extensions and modifications of the model have been proposed, yet none of them guarantees to remove the risks. It has been estimated that the deficiencies of such models account for up to 40 percent of the 1997 losses in derivatives trading, and it appears that in some cases traders’ rules of thumb do better than mathematically sophisticated models.”
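For the curious, here is a minimal sketch of the Black-Scholes price for a European call option, written in Python. The input numbers are invented; the point is simply that the market’s fluctuations enter through a single volatility parameter, sigma, assumed to be gaussian and well behaved.

```python
# A minimal sketch of the Black-Scholes call-option formula. The gaussian
# assumption lives in the cumulative normal distribution N used below.
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """'Fair' price of a European call under gaussian (lognormal) returns.

    S: current stock price, K: strike, T: time to expiry in years,
    r: risk-free interest rate, sigma: annualized volatility - the one
    input that must be estimated, and the one real markets violate.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Example with invented numbers: a one-year at-the-money option (about 10.45).
# Fat-tailed real fluctuations mean this 'fair' price understates extreme risk.
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))
```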
Just a little reminder that, say what you will about the ‘econophysicists’ who are among those working on this issue, there are some rather important lacunae remaining in economic theory.
Thursday, January 24, 2008
Scratchbuilt genomes
[Here’s the pre-edited version of my latest story for Nature’s online news. I discuss this work also in the BBC World Service’s Science in Action programme this week.]
By announcing the first chemical synthesis of a complete bacterial genome [1], scientists in the US have shown that the stage is now set for the creation of the first artificial organisms – something that looks likely to be achieved within the next year.
The genome of the pathogenic bacterium Mycoplasma genitalium, made in the laboratory by Hamilton Smith and his colleagues at the J. Craig Venter Institute in Rockville, Maryland, is more than ten times longer than any stretch of genetic material previously created by chemical means.
The complete genome of M. genitalium contains 582,970 of the fundamental building blocks of DNA, called nucleotide bases. Each of these was stitched in place by commercial DNA-synthesis companies according to the Venter Institute’s specifications, to make 101 separate segments of the genome. The scientists then used biotechnological methods to combine these fragments into a single genome within cells of E. coli bacteria and yeast.
M. genitalium has the smallest genome of any organism that can grow and replicate independently. (Viruses have smaller genomes, some of which have been synthesized before, but they cannot replicate on their own.) Its DNA contains the instructions for making just 485 proteins, which orchestrate the cells’ functions.
This genetic concision makes M. genitalium a candidate for the basis of a ‘minimal organism’, which would be stripped down further to contain the bare minimum of genes needed to survive. The Venter Institute team, which includes the institute’s founder, genomics pioneer Craig Venter, believe that around 100 of the bacterium’s genes could be jettisoned – but they don’t know which 100 these are.
The way to test that would be to make versions of the M. genitalium genome that lack some genes, and see whether they still provide a viable ‘operating system’ for the organism. Such an approach would also require a method for replacing a cell’s existing genome with a new, redesigned one. But Venter and his colleagues have already achieved such a ‘genome transplant’ between two bacteria closely related to M. genitalium, which they reported last year [2].
Their current synthesis of the entire M. genitalium genome thus provides the other part of the puzzle. Chemical synthesis of DNA involves sequentially adding one of the four nucleotide bases to a growing chain in a specified sequence. The Venter Institute team farmed out this task to the companies Blue Heron Technology, DNA2.0 and GENEART.
But it is beyond the capabilities of the current techniques to join up all half a million or so bases in a single, continuous process. That was why the researchers ordered 101 fragments or ‘cassettes’, each of about 5000-7000 bases and with overlapping sequences that enabled them to be stuck together by enzymes.
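To see what those overlapping ends buy you, here is a cartoon of overlap-based joining in Python. The fragments and the minimum overlap are invented, and of course the real stitching was done by enzymes inside living cells rather than by software – but the logic of matching ends is the same.

```python
# A cartoon of overlap-based assembly: stitch fragments whose ends share a
# common sequence. Toy fragments and overlap length, purely for illustration.
def join(left, right, min_overlap=4):
    """Merge two fragments if the end of `left` matches the start of `right`."""
    for k in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left[-k:] == right[:k]:
            return left + right[k:]
    return None

cassettes = ["ATGCGTAC", "GTACCTTAGG", "TAGGCAATG"]   # invented 'cassettes' with overlapping ends
genome = cassettes[0]
for fragment in cassettes[1:]:
    genome = join(genome, fragment)

print(genome)   # ATGCGTACCTTAGGCAATG
```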
To distinguish the synthetic DNA from the genomes of ‘wild’ M. genitalium, Smith and colleagues included ‘watermark’ sequences: stretches of DNA carrying a kind of barcode that designates its artificiality. These watermarks must be inserted at sites in the genome known to be able to tolerate such additions without their genetic function being impaired.
The researchers made one further change to the natural genome: they altered one gene in a way that was known to render M. genitalium unable to stick to mammalian cells. This ensured that cells carrying the artificial genome could not act as pathogens.
Using DNA-linking enzymes within E. coli cells, the cassettes were stitched together into strands that each contained a quarter of the total genome. But, for reasons that the researchers don’t yet understand, the final assembly of these quarter-genomes into a single circular strand didn’t run smoothly in the bacteria. So the team transferred them to cells of brewers’ yeast, in which the last steps of the assembly were carried out.
Smith and colleagues then extracted these synthetic genomes from the yeast cells, and used enzymes to chew up the yeast’s own DNA. They read out the sequences of the remaining DNA to check that these matched those of wild M. genitalium (apart from the deliberate modifications such as watermarks).
The ultimate evidence that the synthetic genomes are authentic copies, however, will be to show that cells can be ‘booted up’ when loaded with this genetic material. “This is the next step and we are working on it”, says Smith.
Advances in DNA synthesis might ultimately make this laborious stitching of fragments unnecessary, but Dorene Farnham, director of sales and marketing at Blue Heron in Bothell, Washington, stresses that that’s not a foregone conclusion. “The difficulty is not about length”, she says. “There are many other factors that go into getting these synthetic genes to survive in cells.”
Venter’s team hopes that a stripped-down version of the M. genitalium genome might serve as a general-purpose chassis to which all sorts of useful designer functions might be added – for example, genes that turn the bacteria into biological factories for making carbon-based ‘green’ fuels or hydrogen when fed with nutrients.
The next step towards that goal is to build potential minimal genomes from scratch, transplant them into Mycoplasma, and see if they will keep the cells alive. “We plan to start removing putative ‘non-essential’ genes and test whether we get viable transplants”, says Smith.
References
1. Gibson, D. G. et al. Science Express doi:10.1126/science.1151721 (2008).
2. Lartigue, C. et al. Science 317, 632 (2007).
Tuesday, January 22, 2008
Differences in the shower
[This is how my latest article for Nature’s Muse column started out. Check out also a couple of interesting papers in the latest issue of Phys. Rev. E: a study of how ‘spies’ affect the minority game, and a look at the value of diversity in promoting cooperation in the spatial Prisoner’s Dilemma.]
A company sets out to hire a 20-person team to solve a tricky problem, and has a thousand applicants to choose from. So they set them all a test related to the problem in question. Should they then pick the 20 people who do best? That sounds like a no-brainer, but there are situations in which it would be better to hire 20 of the applicants at random.
This scenario was presented four years ago by social scientists Lu Hong and Scott Page of the University of Michigan [1] as an illustration of the value of diversity in human groups. It shows that many different minds are sometimes more effective than many ‘expert’ minds. The drawback of having a team composed of the ‘best’ problem-solvers is that they are likely all to think in the same way, and so are less likely to come up with versatile, flexible solutions. “Diversity”, said Hong and Page, “trumps ability.”
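For anyone who wants to see the bones of such a model, here is a much-simplified Python sketch in the spirit of Hong and Page’s setup. The landscape, the heuristics, the team size and the single-pass ‘relay’ are all my own illustrative simplifications rather than their exact model; it simply lets you compare a team of the best solo performers with a randomly assembled one.

```python
# A stripped-down sketch in the spirit of Hong and Page's 'diversity trumps
# ability' setup. Parameters and the single-pass relay are illustrative
# simplifications, not their published model.
import random
from itertools import permutations

N_POINTS = 200
values = [random.random() for _ in range(N_POINTS)]      # the 'problem': find high ground on a ring

# a heuristic is an ordered triple of step sizes the agent tries from its position
all_heuristics = list(permutations(range(1, 13), 3))

def climb(start, heuristic):
    """Greedy search: keep taking any of your steps that leads somewhere better."""
    pos, improved = start, True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % N_POINTS
            if values[nxt] > values[pos]:
                pos, improved = nxt, True
    return pos

def solo_ability(heuristic):
    # how good a point this heuristic reaches on average, over all starting points
    return sum(values[climb(s, heuristic)] for s in range(N_POINTS)) / N_POINTS

def team_score(team):
    # single-pass relay: each member climbs from wherever the previous one stopped
    total = 0.0
    for start in range(N_POINTS):
        pos = start
        for heuristic in team:
            pos = climb(pos, heuristic)
        total += values[pos]
    return total / N_POINTS

ranked = sorted(all_heuristics, key=solo_ability, reverse=True)
print('team of top solo performers:', round(team_score(ranked[:10]), 3))
print('randomly chosen team:       ', round(team_score(random.sample(all_heuristics, 10)), 3))
```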
Page believes that studies like this, which present mathematical models of decision-making, show that initiatives to encourage cultural diversity in social, academic and institutional settings are not just exercises in politically correct posturing. To Page, they are ways of making the most of the social capital that human difference offers.
There are evolutionary analogues to this. Genetic diversity in a population confers robustness in the face of a changing environment, whereas a group of almost identical ‘optimally adapted’ organisms can come to grief when the wind shifts. Similarly, sexual reproduction provides healthy variety in our own genomes, while in ecology monocultures are notoriously fragile in the face of new threats.
But it’s possible to overplay the diversity card. Expert opinion, literary and artistic canons, and indeed the whole concept of ‘excellence’ have become fashionable whipping boys to the extent that some, particularly in the humanities, worry about standards and judgement vanishing in a deluge of relativist mediocrity. Of course it is important to recognize that diversity does not have to mean ‘anything goes’ (a range of artistic styles does not preclude discrimination of good from bad within each of them) – but that’s often what sceptics of the value of ‘diversity’ fear.
This is why models like that of Hong and Page bring some valuable precision to the questions of what diversity is and why and when it matters. That issue now receives a further dose of enlightenment from a study that looks, at face value, to be absurdly whimsical.
Economist Christina Matzke and physicist Damien Challet have devised a mathematical model of (as they put it) “taking a shower in youth hostels” [2]. Among the risks of budget travel, few are more hazardous than this. If you try to have a shower at the same time as everyone else, it’s a devil of a job adjusting the taps to get the right water temperature.
The problem, say Matzke and Challet, is that in the primitive plumbing systems of typical hostels, one person changing their shower temperature settings alters the balance of hot and cold water for everyone else too. They in turn try to retune the settings to their own comfort, with the result that the shower temperatures fluctuate wildly between scalding and freezing. Under what conditions, they ask, can everyone find a mutually acceptable compromise, rather than all furiously altering their shower controls while cursing the other guests?
So far, so amusing. But is this really such a (excuse me) burning issue? Challet’s previous work provides some kind of answer to that. Several years ago, he and physicist Yi-Cheng Zhang devised the so-called minority game as a model for human decision-making [3]. They took their lead from economist Brian Arthur, who was in the habit of frequenting a bar called El Farol in the town of Santa Fe where he worked [4]. The bar hosted an Irish music night on Thursdays which was often so popular that the place would be too crowded for comfort.
Noting this, some El Farol clients began staying away on Irish nights. That was great for those who did turn up – but once word got round that things were more comfortable, overcrowding resumed. In other words, attendance would fluctuate wildly, and the aim was to go only on those nights when you figured others would stay away.
But how do you know which nights those are? You don’t, of course. Human nature, however, prompts us to think we can guess. Maybe low attendance one week means high attendance the next? Or if it’s been busy three weeks in a row, the next is sure to be quiet? The fact is that there’s no ‘best’ strategy – it depends on what strategies others use.
The point of the El Farol problem, which Challet and Zhang generalized, is to be in the minority: to stay away when most others go, and vice versa. The reason why this is not a trivial issue is that the minority game serves as a proxy for many social situations, from lane-changing in heavy traffic to choosing your holiday destination. It is especially relevant in economics: in a buyer’s market, for example, it pays to be a seller. It’s unlikely that anyone decided whether or not to go to El Farol by plotting graphs and statistics, but market traders certainly do so, hoping to tease out trends that will enable them to make the best decisions. Each has a preferred strategy.
The maths of the minority game looks at how such strategies affect one another, how they evolve and how the ‘agents’ playing the game learn from experience. I once played it in an interactive lecture in which push-button voting devices were distributed to the audience, who were asked to decide in each round whether to be in group A or group B. (The one person who succeeded in being in the minority in all of several rounds said that his strategy was to switch his vote from one group to the other “one round later than it seemed common sense to do so.”)
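For readers who like to tinker, here is a bare-bones version of the game in Python. The parameter values, and the convention of feeding the minority side back in as the shared ‘history’, are illustrative choices of mine rather than anything canonical.

```python
# A bare-bones minority game, in the spirit of Challet and Zhang's model.
# Parameters and the history convention are illustrative, not theirs exactly.
import random

N, M, S, ROUNDS = 101, 6, 2, 500           # agents, memory length, strategies per agent, rounds
histories = [tuple(int(b) for b in format(i, f'0{M}b')) for i in range(2 ** M)]

def random_strategy():
    # a strategy maps every possible recent history to a choice: 0 (side A) or 1 (side B)
    return {h: random.randint(0, 1) for h in histories}

agents = [[random_strategy() for _ in range(S)] for _ in range(N)]
scores = [[0] * S for _ in range(N)]
history = random.choice(histories)
imbalances = []

for _ in range(ROUNDS):
    # each agent acts on its best-scoring strategy so far
    choices = [agents[i][max(range(S), key=lambda s: scores[i][s])][history]
               for i in range(N)]
    minority = 0 if sum(choices) > N / 2 else 1
    imbalances.append(abs(sum(choices) - N / 2))
    # credit every strategy that would have placed its owner in the minority
    for i in range(N):
        for s in range(S):
            if agents[i][s][history] == minority:
                scores[i][s] += 1
    history = history[1:] + (minority,)

print('average distance from a 50:50 split:', round(sum(imbalances) / len(imbalances), 1))
```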
So what about the role of diversity? Challet’s work showed that the more mixed the strategies of decision-making are, the more reliably the game settles down to the optimal average size of the majority and minority groups. In other words, attendance at El Farol doesn’t in that case fluctuate so much from one week to the next, and is usually close to capacity.
The Shower Temperature Problem is very different, because in principle the ideal situation, where everyone gets closest to their preferred temperature, happens when they all set their taps in the same way – that is, they all use the same strategy. However, this solution is unstable – the slightest deviation, caused by one person trying to tweak the shower settings to get a bit closer to the ideal, sets off wild oscillations in temperature as others respond.
In contrast, when there is a diversity of strategies – agents use a range of tap settings in an attempt to hit the desired water temperature – then these oscillations are suppressed and the system converges more reliably to an acceptable temperature for all. But there’s a price paid for that stability. While overall the water temperature doesn’t fluctuate strongly, individuals may find they have to settle for a temperature further from the ideal value than they would in the case of identical shower settings.
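Here is a toy version I knocked up in Python to get a feel for this. The ‘plumbing’ formula, the comfort tolerance and the tap-adjustment rules are invented stand-ins, not Matzke and Challet’s equations, but they capture the essential coupling: your temperature depends on everyone else’s taps as well as your own.

```python
# A toy shower system, not Matzke and Challet's actual model: the plumbing
# formula, comfort band and tap-adjustment rules are invented for illustration.
import random

N, TARGET, TOL, ROUNDS = 10, 38.0, 0.5, 300    # guests, ideal temperature, comfort band, time steps

def average_discomfort(steps):
    """Everyone nudges their own hot tap toward TARGET; return the late-time
    average distance from the ideal temperature."""
    hot = [random.uniform(0.3, 0.7) for _ in range(N)]   # each guest's hot-tap setting
    discomfort = []
    for t in range(ROUNDS):
        demand = sum(hot) / N
        # your temperature rises with your own tap but falls as total hot-water demand grows
        temps = [15 + 50 * h - 25 * demand for h in hot]
        for i, temp in enumerate(temps):
            if temp < TARGET - TOL:
                hot[i] = min(1.0, hot[i] + steps[i])
            elif temp > TARGET + TOL:
                hot[i] = max(0.0, hot[i] - steps[i])
        if t > ROUNDS // 2:
            discomfort.append(sum(abs(temp - TARGET) for temp in temps) / N)
    return sum(discomfort) / len(discomfort)

identical = [0.2] * N                                    # everyone twiddles by the same big jump
diverse = [random.uniform(0.01, 0.2) for _ in range(N)]  # a mix of cautious and impatient guests
print('identical adjustment habits:', round(average_discomfort(identical), 2))
print('diverse adjustment habits:  ', round(average_discomfort(diverse), 2))
```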
This problem is representative of any in which many agents try to obtain equal amounts of some fixed quantity that is not necessarily abundant enough to satisfy them all completely – factories or homes competing for energy in a power grid, perhaps. But more generally, the model of Matzke and Challet shows how diversity in decision-making may fundamentally alter the collective outcome. That may sound obvious, but it has largely escaped mainstream economics: conventional economic models have for decades stubbornly insisted on making all their agents identical. They are ‘representative’ – one size fits all – and they follow a single ‘optimal’ strategy that maximizes their gains.
There’s a good reason for this assumption: the models are very hard to solve otherwise. But there’s little point in having a tractable model if it doesn’t come close to describing reality. The static view of a ‘representative’ agent leads to the prediction of an ‘equilibrium’ economy, rather like the equilibrium shower system of Matzke and Challet’s homogeneous agents. Anyone contemplating the current world economy knows all too well what a myth this equilibrium is – and how real-world behaviour is sure to depend on the complex mix of beliefs that economic agents hold about the future and how to deal with it.
More generally, the Shower Temperature Problem offers another example of how difference and diversity can improve the outcome of group decisions. Encouraging diversity is not then about being liberal or tolerant (although it tends to require both) but about being rational. Perhaps the deeper challenge for human societies, and the one that underpins current debates about multiculturalism, is how to cope with differences not in problem-solving strategies but in the question of what the problems are and what the desired solutions should be.
References
1. Hong, L. & Page, S. E. Proc. Natl Acad. Sci. USA 101, 16385 (2004).
2. Matzke, C. & Challet, D. preprint http://www.arxiv.org/abs/0801.1573 (2008).
3. Challet, D. & Zhang, Y.-C. Physica A 246, 407 (1997).
4. Arthur, B. W. Am. Econ. Assoc. Papers & Proc. 84, 406 (1994).
Wednesday, January 16, 2008
Groups, glaciation and the pox
[This is the pre-edited version of my Lab Report column for the February issue of Prospect.]
Blaming America for the woes of the world is an old European habit. Barely three decades after Columbus’s crew returned from the New World, a Spanish doctor claimed they brought back the new disease that was haunting Europe: syphilis, so named in the 1530s by the Italian Girolamo Fracastoro. All social strata were afflicted: kings, cardinals and popes suffered alongside soldiers, although sexual promiscuity was so common that the venereal nature of the disease took time to emerge. Treatments were fierce and of limited value: inhalations of mercury vapour had side-effects as bad as the symptoms, while only the rich could afford medicines made from guaiac wood imported from the West Indies.
But it became fashionable during the twentieth century to doubt the New World origin of syphilis: perhaps the disease was a dormant European one that acquired new virulence during the Renaissance? Certainly, the bacterial spirochete Treponema pallidum (subspecies pallidum) that causes syphilis is closely related to other ‘treponemal’ pathogens, such as that which causes yaws in hot, humid regions like the Congo and Indonesia. Most of these diseases leave marks on the skeleton and so can be identified in human remains. They are seen widely in New World populations dating back thousands of years, but reported cases of syphilis-like lesions in Old World remains before Columbus have been ambiguous.
Now a team of scientists in Atlanta, Georgia, has analysed the genetics of many different strains of treponemal bacteria to construct an evolutionary tree that not only identifies how venereal syphilis emerged but shows where in the world its nearest genetic relatives are found. This kind of ‘molecular phylogenetics’, which builds family trees not from a traditional comparison of morphologies but by comparing gene sequences, has revolutionized palaeontology, and it works as well for viruses and bacteria as it does for hominids and dinosaurs. The upshot is that T. pallidum subsp. pallidum is more closely related to a New World subspecies than it is to Old World strains. In other words, it looks as though the syphilis spirochete indeed mutated from an American progenitor. That doesn’t quite imply that Columbus’s sailors brought syphilis back with them, however – it’s also possible that they carried a non-venereal form that quickly mutated into the sexually transmitted disease on its arrival. Given that syphilis was reported within two years of Columbus’s return to Spain, that would have been a quick change.
****
Having helped to bury the notion of group selection in the 1970s, Harvard biologist E. O. Wilson is now attempting to resurrect it. He has a tough job on his hands; most evolutionary biologists have firmly rejected this explanation for altruism, and Richard Dawkins has called Wilson’s new support for group selection a ‘weird infatuation’ that is ‘unfortunate in a biologist who is so justly influential.’
The argument is all about why we are (occasionally) nice to one another, rather than battling, red in tooth and claw, for limited resources. The old view of group selection said simply that survival prospects may improve if organisms act collectively rather than individually. Human altruism, with its framework of moral and social imperatives, is murky territory for such questions, but cooperation is common enough in the wild, particularly in eusocial insects such as ants and bees. Since the mid-twentieth century such behaviour has been explained not by vague group selection but via kin selection: by helping those genetically related to us, we propagate our genes. It is summed up in the famous formulation of J. B. S. Haldane that he would lay down his life for two brothers or eight cousins – a statement of the average genetic overlaps that make the sacrifice worthwhile. Game theory now offers versions of altruism that don’t demand kinship – cooperation of non-relatives can also be to mutual benefit – but kin selection remains the dominant explanation for eusociality.
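Haldane’s quip is essentially Hamilton’s rule – that helping pays, in gene-propagation terms, when relatedness multiplied by benefit exceeds the cost – reduced to arithmetic. A two-line check, using the standard coefficients of relatedness:

```python
# Hamilton's rule, r * b > c: the relatedness-weighted benefit must beat the cost.
relatedness = {'brother': 1 / 2, 'cousin': 1 / 8}

# sacrificing yourself (cost 1) for two brothers or eight cousins just breaks even
print(2 * relatedness['brother'])   # 1.0
print(8 * relatedness['cousin'])    # 1.0
```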
That was the position advocated by Wilson in his 1975 book Sociobiology. In a forthcoming book The Superorganism, and a recent paper, he now reverses this claim and says that kin selection may not be all that important. What matters, he says, is that a population possess genes that predispose the organisms to flexible behavioural choices, permitting a switch from competitive to cooperative action in ‘one single leap’ when the circumstances make it potentially beneficial.
Wilson cites a lack of direct, quantitative evidence for kin selection, although others have disputed that criticism. In the end the devil is in the details – specifically in the maths of how much genetic common ground a group needs to make self-sacrifice pay – and it’s not clear that either camp yet has the numbers to make an airtight case.
****
The discovery of ice sheets half the size of today’s Antarctic ice cap during the ‘super-greenhouse’ climate of the Turonian stage, 93.5-89.3 million years ago, seems to imply that we need not fret about polar melting today. With atmospheric greenhouse gas levels 3-10 times higher than now, ocean temperatures around 5 °C warmer, and crocodiles swimming in the Arctic, the Turonian sounds like the IPCC’s worst nightmare. But it’s not at all straightforward to extrapolate between then and now. More intense circulation of water in the atmosphere could have left thick glaciers on the high mountains and plateaus of Antarctica even in those torrid times. In any event, a rather particular set of climatic circumstances seems to have been in play – the glaciation did not persist throughout the warm Cretaceous period. And it is always important to remember that, with climate, where you end up tends to depend on where you started from.