Engineering for the better?
[This is the pre-edited version of my latest Muse column for Nature News.]
Many of the grand technological challenges of the century ahead are inseparable from their sociopolitical context.
At the meeting of the American Association for the Advancement of Science in Boston last week, a team of people selected by the US National Academy of Engineering identified 14 ‘grand challenges for engineering’ that would help make the world “a more sustainable, safe, healthy, and joyous – in other words, better – place.”
It’s heartening to see engineers, long dismissed as the lumpen, dirty-handed serfs labouring at the foot of science’s lofty citadel, asserting in this manner their subject’s centrality to our future course. Without rehearsing again the debates about the murky boundaries between pure and applied science, or science and technology, it’s rather easy to see that technologists have altered human culture in ways that scientists never have. Plato, Galileo, Darwin and Einstein have reshaped our minds, but there is hardly an action we can take in the industrialized world that does not feel the influence of engineering.
This, indeed, is why one can argue that a moral, ethical and generally humanistic sensitivity is needed in engineering even more than it is in the abstract natural sciences. It is by the same token the reason why engineering is a political as well as a technological activity: whether they are making dams or databases, engineers are both moving and being moved by the sociopolitical landscape.
This is abundantly clear in the Grand Challenges project. The vision it outlines is, by and large, a valuable and praiseworthy one. It recognizes explicitly that “the most difficult challenge of all will be to disperse the fruits of engineering widely around the globe, to rich and poor alike.” Its objectives, the statement says, are “goals for all the world’s people.”
Yet some of the problems identified arguably say more about the current state of mind of Western culture than about what engineering can do or what goals are most urgent. Two of the challenges are concerned with security – or what the committee calls vulnerability – and two focus on the personalization of services – health and education – that have traditionally been seen as generalized ‘one size fits all’ affairs. There are good arguments why it is worthwhile recognizing individual differences – not all medicines have the same effect on everyone (in either good or bad ways), and not everyone learns in the same way. But there is surely a broader political dimension to the notion that we seem now to demand greater tailoring of public services to our personal needs, and greater protection from ‘outsiders’.
What is particularly striking is how ‘vulnerability’ and security are here no longer discussed in terms of warfare (one of the principal engines of technological innovation since ancient times) but of attacks on society by nefarious, faceless aggressors such as nuclear and cyber terrorists. These are real threats, but presenting them this way, purely as engineering challenges, makes for a very odd perspective.
For example, let us say (for the sake of argument) that there exists a country where guns can be readily bought at the corner store. How can we make the law-abiding citizen safe from firearms falling into the hands of homicidal madmen? The answers proposed here are, in effect, to develop technologies for making the stores more secure, for keeping track of where the guns are, for cleaning up after a massacre, and for finding out who did it. To which one might be tempted to add another humble suggestion: what if the shops did not sell guns?
To put it bluntly, discussing nuclear security without any mention of nuclear non-proliferation agreements and efforts towards disarmament is nonsensical. In one sense, perhaps it is understandably difficult for a committee on engineering to suggest that part of the solution to a problem might lie with not making things. Cynics might also suspect a degree of political expediency at work, but I think it is more reasonable to say that questions of this nature don’t really fall into the hands of engineers at all but are contingent on the political climate. To put it another way, I suspect the most stimulating lists of ways to make the world better won’t just include things that everyone can reasonably deem desirable, but things that some will not.
The limited boundaries of the debate are the central shortcoming of an exercise like this. It was made clear from the outset that all these topics are being considered purely from an engineering point of view, but one can hardly read the list without feeling that it is really attempting to enumerate all the big challenges facing humankind that have some degree of technical content. The solutions, and perhaps even the choices, are then bound to disappoint, because just about any challenge of this sort does not depend on technology alone, or even primarily on technology.
Take health, for example. Most of the diseases in the world (and AIDS is now only a partial exception) are ones we already know how to prevent, cure or keep at bay. Technology can play a part in making such treatments cheaper or more widely available (or, in cases of waterborne diseases, say, not necessary in the first place) – but in the immediate future, health informatics and personalized medicine are hardly the key requirements. Economics, development and diet are likely to have a much bigger effect on global health than cutting-edge medical science.
None of this is to deny the value of the Grand Challenges project. But it highlights the fact that one of the most important goals is to integrate science and technology with other social and cultural forces. This is a point made by philosopher of science Nicholas Maxwell in his 1984 book From Knowledge to Wisdom (a new edition of which has just been published by Pentire Press).
To blame science for the ills of the world is to miss the point, says Maxwell. “What we urgently need to do - given the unprecedented powers bequeathed to us by science - is to learn how to tackle our immense, intractable problems of living in rather more intelligent, humane, cooperatively rational ways than we do at present… We need a new kind of academic inquiry that gives intellectual priority to our problems of living - to clarifying what our problems are, and to proposing and critically assessing the possible solutions.”
He proposes that, to this end, the natural sciences should include three domains of discussion: not just evidence and theory, but aims, “this last category covering discussion of metaphysics, values and politics.” There is certainly much to challenge in Maxwell’s position. Trofim Lysenko’s fatefully distorted genetics in the Stalinist Soviet Union, for example, had ‘values and politics’; and the hazards of excessively goal-driven research are well-known in this age of political and economic short-termism.
Maxwell tackles such criticisms in his book, but his wider point – that science and technology should not just be cognisant of social and ethical factors but better integrated with them – is important. The Grand Challenges committee is full of wise and humane technologists. Next time, it would be interesting to include some who are the former but not the latter.
Friday, February 22, 2008
Sunday, February 17, 2008
Ye Gods
Yes, as the previous entry shows, I am reading Jeanette Winterson’s The Stone Gods. Among the most trivial of the issues it makes me ponder is what kind of fool gave the name to silicone polymers. You don’t exactly have to be a linguist to see where that was going to lead. The excuse that there was some chemical rationale for it (the ‘-one’ suffix was chosen by analogy to ketones, with which silicones were mistakenly thought to be homologous) is no excuse at all. After all, chemistry is replete with antiquated names in which a terminal ‘e’ became something of a matter of taste, including alizarine and indeed proteine. So we are now saddled with endless confusions of silicone with silicon – with the particularly unfortunate (or is it?) consequence in Winterson’s case that her robot Spike is implied to have a brain made of the same stuff as the brainless Pink’s breasts.
But for some reason I find myself forgiving just about anything in Jeanette Winterson. Partly this is because her passion for words is so ingenuous and valuable, and partly it may be because my instinct for false modesty is so grotesquely over-developed that I can only gaze in awed admiration at someone who will unhesitatingly nominate their own latest book as the year’s best. But I must also guiltily confess that it is also because we are so clearly both on The Same Side on just about every issue (how could it be otherwise for someone who cites Tove Jansson among her influences?). It is deplorable, I know, that I would be all smug and gloating if the science errors in The Stone Gods had come from someone like Michael Crichton. But of course Crichton preens about the ‘accuracy’ of his research (sufficiently to fool admittedly gullible US politicians), whereas it is really missing the point of Winterson to get all het up about her use of light-years as a unit of time.
Ah, but all the same – where were the editors? Is this the fate of famous authors – that no one deems it necessary to fact-check you any more? True, it is only the sci-fi nerd who will worry that Winterson’s spacecraft can zip about at ‘light speed’ (which we can understand, with poetic licence, as near-light-speed) without the slightest sign of any time dilation. And she never really pretends to be imagining a real future (she says she hates science fiction, although I assume with a narrow definition), so there’s no point in scoffing at the notion that blogs and iPods have somehow survived into the age of interstellar travel. But listen, you don’t need to be a scientist to sense something wrong with this:
“In space it is difficult to tell what is the right way up; space is curved, stars and planets are globes. There is no right way up. The Ship itself is tilting at a forty-five degree angle, but it is the instruments that tell me so, not my body looking out of the window.”
Um, and the instruments are measuring with respect to what? This is actually a rather lovely demonstration of the trap of our earthbound intuitions – which brings me back to the piece below. Oh ignore me, Jeanette (as if you needed telling).
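For the record, the time-dilation arithmetic that only the sci-fi nerd frets about is a one-liner. A minimal sketch, with illustrative numbers of my own (a 20-light-year hop at 99 per cent of light speed – nothing taken from the novel):

```python
import math

def ship_years(distance_ly, speed_fraction_c):
    """Proper time (in years) experienced aboard a ship crossing
    distance_ly light-years at a constant fraction of light speed."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_c**2)  # Lorentz factor
    earth_years = distance_ly / speed_fraction_c        # elapsed time in Earth's frame
    return earth_years / gamma                          # dilated time aboard ship

# A 20-light-year trip at 0.99c takes just over 20 years as seen from
# Earth, but the traveller ages under three years – hardly the
# 'slightest sign' of an effect.
print(ship_years(20, 0.99))  # → about 2.85
```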
Friday, February 15, 2008
There’s no place like home
… but that won’t stop us looking for it in our search for extraterrestrials.
[This is the pre-edited version of my latest Muse column for Nature news. Am I foolish to imagine there might be people out there who appreciate the differences? Don't answer that.]
In searching the skies for other worlds, are we perhaps just like the English tourists waddling down the Costa del Sol, our eyes lighting up when we see “The Red Lion” pub with the Union Jack in the windows and Watneys Red Barrel on tap? Gazing out into the unutterably vast, unnervingly strange depths of the cosmos, are we not really just hankering for somewhere that looks like home?
It isn’t just a longing for the familiar that has stirred up excitement about the discovery of what looks like a scaled-down version of our own solar system surrounding a distant star [1]. But neither, I think, is that impulse absent.
There’s sound reasoning in looking for ‘Earth-like’ extrasolar planets, because an Earth-like world is the only kind of place we know for certain to be capable of supporting life. And it is entirely understandable that extraterrestrial life should be the pot of gold at the end of this particular rainbow.
Yet I doubt that the cold logic of this argument is all there is behind our fascination with Earth-likeness. Science-fiction writers and movie makers have sometimes had fun inventing worlds very different to our own, peopled (can we say that?) with denizens of corresponding weirdness. But that, on the whole, is the exception. That the Klingons and Romulans looked strangely like Californians with bad hangovers was not simply a matter of budget constraints. Edgar Rice Burroughs’ Mars was apparently situated somewhere in the Sahara, populated by extras from the Arabian Nights. In Jeanette Winterson’s new novel The Stone Gods, a moribund, degenerate Earth (here called Orbus) rejoices in the discovery of a pristine Blue Planet in the equivalent of the Cretaceous period, because it offers somewhere to escape to (and might that, in these times haunted by environmental change, nuclear proliferation and fears of planet-searing impacts, already be a part of our own reverie?). Most fictional aliens have been very obviously distorted or enhanced versions of ourselves, both physically and mentally, because in the end our stories, right back to those of Valhalla, Olympus and the seven Hindu heavens, have been more about exploring the human condition than genuinely imagining something outside it.
This solipsism is understandable but deep-rooted, and we shouldn’t imagine that astrobiology and extrasolar planetary prospecting are free from it. However, the claim by the discoverers of the new ‘mini-solar system’ that “solar system analogs may be common” around other stars certainly amounts to more than saying “hey, you can get a decent cup of coffee in this god-forsaken place”. It shows that our theories of the formation and evolution of planetary systems are not parochial, and offers some support for the suspicion that previous methods of planet detection bias our findings towards oddballs such as ‘hot Jupiters’. The fact that the relatively new technique used in this case – gravitational microlensing – has so quickly turned up a ‘solar system analog’ is an encouraging sign that indeed our own neighbourhood is not an anomaly.
The desire – it is more than an unspoken expectation – to find a place that looks like home is nevertheless a persistent bugbear of astrobiology. A conference organized in 2003 to address the question “Can life exist without water?” had as part of its agenda the issue of whether non-aqueous biochemistries could be imagined [2]. But in the event, the participants did not feel comfortable in straying beyond our atmosphere, and so the debate became one of whether proteins can function in the dry or in other solvents, rather than whether other solvents can support the evolution of a non-protein equivalent of enzymes. Attempts to re-imagine biology in, say, liquid methane or ammonia have been rare [3]. An even more fundamental question, which I have never seen addressed anywhere, is whether evolution has to be Darwinian. It would be a daunting challenge to think of any better way to achieve ‘design’ and function blindly, but there is no proof that Darwin has a monopoly on such matters. Do we even need evolution? Are we absolutely sure that some kind of spontaneous self-organization can’t create life-like complexity, without the need for replication, say?
Maybe these questions are too big to be truly scientific in this form. Better, then, to break bits off them. Marcelo Gleiser and his coworkers at Dartmouth College in New Hampshire have done that in a recent preprint [4], asking whether a ‘replica Earth’ would share our left-handed proteins and right-handed nucleic acids. The handedness here refers to the mirror-image shapes of the biomolecular building blocks. The two mirror-image forms are called enantiomers, and are distinguishable by the fact that they rotate the plane of polarized light to the left or the right.
In principle, all our biochemistry could be reversed by mirror reflection of these shapes, and we’d never notice. So the question is why one set of enantiomers was preferred over the other. One possibility is that it was purely random – once the choice is made, it is fixed, because building blocks of the ‘wrong’ chirality don’t ‘fit’ when constructing organisms. Other explanations, however, suggest that life’s hand was biased at the outset, perhaps by the intrinsic left-handedness in the laws of fundamental physics, or because there was an excess of left-handed amino acids that fell to Earth on meteorites and seeded the first life (that is simply deferring the question, however).
Gleiser and his coworkers argue that these ideas may all be irrelevant. They say that an environmental disturbance that is strong and sustained enough can reset the handedness, provided handedness is propagated in the prebiotic environment by an autocatalytic process in which one enantiomer acts as a catalyst to create more of itself while blocking the chemical reaction that leads to the other. Such a self-amplifying process was proposed in 1953 by the physicist Charles Frank, and was demonstrated experimentally in the 1990s by the Japanese chemist Kenso Soai.
The US researchers show that an initially random mixture of enantiomers in such a system quickly develops patchiness, with big blobs of each enantiomer accumulating like oil separating from vinegar in an unstirred salad dressing. Chance initial variations will lead to one or other enantiomer eventually dominating. But an environmental disruption, like the planet-sterilizing giant impacts suffered by the early Earth, can shake the salad dressing, breaking up the blobs. When the process begins again, the new dominant enantiomer that emerges may be different from the one before, even if there was a small excess of the other at the outset. As a result, they say, the origin of life’s handedness “is enmeshed with Earth’s environmental history” – and is therefore purely contingent.
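Frank’s self-amplifying scheme is easy to caricature in a few lines. What follows is a toy parameterization of my own – not the spatial model Gleiser’s team actually simulates – in which each enantiomer catalyses its own production from an achiral substrate while the two enantiomers mutually annihilate:

```python
# Toy version of Frank's 1953 autocatalytic model: enantiomers L and D
# each copy themselves from an achiral substrate A (rate A*L or A*D),
# and react with each other to give an inert product (rate L*D).
# A tiny initial excess of one hand is amplified to near-total dominance.

def frank_model(l0, d0, a0=1.0, dt=0.01, steps=20000):
    """Forward-Euler integration; returns final L, D and the
    enantiomeric excess ee = (L - D) / (L + D), in [-1, 1]."""
    l, d, a = l0, d0, a0
    for _ in range(steps):
        ld = l * d                      # mutual-annihilation term
        l += (a * l - ld) * dt          # autocatalysis minus annihilation
        d += (a * d - ld) * dt
        a += -a * (l + d) * dt          # substrate consumed by both hands
    return l, d, (l - d) / (l + d)

# A 0.1% initial excess of L ends in near-complete L dominance ...
lL, dL, eeL = frank_model(0.011, 0.010)
# ... while the mirror-image starting mixture hands victory to D instead.
lD, dD, eeD = frank_model(0.010, 0.011)
```

Shaking the system – in this caricature, topping the substrate back up and re-randomizing the tiny initial excess – gives the race a fresh start, which is all the ‘resetting’ amounts to.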
Other researchers I have spoken to question whether the scheme Gleiser’s team has considered – autocatalytic spreading in an unstirred solvent – has much relevance to ‘warm little ponds’ on the turbulent young Earth, and whether the notion of resetting by shaking isn’t obvious in any case in a process like Frank’s in which chance variations get amplified. But of course, in an astrobiological context the more fundamental issue is whether there is the slightest reason to think that alien life will use amino acids and DNA, so that a comparison of handedness will be possible.
That doesn’t mean these questions aren’t worth pursuing (they remain relevant to life on Earth, at the very least). But it’s another illustration of our tendency to frame the questions parochially. In the quest for life elsewhere, whether searching for new planets or considering the molecular parameters of potential living systems, we are in some ways more akin to historians than to scientists: our data is a unique narrative, and our thinking is likely to stay trapped within it.
References
1. Gaudi, B. S. et al. Science 319, 927-930 (2008).
2. Phil. Trans. R. Soc. Lond. Ser. B special issue, 359 (no. 1448) (2004).
3. Benner, S. A. et al. Curr. Opin. Chem. Biol. 8, 672 (2004).
4. Gleiser, M. et al. http://arxiv.org/abs/0802.1446
Saturday, February 09, 2008
The hazards of saying what you mean
It’s true, the archbishop of Canterbury talking about sharia law doesn’t have much to do with science. Perhaps I’m partly just pissed off and depressed. But there is also a tenuous link insofar as this sorry affair raises the question of how much you must pander to public ignorance in talking about complex matters. Now, the archbishop does not have a way with words, it must be said. You’ve got to dig pretty deep to get at what he’s saying. One might argue that someone in his position should be a more adept communicator, although I’m not sure I or anyone else could name an archbishop who has ever wrapped his messages in gorgeous prose. But to what extent does a public figure have an obligation to explain that “when I say X, I don’t mean the common view of X based on prejudice and ignorance, but the actual meaning of X”?
You know which X I mean.
I simply don’t know whether what Rowan Williams suggests – conferring legality on common cultural practices of decision-making that have no legal basis at present – is a good idea, or a practical one. I can see a good deal of logic in the proposal that, if such practices are already being widely used, they might be made more effective, better supported and better regulated by being given more formal recognition. But it’s not clear that offering a choice between alternative systems of legal proceeding is workable, even if this need not exactly amount to multiple systems of law coexisting. My own prejudice is to worry that some such systems might have disparities that traditional Western societies would feel uncomfortable about, and that making their adoption ‘voluntary’ does not necessarily mean that everyone involved will be free to exercise that choice free of coercion. But I call this a prejudice because I do not know the facts in any depth. It is certainly troubling that some Islamic leaders have suggested there is no real desire in their communities for the kind of structure Williams has proposed.
Yet when Ruth Gledhill in the Times shows us pictures and videos of Islamist extremists, we’re entitled to conclude that there is more to her stance than disagreements of this kind. Oh, don’t be mealy-mouthed, boy: she is simply whipping up anti-Muslim hysteria. The scenes she shows have nothing to do with what Rowan Williams spoke about – but hey, let’s not forget how nutty these people are.
Well, so far so predictable. Don’t even think of looking at the Sun here or the Daily Mail. I said don’t. What is most disheartening from the point of view of a communicator, however, is the craven, complicit response in some parts of the ‘liberal’ press. In the Guardian, Andrew Brown says “it is all very well for the archbishop to explain that he does not want the term ‘sharia’ to refer to criminal punishments, but for most people that’s what the word means: something atavistic, misogynistic, cruel and foreign.” Let me rephrase that: “it is all very well for the archbishop to explain precisely what he means, but most people would prefer to remain ignorant and bigoted.”
And again: “It’s no use being an elitist if you don’t understand the [media] constraints under which an elite must operate.” Or put another way: “It’s no use being a grown-up if you don’t understand that the media demands you be immature and populist.”
And again: “there are certain things which may very well be true, and urgent and important, but which no archbishop can possibly say.” Read that as: “there are certain things which may very well be true, and urgent and important, but which as a supposed moral figurehead in society you had better keep quiet about.”
And again: “Even within his church, there is an enormous reservoir of ill-will towards Islam today, as it was part of his job to know.” Or rather, “he should realise that it’s important not to say anything that smacks of tolerance for other faiths, because that will incite all the Christian bigots.” (And it has: what do you really think synod member Alison Ruoff means when she says of Williams that “he does not stand up for the church”?)
What a dismaying and cynical take on the possibility of subtle and nuanced debate in our culture, and on the possibility of saying what you mean rather than making sure you don’t say what foolish or manipulative people will want to believe or pretend you meant. Madeleine Bunting’s article in the Guardian is, on the other hand, a sane and thoughtful analysis. But the general take on the matter in liberal circles seems to be that the archbishop needs a spin doctor. That’s what these bloody people have just spent ten years complaining about in government.
Listen, I’m an atheist, it makes no difference to me if the Church of England (created to save us from dastardly foreign meddling, you understand – Ruth Gledhill says so) wants to kick out the most humane and intelligent archie they’ve had for yonks. But if that happens because they capitulate to mass hysteria and an insistence that everyone now plays by the media’s rules, it’ll be an even sadder affair than it is already.
Friday, February 08, 2008
Waste not, want not
[This is my latest Muse column for Nature News.]
We will now go to any lengths to scavenge every last joule of energy from our environment.
As conventional energy reserves dwindle, and the environmental costs of using them take on an apocalyptic complexion, we seem to be developing the mentality of energy paupers, cherishing every penny we can scavenge and considering no source of income too lowly to exploit.
And that’s surely a good thing – it’s a shame, in fact, that it hasn’t happened sooner. While we’ve gorged on the low-hanging fruit of energy production, relishing the bounty of coal and oil that nature brewed up in the Carboniferous, this “spend spend spend” mentality was never going to see us financially secure in our dotage. It’s a curious, almost perverse fact – one feels there should be a thermodynamic explanation, though I can’t quite see it – that the most concentrated energy sources are also the most polluting, in one way or another.
Solar, wind, wave, geothermal: all these ‘clean’ energy resources are vast when integrated over the planet, but frustratingly meagre on the scales human engineering can access. Nature provides a little focusing for hydroelectric power, collecting runoff into narrow, energetic channels – but only if, like the Swiss, you’re lucky enough to have vast mountains on your doorstep.
So we now find ourselves scrambling to claw up as much of this highly dispersed green energy as we can. One of the latest wheezes uses piezoelectric plastic sheets to generate electricity from the impact of raindrops – in effect, a kind of solar cell re-imagined for rotten weather. Other efforts seek to capture the energy of vibrating machinery and bridges. Every joule, it now seems, is sacred.
That applies not just for megawatt applications but for microgeneration too. The motivations for harnessing low levels of ‘ambient’ energy at the scale of individual people are not always the same as those that apply to powering cities, but they overlap – and they are both informed by the same ethic of sustainability and of making the most of what is out there.
That’s true of a new scheme to harvest energy from human motion [1]. Researchers in Canada and the US have made a device that can be mounted on the human knee joint to mop up energy released by the body each time you swing your leg during walking. More specifically, the device can be programmed to do this only during the ‘braking’ part of the cycle, where you’re using muscle energy to slow the lower leg down. Just as in the regenerative braking of hybrid vehicles, this minimizes the extra fuel expended in sustaining motion.
While advances in materials have helped to make such systems lightweight and resilient, this new example shows that conceptual advances have played a role too. We now recognize that the movements of humans and other large animals are partly ‘passive’: rather than every motion being driven by energy-consuming motors, as they typically are in robotics, energy can be stored in flexing tissues and then released in another part of the cycle. Or better still, gravity alone may move freely hinging joints, so that some parts of the cycle seem to derive energy ‘for free’ (more precisely, the overall energy cost of the cycle is lower than it would be if every phase were actively driven).
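To get a feel for the scale involved, here is a rough back-of-envelope sketch in Python. Every number in it is an illustrative assumption of mine, not a figure from the Donelan et al. study:

```python
# Back-of-envelope estimate of braking-phase energy harvesting while walking.
# All numbers below are illustrative assumptions, not values from the study.

steps_per_second = 1.8        # assumed stride rate, both legs combined
braking_work_per_step = 10.0  # assumed 'negative' work at the knee, in joules
conversion_efficiency = 0.3   # assumed fraction captured as electricity

power_watts = steps_per_second * braking_work_per_step * conversion_efficiency
print(f"Estimated harvest: {power_watts:.1f} W")  # a few watts
```

A few watts is enough to run low-power electronics, which is why the braking phase matters: energy captured there comes at little extra metabolic cost to the walker.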
There’s more behind these efforts than simply a desire to throw away the batteries of your MP3 player as you hike along (though that’s an option). If you have a pacemaker or an implanted drug-delivery pump, you won’t relish the need for surgery every time the battery runs out. Drawing power from the body rather than from the slow discharge of an electrochemical dam seems an eminently sensible way to solve that.
The idea goes way back; cyclists will recognize the same principle at work in the dynamos that power lights from the spinning of the wheels. They’ll also recognize the problems: a bad dynamo leaves you feeling as though you’re constantly cycling uphill, squeaking as you go. What’s more, you stop at the traffic lights on a dark night, and your visibility plummets (although capacitive ‘stand-light’ facilities can now address this). And in the rain, when you most want to be seen, the damned thing starts slipping. The disparity between the evident common sense of bicycle dynamos and the rather low incidence of their use suggests that even this old and apparently straightforward energy-harvesting technology struggles to find the right balance between cost, convenience and reliability.
Cycle dynamos do, however, also illustrate one of the encouraging aspects of ambient energy scavenging: advances in electronic engineering have allowed the power consumption of many hand-held devices to drop dramatically, reducing the demands on the power source. LED bike lights need less power than old-fashioned incandescent bulbs, and a dynamo will keep them glowing brightly even if you cycle at walking pace.
Ultra-low power consumption is now crucial to some implantable medical technologies, and is arguably the key enabling factor in the development of wireless continuous-monitoring devices: ‘digital plasters’ that can be perpetually broadcasting your heartbeat and other physiological parameters to a remote alarm system while you go about your business at home [2].
In fact, a reduction in power requirements can open up entirely new potential avenues of energy scavenging. It would have been hard, in days of power-hungry electronics, to have found much use for the very low levels of electricity that can be drawn from seafloor sludge by ‘microbial batteries’, electrochemical devices that simply plug into the mud and suck up energy from the electrical gradients created by the metabolic activity of bacteria [3]. These systems can drive remote-monitoring systems in marine environments, and might even find domestic uses when engineered into waste-water systems [4].
And what could work for bacteria might work for your own cells too. Ultimately we get our metabolic energy from the chemical reaction of oxygen and glucose – basically, burning up sugar in a controlled way, mediated by enzymes. Some researchers hope to tap into that process by wiring up the relevant enzymes to electrodes and drawing off the electrons involved in the reaction, producing electrical power [5]. They’ve shown that the idea works in grapes; apes are another matter.
Such devices go beyond the harvesting of biomechanical energy. They promise to cut out the inefficiencies of muscle action, which tends to squander around three-quarters of the available metabolic energy, and simply tap straight into the powerhouses of the cell. It’s almost scary, this idea of plugging into your own body – the kind of image you might expect in a David Cronenberg movie.
These examples show that harnessing ‘people power’ and global energy generation do share some common ground. Dispersed energy sources like tidal and geothermal offer the same kinds of low-grade energy, in motion and heat gradients say, as we find in biological systems. Exploiting this on a large scale is much more constrained by economics; but there’s every reason to believe that the two fields can learn from each other.
And who knows – once you’ve felt how much energy is needed to keep your television on standby, you might be more inclined to switch it off.
References
1. Donelan, J. M. et al. Science 319, 807-810 (2008).
2. Toumazou, C. & Cass, T. Phil. Trans. R. Soc. Lond. B Biol. Sci. 362, 1321–1328 (2007).
3. Mano, N. & Heller, A. J. Am. Chem. Soc. 125, 6588-6594 (2003).
4. Logan, B. E. & Regan, J. M. Environ. Sci. Technol. 40, 5172-5180 (2006).
5. Logan, B. E. Wat. Sci. Technol. 52, 31-37 (2005).
Friday, February 01, 2008
Risky business
[My latest Muse column for Nature online news…]
Managing risk in financial markets requires a better understanding of their complex dynamics. But it’s already clear that unfettered greed makes matters worse.
It seems to be sheer coincidence that the multi-billion-dollar losses at the French bank Société Générale (SocGen), caused by the illegal dealings of rogue trader Jérôme Kerviel, come at a time of imminent global economic depression. But the conjunction has provoked discussion about whether such localized shocks to the financial market can trigger worldwide (‘systemic’) economic crises.
If so, what can be done to prevent it? Some have called for more regulation, particularly of the murky business that economists call derivatives trading and the rest of us would recognize as institutionalized gambling. “If our laws are not extended to control the new kinds of super-powerful, super-complex, and potentially super-risky investment vehicles, they will one day cause a financial disaster of global-systemic proportions”, wrote John Lanchester in the British Guardian newspaper [1]. But how well do we understand what we’d be regulating?
The French affair is in a sense timely, because ‘systemic risk’ in the financial system has become a hot topic, as witnessed by a recent report by the Federal Reserve Bank of New York (FRBNY) and the US National Academy of Sciences [2]. Worries about systemic risk are indeed largely motivated by the link to global recessions, like the one currently looming. This concern was articulated after the Great Depression of the 1930s by the British economist John Maynard Keynes, who wanted to understand how the global economy can switch from a healthy to a depressed state, both of which seemed to be stable ‘equilibrium’ states to the extent that they stick around for a while.
That terminology implies that there is some common ground with the natural sciences. In physics, a change in the global state of a system from one equilibrium configuration to another is called a phase transition, and some economists use such terms and concepts borrowed from physics to talk about market dynamics.
The analogy is potentially misleading, however, because the financial system, and the global economy generally, is never in equilibrium. Money is constantly in motion, and it’s widely recognized that instabilities such as market crashes depend in sensitive but ill-understood ways on feedbacks within the system that can act to amplify small disturbances and which enforce perpetual change. Other terms from ‘economese’, such as liquidity (the ability to exchange assets for cash), reveal an intuition of that dynamism, and indeed Keynes himself tried to develop a model of economics that relied on analogies with hydrodynamics.
Just as equilibrium spells death for living things, so the financial market is in trouble when money stops flowing. It’s when people stop investing, cutting off the bank loans that business needs to thrive, that a crisis looms. Banks themselves stay in business only if the money keeps coming in; when customers lose confidence and withdraw their cash – a ‘run on the bank’ like that witnessed recently at the UK’s Northern Rock – banks can no longer lend, have to call in existing loans at a loss, and face ruin. That prospect sets up a feedback: the more customers withdraw their money, the more others feel compelled to do so – and if the bank wasn’t in real danger of collapse at the outset, it soon is. The French government is trying to avoid that situation at SocGen, since the collapse of a bank has knock-on consequences that could wreak havoc throughout a nation’s economy, or even beyond.
In other words, bank runs may lead, as with Northern Rock, to the ironic spectacle of gung-ho advocates of the free market appealing for, or even demanding, state intervention to bail them out. As Will Hutton puts it in the Observer newspaper, “financiers have organised themselves so that actual or potential losses are picked up by somebody else – if not their clients, then the state – while profits are kept to themselves” [3]. Even measures such as deposit insurance, introduced in the US after the bank runs of the 1930s, which ensures that depositors won’t lose their money even if the bank fails, arguably exacerbate the situation by encouraging banks to take more risks, secure in the knowledge that their customers are unlikely to lose their nerve and desert them.
Economists’ attempts to understand runaway feedbacks in situations like bank runs draw on another area of the natural sciences: epidemiology. They speak of ‘contagion’: the spread of behaviours from one agent, or one part of the market, to another, like the spread of disease in a population. In bank runs, contagion can even spread to other banks: one run leads people to fear others, and this then becomes a self-fulfilling prophecy.
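The self-fulfilling feedback of a bank run lends itself to a toy simulation. The Python sketch below is my own illustration – a Granovetter-style threshold cascade, not a model drawn from the FRBNY report – in which each depositor withdraws once the fraction who have already withdrawn exceeds a personal ‘panic threshold’; all parameters are assumed:

```python
# A toy threshold model of bank-run contagion (a Granovetter-style cascade).
# This is an illustration of the feedback described above, not a model from
# the FRBNY report. Each depositor withdraws once the fraction already
# withdrawn exceeds a personal 'panic threshold'.

def final_withdrawn(thresholds, shock):
    """Iterate withdrawals to a fixed point, starting from an initial shock."""
    n = len(thresholds)
    withdrawn = shock
    while True:
        new = max(shock, sum(1 for t in thresholds if t < withdrawn) / n)
        if new == withdrawn:
            return withdrawn
        withdrawn = new

n = 1000
jittery = [0.5 * i / n for i in range(n)]     # panic thresholds all below 0.5
calm = [0.5 + 0.5 * i / n for i in range(n)]  # panic thresholds 0.5 or above

# The same 1% shock dies out among calm depositors but cascades to a
# total run among jittery ones.
print(final_withdrawn(calm, 0.01))     # the shock stays contained at 1%
print(final_withdrawn(jittery, 0.01))  # a self-fulfilling total run
```

The point of the toy is the knife-edge: nothing about the shock differs between the two banks, only the distribution of expectations – which is exactly why expectations, and not just fundamentals, matter for contagion.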
Regardless of whether the current shaky economy might be toppled by a SocGen scandal, it is clear that the financial market is in general potentially susceptible to systemic failure caused by specific, local events. The terrorist attacks on the World Trade Centre on 11 September 2001 demonstrated that, albeit in a most unusual way – for the ‘shock’ here was not a market event as such but physical destruction of its ‘hardware’. Disruption of trading activity in banks in downtown Manhattan in effect caused a bottleneck in the flow of money that had serious knock-on consequences, leading to a precipitous drop in the global financial market.
The FRBNY report [2] is a promising sign that economists seeking to understand risk are open to the ideas and tools of the natural sciences that deal with phase transitions, feedbacks and other complex nonlinear dynamics. But the bugbear of all these efforts is that ultimately the matter hinges on human behaviour. Your propensity to catch a virus is indifferent to whether you feel optimistic or pessimistic about your chances of that; but with contagion in the economy, expectations are crucial.
This is where conventional economic models run into problems. Most of the tools used in financial markets, such as how to price assets and derivatives and how to deal with risk in portfolio management, rely on the assumption that market traders respond rationally and identically on the basis of complete information about the market. This leads to mathematical models that can be solved, but it doesn’t much resemble what real agents do. For one thing, different people reach different conclusions on the basis of the same data. They tend to be overconfident, to be biased towards information that confirms their preconceptions, to have poor intuition about probabilities of rare events, and to indulge in wishful thinking [4]. The field of behavioural finance, which garnered a Nobel prize for Daniel Kahneman in 2002, shows the beginnings of an acknowledgement of these complexities in decision-making – but they haven’t yet had much impact on the tools widely used to calculate and manage risk.
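That poor intuition about rare events can be made concrete with a textbook Bayes’-rule calculation (my own illustration, with assumed numbers, not an example from the behavioural-finance literature): a warning signal for a rare market event that is 95% accurate still consists mostly of false alarms.

```python
# A standard Bayes'-rule illustration of why intuition fails for rare events.
# All probabilities below are assumed for illustration.

p_event = 0.01        # assumed base rate of the rare event
p_hit = 0.95          # P(signal | event)
p_false_alarm = 0.05  # P(signal | no event)

# Total probability of seeing the signal, then Bayes' rule.
p_signal = p_hit * p_event + p_false_alarm * (1 - p_event)
p_event_given_signal = p_hit * p_event / p_signal
print(f"P(event | signal) = {p_event_given_signal:.2f}")  # only about 0.16
```

Most people, asked to guess, say something close to 95% – the kind of systematic misjudgment that rational-agent models simply assume away.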
One can’t blame the vulnerability of the financial market on the inability of economists to model it. These poor folks are faced with a challenge of such magnitude that those working on ‘complex systems’ in the natural sciences have it easy by comparison. Yet economic models that make unrealistic assumptions about human decision-making can’t help but suggest that we need to look elsewhere to fix the weak spots. Perhaps no one can be expected to anticipate the wild, not to mention illegal, behaviour of SocGen’s Kerviel or of those who brought low the US power company Enron in 2001. But these examples are arguably only at the extreme end of a scale that is inherently biased towards high-risk activity by the very rules of engagement. State support of failing banks is just one example of the way that finance is geared to risky strategies: hedge fund managers, for example, get a hefty cut of their profits on top of a basic salary, but others pay for the losses [3]. The FRBNY’s vice president John Kambhu and his colleagues have pointed out that hedge funds (themselves a means of passing on risk) operate in a way that makes risk particularly severe and hard to manage [5].
That’s why, if understanding the financial market demands a better grasp of decision-making, with all its attendant irrationalities, it may be that managing the market to reduce risk and offer more secure public benefit requires more constraint, more checks and balances, to be put on that decision-making. We’re talking about regulation.
Free-market advocates firmly reject such ‘meddling’ on the basis that it cripples Adam Smith’s ‘invisible hand’ that guides the economy. But that hand is shaky, prone to wild gestures and sudden seizures, because it is no longer the collective hand of Smith’s sober bakers and pin-makers but that of rapacious profiteers creaming absurd wealth from deals in imaginary and incredible goods.
One suggestion is that banks and other financial institutions be required to make public how they are managing risk – basically, they should share currently proprietary information about expectations and strategies. This could reduce instability caused by each party trying to second-guess, and being forced to respond reactively, to the others. It might reduce opportunities to make high-risk killings, but the payoff would be to smooth away systemic crises of confidence. (Interestingly, the same proposal of transparency was made by nuclear scientists to Western governments after the development of the US atomic bomb, in the hope of avoiding the risks of an arms race.)
It’s true that too much regulation could be damaging, limiting the ability of the complex financial system to adapt spontaneously to absorb shocks. All the more reason to strive for a theoretical understanding of the processes involved. But experience alone tells us that it is time to move beyond Gordon Gekko’s infamous credo ‘greed is good’. One might argue that ‘a bit of greed is necessary’, but too much is liable to bend and rupture the pipes of the economy. As Hutton says [3], “We need the financiers to serve business and the economy rather than be its master.”
References
[1] Lanchester, J. ‘Dicing with disaster’, Guardian 26 January 2008.
[2] FRBNY Economic Policy Review special issue, ‘New directions for understanding systemic risk’, 13(2) (2007).
[3] Hutton, W. ‘This reckless greed of the few harms the future of the many’, Observer 27 January 2008.
[4] Anderson, J. V. in Encyclopedia of Complexity and Systems Science (Springer, in press, 2008)
[5] Kambhu, J. et al. FRBNY Economic Policy Review 13(3), 1-18 (2008).
[My latest Muse column for Nature online news…]
Managing risk in financial markets requires a better understanding of their complex dynamics. But it’s already clear that unfettered greed makes matters worse.
It seems to be sheer coincidence that the multi-billion-dollar losses at the French bank Société Générale (SocGen), caused by the illegal dealings of rogue trader Jérôme Kerviel, come at a time of imminent global economic depression. But the conjunction has provoked discussion about whether such localized shocks to the financial market can trigger worldwide (‘systemic’) economic crises.
If so, what can be done to prevent it? Some have called for more regulation, particularly of the murky business that economists call derivatives trading and the rest of us would recognize as institutionalized gambling. “If our laws are not extended to control the new kinds of super-powerful, super-complex, and potentially super-risky investment vehicles, they will one day cause a financial disaster of global-systemic proportions”, wrote John Lanchester in the British Guardian newspaper [1]. But how well do we understand what we’d be regulating?
The French affair is in a sense timely, because ‘systemic risk’ in the financial system has become a hot topic, as witnessed by a recent report by the Federal Reserve Bank of New York (FRBNY) and the US National Academy of Sciences [2]. Worries about systemic risk are indeed largely motivated by the link to global recessions, like the one currently looming. This concern was articulated after the Great Depression of the 1930s by the British economist John Maynard Keynes, who wanted to understand how the global economy can switch from a healthy to a depressed state, both of which seemed to be stable ‘equilibrium’ states to the extent that they stick around for a while.
That terminology implies that there is some common ground with the natural sciences. In physics, a change in the global state of a system from one equilibrium configuration to another is called a phase transition, and some economists use such terms and concepts borrowed from physics to talk about market dynamics.
The analogy is potentially misleading, however, because the financial system, and the global economy generally, is never in equilibrium. Money is constantly in motion, and it’s widely recognized that instabilities such as market crashes depend in sensitive but ill-understood ways on feedbacks within the system that can amplify small disturbances and enforce perpetual change. Other terms from ‘economese’, such as liquidity (the ability to exchange assets for cash), reveal an intuition of that dynamism, and indeed Keynes himself tried to develop a model of economics that relied on analogies with hydrodynamics.
Just as equilibrium spells death for living things, so the financial market is in trouble when money stops flowing. It’s when people stop investing, cutting off the bank loans that business needs to thrive, that a crisis looms. Banks themselves stay in business only if the money keeps coming in; when customers lose confidence and withdraw their cash – a ‘run on the bank’ like that witnessed recently at the UK’s Northern Rock – banks can no longer lend, have to call in existing loans at a loss, and face ruin. That prospect sets up a feedback: the more customers withdraw their money, the more others feel compelled to do so – and if the bank wasn’t in real danger of collapse at the outset, it soon is. The French government is trying to avoid that situation at SocGen, since the collapse of a bank has knock-on consequences that could wreak havoc throughout a nation’s economy, or even beyond.
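The feedback described above can be caricatured in a few lines of code. This is a minimal sketch of my own, not a model from the economics literature: each depositor’s chance of withdrawing is assumed to grow with the fraction who have already withdrawn.

```python
import random

def bank_run(depositors=1000, base_panic=0.01, feedback=2.0, steps=20, seed=1):
    """Toy sketch (assumed, not a published model): each step, a depositor's
    chance of withdrawing grows with the fraction who have already withdrawn."""
    random.seed(seed)
    withdrawn = 0
    for _ in range(steps):
        # panic = small baseline + amplification from those who already left
        p = min(1.0, base_panic + feedback * withdrawn / depositors)
        remaining = depositors - withdrawn
        withdrawn += sum(1 for _ in range(remaining) if random.random() < p)
    return withdrawn / depositors

# With the feedback term, a 1% baseline panic snowballs into a full run;
# with the feedback switched off, withdrawals stay modest.
with_feedback = bank_run(feedback=2.0)
without_feedback = bank_run(feedback=0.0)
```

The point of the toy is only that the amplification term, not the size of the initial panic, is what turns a jitter into a collapse.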
In other words, bank runs may lead, as with Northern Rock, to the ironic spectacle of gung-ho advocates of the free market appealing for, or even demanding, state intervention to bail them out. As Will Hutton puts it in the Observer newspaper, “financiers have organised themselves so that actual or potential losses are picked up by somebody else - if not their clients, then the state - while profits are kept to themselves” [3]. Even measures such as deposit insurance, introduced in the US after the bank runs of the 1930s, which ensures that depositors won’t lose their money even if the bank fails, arguably exacerbate the situation by encouraging banks to take more risks, secure in the knowledge that their customers are unlikely to lose their nerve and desert them.
Economists’ attempts to understand runaway feedbacks in situations like bank runs draw on another area of the natural sciences: epidemiology. They speak of ‘contagion’: the spread of behaviours from one agent, or one part of the market, to another, like the spread of disease in a population. In bank runs, contagion can even spread to other banks: one run leads people to fear others, and this then becomes a self-fulfilling prophecy.
Regardless of whether the current shaky economy might be toppled by a SocGen scandal, it is clear that the financial market is in general potentially susceptible to systemic failure caused by specific, local events. The terrorist attacks on the World Trade Centre on 11 September 2001 demonstrated that, albeit in a most unusual way – for the ‘shock’ here was not a market event as such but physical destruction of its ‘hardware’. Disruption of trading activity in banks in downtown Manhattan in effect caused a bottleneck in the flow of money that had serious knock-on consequences, leading to a precipitous drop in the global financial market.
The FRBNY report [2] is a promising sign that economists seeking to understand risk are open to the ideas and tools of the natural sciences that deal with phase transitions, feedbacks and other complex nonlinear dynamics. But the bugbear of all these efforts is that ultimately the matter hinges on human behaviour. Your propensity to catch a virus is indifferent to whether you feel optimistic or pessimistic about your chances of catching it; but with contagion in the economy, expectations are crucial.
This is where conventional economic models run into problems. Most of the tools used in financial markets, such as those for pricing assets and derivatives or for managing risk in portfolios, rely on the assumption that market traders respond rationally and identically on the basis of complete information about the market. This leads to mathematical models that can be solved, but it doesn’t much resemble what real traders do. For one thing, different people reach different conclusions on the basis of the same data. They tend to be overconfident, to be biased towards information that confirms their preconceptions, to have poor intuition about probabilities of rare events, and to indulge in wishful thinking [4]. The field of behavioural finance, which garnered a Nobel prize for Daniel Kahneman in 2002, shows the beginnings of an acknowledgement of these complexities in decision-making – but they haven’t yet had much impact on the tools widely used to calculate and manage risk.
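The contrast with the rational-identical-agent assumption can be made concrete with a toy sketch, again of my own devising rather than anything from behavioural-finance canon: several agents are shown the same sequence of price changes, but each weighs it with its own (randomly drawn) optimism and recency bias, so identical data yield divergent forecasts.

```python
import random

def forecasts(price_changes, n_agents=5, seed=3):
    """Toy caricature (assumed): agents see the same data but weigh it
    with different optimism and recency biases, so conclusions diverge."""
    random.seed(seed)
    views = []
    for _ in range(n_agents):
        optimism = random.uniform(-0.5, 0.5)   # wishful thinking
        recency = random.uniform(0.5, 2.0)     # overweighting of recent news
        weighted = sum(c * recency ** i for i, c in enumerate(price_changes))
        views.append(weighted / len(price_changes) + optimism)
    return views

# Identical data, divergent conclusions:
views = forecasts([0.1, -0.2, 0.3, 0.1])
```

A model built from identical rational agents would return one number here; the spread of views is precisely what the standard tools assume away.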
One can’t blame the vulnerability of the financial market on the inability of economists to model it. These poor folks are faced with a challenge of such magnitude that those working on ‘complex systems’ in the natural sciences have it easy by comparison. Yet economic models that make unrealistic assumptions about human decision-making can’t help but suggest that we need to look elsewhere to fix the weak spots. Perhaps no one can be expected to anticipate the wild, not to mention illegal, behaviour of SocGen’s Kerviel or of those who brought low the US energy company Enron in 2001. But these examples are arguably only at the extreme end of a scale that is inherently biased towards high-risk activity by the very rules of engagement. State support of failing banks is just one example of the way that finance is geared to risky strategies: hedge fund managers, for example, get a hefty cut of their profits on top of a basic salary, but others pay for the losses [3]. The FRBNY’s vice president John Kambhu and his colleagues have pointed out that hedge funds (themselves a means of passing on risk) operate in a way that makes risk particularly severe and hard to manage [5].
That’s why, if understanding the financial market demands a better grasp of decision-making, with all its attendant irrationalities, it may be that managing the market to reduce risk and offer more secure public benefit requires more constraint, more checks and balances, to be put on that decision-making. We’re talking about regulation.
Free-market advocates firmly reject such ‘meddling’ on the basis that it cripples Adam Smith’s ‘invisible hand’ that guides the economy. But that hand is shaky, prone to wild gestures and sudden seizures, because it is no longer the collective hand of Smith’s sober bakers and pin-makers but that of rapacious profiteers creaming absurd wealth from deals in imaginary and incredible goods.
One suggestion is that banks and other financial institutions be required to make public how they are managing risk – basically, they should share currently proprietary information about expectations and strategies. This could reduce the instability caused by each party trying to second-guess the others and being forced to respond reactively to them. It might reduce opportunities to make high-risk killings, but the payoff would be to smooth away systemic crises of confidence. (Interestingly, the same proposal of transparency was made by nuclear scientists to Western governments after the development of the US atomic bomb, in the hope of avoiding the risks of an arms race.)
It’s true that too much regulation could be damaging, limiting the ability of the complex financial system to adapt spontaneously to absorb shocks. All the more reason to strive for a theoretical understanding of the processes involved. But experience alone tells us that it is time to move beyond Gordon Gekko’s infamous credo ‘greed is good’. One might argue that ‘a bit of greed is necessary’, but too much is liable to bend and rupture the pipes of the economy. As Hutton says [3], “We need the financiers to serve business and the economy rather than be its master.”
References
[1] Lanchester, J. ‘Dicing with disaster’, Guardian 26 January 2008.
[2] FRBNY Economic Policy Review special issue, ‘New directions for understanding systemic risk’, 13(2) (2007).
[3] Hutton, W. ‘This reckless greed of the few harms the future of the many’, Observer 27 January 2008.
[4] Anderson, J. V. in Encyclopedia of Complexity and Systems Science (Springer, in press, 2008).
[5] Kambhu, J. et al. FRBNY Economic Policy Review 13(3), 1-18 (2008).