Nature versus naturoid
[This is my Materials Witness column for the January 2009 issue of Nature Materials.]
Are there metameric devices in the same way that there are metameric colours? The latter are colours that look identical to the eye but have different spectra. Might we make devices that, while made up of different components, perform identically?
Of course we can, you might say. A vacuum tube performs the same function as a semiconductor diode. Clocks can be driven by springs or batteries. But the answer may depend on how much similarity you want. Semiconductor diodes will survive a fall on a hard floor. Battery-operated clocks don’t need winding. And what about something considerably more ambitious, such as an artificial heart?
These thoughts are prompted by a recent article by sociologist Massimo Negrotti of the University of Urbino in Italy (Design Issues 24(4), 26-36; 2008). Negrotti has for several years pondered the question of what, in science and engineering, is commonly called biomimesis, trying to develop a general framework for what this entails and what its limitations might be. His vision is informed less by the usual engineering concern, evident in materials science, to learn from nature and imitate its clever solutions to design problems, than by the ambition to develop something akin to a philosophy of the artificial, analogous to (but different from) the one expounded by Herbert Simon in his 1969 book The Sciences of the Artificial.
To this end, Negrotti has coined the term ‘naturoid’ to describe “all devices that are designed with natural objects in mind, by means of materials and building procedures that differ from those that nature adopts.” A naturoid could be a robot, but also a synthetic-polymer-based enzyme, an artificial-intelligence program, even a simulant of a natural odour. This concept was explored in Negrotti’s 2002 book Naturoids: On the Nature of the Artificial (World Scientific, New Jersey).
Can one say anything useful about a category so broad? That might remain a matter of taste. But Negrotti’s systematic analysis of the issues has the virtue of stripping away some of the illusions and myths that attach to attempts to ‘copy nature’.
It won’t surprise anyone that these attempts will always fall short of perfect mimicry; indeed that is often explicitly not intended. Biomimetic materials are generally imitating just one function of a biological material or structure, such as adhesion or toughness. Negrotti calls this the ‘essential performance’, which itself implies also a selected ‘observation level’ – we might make the comparison solely at the level of bulk mechanical behaviour, irrespective of, say, microstructure or chemical composition.
This inevitably means that the mimicry breaks down at some other observation level, just as colour metamerism can fail depending on the observing conditions (daylight or artificial illumination, say, or different viewing angles).
This reasoning leads Negrotti to conclude that there is no reason to suppose the capacities of naturoids can ever converge on those of the natural models. In particular, the idea that robots and computers will become ever more humanoid in features and function, forecast by some prophets of AI, has no scientific foundation.
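The colour metamerism invoked at the top is easy to make quantitative. Here is a minimal numerical sketch of my own: the three Gaussian ‘cone’ sensitivities are toy stand-ins for real colour-matching functions, and the second spectrum is built by adding a ‘metameric black’, a spectral difference that lies in the null space of the sensitivities and so is invisible to all three sensors.

```python
import numpy as np

# Toy demonstration of metamerism: two different spectra that give identical
# responses from three 'cone' sensitivities. The Gaussian sensitivities are
# stand-ins for real colour-matching functions, purely for illustration.
wav = np.linspace(400, 700, 301)                 # wavelength grid, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wav - mu) / sigma) ** 2)

S = np.vstack([gauss(600, 40), gauss(550, 40), gauss(450, 30)])   # 3 x N sensitivities
spectrum1 = gauss(520, 60) + 0.5 * gauss(630, 50)                 # some light spectrum

# A 'metameric black': a spectral difference in the null space of S, hence
# invisible to all three sensors. Adding it changes the spectrum, not the colour.
rng = np.random.default_rng(0)
d = rng.normal(size=wav.size)
d -= S.T @ np.linalg.solve(S @ S.T, S @ d)       # remove the visible component
spectrum2 = spectrum1 + 0.3 * d / np.abs(d).max()

print(S @ spectrum1)                             # sensor responses...
print(S @ spectrum2)                             # ...identical to numerical precision
print(np.abs(spectrum1 - spectrum2).max())       # yet the spectra clearly differ
```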
Sunday, December 21, 2008
Dark matter and DIY genomics
[This is my column for the January 2009 issue of Prospect.]
Physicists’ understandable embarrassment that we don’t know what most of the universe is made of prompts an eagerness, verging on desperation, to identify the missing ingredients. Dark energy – the stuff apparently causing an acceleration of cosmic expansion – is currently a matter of mere speculation, but dark matter, which is thought to comprise around 85 percent of tangible material, is very much on the experimental agenda. This invisible substance is inferred on several grounds, especially that galaxies ought to fall apart without its gravitational influence. The favourite idea is that dark matter consists of unknown fundamental particles that barely interact with visible matter – hence its elusiveness.
One candidate is a particle predicted by theories that invoke extra dimensions of spacetime (beyond the familiar four). So there was much excitement at the recent suggestion that the signature of these particles has been detected in cosmic rays, which are electrically charged particles (mostly protons and electrons) that whiz through all of space. Cosmic rays can be detected when they collide with atoms in the Earth’s atmosphere. Some are probably produced in high-energy astrophysical environments such as supernovae and neutron stars, but their origins are poorly understood.
An international experiment called ATIC, which floats balloon-borne cosmic-ray detectors high over Antarctica, has found an unexpected excess of cosmic-ray electrons with high energies, which might be the debris of collisions between the hypothetical dark-matter particles. That’s the sexy interpretation. They might instead come from more conventional sources, although it’s not then clear whence this excess above the normal cosmic-ray background arises.
The matter is further complicated by an independent finding, from a detector called Milagro near Los Alamos in New Mexico, that high-energy cosmic-ray protons seem to be concentrated in a couple of bright patches in the sky. It’s not clear if the two results are related, but if the ATIC electrons come from the same source as the Milagro protons, that rules out dark matter, which is expected to produce no such patchiness. On the other hand, no other source is expected to do so either. It’s all very perplexing, but nonetheless a demonstration that cosmic rays, whose energies can exceed those of equivalent particles in Cern’s new Large Hadron Collider, offer an unparalleled natural resource for particle physicists.
*****
A Californian biotech company is promising, within five years, to be able to sequence your entire personal genome while you wait. In under an hour, a doctor could deduce from a swab or blood sample all of your genetic predispositions to disease. At least, that’s the theory.
Pacific Biosciences in Menlo Park has developed a technique for replicating a piece of DNA in a form that contains fluorescent chemical markers attached to each ‘base’, the fundamental building block of genes. Each of the four types of base gets a differently coloured marker, and so the DNA sequence – the arrangement of bases along the strand – can be discerned as a string of fairy lights, using a microchip-based light sensor that can image individual molecules.
With a readout rate of about 4.7 bases per second, the method would currently take much longer than an hour to sequence all three billion bases of a human genome. And it is plagued by errors – mistakes about the ‘colour’ of the fluorescent markers – which might wrongly identify as many as one in five of the bases. But these are early days; the basic technology evidently works. The company hopes to start selling commercial products by 2010.
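For a sense of the gap between that readout rate and the promised hour, here is my own back-of-the-envelope arithmetic using the figures above, assuming a single strand read sequentially:

```python
# A single strand read at 4.7 bases per second, versus a 3-billion-base genome:
bases = 3e9
rate = 4.7                                       # bases per second, per strand
seconds = bases / rate
years = seconds / (3600 * 24 * 365.25)
print(f"{seconds:.1e} s, i.e. about {years:.0f} years")   # ~6.4e8 s, roughly 20 years

# Hitting 'under an hour' at that per-strand rate implies reading well over a
# hundred thousand strands in parallel:
print(f"strands needed: {bases / 3600 / rate:,.0f}")       # ~177,000
```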
Faster genome sequencing should do wonders for our fundamental understanding of, say, the relationships between species and how these have evolved, or the role of genetic diversity in human populations. There’s no doubt that it would be valuable in medicine too – for example, potential drugs that are currently unusable because of genetically based side-effects in a minority of cases could be rescued by screening that identifies those at risk. But many researchers admit that the notion of a genome-centred ‘personalized medicine’ is easily over-hyped. Not all diseases have a genetic component, and those that do may involve complex, poorly understood interactions of many genes. Worse still, DIY sequencing kits could saddle people with genetic data that they don’t know how to interpret or deal with, as well as running into a legal morass about privacy and disclosure. At this rate, the technology is far ahead of the ethics.
*****
Besides, it is becoming increasingly clear that the programme encoded in genes can be over-ridden: to put it crudely, an organism can ‘disobey’ its genes. There are now many examples of ‘epigenetic’ inheritance, in which phenotypic characteristics (hair colour, say, or susceptibility to certain diseases) can be manifested or suppressed despite a genetic imperative to the contrary (see Prospect May 2008). Commonly, epigenetic inheritance is induced by small strands of RNA, the intermediary between genes and the proteins they encode, which are acquired directly from a parent and can modify the effect of genes in the offspring.
An American team have now shown a new type of such behaviour, in which a rogue gene that can cause sterility in crossbreeds of wild and laboratory-bred fruit flies may be silenced by RNA molecules if the gene is maternally inherited, maintaining fertility in the offspring despite a ‘genetic’ sterility. Most strikingly, this effect may depend on the conditions in which the mothers are reared: warmth boosts the fertility of progeny. It’s not exactly inheritance of acquired characteristics, but is a reminder, amidst the impending Darwin celebrations, of how complicated the story of heredity has now become.
Monday, December 08, 2008
Who knows what ET is thinking?
[My early New Year resolution is to stop giving my Nature colleagues a hard time by forcing them to edit stories that are twice as long as they should be. It won’t stop me writing them that way (so that I can stick them up here), but at least I should do the surgery myself. Here is the initial version of my latest Muse column, before it was given a much-needed shave.]
Attempts to identify the signs of astro-engineering by advanced civilizations aren’t exactly scientific. But it would be sad to rule them out on that score.
“Where is everybody?” Fermi’s famous question about intelligent extraterrestrials still taunts us. Even if the appearance of intelligent life is rare, the vast numbers of Sun-like stars in the Milky Way alone should compensate overwhelmingly, and make it a near certainty that we are not alone. So why does it look that way?
Everyone likes a good Fermi story, but this account of the origins of the ‘Fermi Paradox’ does seem to be true [1]. In the summer of 1950, Fermi was walking to lunch at Los Alamos with Edward Teller, Emil Konopinski and Herbert York. They were discussing a recent spate of UFO reports, and Konopinski recalled a cartoon he had seen in the New Yorker blaming the disappearance of garbage bins from the streets of New York City on extraterrestrials. And so the group fell to debating the feasibility of faster-than-light travel (which Fermi considered quite likely to be found soon). Then they sat down to lunch and spoke of other things.
Suddenly, Fermi piped up, out of the blue, with his question. Everyone knew what he meant, and they laughed. Fermi apparently then did a back-of-the-envelope calculation (his forte) to show that we should have been visited by aliens long ago. Since we haven’t been (nobody mention Erich von Däniken, please), this must mean either that interstellar travel is impossible, or deemed not worthwhile, or that technological civilizations don’t last long.
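A common reconstruction of that back-of-the-envelope argument runs roughly as follows; the numbers in this sketch are purely illustrative, not Fermi’s own.

```python
# Even a slow wave of interstellar settlement crosses the Galaxy in a time
# that is tiny compared with the Galaxy's age (all figures illustrative).
ly = 9.46e15                         # metres per light year
galaxy_diameter = 100_000 * ly       # rough extent of the Milky Way, m
ship_speed = 0.01 * 3.0e8            # 1 per cent of light speed, m/s
hop_distance = 5 * ly                # typical spacing between settled systems, m
pause_per_hop = 500.0                # years spent at each system before moving on

travel_years = galaxy_diameter / ship_speed / (3600 * 24 * 365.25)
pause_years = (galaxy_diameter / hop_distance) * pause_per_hop
total = travel_years + pause_years
print(f"crossing time ~ {total:.1e} years")                            # ~2e7 years
print(f"fraction of the Galaxy's ~1e10-year age: {total / 1e10:.2%}")  # ~0.2%
```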
Fermi’s thinking was formalized and fleshed out in the 1960s by astronomer Frank Drake of Cornell University, whose celebrated equation estimates the probability of extraterrestrial technological civilizations in our galaxy by breaking it down into the product of the various factors involved: the fraction of habitable planets, the number of them on which life appears, and so on.
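In code, the Drake equation is nothing more than a product of guesses; every input below is an illustrative placeholder rather than a value endorsed by anyone quoted here.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=7,      # star-formation rate, stars per year
          f_p=0.5,       # fraction of stars with planets
          n_e=2,         # habitable planets per system with planets
          f_l=0.3,       # fraction of those on which life appears
          f_i=0.01,      # fraction of those that evolve intelligence
          f_c=0.1,       # fraction that release detectable signals
          L=10_000)      # years a civilization remains detectable
print(N)                 # ~21 with these made-up inputs
```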
Meanwhile, the question of extraterrestrial visits was broadened into the problem of whether we can see signs of technological civilizations from afar, for example via radio broadcasts of the sort that are currently sought by the SETI Institute, based in Mountain View, California. This raises the issue of whether we would know signs of intelligence if we saw them. The usual assumption is that a civilization aiming to communicate would broadcast some distinctive universal pattern such as an encoding of the mathematical constant pi.
A new angle on that issue is now provided in a preprint [2] by physicist Richard Carrigan of (appropriately enough) the Fermi National Accelerator Laboratory in Batavia, Illinois. He has combed through the data from 250,000 astronomical sources found by the IRAS infrared satellite – which scanned 96 percent of the sky – to look for the signature of solar systems that have been technologically manipulated after a fashion proposed in the 1960s by physicist Freeman Dyson.
Dyson suggested that a sufficiently advanced civilization would baulk at the prospect of its star’s energy being mostly radiated uselessly into space. They could capture it, he said, by breaking up other planets in the solar system into rubble that formed a spherical shell around the star, creating a surface on which the solar energy could be harvested [3].
Can we see a Dyson Sphere from outside? It would be warm, re-radiating some of the star’s energy at a much lower temperature – for a shell with a radius of the Earth’s orbit around a Sun-like star, the temperature should be around 300 K. This would show up as a far-infrared object unlike any other currently known. If Dyson spheres exist in our galaxy, said Dyson, we should be able to see them – and he proposed that we look.
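The ‘around 300 K’ figure is essentially a Stefan-Boltzmann energy balance: in steady state the shell’s outer surface must re-radiate the star’s whole luminosity. Here is my own back-of-the-envelope version of that estimate; the assumed radii are illustrative.

```python
import math

# sigma * T**4 * 4*pi*R**2 ~ L   =>   T ~ (L / (4*pi*sigma*R**2))**0.25
sigma = 5.670e-8             # Stefan-Boltzmann constant, W m^-2 K^-4
L_sun = 3.828e26             # luminosity of a Sun-like star, W
AU = 1.496e11                # radius of Earth's orbit, m

def shell_temperature(R):
    return (L_sun / (4 * math.pi * sigma * R**2)) ** 0.25

for R_au in (1.0, 1.5, 2.0):
    print(f"R = {R_au} AU  ->  T ~ {shell_temperature(R_au * AU):.0f} K")
# ~390 K at 1 AU, falling to ~280 K at 2 AU: a warm far-infrared object,
# quite unlike an ordinary star.
```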
That’s what Carrigan has done. He reported a preliminary search in 2004 [4], but the new data set is sufficient to spot any Dyson Spheres around sun-like bodies out to 300 parsecs – a volume that encompasses a million such stars. It will probably surprise no one that Carrigan finds no compelling candidates. One complication is that some types of star can resemble a Dyson Sphere, such as those in the late stages of their evolution when they become surrounded by thick dust clouds. But there are ways to weed these out, for example by looking at the spectral signatures such objects are expected to exhibit. Winnowing out such false positives left just 17 candidate objects, of which most, indeed perhaps all, could be given more conventional interpretations. It’s not quite the same as saying that the results are wholly negative – Carrigan argues that the handful of remaining candidates warrant closer inspection – but there’s currently no reason to suppose that there are indeed Dyson Spheres out there.
Dyson says that he didn’t imagine in 1960 that a search like this would be complicated by so many natural mimics of Dyson Spheres. “I had no idea that the sky would be crawling with millions of natural infrared sources”, he says. “So a search for artificial sources seemed reasonable. But after IRAS scanned the sky and found a huge number of natural sources, a search for artificial sources based on infrared data alone was obviously hopeless.”
All the same, he feels that Carrigan may be rather too stringent in whittling down the list of candidates. Carrigan basically excludes any source that doesn’t radiate energy pretty much like a ‘black body’. “I see no reason to expect that an artificial source should have a Planck [black-body] spectrum”, says Dyson. “The spectrum will depend on many unpredictable factors, such as the paint on the outside of the radiating surface.”
So although he agrees that there is no evidence that any of the IRAS sources is artificial, he says that “I do not agree that there is evidence that all of them are natural. There are many IRAS sources for which there is no evidence either way.”
Yet the obvious question hanging over all of this is: who says advanced extraterrestrials will want to make Dyson Spheres anyway? Dyson’s proposal carries a raft of assumptions about the energy requirements and sources of such a civilization. It seems an enormously hubristic assumption that we can second-guess what beings considerably more technologically advanced than us will choose to do (which, in fairness, was never Dyson’s aim). After all, history shows that we find it hard enough to predict where technology will take us in just a hundred years’ time.
Carrigan concedes that it’s a long shot: “It is hard to predict anything about some other civilization”. But he says that the attraction of looking for the Dyson Sphere signature is that “it is a fairly clean case of an astroengineering project that could be observable.”
Yet the fact is that we know absolutely nothing about civilizations more technologically advanced than ours. In that sense, while it might be fun to speculate about what is physically possible, one might charge that this strays beyond science. The Drake equation has itself been criticized as being unfalsifiable, even a ‘religion’ according to Michael Crichton, the late science-fiction writer.
All that is an old debate. But it might be more accurate to say that what we really have here is an attempt to extract knowledge from ignorance: to apply the trappings of science, such as equations and data sets, to an arena where there is nothing to build on.
There are, however, some conceptual – one might say philosophical – underpinnings to the argument. By assuming that human reasoning and agendas can be extrapolated to extraterrestrials, Dyson was in a sense leaning on the Copernican principle, which assumes that the human situation is representative rather than extraordinary. It has recently been proposed [5,6] that this principle may be put to the experimental test in a different context, to examine whether our cosmic neighbourhood is or is not unusual – whether we are, say, at the centre of a large void, which might provide a prosaic, ‘local’ explanation for the apparent cosmic acceleration that motivates the idea of dark energy.
But the Copernican principle can be considered to have a broader application than merely the geographical. Astrophysicist George Ellis has pointed out how arguments over the apparent fine-tuning of the universe – the fact, for example, that the ratio of the observed to the theoretical ‘vacuum energy’ is the absurdly small 10^-120 rather than the more understandable zero – entail an assumption that our universe should not be ‘extraordinary’. With a sample of one, says Ellis, there is no logical justification for that belief: ‘there simply is no proof the universe is probable’ [7]. He argues that cosmological theories that use the fine-tuning as justification are therefore drawing on philosophical rather than scientific arguments.
It would be wrong to imagine that a question lies beyond the grasp of science just because it seems very remote and difficult – we now have well-motivated accounts of the origins of the moon, the solar system, and the universe itself from just a fraction of a second onward. But when contingency is involved – in the origin of life, say, or some aspects of evolution, or predictions of the future – the dangers of trying to do science in the absence of discriminating evidence are real. It becomes a little like trying to figure out the language of Neanderthals, or the thoughts of Moses.
It is hard to see that a survey like Carrigan’s could ever claim definitive, or even persuasive, proof of a Dyson Sphere; in that sense, the hypothesis that the paper probes might indeed be called ‘unscientific’ in a Popperian sense. And in the end, the Fermi Paradox that motivates it is not a scientific proposition either, because we know precisely nothing about the motives of other civilizations. Astronomer Glen David Brin suggested in 1983, for example, that they might opt to stay hidden from less advanced worlds, like adults speaking softly in a nursery ‘lest they disturb the infant’s extravagant and colourful time of dreaming’ [8]. We simply don’t know if there is a paradox at all.
But how sad it would be to declare out of scientific bounds speculations like Dyson’s, or experimental searches like Carrigan’s. So long as we see them for what they are, efforts to gain a foothold on metaphysical questions are surely a valid part of the playful creativity of the sciences.
References
1. Jones, E. M. Los Alamos National Laboratory report LA-10311-MS (1985).
2. Carrigan, R. Preprint at http://arxiv.org/abs/0811.2376 (2008).
3. Dyson, F. J. Science 131, 1667-1668 (1960).
4. Carrigan, R. IAC-04-IAA-1.1.1.06, 55th International Astronautical Congress, Vancouver (2004).
5. Caldwell, R. R. & Stebbins, A. Phys. Rev. Lett. 100, 191302 (2008).
6. Clifton, T., Ferreira, P. G. & Land, K. Phys. Rev. Lett. 101, 131302 (2008).
7. Ellis, G. F. R. http://arxiv.org/abs/0811.3529 (2008).
8. Brin, G. D. Q. J. R. Astr. Soc. 24, 283-309 (1983).
Una poca lettura
For any of you who read Italian (I am sure there are many), there is a little essay of mine up on the Italian science & culture site Fortepiano here. This is basically the text of the short talk I gave in Turin for the receipt of one of the Lagrange prizes for complexity last April. At least, I hope it is – my Italian is non-existent, I fear. Which is a shame, because the Fortepiano site looks kind of intriguing.
Thursday, November 20, 2008
DIY economics
There I am, performing a bit of rotary sanding on top of a piece of the newspaper that I’d considered disposable (the Business pages of the Guardian), when something catches my eye. Namely, a reference to ‘the science weekly Nature’. What’s all this?
It is an article by the Guardian’s Management Editor Simon Caulkin, explaining why ‘self-interest is bad for the economy’. Needless to say, that’s not quite right, and presumably not quite intended. The economy relies on self-interest. What Caulkin is really saying is that self-interest without restraint or regulation is bad for the economy, especially when it generates the kind of absurd salaries that promote reckless short-termism and erosion of trust (not to mention outright arrogant malpractice). Caulkin rightly points out that Adam Smith never condoned any such unfettered selfishness.
But where does Nature feature in this? Caulkin refers to the recent article by Jean-Philippe Bouchaud that points to some of the shortcomings of conventional economic thinking, based as it is on unproven (indeed, fallacious) axioms. In physics, models that don’t fit with reality are thrown out. “Not so in economics”, says Caulkin, “whose central tenets – rational agents, the invisible hand, efficient markets – derive from economic work done in the 1950s and 1960s”. Bouchaud says that these, in hindsight, look “more like propaganda against communism than plausible science” (did anyone hear Hayek’s name whispered just then?).
Now, the last time I said any such thing (with, I hope, a little more circumspection), I was told by several economists (here and here) that this was a caricature of what economists think, and that I was just making it up. Economists know that markets are often not efficient! They know that agents aren’t always rational (in the economic sense)! Get up to date, man! Look at the recent Nobel prizes!
In fact I had wanted, in my FT article above, to mention the curious paradox that several recent Nobels have been for work decidedly outside the neoclassical paradigm, while much of economics labours doggedly within it. But there was no room. In any event, there is some justification in such responses, if the implication (not my intention) is that all economists still think as they did in the 1950s. These days I am happy to be more irenic, not only because that’s the sort of fellow I am but because it seems to me that thoughtful, progressive economists and those who challenge the neoclassical ‘rational agent’ tradition from outside should be natural allies, not foes, in the fight against the use of debased economic ideas in policy making.
But look, economists: do you think all is really so fine when a journalist paid to comment on the economy (and not just some trumped-up physicist-cum-science writer) not only possesses these views about your discipline but regards it as something of an eye-opener when someone points out in a science journal that the economy is not like this at all? Are you still so complacently sure that you are communicating your penetrating insights about economic markets to the world beyond? Are you so sure that your views are common knowledge not just in academia but to the people who actually run the economy? Maybe you are. But Nobel laureates like Joe Stiglitz and Paul Krugman aren’t.
Thursday, November 06, 2008
What you don’t learn at school about the economy
[Why, you might wonder, would I want to expose myself to more flak from economists by writing this column for Nature’s online news? Well, if any economists get to see it at all, I hope they will recognize that it is not an attack at all but a call to make common cause in driving out simplistic myths from the world of economic policy. I was particularly taken with the article by Fred Argy on the use of economic theory in policy advising, and I guess I am moved to what I hope is the irenic position of recognizing that his statement that ‘economics does not lend itself to doctrinaire policy assertions’ should apply equally to criticisms of traditional economic theory. The world is too complex to become dogmatic about this stuff. My goodness, though, that seems unlikely to deter the pundits, from what I have seen.
Below, as usual, is the ‘long’ version of the column…]
When the sophisticated theories of economics get vulgarized into policy-making tools, that spells trouble for us all.
The column inches devoted to the global financial crisis must be running now into miles, and yet one could be forgiven for concluding that we are little the wiser. When one group of several hundred academic economists opposes the US Treasury’s bank bail-out and another equally eminent group supports it, how is the ordinary person supposed to decide about what is the right response to the financial mess we’re in, or indeed what caused it in the first place?
Yet some things seem clear. A largely unregulated market has clearly failed to deliver the optimal behaviour that conventional theories of economic competition promise it should. Most commentators now acknowledge that new types of financial regulation are needed, and the (for want of a better word) liberal pundits are seizing the chance to denounce the alleged superiority of the free market and to question whether Adam Smith’s invisible hand is anything but a libertarian fantasy.
Behind all this, however, is the question of why the free market hasn’t done what it is supposed to. In the New Statesman, economics Nobel laureate Joseph Stiglitz recently offered an answer: “For over a quarter of a century, we have known that Smith’s conclusions do not hold when there is imperfect information – and all markets, especially financial markets, are characterised by information imperfections.” For this reason, Stiglitz concludes, “The reason the invisible hand often seems invisible is that it is not there.”
Now, some might say that Stiglitz would say this, because analysing the effects of imperfect information is what won him his Nobel in 2001. In short, the traditional ‘neoclassical’ microeconomic models developed in the first half of the twentieth century assumed that all agents have ‘perfect’ information: they all know everything about what is being bought and sold.
This assumption makes the models mathematically tractable, not least because it ensures that all agents are identical. The idea is then that these agents use this information to deduce the actions that will maximize their ‘utility’, often synonymous with wealth or profit.
Under such conditions of perfect competition, the self-interested actions of market agents create an optimally efficient market in which asset prices attain their ‘correct’ value and supply matches demand. This is the principle notoriously summarized by Gordon Gekko in the movie Wall Street: greed is good, because it benefits society. That idea harks back to the French philosopher Bernard Mandeville, who argued semi-humorously in 1705 that ‘private vices’ have ‘public benefits.’
Stiglitz and his co-laureates George Akerlof and Michael Spence showed what can go wrong in this tidy picture when (as in the real world) market agents don’t know everything, and some know more than others. With these ‘asymmetries’ of information, the market may then no longer be ‘efficient’ at all, so that for example poor products can crowd out good ones.
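The canonical illustration is Akerlof’s ‘market for lemons’. The toy simulation below is my own sketch of that unravelling, with made-up assumptions (uniform quality, a 1.2x buyer premium, sellers who will not part with a car for less than its true worth); it is meant only to show the mechanism, not anyone’s actual model.

```python
import numpy as np

# Toy 'market for lemons': sellers know their own car's quality and will not
# sell below it; buyers only observe the average quality of cars currently
# offered and bid 1.2x that average. All parameters are illustrative.
rng = np.random.default_rng(1)
quality = rng.uniform(0.0, 1.0, 10_000)          # privately known quality
offered = np.ones(quality.size, dtype=bool)

for step in range(8):
    price = 1.2 * quality[offered].mean()        # buyers bid the expected value
    offered = quality <= price                   # owners of better cars withdraw
    print(f"round {step}: price {price:.3f}, "
          f"mean quality on offer {quality[offered].mean():.3f}, "
          f"cars offered {offered.sum()}")
# The price and the quality of what is traded ratchet downward together:
# with asymmetric information, bad products crowd out good ones.
```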
Now, as Stiglitz says, this has been known for decades. So why isn’t it heeded? “Many of the problems our economy faces”, says Stiglitz, “are the result of the use of misguided models. Unfortunately, too many [economic policy-makers] took the overly simplistic models of courses in the principles of economics (which typically assume perfect information) and assumed they could use them as a basis for economic policy.”
Economist Steve Cohn, now at Knox College in Galesburg, Illinois, echoes this view about the failures of basic economic education: “More than one million students take principles of economics classes annually in the United States. These courses will be the main contact with formal economic theory for most undergraduates and will influence how they think about economic issues. Only a few percent of all students studying introductory microeconomics will likely use a textbook that seriously challenges the neoclassical paradigm.”
And there lies the problem. When people criticize economics for its reliance on traditional models that, while occasionally applicable in some special cases, are just plain wrong in general, the usual response is that this is a mere caricature of the discipline and that of course economists know all about the shortcomings of those models. Look, they say, at the way people like Stiglitz have been rewarded for pointing them out.
Fine. So why are the traditional models still taught as a meaningful first approximation to economics students who may never encounter the caveats before graduating and becoming financiers and policy advisers? Why has free-market fundamentalism become an unqualified act of faith among many pundits and advisers, particularly in the US, as this year’s Nobel laureate Paul Krugman has explained in his 1994 book Peddling Prosperity? Why are models still used that cannot explain crashes and recessions at all?
Robert Hunter Wade of the London School of Economics agrees that the sophistication of academic economics tends to vanish in the real world, despite what its defenders claim. “Go to the journals, they say, and you find a world of great variety and innovation, where some of the best work is done on issues of market failure. And they are right as far as they go. But one should also sample economics as it is applied by people such as World Bank country economists when they advise the government of country X, and as it is hard-wired into World Bank formulas for evaluating countries' policies and institutions. In this second kind of economics the sophistication of the first kind is stripped away to leave only ‘the fundamentals’.”
As Australian policy adviser Fred Argy has pointed out, such economic fundamentalism based on simplistic models commonly leads to dogmatic policy extremism. “We saw in the 1960s and 1970s how the work of John Maynard Keynes was vulgarised by many of his followers, and used to justify the most extreme forms of government intervention. And in the 1980s and 1990s we saw how monetarism, public choice theory and neo-classical economics have been misused by some to justify simplistic small government policies.”
An example of this misdirected thinking can be seen in the way several columnists have announced smugly that it is wrong to describe the current financial panic as ‘irrational’: it is perfectly ‘rational’, they say, for people to be offloading stock before it becomes valueless. That’s true, but it fails to acknowledge that this is not the kind of rationality described in conventional economic models. Rational herding behaviour is called irrational because it is not what the models predict. In other words, there’s something badly wrong with those models.
Scientists are used to the need for approximations and simplifications in teaching. But this doesn’t mean that they regard Lamarckism as a useful approximation to the Darwinism that graduate students will learn, or geocentrism as the best system for Astronomy 101.
Sadly, this often becomes an argument about how backward and unscientific economics is. That is not only unhelpful, but untrue: it is quite correct to say that a glance at the Nobels reveals (with a few exceptions) the dramatic leaps in understanding and realism that the discipline has made since its origins in misguided analogies from microscopic physics. Knowledgeable economists and critics of traditional economics are on the same side; they need to unite against the use of vulgarized, introductory or plain incorrect models as instruments of policy. After all, as the British economist Joan Robinson, a pioneer in the understanding of imperfect competition, put it, “the purpose of studying economics is to learn how to avoid being deceived by other economists.”
Tuesday, October 21, 2008
Epidemics, tipping points and phase transitions
I just came across this comment in the FT about the kind of social dynamics I discussed in my book Critical Mass.
It’s nicely put, though the spread of ideas/disease/information in epidemiological models can in fact also be described in terms of phase transitions: they’re a far more general concept than is implied by citing just the freezing transition. I also agree that sociologists have important, indeed crucial, things to offer in this area. But Duncan Watts trained as a physicist.
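The connection is easy to see in the simplest SIR epidemic model, whose final outbreak size behaves exactly like an order parameter at a phase transition: negligible below the threshold R0 = 1, finite above it. A minimal sketch of my own, using the textbook final-size relation:

```python
import numpy as np

# The phase transition inside a simple epidemic model: the final outbreak size
# (from the SIR final-size relation  z = 1 - exp(-R0 * z), seeded with a few
# initial cases) jumps from negligible to macroscopic as R0 crosses 1.
def final_outbreak_fraction(R0, seed=1e-4, tol=1e-10, max_iter=10_000):
    z = seed
    for _ in range(max_iter):
        z_new = 1.0 - np.exp(-R0 * (z + seed))
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

for R0 in (0.5, 0.9, 1.1, 1.5, 2.0, 3.0):
    print(f"R0 = {R0:.1f}  ->  fraction ever infected ~ {final_outbreak_fraction(R0):.3f}")
# Below the threshold outbreaks fizzle; above it a finite fraction of the whole
# population is affected -- mathematically the same story as a phase transition.
```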
Thursday, October 16, 2008
Fractal calligraphy
Everyone got very excited several years ago when some guys claimed that Jackson Pollock’s drip paintings were fractals (R. P. Taylor et al., Nature 399, 422; 1999). That claim has come under scrutiny, but now it seems in any case that, as with everything else in the world, the Chinese were there first long ago. Yuelin Li of Argonne National Laboratory has found evidence of fractality in the calligraphy of Chinese artists dating back many hundreds of years (paper here). In particular, he describes the fractal analysis of a calligraphic letter by Huai Su, one of the legendary figures of Chinese calligraphy (Li calls him a ‘maniac Buddhist monk’, an image I rather enjoyed). Huai Su’s scroll, which hangs in the Shanghai Museum, says “Bitter bamboo shoots and tea? Excellent! Just rush them [over]. Presented by Huai Su.” (See image above: you’ve got to admit, it beats a text message.)
So what, you might be tempted to say? Isn’t this just a chance consequence of the fragmented nature of brush strokes? Apparently not. Li points out that Su seems to have drawn explicit inspiration from natural fractal objects. A conversation with the calligrapher Yan Zhenqing, recorded in 722 CE, goes as follows:
Zhenqing asked: ‘Do you have your own inspiration?’ Su answered: ‘I often marvel at the spectacular summer clouds and imitate it… I also find the cracks in a wall very natural.’ Zhenqing asked: ‘How about water stains of a leaking house?’ Su rose, grabbed Yan’s hands, and exclaimed: ‘I get it!’
‘This conversation’, says Li, ‘has virtually defined the aesthetic standard of Chinese calligraphy thereafter, and ‘house leaking stains’ and ‘wall cracks’ became a gold measure of the skill of a calligrapher and the quality of his work.’
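Both the Pollock claim and Li’s analysis rest on box-counting: cover the image with boxes of shrinking size and watch how the number of ink-containing boxes grows. Here is a bare-bones sketch of that procedure (my own, not the authors’ actual code):

```python
import numpy as np

# Generic box-counting estimate of fractal dimension.
def box_counting_dimension(image, sizes=(2, 4, 8, 16, 32, 64)):
    """image: 2-D boolean array, True where there is ink."""
    counts = []
    for s in sizes:
        # trim so the image tiles exactly into s x s boxes
        h, w = (image.shape[0] // s) * s, (image.shape[1] // s) * s
        blocks = image[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes containing any ink
    # slope of log N(s) versus log(1/s) gives the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check on a filled square (dimension should come out near 2):
canvas = np.zeros((512, 512), dtype=bool)
canvas[100:400, 100:400] = True
print(f"estimated dimension ~ {box_counting_dimension(canvas):.2f}")
```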
Monday, October 06, 2008
The drip, drip, drip of environmental change
[You know how I like to give you added value here, which is to say, the full-blown (who said over-blown?) versions of what I write for Nature before the editors judiciously wield their scalpels. In that spirit, here is my latest Muse column.]
Your starter for ten: which of the following can alter the Earth’s climate?
(1) rain in Tibet
(2) sunspots
(3) the Earth’s magnetic field
(4) iron filings
(5) cosmic rays
(6) insects
The answers? They depend on how big an argument you want to have. All have been proposed as agents of climate change. Some of them now look fairly well established as such; others remain controversial; some have been largely discounted.
The point is that it is awfully hard to say. In every case, the perturbations that the phenomena pose to the global environment look minuscule by themselves, but the problem is that when they act over the entire planet, or over geological timescales, or both, the effects can add up. Or they might not.
This issue goes to the heart of the debate over climate change. It’s not hard to imagine that a 10-km meteorite hitting the planet at a speed of several kilometres per second, as one seems to have done at the end of the Cretaceous period 65 million years ago, might have consequences of global significance. But tiny influences in the geo-, bio-, hydro- and atmospheres that trigger dramatic environmental shifts [see Box] – the dripping taps that eventually flood the building – are not only hard for the general public to grasp. They’re also tough for scientists to evaluate, or even to spot in the first place.
Even now one can find climate sceptics ridiculing the notion that a harmless, invisible gas at a concentration of a few hundred parts per million in the atmosphere can bring about potentially catastrophic changes in climate. It just seems to defy intuitive notions of cause and effect.
Two recent papers now propose new ‘trickle effects’ connected with climate change that are subtle, far from obvious, and hard to assess. Both bear on atmospheric levels of the greenhouse gas carbon dioxide: one suggests that these may shift with changes in the strength of the Earth’s magnetic field [1], the other that they may alter the ambient noisiness of the oceans [2].
Noise? What can a trace gas have to do with that? Peter Brewer and his colleagues at the Monterey Bay Aquarium Research Institute in Moss Landing, California, point out [2] that the transmission of low-frequency sound in seawater has been shown to be dependent on the water’s pH: at around 1 kHz (a little above a soprano’s range), greater acidity reduces sound absorption. And as atmospheric CO2 increases, more is absorbed in the oceans and seawater becomes more acidic through the formation of carbonic acid.
This effect of acidity on sound seems bizarre at first encounter. But it seems unlikely to have anything to do with water per se. Rather, chemical equilibria involving dissolved borate, carbonate and bicarbonate ions are apparently involved: certain groups of these ions appear to have vibrations at acoustic frequencies, causing resonant absorption.
If this sounds vague, sadly that’s how it is. Such ‘explanations’ as exist so far seem almost scandalously sketchy. But the effect itself is well documented, including the pH-dependence that follows from the way acids or alkalis tip the balance of these acid-base processes. Brewer and colleagues use these earlier measurements to calculate how current and future changes in the oceans’ absorbed CO2 will alter sound absorption at different depths. They estimate that absorption has already decreased by more than 12 percent relative to pre-industrial levels, and that low-frequency sound might travel up to 70 percent further by 2050.
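To get a feel for those numbers, here is my own back-of-envelope sketch – not the researchers’ calculation. If we ignore geometric spreading and consider only chemical absorption (in decibels per kilometre), the range over which a signal accumulates a given absorption loss scales inversely with the absorption coefficient:

# Rough illustration only: range at a fixed absorption loss scales as 1/alpha,
# where alpha is the absorption coefficient in dB per km.
def range_gain(absorption_drop):
    """Fractional increase in range for a given fractional drop in absorption."""
    return 1.0 / (1.0 - absorption_drop) - 1.0

print(round(range_gain(0.12), 2))   # a 12% drop in absorption: roughly 14% further
print(round(1.0 - 1.0 / 1.7, 2))    # going 70% further needs roughly 41% less absorption

On this crude reckoning, the 70 percent figure for 2050 implies a considerably larger fall in absorption than the 12 percent estimated to have happened so far.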
And indeed, low-frequency ambient noise has been found to be 9 dB higher off the Californian coast than it was in the 1960s, not all of which can be explained by increased human activity. How such changes might affect marine mammals that use long-distance acoustic communication is the question left hanging.
Uptake of atmospheric carbon dioxide by the oceans is also central to the proposal by Alexander Pazur and Michael Winklhofer of the University of Munich [1] that changes in the Earth’s magnetic field could affect climate. They claim that in a magnetic field just 40 percent of the current geomagnetic strength, the solubility of carbon dioxide is 30 percent lower.
They use this to estimate that a mere 1 percent reduction in geomagnetic field strength could release ten times more CO2 than all that is currently emitted by subsea volcanism. Admittedly, they say, this effect is tiny compared with present inputs from human activities; but it would change the atmospheric concentration by 1 part per million per decade, which could add up to a non-negligible effect over long enough times.
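As a purely arithmetical illustration – mine, not the authors’ – a rate of that size accumulates as follows, if (implausibly) it were sustained unchanged:

# Hypothetical cumulative effect of a steady 1 ppm per decade.
rate_ppm_per_decade = 1.0
for years in (100, 1000, 10000):
    print(years, "years:", rate_ppm_per_decade * years / 10, "ppm")

A century gives 10 parts per million; ten millennia gives 1,000 – for comparison, the swing in CO2 between ice ages and interglacials was only around 100 parts per million. In reality, of course, the rate would not stay fixed as the field, oceans and atmosphere adjusted.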
This isn’t the first suggested link between climate and geomagnetism. It has been proposed, for example, that growing or shrinking ice sheets could alter the Earth’s rotation rate and thus trigger changes in the core circulation that drives the geodynamo. And the geomagnetic field also affects the influx of cosmic rays at the magnetic poles; their collisions ionize molecules in the atmosphere, and the resulting ions can then seed the formation of airborne particles. These in turn might nucleate cloud droplets, changing the Earth’s albedo.
Indeed, once you start to think about it, possible links and interactions of this sort seem endless. How to know which are worth pursuing? The effect claimed by Pazur and Winklhofer does seem a trifle hard to credit, although mercifully they are not suggesting any mysterious magnetically induced changes of ‘water structure’ – a favourite fantasy of those who insist on the powers of magnets to heal bodies or soften water. Rather, they offer the plausible hypothesis that the influence acts via ions adsorbed on the surfaces of tiny bubbles of dissolved gas. But there are good arguments why such effects seem unlikely to be significant at such weak field strengths [3]. Moreover, the researchers measure the solubility changes indirectly, via the effect of tiny bubbles on light scattering – and bubble size and coalescence are themselves sensitive to dissolved salt in complicated ways [4]. In any event, the effect vanishes in pure water.
So the idea needs much more thorough study before one can say much about its validity. But the broader issue is that it is distressingly hard to anticipate these effects – merely to think of them in the first place, let alone to estimate their importance. Climate scientists have been saying pretty much that for decades: feedbacks in the biogeochemical cycles that influence climate are a devil to discern and probe, which is why the job of forecasting future change is so fraught with uncertainty.
And of course every well-motivated proposal of some subtle modifier of global change – such as cosmic rays – tends to be commandeered to spread doubt about whether global warming is caused by humans, or is happening at all, or whether scientists have the slightest notion of what is going on (and therefore whether we can trust their ‘consensus’).
Perhaps this is a good reason to embrace the metaphor of ‘planetary physiology’ proposed by James Lovelock. We are all used to the idea that minute quantities of chemical agents, or small but persistent outside influences, can produce all kinds of surprising, nonlinear and non-intuitive transformations in our bodies. One doesn’t have to buy into the arid debate about whether or not our planet is ‘alive’; maybe we need only reckon that it might as well be.
References
1. Pazur, A. & Winklhofer, M. Geophys. Res. Lett. 35, L16710 (2008).
2. Hester, K. C. et al. Geophys. Res. Lett. 35, L19601 (2008).
3. Kitazawa, K. et al. Physica B 294, 709-714 (2001).
4. Craig, V. S. J., Ninham, B. W. & Pashley, R. M. Nature 364, 317-319 (1993).
Box: Easy to miss?
Iron fertilization
Oceanographer John Martin suggested in the 1980s that atmospheric CO2 might depend on the amount of iron in the oceans (Martin, J. H. Paleoceanography 5, 1–13; 1990). Iron is an essential nutrient for phytoplankton, which absorb and fix carbon in their tissues as they grow, drawing carbon dioxide out of the atmosphere. Martin’s hypothesis was that plankton growth could be stimulated, reducing CO2 levels, by dumping iron into key parts of the world oceans.
But whether the idea will work as a way of mitigating global warming depends on a host of factors, such as whether plankton growth really is limited by the availability of iron, and how quickly the fixed carbon gets recycled through the oceans and atmosphere. In the natural climate system, the iron fertilization hypothesis implies some complex feedbacks: for example, much oceanic iron arrives as windborne dust, which might be mobilized more readily in a drier world.
Cenozoic uplift of the Himalayas
About 40-50 million years ago, the Indian subcontinent began to collide with Asia, pushing up the crust to form the Himalayas and the Tibetan plateau. A period of global cooling began at about the same time. Coincidence? Perhaps not, according to geologists Maureen Raymo and William Ruddiman and their collaborators (Raymo, M. E. & Ruddiman, W. F. Nature 359, 117-122; 1992). The mountain range and high ground may have intensified monsoon rainfall, and the uplift exposed fresh rock to the downpour, where it underwent ‘chemical weathering’, a process that converts silicate minerals to carbonates. This consumes carbon dioxide from the atmosphere, cooling the climate.
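In its simplest textbook form the weathering reaction can be written as CaSiO3 + CO2 → CaCO3 + SiO2 – an idealized version using the calcium silicate wollastonite, whereas real rocks weather through messier chemistry – with the carbonate ultimately buried in ocean sediments.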
Proving the idea means negotiating many links in the chain of reasoning. Was weathering really more extensive then? Was the monsoon more intense? How might the growth of mountain glaciers affect erosion and weathering? How do changes in the dissolved minerals washed into the sea interact with CO2-dependent ocean acidity to affect the relevant biogeochemical cycles? The details are still debated.
Plant growth, carbon dioxide and the hydrological cycle
How changes in atmospheric CO2 levels will affect plant growth has been one of the most contentious issues in climate modelling. Will plants grow faster when they have more carbon dioxide available for photosynthesis, thus providing a negative feedback on climate? That’s still unclear. A separate issue has been explored by Ian Woodward at Cambridge University, who reported that plants have fewer stomata – pores that open and close to let in atmospheric CO2 – in their leaves when CO2 levels are greater (Woodward, F. I. Nature 327, 617-618; 1987). They simply don’t need so many portals when the gas is plentiful. The relationship is robust enough for stomatal density of fossil plants to be used as a proxy for ancient CO2 levels.
But stomata are also the leaks through which water vapour escapes from plants in a process called transpiration. This is a vital part of the hydrological cycle, the movement of water between the atmosphere, oceans and ground. So fewer stomata mean that plants take up and evaporate less water from the soil, making the local climate less moist and producing knock-on effects such as greater runoff and increased erosion.
Ozone depletion
They sounded so good, didn’t they? Chlorofluorocarbons are gases that seemed chemically inert and therefore unlikely to harm us or the environment when used as the coolants in refrigerators from the early twentieth century. So what if the occasional whiff of CFCs leaked into the atmosphere when fridges were dumped? – the quantities would be tiny.
But their very inertness meant that they could accumulate in the air. And when exposed to harsh ultraviolet rays in the upper atmosphere, the molecules could break apart into reactive chlorine free radicals, which react with and destroy the stratospheric ozone that protects the Earth’s surface from the worst of the Sun’s harmful UV rays. This danger wasn’t recognized until 1974, when it was pointed out by chemists Mario Molina and Sherwood Rowland (Molina, M. J. & Rowland, F. S. Nature 249, 810-812; 1974).
Even then, when ozone depletion was first observed in the Antarctic atmosphere in the 1980s, it was put down to instrumental error. Not until 1985 did the observations become impossible to ignore: CFCs were destroying the ozone layer (Farman, J. C., Gardiner, B. G. & Shanklin, J. D. Nature 315, 207-209; 1985). The process was confined to the Antarctic (and later the Arctic) because it required the ice particles of polar stratospheric clouds to keep chlorine in an ‘active’, ozone-destroying form.
CFCs are also potent greenhouse gases; and changes in global climate might alter the distribution and formation of the polar atmospheric vortices and stratospheric clouds on which ozone depletion depends. So the feedbacks between ozone depletion and global warming are subtle and hard to untangle.
Friday, September 19, 2008
Opening the door to Hogwarts
[This is how I originally wrote my latest story for Nature’s online news. It is about another piece of creative thinking from this group at Shanghai Jiao Tong University. I was particularly struck by the milk-bottle effect that John Pendry told me about – I’d never thought about it before, but it’s actually quite a striking thing. (The same applies to water in a glass, but it’s more effective with milk.) John says that it is basically because, as one can show quite easily, no light ray can pass through the glass wall that does not also pass through some milk.
Incidentally, I have to suspect that John Pendry must be a candidate for some future Nobel for his work in this area, though probably not yet, as the committee would want to see metamaterials prove their worth. The same applies to Eli Yablonovitch and Sajeev John for their work on photonic crystals. Some really stimulating physics has come out of both of these ideas.
The photo, by the way, was Oliver Morton’s idea.]
Scientists show how to make a hidden portal
In a demonstration that the inventiveness of physicists is equal to anything fantasy writers can dream up, scientists in China have unveiled a blueprint for the hidden portal in King’s Cross railway station through which Harry Potter and his chums catch the train to Hogwarts.
Platform Nine and Three Quarters already exists at King’s Cross in London, but visitors attempting the Harry Potter manoeuvre of running at the wall and trusting to faith will be in for a rude shock.
Xudong Luo and colleagues at Shanghai Jiao Tong University have figured out what’s missing. In two preprints, they describe a method for concealing an entrance so that what looks like a blank wall actually contains invisible openings [1,2].
Physicist John Pendry of Imperial College in London, whose theoretical work laid the foundations of the trick, agrees that there is a whiff of wizardry about it all. “It’s just magic”, he says.
This is the latest stunt from metamaterials, which have already delivered invisibility cloaks [3] and other weird manipulations of light. Metamaterials are structures pieced together from ‘artificial atoms’, tiny electrical devices that allow the structure to interact with light in ways that are impossible for ordinary substances.
Some metamaterials have a negative refractive index, meaning that they bend light the ‘wrong’ way. This means that an object within the metamaterial can appear to float above it. A metamaterial invisibility shield, meanwhile, bends light smoothly around an object at its centre, like water flowing around a rock in a river. The Shanghai group recently showed how the object can be revealed again with an anti-invisibility cloak [4].
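To see what bending ‘the wrong way’ means, it helps to plug a negative index into Snell’s law. The snippet below is my own toy illustration of that single point; it has nothing to do with the Shanghai group’s actual superscatterer calculation, which requires a full electromagnetic treatment.

import math

def refracted_angle(theta_in_deg, n1=1.0, n2=-1.0):
    # Snell's law: n1*sin(theta1) = n2*sin(theta2). With n2 negative the
    # refracted angle comes out negative, i.e. the ray emerges on the same
    # side of the surface normal as the incoming one.
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta_in_deg)) / n2))

for theta in (10, 30, 45):
    print(theta, "degrees in ->", round(refracted_angle(theta), 1), "degrees out")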
Now they have worked out in theory how to hide a doorway. The trick is to create an object that, because of its unusual interactions with light, looks bigger than it really is. A pillar made of such stuff, placed in the middle of an opening in a wall, could appear to fill the gap completely, whereas in fact there are open spaces to each side.
Pendry and his coworker S. Anantha Ramakrishna demonstrated the basic principle in 2003, when they showed that a cylinder of metamaterial could act as a magnifying lens for an object inside it [5].
“When you look at a milk bottle, you don’t see the glass”, Pendry explains. Because of the way in which the milk scatters light, “the milk seems to go right to the edge of the bottle.” He and Ramakrishna showed that with a negative-refractive-index metamaterial, an object in the bottle could be magnified on the surface.
And now Luo and colleagues have shown that an even more remarkable effect is possible: the milk can appear to be outside the bottle. “It’s like a three-dimensional projector”, says Pendry. “I call it a super-milk bottle.”
The Chinese team opt for the rather more prosaic term “superscatterer”. They show that such an object could be made from a metal core surrounded by a metamaterial with a negative refractive index [1].
The researchers have calculated how light interacts with a rectangular superscatterer placed in the middle of a wide opening in a wall, and find that, for the right choice of sizes and metamaterial properties, the light bounces back just as it would if there were no opening at all [2].
If someone passes through the concealed opening, the researchers find, the portal becomes momentarily visible, before disappearing again once the person is on the other side.
So “platform nine and three-quarters is realizable”, the Shanghai team says. “This is terrific fun”, says Pendry. He feels that the effect is even more remarkable than the invisibility cloak, because it seems so counter-intuitive that an object can project itself into empty space.
But the calculations so far only show concealment for microwave radiation, not visible light. Pendry says that the problem in using visible-light metamaterials – which were reported last month [6,7] – is that currently they tend to absorb some light rather than scattering it all into the magnified image, making it hard to project the image a significant distance beyond the object’s surface. So openings hidden from the naked eye aren’t likely “until we get on top of these materials”, he says.
References
1. Yang, T. et al. http://arxiv.org/abs/0807.5038 (2008).
2. Luo, X. et al. http://arxiv.org/abs/0809.1823 (2008).
3. Schurig, D. et al. Science 314, 977-980 (2006).
4. Chen, H., Luo, X., Ma, H. & Chan, C. T. http://arxiv.org/abs/0807.4973 (2008).
5. Pendry, J. B. & Ramakrishna, S. A. J. Phys.: Condens. Matter 15, 6345-6364 (2003).
6. Valentine, J. et al. Nature doi:10.1038/nature07247 (2008).
7. Yao, J. et al. Science 321, 930 (2008).
Wednesday, September 17, 2008
Don't mention the 'C' word
I’m beginning to wonder whether I should be expecting the science police to come knocking on my door. After all, my latest book contains images of churches, saints, Jesus and the Virgin Mary. It discusses theology. And, goodness me, I have even taken part in a workshop organized by the Templeton Foundation. I am not sure that being an atheist will be a mitigating factor in my defence.
These dark thoughts are motivated by the fate of Michael Reiss, who has been forced to resign from his position as director of education at the Royal Society over his remarks about creationism in the classroom.
Now, Reiss isn’t blameless in all of this. Critics of his comments are right to say that the Royal Society needs to make it quite clear that creationism is not an alternative to science as a way of looking at the universe and at evolution, but is plain wrong. Reiss didn’t appear to do this explicitly in his controversial talk at the British Association meeting. And his remark that “the concerns of students who do not accept the theory of evolution” should be taken “seriously and respectfully” sounds perilously close to saying that those concerns should be given serious consideration, and that one should respect the creationist point of view even while disagreeing with it. The fact is that we should feel obliged to respect only points of view that are respectable, such as religious belief per se. Creationism is not respectable, scientifically, intellectually or indeed theologically (will they tell the kids that in Religious Education?). And if you are going to title your talk “Should creationism be a part of the science curriculum?”, it is reasonable that questions should be asked if you aren’t clearly seen at some point to say “No.”
So, a substantial case for the prosecution, it might seem. But for a start, one might reasonably expect that scientists, who pride themselves on accurate observation, will read your words and not just blunder in with preconceptions. It is hard to see a case, in Reiss’s address, for suggesting that his views differ from those that the Royal Society has restated in conjunction with Reiss’s resignation: “creationism has no scientific basis and should not be part of the science curriculum. However, if a young person raises creationism in a science class, teachers should be in a position to explain why evolution is a sound scientific theory and why creationism is not, in any way, scientific.”
This, to my mind, was the thrust of Reiss’s argument. He quoted from the Department for Children, Schools and Families Guidance on Creationism, published in 2007: “Any questions about creationism and intelligent design which arise in science lessons, for example as a result of media coverage, could provide the opportunity to explain or explore why they are not considered to be scientific theories and, in the right context, why evolution is considered to be a scientific theory.” The point here is that teachers should not be afraid to tackle the issue. They need not (indeed, I feel, should not) bring it up themselves, but if pupils do, they should not shy away by saying something like “We don’t discuss that in a science class.” And there is a good chance that such things will come up. I have heard stories of the genuine perplexity of schoolchildren who have received a creationist viewpoint from parents, whose views they respect, and a conflicting viewpoint from teachers who they also believe are intent on telling them the truth. Such pupils need and deserve guidance, not offhand dismissal. You can be respectful to individuals without having to ‘respect’ the views they hold, and this seems closest to what Reiss was saying.
And there’s nothing that disconcerts teachers more than their being told they must not discuss something. Indeed, that undermines their capacity to teach, just as the current proscription on physical contact with children undermines teachers’ ability to care for them in loco parentis. A fearful teacher is not a good one.
What perhaps irked some scientists more than anything else was Reiss’s remark that “Creationism can profitably be seen not as a simple misconception that careful science teaching can correct. Rather, a student who believes in creationism can be seen as inhabiting a non-scientific worldview, a very different way of seeing the world.” This is simplistic and incomplete as it stands (Gerald Holton has written about the way that a scientific viewpoint in some areas can coexist happily with irrationalism in others), but the basic point is valid. Despite (or perhaps because of) the recent decline in the popularity of the ‘deficit model’ of understanding science, some scientists still doggedly persist in the notion that everyone would be converted to a scientific way of thinking if we could just succeed in drumming enough facts into their heads. Reiss is pointing to the problem that the matter runs much deeper. Science education is essential, and the lack of it helps to pave the way for the kind of spread of ignorance that we can see in some parts of the developed world. But to imagine that education alone will undermine an entire culture and environment that inculcates anti-scientific ideas is foolish and dangerous. I suspect that some scientists were angered by Reiss’s comments here because they imply that these scientists’ views of how to ‘convert’ people to a scientific worldview are naïve.
Most troubling of all, however, are the comments from some quarters which make it clear that the real source of outrage stems from the fact that Reiss is an ordained Church of England minister. The implication seems to be that, as a religious believer, he is probably sympathetic to creationism, as if one necessarily follows from the other. That creationism is an unorthodox, indeed a cranky form of Christianity (or of other kinds of fundamentalism – Islam and Judaism have their creationists too) seems here to be ignored or denied. It’s well known that Richard Dawkins sees fundamentalism as the centre of gravity of all religions, and that moderate, orthodox views are just the thin end of the wedge. But his remark that “a clergyman in charge of education for the country’s leading scientific organization” is like “a Monty Python sketch” itself has a whiff of fundamentalist intolerance. If we allow that it’s not obvious why a clergyman should have a significantly more profound belief than any other religious believer, this seems to imply that Dawkins would regard no Christian, Muslim, Hindu, Jew and so on as fit for this job. Perhaps they should be excluded from the Royal Society altogether? Are we now to assume that no professed believer of any faith can be trusted to believe in and argue for a scientific view of the world? I do understand why some might regard these things as fundamentally incompatible, but I would slightly worry about the robustness of a mind that could not live with a little conflict and contradiction in its beliefs.
This situation has parallels to the way the Royal Society has been criticized for its involvement with the Templeton Foundation. I carry no torch for the Templeton, and indeed was on the wary lookout at the Varenna conference mentioned above for a hidden agenda. But I found none. It seems to me that the notion of exploring links between science and religion is harmless enough in itself, and it certainly has plenty of historical relevance, if nothing else. No doubt some flaky stuff comes of it, but the Templeton events that I have come across have been of high scientific quality. (I’m rather more concerned about suggestions that the Templeton has right-wing leanings, although that doesn’t seem obvious from their web site – and US rightwingers are usually quite happy to trumpet the fact.) But it seems sad that the RS’s connections with the Templeton have been lambasted not because anyone seems to have detected a dodgy agenda (I understand that the Templeton folks are explicitly unsympathetic to intelligent design, for example) but because it is a religiously based organization. Again, I thought that scientists were supposed to base their conclusions on actual evidence, not assumptions.
In regard to Reiss, I’m not going to start ranting about witch hunts (not least because that is the hallmark of the green-ink brigade). He was rather incautious, and needed to see how easily his words might be misinterpreted. But they have indeed been misinterpreted, and I don’t see that the Royal Society has done itself much of a service by ousting him, particularly as this seems to have been brought about by a knee-jerk response from scientists who are showing signs of ‘Reds (or in this case, Revs) under the bed’ paranoia.
The whole affair reminds me of the case of the Archbishop of Canterbury talking about sharia law, where the problem was not that he said anything so terrible but that he failed to be especially cautious and explicit when using trigger words that send people foaming at the mouth. But I thought scientists considered themselves more objective than that.
Thursday, September 04, 2008
Intelligence and design
Little did I realise when I became a target of criticism from Steve Fuller of Warwick University that I would be able to wear this as a badge of honour. I just thought it rather odd that someone in a department of sociology seemed so indifferent to the foundational principles of his field, preferring to regard it as a branch of psychology rather than an attempt to understand human group behaviour. I take some solace in the fact that his resistance to physics-based ideas seems to have been anticipated by George Lundberg, one of the pioneers of the field, who, in Foundations of Sociology (1939), admits with dismay that ‘The idea that the same general laws may be applicable to both ‘physical’ and societal behavior may seem fantastic and inconceivable to many people.’ I was tempted to suggest that Fuller hadn’t read Lundberg, or Robert Park, Georg Simmel, Herbert Simon and so on, but this felt like the cheap form of rhetoric that prompts authors to say of critics whose opinions they don’t like that ‘they obviously haven’t read my book’. (On the other hand, Fuller’s first assault, on Radio 4’s Today programme, came when he really hadn’t read my book, because it hadn’t been published at that point.)
Anyway, judging from the level of scholarship A. C. Grayling finds (or rather, fails to find) in Fuller’s new book Dissent over Descent, a defence of the notion of intelligent design, maybe my hesitation was generous. But of course one shouldn’t generalize. Grayling has dissected the book in the New Humanist, and we should be grateful to him for sparing us the effort, although he clearly found the task wearisome. But wait a minute – a social scientist writing about evolution? Isn’t that a little like a chemist (sic) writing about social science?
Friday, August 29, 2008
Why less is more in government
[This is the pre-edited version of my latest Muse for Nature’s online news.]
In committees and organizations, work expands to fill the time available while growth brings inefficiency. It’s worth trying to figure out why.
Arguments about the admission of new member states to the European Union have become highly charged since Russia sent tanks into Georgia, which harbours EU aspirations. But there may be another reason to view these wannabe nations cautiously, according to two recent preprints [1,2]. They suggest that decision-making bodies may not be able to exceed about 20 members without detriment to their efficiency.
Already the EU, as well as its executive branch the European Commission, has 27 members, well in excess of the putative inefficiency threshold. And negotiations in Brussels have become notorious for their bureaucratic wrangling and inertia. The Treaty of Lisbon, which proposes various reforms in an attempt to streamline the EU’s workings, implicitly recognizes the overcrowding problem by proposing a reduction in the number of Commissioners to 18. But as if to prove the point, Ireland rejected it in June.
It’s not hard to pinpoint the problem with large committees. The bigger the group, the more factious it is liable to be, and it gets ever harder to reach a consensus. This has doubtless been recognized since time immemorial, but it was first stated explicitly in the 1950s by the British historian C. Northcote Parkinson. He pointed out how the executive governing bodies in Britain since the Middle Ages, called cabinets since the early seventeenth century, tended always to expand in inverse proportion to their ability to get anything done.
Parkinson showed that British councils and cabinets since 1257 seemed to go through a natural ‘life cycle’: they grew until they exceeded a membership of about 20, at which point they were replaced by a new body that eventually suffered the same fate. Parkinson proposed that this threshold be called the ‘coefficient of inefficiency’.
Stefan Thurner and colleagues at the Medical University of Vienna have attempted to put Parkinson’s anecdotal observations on a solid theoretical footing [1,2]. Cabinets are now a feature of governments worldwide, and Thurner and colleagues find that most of those from 197 countries have between 13 and 20 members. What’s more, the bigger the cabinet, the less well it seems to govern the country, as measured for example by the Human Development Index, which is used by the United Nations Development Programme and takes into account such factors as life expectancy, literacy and gross domestic product.
Thurner and colleagues have tried to understand where this critical mass of 20 comes from by using a mathematical model of decision-making in small groups [1]. They assume that each member may influence the decisions of a certain number of others, so that they form a complex social network. Each adopts the majority opinion of those to whom they are connected provided that this majority exceeds a certain threshold.
For a range of model parameters, a consensus is always possible for fewer than 10 members – with the exception of 8. Above that size, consensus becomes progressively harder to achieve. And the number of ways a ‘dissensus’ may arise expands significantly beyond about 19-21 members, in line with Parkinson’s observations.
Why are eight-member cabinets anomalous? This looks like a mere numerical quirk of the model chosen, but it’s curious that no eightfold cabinets appeared in the authors’ global survey. Historically, only one such cabinet seems to have been identified: the Committee of State of the British king Charles I, whose Parliament rebelled and eventually executed him.
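For the curious, here is a rough Python sketch of the kind of threshold-majority dynamics described above. To be clear, it is not the model of Klimek, Thurner and colleagues: the random influence network, the neighbourhood size and the 60 per cent threshold are arbitrary choices of mine, and a toy this crude will not reproduce fine-grained features such as the eight-member anomaly. It merely shows how one can probe the chances of consensus as the group grows.

import random

def reaches_consensus(n, k=5, threshold=0.6, max_steps=200, seed=None):
    """Toy threshold-majority dynamics on a random influence network.
    The neighbourhood size k, the 60% threshold and the update rule are
    illustrative guesses, not the parameters of the published model."""
    rng = random.Random(seed)
    # each member listens to k randomly chosen other members
    neighbours = [rng.sample([j for j in range(n) if j != i], min(k, n - 1))
                  for i in range(n)]
    opinions = [rng.choice([0, 1]) for _ in range(n)]
    for _ in range(max_steps):
        new = opinions[:]
        for i in range(n):
            votes = [opinions[j] for j in neighbours[i]]
            frac = sum(votes) / len(votes)
            if frac >= threshold:
                new[i] = 1
            elif frac <= 1 - threshold:
                new[i] = 0
            # otherwise member i keeps their current opinion
        if new == opinions:          # no one changed: the dynamics have frozen
            break
        opinions = new
    return len(set(opinions)) == 1   # True if everyone ended up agreeing

# crude estimate of how often consensus emerges as the body grows
for n in (5, 10, 15, 20, 25, 30):
    trials = [reaches_consensus(n, seed=s) for s in range(500)]
    print(n, sum(trials) / len(trials))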
Now the Austrian researchers have extended their analysis of Parkinson’s ideas to the one for which he is best known: Parkinson’s Law, which states that work expands to fill the time available [2]. This provided the title of the 1957 book in which Parkinson’s essays on governance and efficiency were collected.
Parkinson regarded his Law as a corollary of the inevitable expansion of bureaucracies. Drawing on his experience as a British civil servant, he pointed out that officials aim to expand their own mini-empires by gathering a cohort of subordinates. But these simply make work for each other, dwelling on minutiae that a person lacking such underlings would sensibly have prioritized and abbreviated. Dare I point out that Nature’s editorial staff numbered about 13 when I joined 20 years ago, and now numbers something like 33 – yet the editors are no less overworked now than we were then, even though the journal is basically the same size.
Parkinson’s explanation for this effect focused on the issue of promotion, which is in effect what happens to someone who acquires subordinates. His solution to the curse of Parkinson’s Law and the formation of over-sized, inefficient organizations is to engineer a suitable retirement strategy such that promotion remains feasible for all.
With promotion, he suggested, individuals progress from responsibility to distinction, dignity and wisdom (although finally succumbing to obstruction). Without it, the progression is instead from frustration to jealousy to resignation and oblivion, with a steady decrease in efficiency. This has become known as the ‘Prince Charles Syndrome’, after the ageing British monarch-in-waiting who seems increasingly desperate to find a meaningful role in public life.
Thurner and colleagues have couched these ideas in mathematical terms by modelling organizations as a throughflow of staff, and they find that as long as promotion prospects can be sufficiently maintained, exponential growth can be avoided. This means adjusting the retirement age accordingly. With the right choice (which Parkinson called the ‘pension point’), the efficiency of all members can be maximized.
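Again purely as an illustration – the staffing numbers, promotion rule and ‘frustration’ criterion below are my own inventions, not parameters from the preprint – one can caricature an organization as a throughflow of staff and see how the choice of retirement age affects both headcount and the build-up of unpromoted, frustrated members.

def simulate_org(years=80, retire_age=65, hire_age=25, senior_slots=10,
                 annual_intake=5, promote_after=8):
    """Caricature of an organization as a throughflow of staff.
    Every number here is invented for illustration, not taken from the paper."""
    juniors, seniors = [], []            # lists of (age, years in current grade)
    frustrated_person_years = 0
    for _ in range(years):
        # everyone ages a year; those reaching retirement age leave
        juniors = [(a + 1, t + 1) for a, t in juniors if a + 1 < retire_age]
        seniors = [(a + 1, t + 1) for a, t in seniors if a + 1 < retire_age]
        # fill senior vacancies by promoting the longest-serving juniors
        juniors.sort(key=lambda m: -m[1])
        while len(seniors) < senior_slots and juniors:
            age, _ = juniors.pop(0)
            seniors.append((age, 0))
        # hire a fixed cohort of new juniors
        juniors += [(hire_age, 0)] * annual_intake
        # juniors left unpromoted for too long count as 'frustrated'
        frustrated_person_years += sum(1 for _, t in juniors if t > promote_after)
    return len(juniors) + len(seniors), frustrated_person_years

# in this toy, an earlier 'pension point' frees senior posts sooner,
# shrinking both the headcount and the accumulated frustration
for retire in (70, 65, 60, 55):
    print(retire, simulate_org(retire_age=retire))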
Of course, precise numbers in this sort of modelling should be taken with a pinch of salt. And even when they seem to generate the right qualitative trends, it doesn’t necessarily follow that they do so for the right reasons. Yet correlations like those spotted by Parkinson, and now fleshed out by Thurner and colleagues, do seem to be telling us that there are natural laws of social organization that we ignore at our peril. The secretary-general of NATO has just made positive noises about Georgia’s wish for membership. This may or may not be politically expedient; but with NATO membership currently at a bloated 26, he had better at least recognize what the consequences might be for the organization’s ability to function.
References
1. Klimek, P. et al. Preprint http://arxiv.org/abs/0804.2202
2. Klimek, P. et al. Preprint http://arxiv.org/abs/0808.1684
Friday, August 08, 2008
Crime and punishment in the lab
[This is the uncut version of my latest Muse article for Nature’s online news.]
Before we ask whether scientific conduct is dealt with harshly enough, we need to be clear about what punishment is meant to achieve.
Is science too soft on its miscreants? That could be read as the implication of a study published in Science, which shows that 43 percent of a small sample of scientists found guilty of misconduct subsequently remained employed in academia, and half of them continued to turn out a paper a year [1].
Scientists have been doing a lot of hand-wringing recently about misconduct in their ranks. A commentary in Nature [2] proposed that many such incidents go unreported, and suggested ways to improve that woeful state of affairs, such as adopting a ‘zero-tolerance culture’. This prompted several respondents to maintain that matters are even worse, for example because junior researchers see senior colleagues benefiting from ‘calculated, cautious dishonesty’ or because some countries lack regulatory bodies to police ethical breaches [3-5].
All this dismay is justified to the extent that misconduct potentially tarnishes the whole community, damaging the credibility of science in the eyes of the public. Whether the integrity of the scientific literature suffers seriously is less clear – the more important the false claim, the more likely it is to be uncovered quickly as others scrutinize the results or fail to reproduce them. This has been the case, for example, with the high-profile scandals and controversies over the work of Jan Hendrik Schön in nanotechnology, Hwang Woo-suk in cloning and Rusi Taleyarkhan in bench-top nuclear fusion.
But the discussion needs to move beyond these expressions of stern disapproval. For one thing, it isn’t clear what ‘zero tolerance’ should mean when misconduct is such a grey area. Everyone can agree that fabrication of data is beyond the pale; but as a study three years ago revealed [6], huge numbers of scientists routinely engage in practices that are questionable without being blatantly improper: using another’s ideas without credit, say, or overlooking others’ use of flawed data. Papers that inflate their apparent novelty by failing to acknowledge the extent of previous research are tiresomely common.
And it is remarkable how many austere calls for penalizing scientific misconduct omit any indication of what such penalties are meant to achieve. Such a situation is inconceivable in conventional criminology. Although there is no consensus on the objectives of a penal system – the relative weights that should be accorded to punishment, public protection, deterrence and rehabilitation – these are at least universally recognized as the components of the debate. In comparison, discussions of scientific misconduct seem all too often to stop at the primitive notion that it is a bad thing.
For example, the US Office of Research Integrity (ORI) provides ample explanation of its commendable procedures for handling allegations of misconduct, while the Office of Science and Technology Policy outlines the responsibilities of federal agencies and research institutions to conduct their own investigations. But where is the discussion of desired outcomes, beyond establishing the facts in a fair, efficient and transparent way?
This is why Redman and Merz’s study is useful. As they say, ‘little is known about the consequences of being found guilty of misconduct’. The common presumption, they say, is that such a verdict effectively spells the end of the perpetrator’s career.
Their conclusions, based on studies of 43 individuals deemed guilty by the ORI between 1994 and 2001, reveal a quite different picture. Of the 28 scientists Redman and Merz could trace, 10 were still working in academic positions. Those who agreed to be interviewed – just 7 of the 28 – were publishing an average of 1.3 papers a year, while 19 of the 37 for whom publication data were available published at least a paper a year.
Is this good or bad? Redman and Merz feel that the opportunity for redemption is important, not just from a liberal but also a pragmatic perspective. ‘The fact that some of these people retain useful scientific careers is sensible, given that they are trained as scientists’, says Merz. ‘They just slipped up in some fundamental way, and many can rebuild a scientific career or at least use the skills they developed as scientists.’ Besides, he adds, everyone they spoke to ‘paid a substantial price’. All reported financial and personal hardships, and some became physically ill.
But on another level, says Merz, these data ‘could be seen as undermining the deterrent effect of the perception that punishment is banishment, from academia, at least.’ Does the punishment fit the crime?
The scientific community has so far lacked much enthusiasm for confronting these questions – perhaps because misconduct, while a trait found in all fields of human activity, is felt to be uniquely embarrassing to an enterprise that considers itself in pursuit of objective truths. But the time has surely come to face the issue, ideally with more data to hand. In formulating civic penal policy, for example, one would like to know how the severity of sentencing affects crime rates (which might indicate the effectiveness of deterrence), and how different prison regimes (punitive versus educative, say) influence recidivism. And one needs to have a view on whether sanctions such as imprisonment are primarily for the sake of public protection or to mete out punishment.
The same sorts of considerations apply with scientific misconduct, because the result otherwise has a dangerously ad hoc flavour. Just a week ago, the South Korean national committee on bioethics rejected an application by Hwang Woo-suk to resume research on stem cells. Why? Because ‘he engaged in unethical and wrongful acts in the past’, according to one source. But that’s not a reason, it is simply a statement of fact. Does the committee fear that Hwang would do it again (despite the intense scrutiny that would be given to his every move)? Do they think he hasn’t been sufficiently punished yet? Or perhaps that approval would have raised doubts about the rigour of the country’s bioethics procedures? Each of these reasons might be defensible – but there’s no telling which, if any, applies.
One reason why it matters is that by all accounts Hwang is an extremely capable scientist. If he and others like him are to be excluded from making further contributions to their fields because of past transgressions, we need to be clear about why that is being done. We need a rational debate on the motivations and objectives of a scientific penal code.
References
1. Redman, B. K. & Merz, J. F., Science 321, 775 (2008).
2. Titus, S. L. et al., Nature 453, 980-982 (2008).
3. Bosch, X. Nature 454, 574 (2008).
4. Feder, N. & Stewart, W. W. Nature 454, 574 (2008).
5. Nussenzveig, P. A. & Funchal, Z. Nature 454, 574 (2008).
6. Martinson, B. C. et al., Nature 435, 737-738 (2005).
Tuesday, August 05, 2008
Who is Karl Neder?
‘These people tend to define themselves by what they don’t like, which is usually much the same: relativity, the Big Bang. Einstein. Especially Einstein, poor fellow.’
In my novel The Sun and Moon Corrupted, where these words appear, I sought to convey the fact that the group of individuals whom scientists would call cranks, and who submit their ideas with tenacious insistence and persistence to journals such as Nature, have remarkably similar characteristics and obsessions. They tend to express themselves in much the same manner, exemplified in my book by the letters of the fictional Hungarian physicist Karl Neder. And their egocentricity knows no bounds.
I realised that, if I was right in this characterization, it would not be long at all before some of these people became convinced that Karl Neder is based on them. (The fact is that he is indeed loosely based on a real person, but there are reasons why I can be very confident that this person will never identify the fact.)
And so it comes to pass. The first person to cry ‘It’s me!’ seems to be one Pentcho Valev. I do not know who Valev is, but it seems I once (more than once?) had the task of rejecting a paper he submitted to Nature. I remember more than you might imagine about the decisions I made while an editor at Nature, and by no means always because the memory is pleasant. But I fear that Valev rings no bells at all. Nonetheless, says Valev, there are “Too many coincidences: Bulgaria + thermodynamics + Einstein + desperately trying to publish (in Nature) + Phillip [sic] Ball is Nature’s editor at that time and mercilessly rejects all my papers. Yes most probably I am at least part of this Karl Neder. Bravo Phillip Ball! Some may say it is unethical for you to make money by describing the plight of your victims but don't believe them: there is nothing unethical in Einstein zombie world.” (If it is any consolation, Mr Valev, the notion that this book has brought me "fortune" provokes hollow laughter.)
Ah, but this is all so unnervingly close to the terms in which Karl Neder expresses himself (which mimic those of his real-life model). In fact, Valev seems first to have identified ‘his’ voice from a quote from the book in a review in the Telegraph:
‘Actually, what [Neder] says is: "PERPETUUM MOBILE IS CONSTRUCTED BY ME!!!!!!!!!"; his voluminous correspondence being littered with blood-curdling Igorisms of this sort.’
Even I would not have dreamt up the scenario in which Mr Valev is apparently saying to himself “Blood-curdling Igorisms? But that’s exactly like me, damn it!” (Or rather, “LIKE ME!!!!!!!!!”)
Valev continues: “If Philip Ball as Nature’s editor had not fought so successfully against crazy Eastern Europe anti-relativists, those cranks could have turned gold into silver and so the very foundation of Western culture would have been destroyed” – and he quotes from a piece I wrote in which I mentioned how relativistic effects in the electron orbitals of gold atoms are responsible for its reddish tint. This is where I start to wonder if it is all some delicious hoax by the wicked Henry Gee or one of the people who read my book for the Royal Institution book club, and therefore knows that indeed it plunges headlong into alchemy and metallic transmutation in its final chapters. What are you trying to do, turn me paranoid?
Saturday, August 02, 2008
Might religion be good for your health?
[Here is the uncut version of my latest Muse for Nature news online.]
Religion is not a disease, a new study claims, but a protection against it.
Science and religion, anyone? Oh come now, don’t tell me you’re bored with the subject already. Before you answer that, let me explain that a paper in the Proceedings of the Royal Society B [1] has a new perspective on offer.
Well, perhaps not new. In fact it is far older than the authors, Corey Fincher and Randy Thornhill of the University of New Mexico, acknowledge. Their treatment of religion as a social phenomenon harks back to classic works by two of sociology’s founding fathers, Emile Durkheim and Max Weber, who, around the start of the twentieth century, offered explanations of how religions around the world have shaped and been shaped by the societies in which they are embedded.
That this approach has fallen out of fashion tells us more about our times than about its validity. The increasing focus on individualism in the Western world since Durkheim wrote that “God is society, writ large” is reflected in the current enthusiasm for what has been dubbed neurotheology: attempts to locate religious experience in brain activity and genetic predispositions for certain mental states. Such studies might ultimately tell us why some folks go to church and others don’t, but they can say rather little about how a predisposition towards religiosity crystallizes into a relatively small number of institutionalized religions – why, say, the ‘religiously inclined’ don’t simply each have a personal religion.
Similarly, the militant atheists who gnash their teeth at the sheer irrationality and arbitrariness of religious belief will be doomed forever to do so unless they accept Durkheim’s point that, rather than being some pernicious mental virus propagating through cultures, religion has social capital and thus possible adaptive value [2]. Durkheim argued that it once was, and still is in many cultures, the cement of society that maintains order. This cohesive function is as evident today in much of American society as it is in Tehran or Warsaw.
But of course there is a flipside to that. Within Durkheim’s definition of a religion as ‘a unified set of beliefs and practices which unite in one single moral community all those who adhere to them’ is a potential antagonism towards those outside that community – a potential that has become, largely unanticipated, the real spectre haunting the modern world.
It is in a sense the source of this tension that forms the central question of Fincher and Thornhill’s paper. Whereas Weber looked at the different social structures that different religions tended to promote, and Durkheim focused on ‘secular utility’ such as the benefits of social cohesion, Fincher and Thornhill propose a specific reason why religions create a propensity to exclude outsiders. In their view, the development of a religion is a strategy for avoiding disease.
The more a society disperses and mixes with other groups, the more it risks contracting new diseases. ‘There is ample evidence’, the authors say, ‘that the psychology of xenophobia and ethnocentrism is importantly related to avoidance and management of infectious disease.’
Fincher and Thornhill have previously shown that global patterns of social collectivism [3] and of language diversity [4] correlate with the diversity of infectious disease in a manner consistent with avoidance strategies: strangers can be bad for your health. Now they have found that religious diversity is also greater in parts of the world where the risk of catching something nasty from those outside your group (who are likely to have different immunity patterns) is higher.
It’s an intriguing observation. But as with all correlation studies, cause and effect are hard to untangle. Fincher and Thornhill offer the notion that new religions are actively generated as societal markers that inhibit inter-group interactions. One could equally argue, however, that a tendency to avoid contacts with other social groups prevents the spread of some cultural traits at the expense of others, and so merely preserves an intrinsic diversity.
This, indeed, is the basis of some theoretical models for how cultural exchange and transmission occurs [5]. Where opportunities for interaction are fewer, there is more likelihood that several ‘island cultures’ will coexist rather than being consumed by a dominant one.
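The best known of these is Axelrod’s model of cultural dissemination [5], in which agents on a grid each carry a list of cultural ‘features’ and interact with neighbours with a probability proportional to how similar their cultures already are, copying a trait when they do. A minimal sketch – the grid size, feature and trait counts here are arbitrary choices for illustration – looks something like this:

import random

def axelrod(size=10, features=5, traits=10, steps=200000, seed=1):
    """Minimal sketch of Axelrod-style cultural dissemination.
    All parameter values are arbitrary illustrations."""
    rng = random.Random(seed)
    sites = [(x, y) for x in range(size) for y in range(size)]
    culture = {s: [rng.randrange(traits) for _ in range(features)] for s in sites}

    def neighbours(x, y):
        return [((x + dx) % size, (y + dy) % size)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    for _ in range(steps):
        a = rng.choice(sites)
        b = rng.choice(neighbours(*a))
        ca, cb = culture[a], culture[b]
        shared = sum(fa == fb for fa, fb in zip(ca, cb))
        # interact with probability equal to cultural similarity;
        # completely dissimilar neighbours never interact at all
        if 0 < shared < features and rng.random() < shared / features:
            differing = [i for i in range(features) if ca[i] != cb[i]]
            f = rng.choice(differing)
            ca[f] = cb[f]            # agent a adopts one of b's traits

    # count the distinct 'island cultures' that survive
    return len({tuple(c) for c in culture.values()})

print(axelrod())

In runs of this toy, many possible traits per feature typically leave several coexisting cultural ‘islands’, while few traits let one culture absorb the rest – the qualitative point made above.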
And the theory of Fincher and Thornhill tells us nothing about religion per se, beyond its simple function as a way of discriminating those ‘like you’ from those who aren’t. It might as well be any other societal trait, such as style of pottery or family names. In fact, compared with such indicators, religion is a fantastically baroque and socially costly means of separating friend from foe. As recent ethnic conflicts in African nations have shown, humans are remarkably and fatefully adept at identifying the smallest signs of difference.
What we have here, then, is very far from a theory of how and why religions arise and spread. The main value of the work may instead reside in the suggestion that there are ‘hidden’ biological influences on the dynamics of cultural diversification. It is also, however, a timely reminder that religion is not so much a personal belief (deluded or virtuous, according to taste) as, in Durkheim’s words, a ‘social fact’.
References
1. Fincher, C. L. & Thornhill, R. Proc. R. Soc. B doi:10.1098/rspb.2008.0688.
2. Wilson, D. S. Darwin’s Cathedral: Evolution, Religion, and the Nature of Society (University of Chicago Press, 2002).
3. Fincher, C. L. et al., Proc. R. Soc. B 275, 1279-1285 (2008).
4. Fincher, C. L. & Thornhill, R. Oikos doi:10.1111/j.0030-1299.2008.16684.x.
5. Axelrod, R. J. Conflict Resolution 41, 203-226 (1997).