Tuesday, October 21, 2008
Epidemics, tipping points and phase transitions
I just came across this comment in the FT about the kind of social dynamics I discussed in my book Critical Mass.
It’s nicely put, though the spread of ideas/disease/information in epidemiological models can in fact also be described in terms of phase transitions: they’re a far more general concept than is implied by citing just the freezing transition. I also agree that sociologists have important, indeed crucial, things to offer in this area. But Duncan Watts trained as a physicist.
Thursday, October 16, 2008

Fractal calligraphy
Everyone got very excited several years ago when some guys claimed that Jackson Pollock’s drip paintings were fractals (R. P. Taylor et al., Nature 399, 422; 1999). That claim has come under scrutiny, but now it seems in any case that, as with everything else in the world, the Chinese were there first long ago. Yuelin Li of Argonne National Laboratory has found evidence of fractality in the calligraphy of Chinese artists dating back many hundreds of years (paper here). In particular, he describes the fractal analysis of a calligraphic letter by Huai Su, one of the legendary figures of Chinese calligraphy (Li calls him a ‘maniac Buddhist monk’, an image I rather enjoyed). Huai Su’s scroll, which hangs in the Shanghai Museum, says “Bitter bamboo shoots and tea? Excellent! Just rush them [over]. Presented by Huai Su.” (See image above: you’ve got to admit, it beats a text message.)
‘So what?’, you might be tempted to say. Isn’t this just a chance consequence of the fragmented nature of brush strokes? Apparently not. Li points out that Su seems to have drawn explicit inspiration from natural fractal objects. A conversation with the calligrapher Yan Zhenqing, recorded in 772 CE, goes as follows:
Zhenqing asked: ‘Do you have your own inspiration?’ Su answered: ‘I often marvel at the spectacular summer clouds and imitate it… I also find the cracks in a wall very natural.’ Zhenqing asked: ‘How about water stains of a leaking house?’ Su rose, grabbed Yan’s hands, and exclaimed: ‘I get it!’
‘This conversation’, says Li, ‘has virtually defined the aesthetic standard of Chinese calligraphy thereafter, and ‘house leaking stains’ and ‘wall cracks’ became a gold measure of the skill of a calligrapher and the quality of his work.’
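For anyone wondering what a ‘fractal analysis’ of brushwork actually involves: the usual approach is to binarize an image of the ink and estimate a box-counting dimension, checking whether the number of boxes needed to cover the strokes scales as a power law with box size. I don’t know precisely which estimator Li uses, so treat the following as a minimal sketch of the general idea rather than a reconstruction of his method (the function name and the assumption of a ready-made black-and-white image array are mine):

```python
import numpy as np

def box_counting_dimension(image):
    """Estimate the box-counting (fractal) dimension of a binary image.

    image: 2D boolean array, True wherever there is ink.
    Returns minus the slope of log N(s) versus log s, where N(s) is the
    number of s-by-s boxes containing at least one ink pixel.
    """
    # Pad to a square whose side is a power of two, for easy subdivision
    n = 1 << int(np.ceil(np.log2(max(image.shape))))
    padded = np.zeros((n, n), dtype=bool)
    padded[:image.shape[0], :image.shape[1]] = image

    sizes, counts = [], []
    size = n
    while size >= 1:
        # Tile the image into boxes of side `size` and count the occupied ones
        boxes = padded.reshape(n // size, size, n // size, size)
        occupied = boxes.any(axis=(1, 3)).sum()
        sizes.append(size)
        counts.append(occupied)
        size //= 2

    # Fit log N(s) = -D log s + const; D is the estimated dimension
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

If the log-log plot really is a straight line, its slope gives the dimension: about 1 for a smooth stroke, about 2 for a solid blob, and something non-integer in between for a pattern with structure on many scales, which is the claim being made for Huai Su’s brushwork.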
Monday, October 06, 2008
The drip, drip, drip of environmental change
[You know how I like to give you added value here, which is to say, the full-blown (who said over-blown?) versions of what I write for Nature before the editors judiciously wield their scalpels. In that spirit, here is my latest Muse column.]
Your starter for ten: which of the following can alter the Earth’s climate?
(1) rain in Tibet
(2) sunspots
(3) the Earth’s magnetic field
(4) iron filings
(5) cosmic rays
(6) insects
The answers? They depend on how big an argument you want to have. All have been proposed as agents of climate change. Some of them now look fairly well established as such; others remain controversial; some have been largely discounted.
The point is that it is awfully hard to say. In every case, the perturbations that the phenomena pose to the global environment look minuscule by themselves, but the problem is that when they act over the entire planet, or over geological timescales, or both, the effects can add up. Or they might not.
This issue goes to the heart of the debate over climate change. It’s not hard to imagine that a 10-km meteorite hitting the planet at a speed of several kilometres per second, as one seems to have done at the end of the Cretaceous period 65 million years ago, might have consequences of global significance. But tiny influences in the geo-, bio-, hydro- and atmospheres that trigger dramatic environmental shifts [see Box] – the dripping taps that eventually flood the building – are not only hard for the general public to grasp. They’re also tough for scientists to evaluate, or even to spot in the first place.
Even now one can find climate sceptics ridiculing the notion that a harmless, invisible gas at a concentration of a few hundred parts per million in the atmosphere can bring about potentially catastrophic changes in climate. It just seems to defy intuitive notions of cause and effect.
Two recent papers now propose new ‘trickle effects’ connected with climate change that are subtle, far from obvious, and hard to assess. Both bear on atmospheric levels of the greenhouse gas carbon dioxide: one suggests that these may shift with changes in the strength of the Earth’s magnetic field [1], the other that they may alter the ambient noisiness of the oceans [2].
Noise? What can a trace gas have to do with that? Peter Brewer and his colleagues at the Monterey Bay Aquarium Research Institute in Moss Landing, California, point out [2] that the transmission of low-frequency sound in seawater has been shown to depend on the water’s pH: at around 1 kHz (a little above a soprano’s range), greater acidity reduces sound absorption. And as atmospheric CO2 increases, more of it is absorbed in the oceans, and seawater becomes more acidic through the formation of carbonic acid.
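(The underlying chemistry is a back-of-envelope affair. The little calculation below is my own illustration, not anything from Brewer’s paper: it equilibrates pure water with CO2 and follows the pH through the first dissociation of carbonic acid alone. Real seawater is buffered by carbonate and borate and sits near pH 8.1, having fallen by roughly 0.1 pH units since pre-industrial times, but the direction of the effect is the same: more CO2 in the air, lower pH in the water.)

```python
import math

# Illustrative constants at roughly 25 C (freshwater values, not seawater):
K_H = 3.4e-2   # Henry's law constant for CO2, mol L^-1 atm^-1
K_1 = 4.45e-7  # first dissociation constant of carbonic acid

def ph_pure_water(pco2_atm):
    """pH of pure water in equilibrium with CO2 at partial pressure pco2_atm.

    Treats only H2CO3 <-> H+ + HCO3-, so [H+] ~ sqrt(K_1 * [CO2(aq)]);
    this ignores the carbonate/borate buffering of real seawater.
    """
    co2_aq = K_H * pco2_atm       # dissolved CO2, mol/L, from Henry's law
    h = math.sqrt(K_1 * co2_aq)   # hydrogen-ion concentration, mol/L
    return -math.log10(h)

# Pre-industrial vs present-day CO2; 500 ppm is just an assumed future level
for label, ppm in [("pre-industrial", 280), ("circa 2008", 385), ("assumed future", 500)]:
    print(f"{label:>15}: {ppm} ppm CO2 -> pH about {ph_pure_water(ppm * 1e-6):.2f}")
```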
This effect of acidity on sound seems bizarre at first encounter. But it seems unlikely to have anything to do with water per se. Rather, chemical equilibria involving dissolved borate, carbonate and bicarbonate ions are apparently involved: certain groups of these ions appear to have vibrations at acoustic frequencies, causing resonant absorption.
If this sounds vague, sadly that’s how it is. Such ‘explanations’ as exist so far seem almost scandalously sketchy. But the effect itself is well documented, including the pH-dependence that follows from the way acids or alkalis tip the balance of these acid-base processes. Brewer and colleagues use these earlier measurements to calculate how current and future changes in absorbed CO2 in the oceans will alter the sound absorption at different depths. They say that this has probably decreased by more than 12 percent already relative to pre-industrial levels, and that low-frequency sound might travel up to 70 percent further by 2050.
And indeed, low-frequency ambient noise has been found to be 9 dB higher off the Californian coast than it was in the 1960s, not all of which can be explained by increased human activity. How such changes might affect marine mammals that use long-distance acoustic communication is the question left hanging.
Uptake of atmospheric carbon dioxide by the oceans is also central to the proposal by Alexander Pazur and Michael Winklhofer of the University of Munich [1] that changes in the Earth’s magnetic field could affect climate. They claim that in a magnetic field 40 percent that of the current geomagnetic value, the solubility of carbon dioxide is 30 percent lower.
They use this to estimate that a mere 1 percent reduction in geomagnetic field strength could release ten times more CO2 than is currently emitted by all subsea volcanism. Admittedly, they say, this effect is tiny compared with present inputs from human activities; but it would change the concentration by 1 part per million per decade, and could add up to a non-negligible effect over long enough times.
This isn’t the first suggested link between climate and geomagnetism. It has been proposed, for example, that growing or shrinking ice sheets could alter the Earth’s rotation rate and thus trigger changes in the core circulation that drives the geodynamo. And the geomagnetic field also affects the influx of cosmic rays at the magnetic poles, whose collisions ionize molecules in the atmosphere, which can then seed the formation of airborne particles. These in turn might nucleate cloud droplets, changing the Earth’s albedo.
Indeed, once you start to think about it, possible links and interactions of this sort seem endless. How to know which are worth pursuing? The effect claimed by Pazur and Winklhofer does seem a trifle hard to credit, although mercifully they are not suggesting any mysterious magnetically induced changes of ‘water structure’ – a favourite fantasy of those who insist on the powers of magnets to heal bodies or soften water. Rather, they offer the plausible hypothesis that the influence acts via ions adsorbed on the surfaces of tiny bubbles of dissolved gas. But there are good arguments why such effects seem unlikely to be significant at such weak field strengths [3]. Moreover, the researchers measure the solubility changes indirectly, via the effect of tiny bubbles on light scattering – but bubble size and coalescence are themselves sensitive to dissolved salt in complicated ways [4]. In any event, the effect vanishes in pure water.
So the idea needs much more thorough study before one can say much about its validity. But the broader issue is that it is distressingly hard to anticipate these effects – merely to think of them in the first place, let alone to estimate their importance. Climate scientists have been saying pretty much that for decades: feedbacks in the biogeochemical cycles that influence climate are a devil to discern and probe, which is why the job of forecasting future change is so fraught with uncertainty.
And of course every well-motivated proposal of some subtle modifier of global change – such as cosmic rays – tends to be commandeered to spread doubt about whether global warming is caused by humans, or is happening at all, or whether scientists have the slightest notion of what is going on (and therefore whether we can trust their ‘consensus’).
Perhaps this is a good reason to embrace the metaphor of ‘planetary physiology’ proposed by James Lovelock. We are all used to the idea that minute quantities of chemical agents, or small but persistent outside influences, can produce all kinds of surprising, nonlinear and non-intuitive transformations in our bodies. One doesn’t have to buy into the arid debate about whether or not our planet is ‘alive’; maybe we need only reckon that it might as well be.
References
1. Pazur, A. & Winklhofer, M. Geophys. Res. Lett. 35, L16710 (2008).
2. Hester, K. C. et al. Geophys. Res. Lett. 35, L19601 (2008).
3. Kitazawa, K. et al. Physica B 294, 709-714 (2001).
4. Craig, V. S. J., Ninham, B. W. & Pashley, R. M. Nature 364, 317-319 (1993).
Box: Easy to miss?
Iron fertilization
Oceanographer John Martin suggested in the 1980s that atmospheric CO2 might depend on the amount of iron in the oceans (Martin, J. H. Paleoceanography 5, 1–13; 1990). Iron is an essential nutrient for phytoplankton, which absorb and fix carbon in their tissues as they grow, drawing carbon dioxide out of the atmosphere. Martin’s hypothesis was that plankton growth could be stimulated, reducing CO2 levels, by dumping iron into key parts of the world oceans.
But whether the idea will work as a way of mitigating global warming depends on a host of factors, such as whether plankton growth really is limited by the availability of iron and how quickly the fixed carbon gets recycled through the oceans and atmosphere. In the natural climate system, the iron fertilization hypothesis suggests some complex feedbacks: for example, much oceanic iron comes from windborne dust, which might be more mobilized in a drier world.
Cenozoic uplift of the Himalayas
About 40-50 million years ago, the Indian subcontinent began to collide with Asia, pushing up the crust to form the Himalayas and the Tibetan plateau. A period of global cooling began at about the same time. Coincidence? Perhaps not, according to geologists Maureen Raymo and William Ruddiman and their collaborators (Raymo, M. E. & Ruddiman, W. F. Nature 359, 117-122; 1992). The mountain range and high ground may have intensified monsoon rainfall, and the uplift exposed fresh rock to the downpour, where it underwent ‘chemical weathering’, a process in which silicate minerals are converted to carbonates. This consumes carbon dioxide from the atmosphere, cooling the climate.
A proof must negotiate many links in the chain of reasoning. Was weathering really more extensive then? And the monsoon more intense? How might the growth of mountain glaciers affect erosion and weathering? How do changes in dissolved minerals washed into the sea interact with CO2-dependent ocean acidity to affect the relevant biogeochemical cycles? The details are still debated.
Plant growth, carbon dioxide and the hydrological cycle
How changes in atmospheric CO2 levels will affect plant growth has been one of the most contentious issues in climate modelling. Will plants grow faster when they have more carbon dioxide available for photosynthesis, thus providing a negative feedback on climate? That’s still unclear. A separate issue has been explored by Ian Woodward at Cambridge University, who reported that plants have fewer stomata – pores that open and close to let in atmospheric CO2 – in their leaves when CO2 levels are greater (Woodward, F. I. Nature 327, 617-618; 1987). They simply don’t need so many portals when the gas is plentiful. The relationship is robust enough for stomatal density of fossil plants to be used as a proxy for ancient CO2 levels.
But stomata are also the leaks through which water vapour escapes from plants in a process called transpiration. This is a vital part of the hydrological cycle, the movement of water between the atmosphere, oceans and ground. So fewer stomata mean that plants take up and evaporate less water from the earth, making the local climate less moist and producing knock-on effects such as greater runoff and increased erosion.
Ozone depletion
They sounded so good, didn’t they? Chlorofluorocarbons are gases that seemed chemically inert and therefore unlikely to harm us or the environment when used as the coolants in refrigerators from the early twentieth century. So what if the occasional whiff of CFCs leaked into the atmosphere when fridges were dumped? – the quantities would be tiny.
But their very inertness meant that they could accumulate in the air. And when exposed to harsh ultraviolet rays in the upper atmosphere, the molecules could break apart into reactive chlorine free radicals, which react with and destroy the stratospheric ozone that protects the Earth’s surface from the worst of the Sun’s harmful UV rays. This danger wasn’t seen until 1974, when it was pointed out by chemists Mario Molina and Sherwood Rowland (Molina, M. J. & Rowland, F. S. Nature 249, 810-812; 1974).
Even then, when ozone depletion was first observed in the Antarctic atmosphere in the 1980s, it was put down to instrumental error. Not until 1985 did the observations become impossible to ignore: CFCs were destroying the ozone layer (Farman, J. C., Gardiner, B. G. & Shanklin, J. D. Nature 315, 207-209; 1985). The process was confined to the Antarctic (and later the Arctic) because it required the ice particles of polar stratospheric clouds to keep chlorine in an ‘active’, ozone-destroying form.
CFCs are also potent greenhouse gases; and changes in global climate might alter the distribution and formation of the polar atmospheric vortices and stratospheric clouds on which ozone depletion depends. So the feedbacks between ozone depletion and global warming are subtle and hard to untangle.
Friday, September 19, 2008
Opening the door to Hogwarts
[This is how I originally wrote my latest story for Nature’s online news. It is about another piece of creative thinking from this group at Shanghai Jiao Tong University. I was particularly struck by the milk-bottle effect that John Pendry told me about – I’d never thought about it before, but it’s actually quite a striking thing. (The same applies to water in a glass, but it’s more effective with milk.) John says that it is basically because, as one can show quite easily, no light ray can pass through the glass wall that does not also pass through some milk.
Incidentally, I have to suspect that John Pendry must be a candidate for some future Nobel for his work in this area, though probably not yet, as the committee would want to see metamaterials prove their worth. The same applies to Eli Yablonovitch and Sajeev John for their work on photonic crystals. Some really stimulating physics has come out of both of these ideas.
The photo, by the way, was Oliver Morton’s idea.]
Scientists show how to make a hidden portal
In a demonstration that the inventiveness of physicists is equal to anything fantasy writers can dream up, scientists in China have unveiled a blueprint for the hidden portal in King’s Cross railway station through which Harry Potter and his chums catch the train to Hogwarts.
Platform Nine and Three Quarters already exists at King’s Cross in London, but visitors attempting the Harry Potter manoeuvre of running at the wall and trusting to faith will be in for a rude shock.
Xudong Luo and colleagues at Shanghai Jiao Tong University have figured out what’s missing. In two preprints, they describe a method for concealing an entrance so that what looks like a blank wall actually contains invisible openings [1,2].
Physicist John Pendry of Imperial College in London, whose theoretical work laid the foundations of the trick, agrees that there is a whiff of wizardry about it all. “It’s just magic”, he says.
This is the latest stunt of metamaterials, which have already delivered invisibility cloaks [3] and other weird manipulations of light. Metamaterials are structures pieced together from ‘artificial atoms’, tiny electrical devices that allow the structure to interact with light in ways that are impossible for ordinary substances.
Some metamaterials have a negative refractive index, meaning that they bend light the ‘wrong’ way. This means that an object within the metamaterial can appear to float above it. A metamaterial invisibility shield, meanwhile, bends light smoothly around an object at its centre, like water flowing around a rock in a river. The Shanghai group recently showed how the object can be revealed again with an anti-invisibility cloak [4].
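(To make ‘the wrong way’ concrete: refraction at an interface obeys Snell’s law, and feeding a negative refractive index into it flips the sign of the refraction angle, so the transmitted ray emerges on the same side of the surface normal as the incoming ray. In notation of my own choosing:)

```latex
% Snell's law at the boundary between medium 1 and medium 2:
%   n_1 \sin\theta_1 = n_2 \sin\theta_2
% For an ordinary medium (n_2 > 0) the refracted ray lies on the opposite
% side of the normal from the incident ray; if n_2 < 0, the refraction
% angle comes out negative, so the ray is bent back to the same side.
\[
  n_1 \sin\theta_1 = n_2 \sin\theta_2
  \quad\Longrightarrow\quad
  \theta_2 = \arcsin\!\left(\frac{n_1}{n_2}\,\sin\theta_1\right) < 0
  \quad\text{when } n_2 < 0 .
\]
```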
Now they have worked out in theory how to hide a doorway. The trick is to create an object that, because of its unusual interactions with light, looks bigger than it really is. A pillar made of such stuff, placed in the middle of an opening in a wall, could appear to fill the gap completely, whereas in fact there are open spaces to each side.
Pendry and his coworker S. Anantha Ramakrishna demonstrated the basic principle in 2003, when they showed that a cylinder of metamaterial could act as a magnifying lens for an object inside it [5].
“When you look at a milk bottle, you don’t see the glass”, Pendry explains. Because of the way in which the milk scatters light, “the milk seems to go right to the edge of the bottle.” He and Ramakrishna showed that with a negative-refractive index metamaterial, an object in the bottle could be magnified on the surface.
And now Luo and colleagues have shown that an even more remarkable effect is possible: the milk can appear to be outside the bottle. “It’s like a three-dimensional projector”, says Pendry. “I call it a super-milk bottle.”
The Chinese team opt for the rather more prosaic term “superscatterer”. They show that such an object could be made from a metal core surrounded by a metamaterial with a negative refractive index [1].
The researchers have calculated how light interacts with a rectangular superscatterer placed in the middle of a wide opening in a wall, and find that, for the right choice of sizes and metamaterial properties, the light bounces back just as it would if there were no opening [2].
If someone passes through the concealed opening, they find, it becomes momentarily visible before disappearing again once they are on the other side.
So “platform nine and three-quarters is realizable”, the Shanghai team says. “This is terrific fun”, says Pendry. He feels that the effect is even more remarkable than the invisibility cloak, because it seems so counter-intuitive that an object can project itself into empty space.
But the calculations so far only show concealment for microwave radiation, not visible light. Pendry says that the problem in using visible-light metamaterials – which were reported last month [6,7] – is that currently they tend to absorb some light rather than scattering it all into the magnified image, making it hard to project the image a significant distance beyond the object’s surface. So openings hidden from the naked eye aren’t likely “until we get on top of these materials”, he says.
References
1. Yang, T. et al. http://arxiv.org/abs/0807.5038 (2008).
2. Luo, X. et al. http://arxiv.org/abs/0809.1823 (2008).
3. Schurig, D. et al. Science 314, 977-980 (2006).
4. Chen, H., Luo, X., Ma, H. & Chan, C. T. http://arxiv.org/abs/0807.4973 (2008).
5. Pendry, J. B. & Ramakrishna, S. A. J. Phys.: Condens. Matter 15, 6345-6364 (2003).
6. Valentine, J. et al. Nature doi:10.1038/nature07247 (2008).
7. Yao, J. et al. Science 321, 930 (2008).
Wednesday, September 17, 2008
Don't mention the 'C' word
I’m beginning to wonder whether I should be expecting the science police to come knocking on my door. After all, my latest book contains images of churches, saints, Jesus and the Virgin Mary. It discusses theology. And, goodness me, I have even taken part in a workshop organized by the Templeton Foundation. I am not sure that being an atheist will be a mitigating factor in my defence.
These dark thoughts are motivated by the fate of Michael Reiss, who has been forced to resign from his position as director of education at the Royal Society over his remarks about creationism in the classroom.
Now, Reiss isn’t blameless in all of this. Critics of his comments are right to say that the Royal Society needs to make it quite clear that creationism is not an alternative to science as a way of looking at the universe and at evolution, but is plain wrong. Reiss didn’t appear to do this explicitly in his controversial talk at the British Association meeting. And his remark that “the concerns of students who do not accept the theory of evolution” should be taken “seriously and respectfully” sounds perilously close to saying that those concerns should be given serious consideration, and that one should respect the creationist point of view even while disagreeing with it. The fact is that we should feel obliged to respect points of view that are respectable, such as religious belief per se. Creationism is not respectable, scientifically, intellectually or indeed theologically (will they tell the kids that in Religious Education?). And if you are going to title your talk “Should creationism be a part of the science curriculum?”, it is reasonable that questions should be asked if you aren’t clearly seen at some point to say “No.”
So, a substantial case for the prosecution, it might seem. But for a start, one might reasonably expect that scientists, who pride themselves on accurate observation, will read your words and not just blunder in with preconceptions. It is hard to see a case, in Reiss’s address, for suggesting that his views differ from those that the Royal Society has restated in conjunction with Reiss’s resignation: “creationism has no scientific basis and should not be part of the science curriculum. However, if a young person raises creationism in a science class, teachers should be in a position to explain why evolution is a sound scientific theory and why creationism is not, in any way, scientific.”
This, to my mind, was the thrust of Reiss’s argument. He quoted from the Department for Children, Schools and Families Guidance on Creationism, published in 2007: “Any questions about creationism and intelligent design which arise in science lessons, for example as a result of media coverage, could provide the opportunity to explain or explore why they are not considered to be scientific theories and, in the right context, why evolution is considered to be a scientific theory.” The point here is that teachers should not be afraid to tackle the issue. They need not (indeed, I feel, should not) bring it up themselves, but if pupils do, they should not shy away by saying something like “We don’t discuss that in a science class.” And there is a good chance that such things will come up. I have heard stories of the genuine perplexity of schoolchildren who have received a creationist viewpoint from parents, whose views they respect, and a conflicting viewpoint from teachers who they also believe are intent on telling them the truth. Such pupils need and deserve guidance, not offhand dismissal. You can be respectful to individuals without having to ‘respect’ the views they hold, and this seems closest to what Reiss was saying.
And there’s nothing that disconcerts teachers more than their being told they must not discuss something. Indeed, that undermines their capacity to teach, just as the current proscription on physical contact with children undermines teachers’ ability to care for them in loco parentis. A fearful teacher is not a good one.
What perhaps irked some scientists more than anything else was Reiss’s remark that “Creationism can profitably be seen not as a simple misconception that careful science teaching can correct. Rather, a student who believes in creationism can be seen as inhabiting a non-scientific worldview, a very different way of seeing the world.” This is simplistic and incomplete as it stands (Gerald Holton has written about the way that a scientific viewpoint in some areas can coexist happily with irrationalism in others), but the basic point is valid. Despite (or perhaps because of) the recent decline in the popularity of the ‘deficit model’ of understanding science, some scientists still doggedly persist in the notion that everyone would be converted to a scientific way of thinking if we can just succeed in drumming enough facts into their heads. Reiss is pointing to the problem that the matter runs much deeper. Science education is essential, and the lack of it helps to pave the way for the kind of spread of ignorance that we can see in some parts of the developed world. But to imagine that education alone will undermine an entire culture and environment that inculcates some anti-scientific ideas is foolish and dangerous. I suspect that some scientists were angered by Reiss’s comments here because they imply that these scientists’ views of how to ‘convert’ people to a scientific worldview are naïve.
Most troubling of all, however, are the comments from some quarters which make it clear that the real source of outrage stems from the fact that Reiss is an ordained Church of England minister. The implication seems to be that, as a religious believer, he is probably sympathetic to creationism, as if one necessarily follows from the other. That creationism is an unorthodox, indeed a cranky form of Christianity (or of other kinds of fundamentalism – Islam and Judaism have their creationists too) seems here to be ignored or denied. It’s well known that Richard Dawkins sees fundamentalism as the centre of gravity of all religions, and that moderate, orthodox views are just the thin end of the wedge. But his remark that “a clergyman in charge of education for the country’s leading scientific organization” is like “a Monty Python sketch” itself has a whiff of fundamentalist intolerance. If we allow that it’s not obvious why a clergyman should have a significantly more profound belief than any other religious believer, this seems to imply that Dawkins would regard no Christian, Muslim, Hindu, Jew or so forth as fit for this job. Perhaps they should be excluded from the Royal Society altogether? Are we now to assume that no professed believer of any faith can be trusted to believe in and argue for a scientific view of the world? I do understand why some might regard these things as fundamentally incompatible, but I would slightly worry about the robustness of a mind that could not live with a little conflict and contradiction in its beliefs.
This situation has parallels to the way the Royal Society has been criticized for its involvement with the Templeton Foundation. I carry no torch for the Templeton, and indeed was on the wary lookout at the Varenna conference above for a hidden agenda. But I found none. It seems to me that the notion of exploring links between science and religion is harmless enough in itself, and it certainly has plenty of historical relevance, if nothing else. No doubt some flaky stuff comes of it, but the Templeton events that I have come across have been of high scientific quality. (I’m rather more concerned about suggestions that the Templeton has right-wing leanings, although that doesn’t seem obvious from their web site – and US rightwingers are usually quite happy to trumpet the fact.) But it seems sad that the RS’s connections with the Templeton have been lambasted not because anyone seems to have detected a dodgy agenda (I understand that the Templeton folks are explicitly unsympathetic to intelligent design, for example) but because it is a religiously based organization. Again, I thought that scientists were supposed to base their conclusions on actual evidence, not assumptions.
In regard to Reiss, I’m not going to start ranting about witch hunts (not least because that is the hallmark of the green-ink brigade). He was rather incautious, and needed to see how easily his words might be misinterpreted. But they have indeed been misinterpreted, and I don’t see that the Royal Society has done itself much of a service by ousting him, particularly as this seems to have been brought about by a knee-jerk response from scientists who are showing signs of ‘Reds (or in this case, Revs) under the bed’ paranoia.
The whole affair reminds me of the case of the Archbishop of Canterbury talking about sharia law, where the problem was not that he said anything so terrible but that he failed to be especially cautious and explicit when using trigger words that send people foaming at the mouth. But I thought scientists considered themselves more objective than that.
Thursday, September 04, 2008
Intelligence and design
Little did I realise when I became a target of criticism from Steve Fuller of Warwick University that I would be able to wear this as a badge of honour. I just thought it rather odd that someone in a department of sociology seemed so indifferent to the foundational principles of his field, preferring to regard it as a branch of psychology rather than an attempt to understand human group behaviour. I take some solace in the fact that his resistance to physics-based ideas seems to have been anticipated by George Lundberg, one of the pioneers of the field, who, in Foundations of Sociology (1939), admits with dismay that ‘The idea that the same general laws may be applicable to both ‘physical’ and societal behavior may seem fantastic and inconceivable to many people.’ I was tempted to suggest that Fuller hadn’t read Lundberg, or Robert Park, Georg Simmel, Herbert Simon and so on, but this felt like the cheap form of rhetoric that prompts authors to say of critics whose opinions they don’t like that ‘they obviously haven’t read my book’. (On the other hand, Fuller’s first assault, on Radio 4’s Today programme, came when he really hadn’t read my book, because it hadn’t been published at that point.)
Anyway, judging from the level of scholarship A. C. Grayling finds (or rather, fails to find) in Fuller’s new book Dissent over Descent, a defence of the notion of intelligent design, maybe my hesitation was generous. But of course one shouldn’t generalize. Grayling has dissected the book in the New Humanist, and we should be grateful to him for sparing us the effort, although he clearly found the task wearisome. But wait a minute – a social scientist writing about evolution? Isn’t that a little like a chemist (sic) writing about social science?
Friday, August 29, 2008
Why less is more in government
[This is the pre-edited version of my latest Muse for Nature’s online news.]
In committees and organizations, work expands to fill the time available while growth brings inefficiency. It’s worth trying to figure out why.
Arguments about the admission of new member states to the European Union have become highly charged since Russia sent tanks into Georgia, which harbours EU aspirations. But there may be another reason to view these wannabe nations cautiously, according to two recent preprints [1,2]. They suggest that decision-making bodies may not be able to exceed about 20 members without detriment to their efficiency.
The EU already has 27 member states, and its executive branch, the European Commission, has a commissioner from each – well in excess of the putative inefficiency threshold. And negotiations in Brussels have become notorious for their bureaucratic wrangling and inertia. The Treaty of Lisbon, which proposes various reforms in an attempt to streamline the EU’s workings, implicitly recognizes the overcrowding problem by proposing a reduction in the number of Commissioners to 18. But as if to prove the point, Ireland rejected the treaty in a referendum in June.
It’s not hard to pinpoint the problem with large committees. The bigger the group, the more factious it is liable to be, and it gets ever harder to reach a consensus. This has doubtless been recognized since time immemorial, but it was first stated explicitly in the 1950s by the British historian C. Northcote Parkinson. He pointed out how the executive governing bodies in Britain since the Middle Ages, called cabinets since the early seventeenth century, tended always to expand in inverse proportion to their ability to get anything done.
Parkinson showed that British councils and cabinets since 1257 seemed to go through a natural ‘life cycle’: they grew until they exceeded a membership of about 20, at which point they were replaced by a new body that eventually suffered the same fate. Parkinson proposed that this threshold be called the ‘coefficient of inefficiency’.
Stefan Thurner and colleagues at the Medical University of Vienna have attempted to put Parkinson’s anecdotal observations on a solid theoretical footing [1,2]. Cabinets are now a feature of governments worldwide, and Thurner and colleagues find that most of those from 197 countries have between 13 and 20 members. What’s more, the bigger the cabinet, the less well it seems to govern the country, as measured for example by the Human Development Index used by the United Nations Development Programme, which takes into account such factors as life expectancy, literacy and gross domestic product.
Thurner and colleagues have tried to understand where this critical mass of 20 comes from by using a mathematical model of decision-making in small groups [1]. They assume that each member may influence the decisions of a certain number of others, so that they form a complex social network. Each adopts the majority opinion of those to whom they are connected provided that this majority exceeds a certain threshold.
For a range of model parameters, a consensus is always possible for fewer than 10 members – with the exception of 8. Above this size, consensus becomes progressively harder to achieve. And the number of ways a ‘dissensus’ may arise expands significantly beyond about 19-21 members, in line with Parkinson’s observations.
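If you fancy playing with the idea yourself, here is a minimal sketch in Python of the kind of threshold dynamics described above. It is my own toy version for illustration, not the authors’ model or code, and the contact number, threshold and group sizes are arbitrary choices:

```python
import random

def reaches_consensus(n, k=4, threshold=0.6, steps=50, seed=None):
    """Toy threshold model of committee opinion dynamics: each member listens
    to k randomly chosen colleagues and adopts their majority view only when
    that majority exceeds the threshold. Parameters are illustrative."""
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n)]
    contacts = [rng.sample([j for j in range(n) if j != i], min(k, n - 1))
                for i in range(n)]
    for _ in range(steps):
        for i in range(n):
            votes = [opinions[j] for j in contacts[i]]
            frac = sum(votes) / len(votes)
            if frac > threshold:
                opinions[i] = 1
            elif frac < 1 - threshold:
                opinions[i] = 0
    return len(set(opinions)) == 1

for size in (5, 10, 20, 30):
    runs = 200
    rate = sum(reaches_consensus(size, seed=s) for s in range(runs)) / runs
    print(f"{size:2d} members: consensus in {rate:.0%} of runs")
```

Even a caricature like this tends to show the qualitative point: small groups lock into agreement far more readily than large ones.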
Why are eight-member cabinets anomalous? This looks like a mere numerical quirk of the model chosen, but it’s curious that no eight-member cabinets appeared in the authors’ global survey. Historically, only one such cabinet seems to have been identified: the Committee of State of the British king Charles I, whose Parliament rebelled and eventually executed him.
Now the Austrian researchers have extended their analysis of Parkinson’s ideas to the one for which he is best known: Parkinson’s Law, which states that work expands to fill the time available [2]. This provided the title of the 1957 book in which Parkinson’s essays on governance and efficiency were collected.
Parkinson regarded his Law as a corollary of the inevitable expansion of bureaucracies. Drawing on his experience as a British civil servant, he pointed out that officials aim to expand their own mini-empires by gathering a cohort of subordinates. But these simply make work for each other, dwelling on minutiae that a person lacking such underlings would have sensibly prioritized and abbreviated. Dare I point out that Nature’s editorial staff numbered about 13 when I joined 20 years ago, and now numbers something like 33 – yet the editors are no less overworked now than we were then, even though the journal is basically the same size.
Parkinson’s explanation for this effect focused on the issue of promotion, which is in effect what happens to someone who acquires subordinates. His solution to the curse of Parkinson’s Law and the formation of over-sized, inefficient organizations is to engineer a suitable retirement strategy such that promotion remains feasible for all.
With promotion, he suggested, individuals progress from responsibility to distinction, dignity and wisdom (although finally succumbing to obstruction). Without it, the progression is instead from frustration to jealousy to resignation and oblivion, with a steady decrease in efficiency. This has become known as the ‘Prince Charles Syndrome’, after the long-waiting British heir to the throne who seems increasingly desperate to find a meaningful role in public life.
Thurner and colleagues have couched these ideas in mathematical terms by modelling organizations as a throughflow of staff, and they find that as long as promotion prospects can be sufficiently maintained, exponential growth can be avoided. This means adjusting the retirement age accordingly. With the right choice (which Parkinson called the ‘pension point’), the efficiency of all members can be maximized.
Of course, precise numbers in this sort of modelling should be taken with a pinch of salt. And even when they seem to generate the right qualitative trends, it doesn’t necessarily follow that they do so for the right reasons. Yet correlations like those spotted by Parkinson, and now fleshed out by Thurner and colleagues, do seem to be telling us that there are natural laws of social organization that we ignore at our peril. The secretary-general of NATO has just made positive noises about Georgia’s wish for membership. This may or may not be politically expedient; but with NATO membership currently at a bloated 26, he had better at least recognize what the consequences might be for the organization’s ability to function.
References
1. Klimek, P. et al. Preprint http://arxiv.org/abs/0804.2202
2. Klimek, P. et al. Preprint http://arxiv.org/abs/0808.1684
Friday, August 08, 2008
Crime and punishment in the lab
[This is the uncut version of my latest Muse article for Nature’s online news.]
Before we ask whether scientific conduct is dealt with harshly enough, we need to be clear about what punishment is meant to achieve.
Is science too soft on its miscreants? That could be read as the implication of a study published in Science, which shows that 43 percent of a small sample of scientists found guilty of misconduct remained employed subsequently in academia, and half of them continued to turn out a paper a year [1].
Scientists have been doing a lot of hand-wringing recently about misconduct in their ranks. A commentary in Nature [2] proposed that many such incidents go unreported, and suggested ways to improve that woeful state of affairs, such as adopting a ‘zero-tolerance culture’. This prompted several respondents to maintain that matters are even worse, for example because junior researchers see senior colleagues benefiting from ‘calculated, cautious dishonesty’ or because some countries lack regulatory bodies to police ethical breaches [3-5].
All this dismay is justified to the extent that misconduct potentially tarnishes the whole community, damaging the credibility of science in the eyes of the public. Whether the integrity of the scientific literature suffers seriously is less clear – the more important the false claim, the more likely it is to be uncovered quickly as others scrutinize the results or fail to reproduce them. This has been the case, for example, with the high-profile scandals and controversies over the work of Jan Hendrik Schön in nanotechnology, Hwang Woo-suk in cloning and Rusi Taleyarkhan in bench-top nuclear fusion.
But the discussion needs to move beyond these expressions of stern disapproval. For one thing, it isn’t clear what ‘zero tolerance’ should mean when misconduct is such a grey area. Everyone can agree that fabrication of data is beyond the pale; but as a study three years ago revealed [6], huge numbers of scientists routinely engage in practices that are questionable without being blatantly improper: using another’s ideas without credit, say, or overlooking others’ use of flawed data. Papers that inflate their apparent novelty by failing to acknowledge the extent of previous research are tiresomely common.
And it is remarkable how many austere calls for penalizing scientific misconduct omit any indication of what such penalties are meant to achieve. Such a situation is inconceivable in conventional criminology. Although there is no consensus on the objectives of a penal system – the relative weights that should be accorded to punishment, public protection, deterrence and rehabilitation – these are at least universally recognized as the components of the debate. In comparison, discussions of scientific misconduct seem all too often to stop at the primitive notion that it is a bad thing.
For example, the US Office of Research Integrity (ORI) provides ample explanation of its commendable procedures for handling allegations of misconduct, while the Office of Science and Technology Policy outlines the responsibilities of federal agencies and research institutions to conduct their own investigations. But where is the discussion of desired outcomes, beyond establishing the facts in a fair, efficient and transparent way?
This is why Redman and Merz’s study is useful. As they say, ‘little is known about the consequences of being found guilty of misconduct’. The common presumption, they say, is that such a verdict effectively spells the end of the perpetrator’s career.
Their conclusions, based on studies of 43 individuals deemed guilty by the ORI between 1994 and 2001, reveal a quite different picture. Of the 28 scientists Redman and Merz could trace, 10 were still working in academic positions. Those who agreed to be interviewed – just 7 of the 28 – were publishing an average 1.3 papers a year, while 19 of the 37 for which publication data were available published at least a paper a year.
Is this good or bad? Redman and Merz feel that the opportunity for redemption is important, not just from a liberal but also a pragmatic perspective. ‘The fact that some of these people retain useful scientific careers is sensible, given that they are trained as scientists’, says Merz. ‘They just slipped up in some fundamental way, and many can rebuild a scientific career or at least use the skills they developed as scientists.’ Besides, he adds, everyone they spoke to ‘paid a substantial price’. All reported financial and personal hardships, and some became physically ill.
But on another level, says Merz, these data ‘could be seen as undermining the deterrent effect of the perception that punishment is banishment, from academia, at least.’ Does the punishment fit the crime?
The scientific community has so far lacked much enthusiasm for confronting these questions – perhaps because misconduct, while a trait found in all fields of human activity, is felt to be uniquely embarrassing to an enterprise that considers itself in pursuit of objective truths. But the time has surely come to face the issue, ideally with more data to hand. In formulating civic penal policy, for example, one would like to know how the severity of sentencing affects crime rates (which might indicate the effectiveness of deterrence), and how different prison regimes (punitive versus educative, say) influence recidivism. And one needs to have a view on whether sanctions such as imprisonment are primarily for the sake of public protection or to mete out punishment.
The same sorts of considerations apply with scientific misconduct, because the result otherwise has a dangerously ad hoc flavour. Just a week ago, the South Korean national committee on bioethics rejected an application by Hwang Woo-suk to resume research on stem cells. Why? Because ‘he engaged in unethical and wrongful acts in the past’, according to one source. But that’s not a reason, it is simply a statement of fact. Does the committee fear that Hwang would do it again (despite the intense scrutiny that would be given to his every move)? Do they think he hasn’t been sufficiently punished yet? Or perhaps that approval would have raised doubts about the rigour of the country’s bioethics procedures? Each of these reasons might be defensible – but there’s no telling which, if any, applies.
One reason why it matters is that by all accounts Hwang is an extremely capable scientist. If he and others like him are to be excluded from making further contributions to their fields because of past transgressions, we need to be clear about why that is being done. We need a rational debate on the motivations and objectives of a scientific penal code.
References
1. Redman, B. K. & Merz, J. F., Science 321, 775 (2008).
2. Titus, S. L. et al., Nature 453, 980-982 (2008).
3. Bosch, X. Nature 454, 574 (2008).
4. Feder, N. & Stewart, W. W. Nature 454, 574 (2008).
5. Nussenzveig, P. A. & Funchal, Z. Nature 454, 574 (2008).
6. Martinson, B. C. et al., Nature 435, 737-738 (2005).
Tuesday, August 05, 2008
Who is Karl Neder?
‘These people tend to define themselves by what they don’t like, which is usually much the same: relativity, the Big Bang. Einstein. Especially Einstein, poor fellow.’
In my novel The Sun and Moon Corrupted, where these words appear, I sought to convey the fact that the group of individuals whom scientists would call cranks, and who submit their ideas with tenacious persistence to journals such as Nature, have remarkably similar characteristics and obsessions. They tend to express themselves in much the same manner, exemplified in my book by the letters of the fictional Hungarian physicist Karl Neder. And their egocentricity knows no bounds.
I realised that, if I was right in this characterization, it would not be long at all before some of these people became convinced that Karl Neder is based on them. (The fact is that he is indeed loosely based on a real person, but there are reasons why I can be very confident that this person will never identify the fact.)
And so it comes to pass. The first person to cry ‘It’s me!’ seems to be one Pentcho Valev. I do not know who Valev is, but it seems I once (more than once?) had the task of rejecting a paper he submitted to Nature. I remember more than you might imagine about the decisions I made while an editor at Nature, and by no means always because the memory is pleasant. But I fear that Valev rings no bells at all. Nonetheless, says Valev, there are “Too many coincidences: Bulgaria + thermodynamics + Einstein + desperately trying to publish (in Nature) + Phillip [sic] Ball is Nature’s editor at that time and mercilessly rejects all my papers. Yes most probably I am at least part of this Karl Neder. Bravo Phillip Ball! Some may say it is unethical for you to make money by describing the plight of your victims but don't believe them: there is nothing unethical in Einstein zombie world.” (If it is any consolation, Mr Valev, the notion that this book has brought me "fortune" provokes hollow laughter.)
Ah, but this is all so unnervingly close to the terms in which Karl Neder expresses himself (which mimic those of his real-life model). In fact, Valev seems first to have identified ‘his’ voice from a quote from the book in a review in the Telegraph:
‘Actually, what [Neder] says is: "PERPETUUM MOBILE IS CONSTRUCTED BY ME!!!!!!!!!"; his voluminous correspondence being littered with blood-curdling Igorisms of this sort.’
Even I would not have dreamt up the scenario in which Mr Valev is apparently saying to himself “Blood-curdling Igorisms? But that’s exactly like me, damn it!” (Or rather, “LIKE ME!!!!!!!!!”)
Valev continues: “If Philip Ball as Nature’s editor had not fought so successfully against crazy Eastern Europe anti-relativists, those cranks could have turned gold into silver and so the very foundation of Western culture would have been destroyed” – and he quotes from a piece I wrote in which I mentioned how relativistic effects in the electron orbitals of gold atoms are responsible for its reddish tint. This is where I start to wonder if it is all some delicious hoax by the wicked Henry Gee or one of the people who read my book for the Royal Institution book club, and therefore knows that indeed it plunges headlong into alchemy and metallic transmutation in its final chapters. What are you trying to do, turn me paranoid?
Saturday, August 02, 2008
Might religion be good for your health?
[Here is the uncut version of my latest Muse for Nature news online.]
Religion is not a disease, a new study claims, but a protection against it.
Science and religion, anyone? Oh come now, don’t tell me you’re bored with the subject already. Before you answer that, let me explain that a paper in the Proceedings of the Royal Society B [1] has a new perspective on offer.
Well, perhaps not new. In fact it is far older than the authors, Corey Fincher and Randy Thornhill of the University of New Mexico, acknowledge. Their treatment of religion as a social phenomenon harks back to classic works by two of sociology’s founding fathers, Emile Durkheim and Max Weber, who, around the start of the twentieth century, offered explanations of how religions around the world have shaped and been shaped by the societies in which they are embedded.
That this approach has fallen out of fashion tells us more about our times than about its validity. The increasing focus on individualism in the Western world since Durkheim wrote that “God is society, writ large” is reflected in the current enthusiasm for what has been dubbed neurotheology: attempts to locate religious experience in brain activity and genetic predispositions for certain mental states. Such studies might ultimately tell us why some folks go to church and others don’t, but they can say rather little about how a predisposition towards religiosity crystallizes into a relatively small number of institutionalized religions – why, say, the ‘religiously inclined’ don’t simply each have a personal religion.
Similarly, the militant atheists who gnash their teeth at the sheer irrationality and arbitrariness of religious belief will be doomed forever to do so unless they accept Durkheim’s point that, rather than being some pernicious mental virus propagating through cultures, religion has social capital and thus possible adaptive value [2]. Durkheim argued that it once was, and still is in many cultures, the cement of society that maintains order. This cohesive function is as evident today in much of American society as it is in Tehran or Warsaw.
But of course there is a flipside to that. Within Durkheim’s definition of a religion as ‘a unified set of beliefs and practices which unite in one single moral community all those who adhere to them’ is a potential antagonism towards those outside that community – a potential that has become, largely unanticipated, the real spectre haunting the modern world.
It is in a sense the source of this tension that forms the central question of Fincher and Thornhill’s paper. Whereas Weber looked at the different social structures that different religions tended to promote, and Durkheim focused on ‘secular utility’ such as the benefits of social cohesion, Fincher and Thornhill propose a specific reason why religions create a propensity to exclude outsiders. In their view, the development of a religion is a strategy for avoiding disease.
The more a society disperses and mixes with other groups, the more it risks contracting new diseases. ‘There is ample evidence’, the authors say, ‘that the psychology of xenophobia and ethnocentrism is importantly related to avoidance and management of infectious disease.’
Fincher and Thornhill have previously shown that global patterns of social collectivism [3] and of language diversity [4] correlate with the diversity of infectious disease in a manner consistent with avoidance strategies: strangers can be bad for your health. Now they have found that religious diversity is also greater in parts of the world where the risk of catching something nasty from those outside your group (who are likely to have different immunity patterns) is higher.
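To make concrete what such a correlation analysis involves, here is a minimal sketch with invented country-level counts (emphatically not Fincher and Thornhill’s data), just to show the mechanics of a rank-correlation test of this kind:

```python
# Hypothetical country-level counts, invented purely to illustrate the method;
# these are not the figures reported in ref. [1].
from scipy.stats import spearmanr

pathogen_richness = [12, 30, 7, 22, 18, 9]   # e.g. number of endemic infectious diseases
religion_richness = [3, 11, 2, 8, 6, 4]      # e.g. number of distinct religions

rho, p_value = spearmanr(pathogen_richness, religion_richness)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```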
It’s an intriguing observation. But as with all correlation studies, cause and effect are hard to untangle. Fincher and Thornhill offer the notion that new religions are actively generated as societal markers that inhibit inter-group interactions. One could equally argue, however, that a tendency to avoid contacts with other social groups prevents the spread of some cultural traits at the expense of others, and so merely preserves an intrinsic diversity.
This, indeed, is the basis of some theoretical models for how cultural exchange and transmission occurs [5]. Where opportunities for interaction are fewer, there is more likelihood that several ‘island cultures’ will coexist rather than being consumed by a dominant one.
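For the curious, Axelrod’s model [5] is simple enough to sketch in a few lines. This is only a schematic rendering of the published model, with arbitrary choices of mine for the grid size and the number of cultural features and traits:

```python
import random

def axelrod_step(grid, features, rng=random):
    """One interaction in an Axelrod-style culture model: a random site interacts
    with a random neighbour with probability equal to their cultural overlap,
    copying one trait on which they still differ."""
    n = len(grid)
    i, j = rng.randrange(n), rng.randrange(n)
    di, dj = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    ni, nj = (i + di) % n, (j + dj) % n      # periodic boundaries
    a, b = grid[i][j], grid[ni][nj]
    shared = sum(a[f] == b[f] for f in range(features))
    if 0 < shared < features and rng.random() < shared / features:
        f = rng.choice([f for f in range(features) if a[f] != b[f]])
        a[f] = b[f]

n, features, traits = 10, 3, 5               # arbitrary illustrative choices
random.seed(0)
grid = [[[random.randrange(traits) for _ in range(features)]
         for _ in range(n)] for _ in range(n)]
for _ in range(200000):
    axelrod_step(grid, features)
print(len({tuple(cell) for row in grid for cell in row}), "distinct cultures remain")
```

Reduce how often neighbours can interact and more ‘island cultures’ survive; let interaction run freely and one culture tends to swallow the rest.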
And the theory of Fincher and Thornhill tells us nothing about religion per se, beyond its simple function as a way of discriminating those ‘like you’ from those who aren’t. It might as well be any other societal trait, such as style of pottery or family names. In fact, compared with such indicators, religion is a fantastically baroque and socially costly means of separating friend from foe. As recent ethnic conflicts in African nations have shown, humans are remarkably and fatefully adept at identifying the smallest signs of difference.
What we have here, then, is very far from a theory of how and why religions arise and spread. The main value of the work may instead reside in the suggestion that there are ‘hidden’ biological influences on the dynamics of cultural diversification. It is also, however, a timely reminder that religion is not so much a personal belief (deluded or virtuous, according to taste) as, in Durkheim’s words, a ‘social fact’.
References
1. Fincher, C. L. & Thornhill, R. Proc. R. Soc. B doi:10.1098/rspb.2008.0688.
2. Wilson, D. S. Darwin’s Cathedral: Evolution, Religion, and the Nature of Society (University of Chicago Press, 2002).
3. Fincher, C. L. et al., Proc. R. Soc. B 275, 1279-1285 (2008).
4. Fincher, C. L. & Thornhill, R. Oikos doi:10.1111/j.0030-1299.2008.16684.x.
5. Axelrod, R. J. Conflict Resolution 41, 203-226 (1997).
Thursday, July 17, 2008
Who says the Internet broadens your horizons?
[Here’s the long version of my latest, understandably shortened Muse for Nature News.]
A new finding that electronic journals create a narrowing of scientific scholarship illustrates the mixed blessings of online access.
It’s a rare scientist these days who does not know his or her citation index, most commonly in the form of the h-index introduced in 2005 by physicist Jorge Hirsch [1]. Proposed as a measure of the cumulative impact of one’s published works, this and related indices are being used informally to rank scientists, whether this be for drawing up lists of the most stellar performers or for assessing young researchers applying for tenure. Increasingly, careers are being weighed up through citation records.
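To make the definition concrete: your h-index is the largest number h such that h of your papers have at least h citations each. A minimal Python rendering, with invented citation counts:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # invented citation counts -> 4
```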
All this makes more pressing the question of how papers get cited in the first place: does this provide an honest measure of their worth? A study published in Science by sociologist James Evans at the University of Chicago adds a new ingredient to this volatile ferment [2]. He has shown that the increasing availability of papers and journals online, including what may be decades of back issues, is paradoxically leading to a narrowing of the number and range of papers cited. Evans suggests that this is the result of the way browsing of print journals is being replaced by focused online searches, which tend both to identify more recent papers and to quickly converge on a smaller subset of them.
The argument is that when a journal goes online, fewer people flick through the print version and so there is less chance that readers will just happen across a paper related to their work. Rather, an automated search, or following hyperlinks from other online articles, will take them directly to the most immediately relevant articles.
Evans has compiled citation data for 34 million articles from a wide range of scientific disciplines, some dating back as far as 1945. He has studied how citation patterns changed as many of the journals became available online. On average, a hypothetical journal would, by making five years of its issues available free or commercially online, suffer a drop in the number of its own articles cited from 600 to 200.
That sounds like a bad business model, but in fact there are some important qualifications here. It doesn’t necessarily mean that a journal gets cited less when it goes online, but simply that its citations get focused on fewer distinct articles. And all these changes are set against an ever-growing body of published work, which means that more and more papers are getting cited overall. The changes caused by going online are relative, set within the context of a still widening and deepening universe of citations.
All the same, this means that the trend for online access is making citation patterns narrower than they would be otherwise: fixated on fewer papers and fewer journals.
In some ways, the narrowing is not a bad thing. Online searching can deliver you more quickly to just those papers that are most immediately relevant to your own work, without having to wade through more peripheral material. This may in turn mean that the citation lists in papers are more helpful and pertinent to readers.
Online access also makes it much easier for researchers to check citation details – to look at what a reference actually said, rather than what someone else implies it said. It’s not clear how often this is actually done, however – one study, using mis-citations as a proxy, has suggested that 70-90 percent of literature citations have simply been copied from other reference lists, rather than being directly consulted [3,4]. But at the very least, easier access should reduce the chances of that.
Yet there are two reasons in particular why Evans’ findings are concerning. One is in fact a mixed blessing. With online resources, scientific consensus is reached more quickly and efficiently, because, for example, hyperlinked citations allow you to see rapidly which papers others are citing. Some search strategies also rely on consensual views about relevance and significance.
This might mean that less attention, time and effort get wasted down dead ends. But it also means there is more chance of missing something important. “It pushes science in the direction of a press release”, says Evans. “Unless they are picked up immediately, things will be forgotten more quickly.”
Moreover, feedback about the value judgements of others seems to lead to amplification of opinions in a way that is not necessarily linked to ‘absolute’ value [5]. It’s an example of the rich-get-richer or ‘Matthew’ effect, whereby fame becomes self-fulfilling and a few individuals get disproportionate rewards at the expense of other, perhaps equally deserving cases. While highly cited papers may indeed deserve to be, it seems the citation statistics would not look very different if these papers had simply benefited from random amplification of negligible differences in quality [6]. Again, this could happen even with old-style manual searching of journals, but online searches make it more likely.
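The mechanism is easy to caricature in code. The following toy simulation is my own illustration (not Evans’ analysis, nor the models of refs [5,6]): each new citation either lands on a paper picked at random or, more often, copies an entry from an existing reference list, and a handful of papers soon hoovers up most of the citations regardless of any difference in quality:

```python
import random

def citation_run(n_papers=200, n_citations=5000, copy_prob=0.9, seed=1):
    """Toy cumulative-advantage process: each new citation either picks a paper
    at random (a fresh 'discovery') or, with probability copy_prob, copies one
    from an earlier reference list, so well-cited papers attract ever more."""
    rng = random.Random(seed)
    counts = [0] * n_papers
    existing = []                      # one entry per citation already made
    for _ in range(n_citations):
        if existing and rng.random() < copy_prob:
            target = rng.choice(existing)
        else:
            target = rng.randrange(n_papers)
        counts[target] += 1
        existing.append(target)
    return sorted(counts, reverse=True)

top = citation_run()
print("top 5 papers receive", sum(top[:5]), "of", sum(top), "citations")
```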
The other worry is that this trend exacerbates the already lamented narrowing of researchers’ horizons. It is by scanning through the contents pages of journals that you find out what others outside your field are doing. If scientists are reading only the papers that are directly relevant to their immediate research, science as a whole will suffer, not least because its tightly drawn disciplines will cease to be fertilized by ideas from outside.
Related to this concern is the possibility of collective amnesia: the past ceases to matter in a desperate bid to keep abreast of the present. Older scientists have probably been complaining that youngsters no longer ‘read the old literature’ ever since science journals existed, but it seems that neglecting the history of your field is made more likely with online tools.
There’s a risk of overplaying this issue, however. It’s likely that so-called ‘ceremonial citation’, the token nod to venerable and unread papers, has been going on for a long time. And the increased availability of foundational texts online can only be a good thing. Nonetheless, Evans’ data indicate that online access is driving citations to become ‘younger’ and reducing an article’s shelf-life. This must surely increase the danger of reinventing the wheel. And there is an important difference between having decided that an old paper is not sufficiently relevant to cite, and having assumed it, or having not even known of its existence.
In many ways these trends are just an extension to the scientific research community of things that have been much debated in the broader sphere of news media, where the possibility of personalizing content leads to a solipsistic outlook in which individuals hear only the things they want to hear. (The awful geek-speak for this – individuated news – itself makes the point, having apparently been coined in ignorance of the fact that individuation already has a different historical meaning.) Instead of reading newspapers, the fear is that people will soon read only the ‘Daily Me’. Web journalist Steve Outing has said that “90 percent of my daughters’ media consumption is ‘individuated’. For kids today, non-individuated media is outside the norm.” We may be approaching the point where that also applies to young scientists, particularly if it is the model they have become accustomed to as children.
Ultimately, the concerns that Evans raises are thus not a necessary consequence of the mere fact of online access and archives, but stem from the cultural norms within which this material is becoming available. And it is no response – or at least, a futile one – to say that we must bring back the days when scientists would have to visit the library each week and pick up the journals. The efficiency of online searching and the availability of archives are both to be welcomed. But a laissez-faire attitude to this ‘literature market’ could have some unwelcome consequences, in particular the risk of reduced meritocracy, loss of valuable research, and increased parochialism. The paper journal may be on the way out, but we’d better make sure that the journal club doesn’t go the same way.
References
1. J. E. Hirsch, Proc. Natl Acad. Sci. USA 102, 16569-16572 (2005).
2. J. A. Evans, Science 321, 395-399 (2008).
3. M. V. Simkin & V. P. Roychowdhury, Complex Syst. 14, 269-274 (2003).
4. M. V. Simkin & V. P. Roychowdhury, Scientometrics 62, 367-384 (2005).
5. M. J. Salganik et al., Science 311, 854-856 (2006).
6. M. V. Simkin & V. P. Roychowdhury, Annals Improb. Res. 11, 24-27 (2005).
Sunday, July 13, 2008
Is music just for babies?
I’m grateful to a friend for pointing me towards a recent preposterous article on music by Terry Kealey in the Times, suggesting in essence that music is anti-intellectual, regressive and appeals to our baser instincts. Now, I have sparred with Terry before and I know that he likes to be provocative. I don’t want to seem to be rising to the bait like some quivering Verdi aficionado. But really, he shouldn’t be allowed to be so naughty without being challenged. I have to say that his article struck me as a classic case of a little knowledge being a dangerous thing.
His bizarre opening gambit seems to be that music and intelligence are somehow mutually exclusive, so that one may make way for the other. This will come as news to any neuroscientist or psychologist who has ever studied music. A large part of the argument seems to rest on the idea that perfect pitch is a sign of mental incapacity. Isn’t it particularly common in autistic people and children, he asks? Er, no, frankly. Sorry, it’s as simple as that. Terry may be thinking of the fact that children can acquire perfect pitch through learning more easily than adults – but that’s true of many things, including language (which presumably does not make language an infantile attribute). Perfect pitch is also more common in Chinese people, but I think even a controversialist like Terry might stop short of wanting to say that this proves his point. Rather, it seems to be enhanced in speakers of tonal languages, which stands to reason.
But more to the point – and this is a bit of a giveaway – perfect pitch has nothing to do with musical ability. There is no correlation between the two. It is true that many composers had/have perfect pitch, but that’s no mystery, because as Terry points out, it can be learnt with effort, i.e. with lots of exposure to music. It is, indeed, even possible to have perfect pitch and to be simultaneously clinically tone deaf, since one involves the identification of absolute pitch in single notes and the other of pitch relationships between multiple notes.
Birds too have perfect pitch, we’re told, and so did Neanderthals (thanks to another of Stephen Mithen’s wild speculations, swallowed hook, line and sinker). And don’t birds have music too, showing that it is for bird-brains? Sorry, again no. Anyone who thinks birds have music doesn’t know what music is. Music has syntax and hierarchical patterns. Birdsong does not – it is a linear sequence of acoustic signals. I’m afraid Terry again and again disqualifies himself during the article from saying anything on the subject.
Similarly, he claims that music has only emotional value, not intellectual. So how to explain the well-documented fact that musical training improves children’s IQ? Or that music cognition uses so many different areas of the brain – not just ‘primitive emotional centres’ such as the amygdala but logic-processing centres in the frontal cortex and areas that overlap with those used for handling language syntax? This is a statement of pure prejudice that takes no account of any evidence. ‘To a scientist, music can appear as a throwback to a primeval, swampy stage of human evolution’, Terry claims. Not to any scientist I know.
Finally, we have Terry’s remark that music encourages dictatorships, because Hitler and Stalin loved it, but Churchill and Roosevelt were indifferent. I am not going to insult his intelligence by implying that this is a serious suggestion that warrants a po-faced response, but really Terry, you have to be careful about making this sort of jest in the Times. I can imagine retired colonels all over the country snorting over their toast and marmalade: ‘Good god, the chap has a point!’
I must confess that I find something rather delightful in the fact that there are still people today who will, like some latter-day Saint Bernard of Clairvaux, denounce music as inciting ‘lust and superstition’. It’s wonderful stuff in its way, although one can’t help but scent the faint reek of the attacks on jazz in the early twentieth century, which of course had far baser motivations. Plato shared these worries too – Terry fails to point out that he felt only the right music educated the soul in virtue, while the wrong music would corrupt it. The same was true of Saint Augustine, but in his case it was his very love of music that made him fearful – he was all too aware of the strong effects it could exert, for ‘better’ or ‘worse’. In Terry Kealey’s case, it seems as though all music leaves him feeling vaguely unclean and infantilized, or perhaps just cold. That’s sad, but not necessarily beyond the reach of treatment.
Saturday, July 12, 2008
Were there architectural drawings for Chartres?
Michael Lewis has given Universe of Stone a nice review in the Wall Street Journal. The reason I want to respond to the points he raises is not to score points or pick an argument, but because they touch on interesting issues.
Lewis’s complaint about the absence of much discussion of the sculpture at Chartres is understandable if one is led by the American subtitle to expect a genuine biography of the building. And it’s natural that he would have been. But as my UK subtitle indicates, this is not in fact my aim: this is really a book about the origins of Gothic and what it indicates about the intellectual currents of the twelfth-century renaissance. The Chartrain sculpture doesn’t have so much to say about that (with some notable exceptions that I do mention).
Lewis’s most serious criticism, however, concerns the question of architectural drawings in the period when Chartres was built. As he says (and as he acknowledges I say), drawings for Gothic churches certainly did exist: there are some spectacular ones for Strasbourg and Reims Cathedrals in particular. As I say in my book, ‘These are extremely detailed and executed with high technical proficiency.’ They date from around 1250 onwards.
The question is: were similar drawings used for Chartres? Lewis is in no doubt: ‘analogous drawings would certainly have existed for Chartres.’ That's a level of certainty that other historians of Gothic don't seem to share - unsurprisingly, given that we lack any evidence either way. But most importantly, I would surely and rightly have been hauled over the coals if I had committed the cardinal sin of assuming that one period in the Middle Ages stands proxy for all others. The mid-thirteenth century was a very different time from the late twelfth, in terms of building practices as in many other respects: in particular, architecture became much more professionalized in the early thirteenth century than it had been before. My guess, as it is no more than that, is that if drawings existed for Chartres – which is certainly possible, but nothing more – they would have looked more akin to those of Villard de Honnecourt, made around 1220 or so, which have none of the precision of the Strasbourg drawings. Lewis says that the sophistication of the latter speaks of a mature tradition that must have already existed for a long time. That seems reasonable, until you consider this. Suppose all cathedrals before Chartres had been destroyed. We might, with analogous reasoning to Lewis’s, then look at its flying buttresses and say ‘Well, they certainly must have had good flying buttresses in the 1130s, since these ones are so mature.’ And of course we’d be utterly wrong. (What’s more, the skills needed to make flying buttresses are considerably more demanding than those needed to make scale drawings.)
I think Lewis may have misunderstood my text in places. I never claimed that the architect of Chartres designed it all ‘in his head’. I simply said that this is what architectural historian Robert Branner claimed. (I'm not sure I'd agree with him.) Neither did I say that architectural drawings would all be simply ‘symbolic, showing codified relationships without any real attention to dimension’ – I said that this was true of medieval art and maps.
I’m grateful to Lewis for raising this as an issue, and his comments suggest that it might be good if I spell things out a little more explicitly in the paperback (which, in the US, will probably have a different subtitle!).
Friday, July 04, 2008
Behind the mask of the LHC
[Here is my latest Muse for Nature News, which, bless them, they ran at its extravagant length and complexity.]
The physics that the Large Hadron Collider will explore has tentative philosophical foundations. But that’s a good thing.
Physicists, and indeed all scientists, should rejoice that the advent of the Large Hadron Collider (LHC) has become a significant cultural event. Dubbed the ‘Big Bang machine’, the new particle accelerator at CERN — the European centre for particle physics near Geneva — should answer some of the most profound questions in fundamental physics and may open up a new chapter in our exploration of why the world is the way it is. The breathless media coverage of the impending switch-on is a reassuring sign of the public thirst for enlightenment on matters that could easily seem recondite and remote.
But there are pitfalls with this kind of jamboree. The most obvious is the temptation for hype and false promises about what the LHC will achieve, as though all the secrets of creation are about to come tumbling out of its tunnels. And it is an uneasy spectacle to see media commentators duty-bound to wax lyrical about matters they understandably don’t really grasp. Most scientists are now reasonably alert to the dangers of overselling, even if they sometimes struggle to keep them in view.
It’s also worth reminding spectators that the LHC is no model of ‘normal’ science. The scale and cost of the enterprise are much vaster than those enjoyed by most researchers, and this very fact restricts the freedom of the scientists involved to let their imaginations and intuitions roam. The key experiments are necessarily preordained and decided by committee and consensus, a world away from a small lab following its nose. This is not intrinsically a bad thing, but it is different.
There is, however, a deeper reason to think carefully about what the prospect of the LHC offers. Triumphalism can mask the fact that there are some unresolved questions about the scientific and philosophical underpinnings of the enterprise, which will not necessarily be answered by statistical analyses of the debris of particle collisions. These issues are revealingly explored in a preprint by Alexei Grinbaum, a researcher at the French Atomic Energy Commission (CEA) in Gif-sur-Yvette [1].
Under the carpet
Let’s be clear that high-energy physics is by no means alone in preferring to sweep some foundational loose ends under the carpet so that it can get on with its day-to-day business. The same is true, for example, of condensed-matter physics (which, contrary to media impressions, is what most physicists do) and quantum theory. It is a time-honoured principle of science that a theory can be useful and valid even if its foundations have no rigorous justification.
But the best reason to tease apart the weak joints in the basement of fundamental physics is not in order to expose it as a precarious edifice — which it is not — but because these issues are so interesting in themselves.
Paramount among them, says Grinbaum, is the matter of symmetry. That’s a ubiquitous word in the lexicon of high-energy physics, but it is far from easy for a lay person to see what is meant by it. At root, the word retains its everyday meaning. But what this corresponds to becomes harder to discern when, for example, symmetry is proposed to unite classes of quantum particles or fields.
Controlling the masses
It is symmetry that anchors the notion of the Higgs particle, probably the one target of the LHC that anyone with any interest in the subject will have heard of. It is easy enough to explain that ‘the Higgs particle gives other particles their mass’ (an apocryphal quote has Lenin comparing it to the Communist Party: it controls the masses). And yes, we can offer catchy analogies about celebrities accreting hordes of hangers-on as they pass through a party. But what does this actually mean? Ultimately, the Higgs mechanism is motivated by a need to explain why a symmetry that seemed once to render equivalent two fundamental forces — the electromagnetic and weak nuclear forces — has been broken, so that the two forces now have different strengths and ranges.
This — the ‘symmetry breaking’ of a previously unified ‘electroweak’ force — is what the LHC will primarily probe. The Higgs explanation for this phenomenon fits nicely into the Standard Model of particle physics — the summation of all we currently know about this branch of reality. It is the only component of the Standard Model that remains to be verified (or not).
So far, this is pretty much the story that, if pressed beyond sound bites, the LHC’s spokespeople will tell. But here’s the thing: we don’t truly know what role symmetry does and should play in physical theory.
Practically speaking, symmetry has become the cornerstone of physics. But this now tends to pass as an unexamined truth. The German mathematician Hermann Weyl, who introduced the notion of gauge symmetry (in essence, a description of how symmetry acts on local points in space) in the 1920s, claimed that “all a priori statements in physics have their origin in symmetry”. For him and his contemporaries, laws of physics have to possess certain symmetry properties — Einstein surely had something of this sort in mind when he said that “the only physical theories that we are willing to accept are the beautiful ones”. For physicist Steven Weinberg, symmetry properties “dictate the very existence” of all physical forces — if they didn’t obey symmetry principles, the Universe would find a way to forbid them.
Breaking the pattern
But is the Universe indeed some gloriously symmetrical thing, like a cosmic diamond? Evidently not. It’s a mess, not just at the level of my desk or the arbitrary patchwork of galaxy clusters, but also at the level of fundamental physics, with its proliferation of particles and forces. That’s where symmetry-breaking comes in: when a cosmic symmetry breaks, things that previously looked identical become distinct. We get, among other things, two different forces from one electroweak force.
And the Higgs particle is generally believed to hold the key to how that happened. This ‘particle’ is just a convenient, potentially detectable signature of the broader hypothesis for explaining the symmetry breaking — the ‘Higgs mechanism’. If the mechanism works, there is a particle associated with it.
But the problem with the Higgs mechanism is that it does not and cannot specify how the symmetry is broken. As a result, it does not uniquely determine the mass of the Higgs particle. Several versions of the theory offer different estimates, which vary by a factor of around 100. That’s a crucial difference in terms of how readily the LHC might observe it, if at all. Now, accounts of this search may present this situation blandly as simply a test of competing theories; but the fact is that the situation arises because of ambiguities about what symmetry-breaking actually is.
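For readers who want to see the machinery, the standard textbook cartoon of this – a generic sketch, not anything specific to Grinbaum’s paper – is the ‘Mexican hat’ potential for the Higgs field:

    V(\phi) = \mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2, \qquad \mu^2 < 0,\ \lambda > 0

The minimum lies not at \phi = 0 but on a whole circle of equivalent states with |\phi| = v/\sqrt{2}, where v = \sqrt{-\mu^2/\lambda}: nothing in the theory says which point on that circle nature settles into. And although v is pinned at roughly 246 GeV by the measured strength of the weak interaction, the self-coupling \lambda is not, so the mass of the associated particle, m_H = \sqrt{2\lambda}\,v, floats with it.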
The issue goes still deeper, however. Isn’t it curious that we should seek for an explanation of dissimilar entities in terms of a theory in which they are the same? Suppose you find that the world contains some red balls and some blue ones. Is it more natural to decide that there is a theory that explains red balls, and a different one that explains blue balls, or to assume that red and blue balls were once indistinguishable? As it happens, we already have very compelling reasons to believe that the electromagnetic and weak forces were once unified; but deciding to make unification a general aim of physical theories is quite another matter.
Physics Nobel laureate David Gross has pointed out the apparent paradox in that latter approach: “The search for new symmetries of nature is based on the possibility of finding mechanisms, such as spontaneous symmetry breaking, that hide the new symmetry” [2]. Grinbaum is arguing that it’s worth pausing to think about that assumption. To rely on symmetry arguments is to accept that the resulting theory will not predict the particular outcome you observe, where the symmetry may be broken in an arbitrary way. Only experiments can tell you what the result of the symmetry-breaking is.
Should we trust in beauty?
Einstein’s statement is revealing because it exposes a strand of platonic thinking in modern physics: beauty matters, and it is a vision of beauty based on order and symmetry. Pragmatically speaking, arguments that use symmetry have proved to be fantastically fertile in fundamental physics. But as Weyl’s remark shows, they are motivated only by assumptions about how things ought to be.
A sense of aesthetic beauty is now not just something that physicists discover in the world; it is, in the words of Gian Francesco Giudice, a theoretical physicist at CERN, “a powerful guiding principle for physicists as they try to construct new theories” [3]. They look for ways to build it in. This, as Grinbaum points out, “is logically unsound and heuristically doubtful”.
Grinbaum says that such aesthetic judgements give rise to ideas about the ‘naturalness’ of theories. This notion of naturalness figures in many areas of science, Giudice points out, but is generally dangerously subjective: it is ‘natural’ to us that the solar system is heliocentric, but it wasn’t at all to the ancient Greeks, or indeed to Tycho Brahe, the sixteenth-century Danish astronomer.
But Giudice explains that “a more precise form of naturalness criterion has been developed in particle physics and it is playing a fundamental role in the formulation of theoretical predictions for new phenomena to be observed at the LHC”. The details of this concept of naturalness are technical, but in essence it purports to explain why symmetry-breaking of the electroweak interaction left gravity so much weaker than the weak force (its name notwithstanding). The reasoning here leads to the prediction that production of the Higgs particle will be accompanied by a welter of other new particles not included in the Standard Model. The curious thing about this prediction is that it is motivated not to make any theory work out, but simply to remove the apparent ‘unnaturalness’ of the imbalance in the strengths of the two forces. It is basically a philosophical matter of what ‘seems right’.
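Put slightly more technically – this is the usual statement of the ‘hierarchy’ problem that Giudice’s preprint unpacks – quantum corrections tend to drag the Higgs mass up towards whatever energy scale \Lambda marks the limit of the theory’s validity, roughly as

    \delta m_H^2 \sim \frac{g^2}{16\pi^2}\,\Lambda^2

If \Lambda were as high as the Planck scale, keeping m_H near the electroweak scale would require these corrections to cancel to dozens of decimal places, which looks anything but natural. The tidy escape is for new particles to appear not far above the Higgs itself, cutting the corrections off at energies the LHC can reach – hence the expected welter.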
Workable theories
There are also fundamental questions about why physics has managed to construct all manner of workable theories — of electromagnetism, say — without having to postulate the Higgs particle at all. The simple answer is that, so long as we are talking about energies well below the furious levels at which the Higgs particle becomes apparent, and which the LHC hopes to create, it is enough to subsume the whole Higgs mechanism within the concept of mass. This involves creating what physicists call an effective field theory, in which phenomena that become explicit above a certain energy threshold remain merely implicit in the parameters of the theory. Much the same principle permits us to use Newtonian mechanics when objects’ velocities are much less than the speed of light.
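The Newtonian case can be made explicit. Expanding the relativistic energy of a particle in powers of v/c gives

    E = \frac{mc^2}{\sqrt{1 - v^2/c^2}} = mc^2 + \tfrac{1}{2}mv^2 + \tfrac{3}{8}\,\frac{mv^4}{c^2} + \cdots

At everyday speeds the correction terms are suppressed by powers of (v/c)^2, so Newtonian mechanics serves as an effective theory: the physics it omits is not absent, merely buried in terms too small to matter below the ‘cutoff’.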
Effective field theories thus work only up to some limiting energy. But Grinbaum points out that this is no longer just a practical simplification but a methodology: “Today physicists tend to think of all physical theories, including the Standard Model, as effective field theories with respect to new physics at higher energies.” The result is an infinite regression of such theories, and thus a renunciation of the search for a ‘final theory’ — entirely the opposite of what you might think physics is trying to do, if you judge from popular accounts (or occasionally, from their own words).
Effective field theories are a way of not having to answer everything at once. But if they simply mount up into an infinite tower, it will be an ungainly edifice at best. As philosopher of science Stephan Hartmann at Tilburg University in the Netherlands has put it, the predictive power of such a composite theory would steadily diminish “just as the predictive power of the Ptolemaic system went down when more epicycles were added” [4].
Einstein seemed to have an intimation of this. He expressed discomfort that his theory of relativity was based not simply on known facts but on an a priori postulate about the speed of light. He seemed to sense that this made it less fundamental.
These and other foundational issues are not new to LHC physics, but by probing the limits of the Standard Model the new collider could bring them to the fore. All this suggests that it would be a shame if the results were presented simply as data points to be compared against theoretical predictions, as though to coolly assess the merits of various well-understood proposals. The really exciting fact is that the LHC should mark the end of one era — defined by the Standard Model — and the beginning of the next. And at this point, we do not even know the appropriate language to describe what will follow — whether, for example, it will be rooted in new symmetry principles (such as supersymmetry, which relates hitherto distinct particles), or extra dimensions, or something else. So let’s acknowledge and even celebrate our ignorance, which is after all the springboard of the most creative science.
References
1. Grinbaum, A. Preprint at http://www.arxiv.org/abs/0806.4268 (2008).
2. Gross, D. in Conceptual Foundations of Quantum Field Theory Cao, T. Y. (ed.) (Cambridge Univ. Press, 1999).
3. Giudice, G. F. Preprint at http://www.arxiv.org/abs/0801.2562 (2008).
4. Hartmann, S. Stud. Hist. Phil. Mod. Phys. 32, 267-304 (2001).
Wednesday, June 25, 2008
Birds that boogie
[I reckon this one speaks for itself. It is on Nature News. I just hope Snowball can handle the fame.]
YouTube videos of dancing cockatoos are not flukes but the first genuine evidence of animal dancing
When Snowball, a sulphur-crested male cockatoo, was shown last year in a YouTube video apparently moving in time to pop music, he became an internet sensation. But only now has his performance been subjected to scientific scrutiny. And the conclusion is that Snowball really can dance.
Aniruddh Patel of the Neurosciences Institute in La Jolla, California, and his colleagues say that Snowball’s ability to shake his stuff is much more than a cute curiosity. It could shed light on the biological bases of rhythm perception, and might even hold implications for the use of music in treating neurodegenerative disease.
‘Music with a beat can sometimes help people with Parkinson’s disease to initiate and coordinate walking’, says Patel. ‘But we don’t know why. If nonhuman animals can synchronize to a beat, what we learn from their brains could be relevant for understanding the mechanisms behind the clinical power of rhythmic music in Parkinson’s.’
Anyone watching Snowball can see that his foot-tapping seems to be well synchronized with the musical beat. But it was possible that in the original videos he was using timing cues from people dancing off camera. His previous owner says that he and his children would encourage Snowball’s ‘dancing’ with rhythmic gestures of their own.
Genuine ‘dancing’ – the ability to perceive and move in time with a beat – would also require that Snowball adjust his movements to match different rhythmic speeds (tempi).
To examine this, Patel and his colleagues went to meet Snowball. He had been left by his previous owner at a bird shelter, Birdlovers Only Rescue Service Inc. in Schererville, Indiana, in August 2007, along with a CD containing a song to which his owner said that Snowball liked to dance: ‘Everybody’ by the Backstreet Boys.
Patel and colleagues videoed Snowball ‘dancing’ in one of his favourite spots, on the back of an armchair in the office of Birdlovers Only. They altered the tempi of the music in small steps, and studied whether Snowball stayed in synch.
This wasn’t as easy as it might sound, because Snowball didn’t ‘dance’ continuously during the music, and sometimes he didn’t get into the groove at all. So it was important to check whether the episodes of apparent synchrony could be down to pure chance.
‘On each trial he actually dances at a range of tempi’, says Patel. But the lower end of this range seemed to correlate with the beat of the music. ‘When the music tempo was slow, his tempo range included slow dancing. When the music was fast, his tempo range didn’t include these slower tempi.’
A statistical check on these variations showed that the correlation between the music’s rhythm and Snowball’s slower movements was very unlikely to have happened by chance. ‘To us, this shows that he really does have tempo sensitivity, and is not just ‘doing his own thing’ at some preferred tempo’, says Patel.
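The flavour of such a check is easy to convey with a toy permutation test – illustrative numbers only, not the team’s data or their actual statistical method. Pair each trial’s music tempo with the bird’s slowest sustained dance tempo, compute the correlation, then shuffle the pairings many times to see how often chance alone does as well:

    import random

    def permutation_p_value(music_tempi, bird_tempi, n_shuffles=10000, seed=1):
        """How often does a random re-pairing of trials give a correlation
        at least as strong as the observed one?"""
        def pearson(xs, ys):
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = sum((x - mx) ** 2 for x in xs) ** 0.5
            sy = sum((y - my) ** 2 for y in ys) ** 0.5
            return cov / (sx * sy)

        rng = random.Random(seed)
        observed = pearson(music_tempi, bird_tempi)
        shuffled = list(bird_tempi)
        hits = 0
        for _ in range(n_shuffles):
            rng.shuffle(shuffled)
            if pearson(music_tempi, shuffled) >= observed:
                hits += 1
        return observed, hits / n_shuffles

    # Made-up tempi (beats per minute), one pair per trial
    music = [98, 103, 108, 111, 116, 121, 126, 130]
    bird = [96, 101, 110, 109, 118, 119, 128, 127]
    r, p = permutation_p_value(music, bird)
    print('observed r = %.2f, permutation p = %.4f' % (r, p))

A small p-value says the alignment between the bird’s tempo and the music’s is unlikely to be a fluke of his simply ‘doing his own thing’.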
He says that Snowball is unlikely to be unique. Adena Schachner of Harvard University has also found evidence of genuine synchrony in YouTube videos of parrots, and also in studies of perhaps the most celebrated ‘intelligent parrot’, the late Alex, trained by psychologist Irene Pepperberg [1]. Patel [2] and Schachner will both present their findings at the 10th International Conference on Music Perception and Cognition in Sapporo, Japan, in August.
Patel and his colleagues hope to explore whether Snowball’s dance moves are related to the natural sexual-display movements of cockatoos. Has he invented his own moves, or simply adapted those of his instinctive repertoire? Will he dance with a partner, and if so, will that change his style?
But the implications extend beyond the natural proclivities of birds. Patel points out that Snowball’s dancing behaviour is better than that of very young children, who will move to music but without any real synchrony to the beat [3]. ‘Snowball is better than a typical 2-4 year old, but not as good as a human adult’, he says. (Some might say the same of Snowball’s musical tastes.)
This suggests that a capacity for rhythmic synchronization is not a ‘musical’ adaptation, because animals have no genuine ‘music’. The question of whether musicality is biologically innate in humans has been highly controversial – some argue that music has served adaptive functions that create a genetic predisposition for it. But Snowball seems to be showing that an ability to dance to a beat does not stem from a propensity for music-making.
References
1. Pepperberg, I. M. Alex & Me (HarperCollins, 2008).
2. Patel, A. D. et al., Proc. 10th Int. Conf. on Music Perception and Cognition, eds M. Adachi et al. (Causal Productions, Adelaide, in press).
3. Eerola, T. et al., Proc. 9th Int. Conf. on Music Perception and Cognition, eds M. Baroni et al. (2006).