For Francophones, I have a piece in the February issue of La Recherche on spacetime cloaking, part of a special feature on invisibility. For some reason it’s not included in the online material. But here in any case is how it began in my mother tongue.
_______________________________________________________________
We all have experiences that we’d rather never happened – or perhaps that we just wish no one else had seen. Now researchers have shown how to carry out this kind of editing of history. They use the principles behind invisibility cloaks, which have already been shown to hide objects from light. But instead of hiding objects, we can hide events. In other words, we can apparently carve out a hole in spacetime so that no one on the outside can tell that whatever goes on inside it has ever taken place.
“Such speculations are not fantasy”, insist physicist Martin McCall of Imperial College in London and his colleagues, who came up with the idea last July [1]. They imagine a safe-cracker casting a spacetime cloak over the scene of the crime, so that he can open the safe and remove the contents while a security camera would see just a continuously empty room.
Suppose the cloak was used to conceal someone’s journey from one place to another. Because the device splices together the spacetime on either side of the ‘hole’, it would look as though the person vanished from the starting point and, in the blink of an eye, appeared at her destination. This would then create “the illusion of a Star Trek transporter”, the researchers say.
“It’s definitely a cool idea”, says Ulf Leonhardt, a specialist in invisibility cloaking at the University of St Andrews in Scotland. “Altering the history has been the metier of undemocratic politicians”, he adds, pointing to the way Soviet leaders would doctor photographs to remove individuals who had fallen from favour. “Now altering history has become a subject of physics.”
Lost in spacetime
Conventional invisibility cloaks hide objects by bending light rays around them and then bringing the rays back onto their original trajectory on the far side. That way, it looks to an observer as though the light has passed through an empty space where the hidden object resides. In contrast, the spacetime cloak would manipulate not the path of the rays but their speed. It would be made of materials that slow down light or speed it up. This means that some of the light that would have been scattered by the hidden event is ushered forward to pass before the event happens, while the rest is held back until after it has taken place.
These slowed and accelerated rays are then rejoined seamlessly so that there seems to be no gap in spacetime. It’s like bending rays in invisibility cloaks, except that they are bent not in space but in spacetime.
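The timing trick can be sketched in a toy one-dimensional model. Everything here is an illustrative assumption – instantaneous speed changes and round numbers stand in for the cloak's real, smoothly varying modulation – but it shows the essential bookkeeping: no light is present at the event location during the event, yet the downstream detector sees an unbroken stream.

```python
T1 = 10.0             # transit time from source to the event location
T2 = 10.0             # transit time from the event location to the detector
EVENT = (19.0, 21.0)  # time window at the event location to be hidden

def passes_event(t_emit, cloaked):
    """Time at which a photon emitted at t_emit passes the event location."""
    t_pass = t_emit + T1
    if cloaked:
        lo, hi = EVENT
        mid = 0.5 * (lo + hi)
        if lo <= t_pass < mid:
            t_pass = lo   # leading light sped up: passes just before the event
        elif mid <= t_pass <= hi:
            t_pass = hi   # trailing light slowed: passes just after the event
    return t_pass

def reaches_detector(t_emit, cloaked):
    # The second half of the cloak undoes the shift, so downstream the
    # timing is seamless and independent of whether the cloak was on.
    return t_emit + T1 + T2

emissions = [i * 0.25 for i in range(120)]
gap = [t for t in emissions if EVENT[0] < passes_event(t, True) < EVENT[1]]
print(len(gap))  # 0: no light coincides with the hidden event
same = all(reaches_detector(t, True) == reaches_detector(t, False)
           for t in emissions)
print(same)      # True: the detector sees a continuous, unbroken stream
```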
How do you slow down or speed up light? Both have been demonstrated already in some exotic substances such as ultracold gases of alkali metals: light has been both brought to a standstill and speeded up by a factor of 300, so that, bizarrely, a pulse seems to exit the system before it has even arrived. But the spacetime cloak needs to manipulate light in ways that are both simpler and more profound. Light is slowed down in any medium relative to its speed in a vacuum – that is precisely why it bends when it enters water or glass from air, causing the phenomenon of refraction. The amount of slowing down is measured by the refractive index: the bigger this value, the slower the speed relative to a vacuum.
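The two relations at work here are simply v = c/n for the speed in a medium, and Snell's law for the bending angle. A minimal sketch (the refractive index of water, 1.33, is a standard textbook value rather than a figure from the article):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def speed_in_medium(n):
    """Phase velocity of light in a medium of refractive index n: v = c/n."""
    return C / n

def refraction_angle(theta_in_deg, n1, n2):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    return math.degrees(math.asin(s))

n_water = 1.33  # assumed textbook value for water
print(round(speed_in_medium(n_water) / 1e8, 2))        # 2.25 (x10^8 m/s)
print(round(refraction_angle(30.0, 1.0, n_water), 1))  # 22.1 deg: bent toward the normal
```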
In a spacetime cloak, the light must simply be slowed or speeded up relative to its speed before it entered the cloak. If the cloak itself is surrounded by some cladding material, then the light must be speeded up or retarded only relative to this – there’s no need for fancy tricks that seem to make light travel faster than its speed in a vacuum.
But to obtain perfect and versatile cloaking demands some sophisticated manipulation of the light, for which you need more than just any old transparent materials. For one thing, you need to alter both the electric and the magnetic components of the electromagnetic wave. Most materials (such as glass), being non-magnetic, don’t affect the latter. What’s more, the effects on the electric and magnetic components must be the same, since otherwise some light will be reflected as it enters the material – in this case, making the cloak itself visible. When the electric and magnetic effects are equalized, the material is said to be “impedance matched”. “For a perfect device, we need to modulate the refractive index while also keeping it impedance matched”, explains Paul Kinsler, McCall’s colleague at Imperial.
Hidden recipe
There aren’t really any ordinary materials that would satisfy all these requirements. But they can be met using the same substances that have been used already to make invisibility shields: so-called metamaterials. These are materials made from individual components that interact with electromagnetic radiation in unusual ways. Invisibility cloaks for microwaves have been built in which the metamaterial ‘atoms’ are little electrical circuits etched into copper film, which can pick up the electromagnetic waves like antennae, resonate with them, and re-radiate the energy. Because the precise response of these circuits can be tailored by altering their size and shape, metamaterials can be designed with a range of curious behaviours. For example, they can be given a negative refractive index, so that light rays are bent the wrong way. “Metamaterials that work by resonance offer a large range of strong responses that allow more design freedom”, says Kinsler. “They are also usually designed to have both electric and magnetic responses, which will in general be different from one another.”
Using a combination of these materials, McCall and colleagues offer a prescription for how to put together a spacetime cloak. It’s a tricky business: to divert light around the spacetime hole, one needs to change the optical properties of the cloaking material over time in a particular sequence, switching each layer of material by the right amount at the right moment. “The exact theory requires a perfectly matched and perfectly timed set of changes to both the electric and magnetic properties of the cloak”, says Kinsler.
The result, however, is a sleight of hand more profound than any that normal invisibility shields can offer. “If you turn an ordinary invisibility cloak on and off, you will see a cloaked object disappear and reappear”, explains Kinsler. “With our concept, you never see anything change at all.” At least, not from one side. The spacetime hole opened up by the cloak is not symmetrical – it operates from one side but not the other (although the cloak itself would be invisible from both directions). So an observer on one side might see an event that an observer on the other side will swear never took place.
Could such a device really be used to hide events in the macroscopic world? Physicist John Pendry, also at Imperial (but not part of McCall’s group) and one of the pioneers of invisibility cloaks, considers that unlikely. But he agrees with McCall and colleagues that there might well be more immediate and more practical applications for the technique. “Possible uses might be in a telecommunications switching station, where several packets of information might be competing for the same channel”, he says. “The time cloak could engineer a seamless flow in all channels” – by cloaking interruptions of one signal by another, it would seem as though all had simultaneously flowed unbroken down the same channel.
There could be some more fundamental implications of the work too. This manipulation of spacetime is analogous to what happens at a black hole. Here, light coming from the region near the hole is effectively brought to a standstill at the event horizon, so that time itself seems to be arrested there: an object falling into the hole seems, to an outside observer, to be stopped forever at the event horizon. The parallel between transformation optics and black-hole physics has been pointed out by Leonhardt and his coworkers, who in 2008 revealed an optical analogue of a black hole made from optical fibres. Leonhardt says that the analogy exists for spacetime cloaks also, and that therefore these systems might be used to create the analogue of Hawking radiation: the radiation predicted by Stephen Hawking to be emitted from black holes as a result of the quantum effects of the distortion of spacetime. Such radiation has yet to be detected in astronomical observations of real black holes, but its production at the edge of a spacetime ‘hole’ made by cloaking would provide strong support for Hawking’s idea.
Unlike black holes, however, a spacetime cloak doesn’t really distort spacetime – it just looks as though it does. “I can certainly imagine a transformation device that gives the illusion that causal relationships are distorted or even reversed – a causality editor, rather than our history editor”, says Kinsler. “But the effects generated are only an illusion.”
In the pipeline
In order to manipulate visible light, the component ‘atoms’ of a metamaterial have to be about the same size as the wavelength of the light – less than a micrometre. This means that, while microwave invisibility cloaks have been put together from macroscale components, optical metamaterials are much harder to make.
There’s an easier way, however. Some researchers have realised that another way to perform the necessary light gymnastics is to use transparent substances with unusual optical properties, such as birefringent minerals in which light travels at different speeds in different directions. Objects have been cloaked from visible light in this way using carefully shaped blocks of the mineral calcite (Iceland spar).
In the same spirit, McCall and colleagues realised that sandwiches of existing materials with ‘tunable’ refractive indices might be used to make ‘approximate’ spacetime cloaks. For example, one could use optical fibres whose refractive indices depend on the intensity of the light passing through them. A control beam would manipulate these properties, opening and closing a spacetime cloak for a second beam.
However, as with the ‘simple’ invisibility cloaks made from calcite, the result is that although the object or event can be fully hidden, the cloak itself is not: light is still reflected from it. “Although the event itself can in principle be undetectable, the cloaking process itself isn't”, Kinsler says.
This idea of manipulating the optical properties of optical fibres for spacetime cloaking has already been demonstrated by Moti Fridman and colleagues at Cornell University [2]. Stimulated by the Imperial team’s proposal, they figured out how to put the idea into practice. They use so-called ‘time lenses’, which modify how a light wave propagates not in space, like an ordinary lens, but in time. Just as an ordinary lens redirects light rays in space, and can thus be used to spread out or focus a beam, so a time lens uses the phenomenon of dispersion (the frequency dependence of the speed at which light travels through a medium) to redistribute light in time, slowing some frequencies down relative to others.
Because of this equivalence of space and time in the two types of lens, a two-part ‘split time-lens’ can bend a probe beam around a spacetime hole in the same way as two ordinary lenses could bend a light beam around either side of an object to cloak it in space. In the Cornell experiment, a second split time-lens then restored the probe to its original state. In this way, the researchers could temporarily hide the interaction between the probe beam and a second short light pulse, which would otherwise cause the probe signal to be amplified. Fridman and colleagues presented their findings at a meeting of the Optical Society of America in California in October. “It's a nice experiment, and achieved results remarkably quickly”, says Kinsler. “We were surprised to see it – we were expecting it might take years to do.”
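The frequency-to-time mapping that a time lens exploits can be caricatured in a few lines: the lens imposes a linear frequency chirp across a pulse, and dispersion then delays each frequency component by a proportional amount, so the pulse is focused in time just as a spatial lens focuses a beam. The chirp and dispersion values below are arbitrary illustrative numbers chosen to cancel exactly, not parameters of the Cornell experiment:

```python
chirp = -2.0      # frequency offset per unit time imposed by the time lens
dispersion = 0.5  # extra delay per unit frequency offset in the fibre

def arrival(t_slice):
    """Arrival time of a pulse slice that entered the lens at time t_slice."""
    f_offset = chirp * t_slice            # lens: earlier slices shifted up in frequency
    return t_slice + dispersion * f_offset  # fibre: each frequency delayed differently

slices = [-1.0, -0.5, 0.0, 0.5, 1.0]
print([arrival(t) for t in slices])  # all slices arrive together: a temporal focus
```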
But the spacetime cloaking in this experiment lasts only for a fleeting moment – about 15 picoseconds (trillionths of a second). And Fridman and colleagues admit that the material properties of the optical fibres themselves will make it impossible to extend the gap beyond a little over one millionth of a second. So there’s much work to be done to create a more complete and longer-lasting cloak. In the meantime, McCall and Kinsler have their eye on other possibilities. Perhaps, they say, we could also edit sound this way by applying the same principles to acoustic waves. As well as hiding things you wish you’d never done, might you be able to literally take back things you wish you’d never said?
1. M. W. McCall, A. Favaro, P. Kinsler & A. Boardman, Journal of Optics 13, 024003 (2011).
2. M. Fridman, A. Farsi, Y. Okawachi & A. L. Gaeta, Nature 481, 62-65 (2012).
Friday, February 24, 2012
Survival in New York
Well, I'm here and just thought it possible that someone in NYC might see this before tomorrow (25 Feb) is up. I'm taking part in this event, linked to David Rothenberg's excellent new book. It's free (the event, not the book), and promises to be great fun. If you're in Manhattan - see you tomorrow?
Thursday, February 16, 2012
Call to arms
I wrote a leader for this week’s Nature on the forthcoming talks for an international Arms Trade Treaty. Here’s the original version.
________________________________________________________________
Scientists have always been some of the strongest voices among those trying to make the world a safer place. Albert Einstein’s commitment to international peace is well known; Andrei Sakharov and Linus Pauling are among the scientists who have been awarded the Nobel Peace Prize, as is Joseph Rotblat, the subject of a new biography (see Nature 481, 438; 2012), in conjunction with the Pugwash organization that he helped to found. This accords not only with the internationalism of scientific endeavour but with the humanitarian goals that mostly motivate it.
At the same time, the military applications of science and technology are never far from view, and defence funding supports a great deal of research (much of it excellent). There need be no contradiction here. Nations have a right to self-defence, and increasingly armed forces are deployed for peace-keeping rather than aggression. But what constitutes responsible use of military might is delicate and controversial, and peace-keeping is generally necessary only because aggressors have been supplied with military hardware in the first place.
Arms control is a thorny subject for scientists. When, at a session on human rights at a physics conference several years ago, Nature asked if the evident link between the arms trade and human-rights abuses might raise ethical concerns about research on offensive weaponry, the panel shuffled their feet and became tongue-tied.
There are no easy answers to the question of where the ethical boundaries of defence research lie. But all responsible scientists should surely welcome the progress in the United Nations towards an international Arms Trade Treaty (ATT), for which a preparatory meeting in New York next week presages the final negotiations in July. The sale of weapons, from small arms to high-tech missile systems, hinders sustainable development and progress towards the UN’s Millennium Development Goals, and undermines democracy.
Yet there are dangers. Some nations will attempt to have the treaty watered down. That the sole vote against the principle at the UN General Assembly in October 2009 was from Zimbabwe speaks volumes about likely reasons for opposition. But let’s not overlook the fact that in the previous vote a year earlier, Zimbabwe was joined by one other dissenter: the United States, still at that point governed by George W. Bush’s administration. Would any of the current leading US Republican candidates be better disposed towards an ATT?
Paradoxical as it might seem, however, a binding international treaty on the arms trade is not necessarily a step forward anyway. Most of the military technology used for recent human-rights abuses was obtained by legal routes. Such sales from the UK, for example, helped Libya’s former leaders to suppress ‘rebels’ in 2011 and enabled Zimbabwe to launch assaults in the Democratic Republic of Congo in the 1990s.
The British government admits that it anticipates that the Arms Trade Treaty, which it supports, will not reduce arms exports. It says that the criteria for exports “would be based on existing obligations and commitments to prevent human rights abuse” – which have not been notably effective. According to the UK’s Foreign and Commonwealth Office (FCO), the ATT aims “to prevent weapons reaching the hands of terrorists, insurgents and human rights abusers”. But as Libya demonstrated, one person’s insurgents are another’s democratizers, while today’s legitimate rulers can become tomorrow’s human-rights abusers.
The FCO says that the treaty “will be good for business, both manufacturing and export sales.” Indeed, arms manufacturers support it as a way of levelling the market playing field. The ATT could simply legitimize business as usual by more clearly demarcating it from a black market, and will not cover peripheral military hardware such as surveillance and IT systems. Some have argued that the treaty will be a mere distraction to the real problem of preventing arms reaching human-rights violators (D. P. Kopel et al., Penn State Law Rev. 114, 101-163; 2010).
So while there are good reasons to call for a strong ATT, it is no panacea. The real question is what a “responsible” arms trade could look like, if this isn’t merely oxymoronic. That would benefit from some hard research on how existing, ‘above-board’ sales have affected governance, political stability and socioeconomic conditions worldwide. Such quantification is challenging and contentious, but several starts have been made (for example, www.unidir.org and www.prio.no/nisat). We need more.
Tuesday, February 14, 2012
... but I just want to say this
With the shrill cries of new atheists ringing in my ears (you would not believe some of that stuff, but I won’t go there), I read John Gray’s review of Alain de Botton’s book Religion for Atheists in the New Statesman and it is as though someone has opened a window and let in some air – not because of the book, which I’ve not read, but because of what John says. Sadly you can’t get it online: the nearest thing is here.
Sunday, February 12, 2012
Moving swiftly on
This piece in the Guardian has caused a little storm, and I’m not so naive as to be totally surprised by that. There’s much I could say about it, but frankly it never helps. I’m tired of how little productive dialogue ever seems to stem from these things and figure I will just leave the damned business alone (no doubt to the delight of the more rabid detractors). I will say here only a few things about this pre-edited version, which was necessarily slimmed down to fit the slot in the printed paper: (1) to those who thought I was saying “hey, wouldn’t it be a great idea if sociologists studied religion”, note the reference to Durkheim as a shorthand way of acknowledging that this notion goes back a long, long way; (2) note that I’m not against everything Dawkins stands for on this subject – I agree with him on more than just the matter of faith schools mentioned below, although of course I do disagree with other things. The “for us or against us” attitude that one seems to see so much of in online discussions is the kind of infantile attitude that I figure we should be leaving to the likes of George W. Bush.
______________________________________________________________
The research reported this week showing that American Christians adjust their concept of Jesus to match their own sociopolitical persuasion will surely surprise nobody. Liberals regard Christ primarily as someone who promoted fellowship and caring, say psychologist Lee Ross of Stanford University in California and his colleagues, while conservatives see him as a firm moralist. In other words, he’s like me, only more so.
Yes, it’s pointing out the blindingly obvious. Yet the work offers a timely reminder of how religious thinking operates – a reminder that some strident “new atheists” have so far resolutely resisted.
You might imagine that it’s uncontentious to suggest that religion is essentially a social phenomenon, not least because particular varieties of it – fundamentalist, tolerant, mystical – tend to develop within specific communities united by geography or cultural ties rather than arising at random throughout society. Without entering the speculative debate about whether religiosity has become hardwired by evolution, it seems clear enough that specific types of religious behaviour are as prone to be transmitted through social networks as are, say, obesity and smoking.
Bizarrely, this is ignored by some of the most prominent opponents of religion today. Arguments about science and religion are mostly conducted as if Emile Durkheim had never existed, and all that matters is whether or not religious belief is testable. Many atheists prefer to regard religion as a virus that jumps from one hapless individual to another, or a misdirection of evolutionary instincts – in any case, curable only with a strong shot of reason. These epidemiological and Darwinian models have an elegant simplicity that contamination with broader social and cultural factors would spoil. Yet the result is akin to imagining that, to solve Africa’s AIDS crisis, there is no point in trying to understand African societies.
Thus arch new atheist Sam Harris swatted away my suggestion that we might approach religious belief as a social construct with the contemptuous comment that I was saying something “either trivially true or obscurantist”. I find it equally peculiar that chemist Harry Kroto should insist that “I am not interested in why religion continues” while so devoutly wishing that it would not.
At face value, this apparent lack of interest in how religion actually manifests and propagates in society is odd coming from people who so loudly deplore its prevalence. But I think it may not be so hard to explain.
For one thing, regarding religion as a social phenomenon would force us to see it as something real, like governments or book groups, and not just a self-propagating delusion. It is so much safer and easier to ridicule a literal belief in miracles, virgin births and other supernatural agencies than to consider religion as (among other things) one of the ways that human societies have long chosen to organize their structures of authority and status, for better or worse.
It also means that one might feel compelled to abandon the heroic goal of dislodging God from his status as Creator in favour of asking such questions as whether particular socioeconomic conditions tend to promote intolerant fundamentalism over liberal pluralism. It turns a Manichean conflict between truth and ignorance into a mundane question of why some people are kind or beastly towards others. Yet to suggest that we can relax about some forms of religious belief – that they need offer no obstacle to an acceptance of scientific inquiry and discovery, and will not demand the stoning of infidels – is already, for some new atheists, to have conceded defeat. They will not have been pleased with David Attenborough’s gentle agnosticism on Desert Island Discs, although I doubt that they will dare say so.
The worst of it is that to reject an anthropological approach to religion is, in the end, unscientific. To decide to be uninterested in questions of how and why societies have religion, of why it has the many complexions that it does and how these compete, is a matter of personal taste. But to insist that these are pointless questions is to deny that this important aspect of human behaviour warrants scientific study. Harris’s preference to look to neuroscience – to the individual, not society – will only get you so far, unless you want to argue that brains evolved differently in Kansas (tempting, I admit).
Richard Dawkins is right to worry that faith schools can potentially become training grounds for intolerance, and that daily indoctrination into a particular faith should have no place in education. But I’m sure he’d agree that how people formulate their specific religious beliefs is a much wider question than that. The Stanford research reinforces the fact that a single holy book can provide the basis both for a permissive, enquiring and pro-scientific outlook (think tea and biscuits with Richard Coles) and for apocalyptic, bigoted ignorance (think a Tea Party with Sarah Palin). Might we then, as good scientists alert to the principles of cause and effect, suspect that the real ills of religion originate not in the book itself, but elsewhere?
Richard Dawkins is right to worry that faith schools can potentially become training grounds for intolerance, and that daily indoctrination into a particular faith should have no place in education. But I’m sure he’d agree that how people formulate their specific religious beliefs is a much wider question than that. The Stanford research reinforces the fact that a single holy book can provide the basis both for a permissive, enquiring and pro-scientific outlook (think tea and biscuits with Richard Coles) and for apocalyptic, bigoted ignorance (think a Tea Party with Sarah Palin). Might we then, as good scientists alert to the principles of cause and effect, suspect that the real ills of religion originate not in the book itself, but elsewhere?
Friday, February 10, 2012
Impractical magic
I have a review of a book about John Dee in the latest issue of Nature. Here's how it started.
_______________________________________________________
The Arch-Conjuror of England: John Dee
by Glyn Parry
Yale University Press, 2011
ISBN 978-0-300-11719-6
335 pages
The late sixteenth-century mathematician and alchemist John Dee exerts a powerful grip on the public imagination. In recent times, he has been the subject of several novels, including The House of Doctor Dee by Peter Ackroyd, and inspired the pop opera Doctor Dee by Damon Albarn of the group Blur. Now, in The Arch-Conjuror of England, historian Glyn Parry gives us probably the most meticulous account of Dee’s career to date.
In some ways, all this attention seems disproportionate. Dee was less important in the philosophy of natural magic than such lesser-known individuals as Giambattista Della Porta and Cornelius Agrippa, and less significant as a transitional figure between magic and science than his contemporaries Della Porta and the anti-Aristotelian empiricists Bernardino Telesio and Tommaso Campanella, both from Calabria. Dee’s works, such as the notoriously opaque Monas hieroglyphica, in which the unity of the cosmos was represented in a mystical symbol, were widely deemed impenetrable even in his own day.
There’s no doubt that Dee was prominent during the Elizabethan age – he probably provided the model for both Shakespeare’s Prospero and Ben Jonson’s Subtle in the satire The Alchemist. Yet what surely gives Dee his allure more than anything else is the same thing that lends glamour to Walter Raleigh, Francis Drake and Philip Sidney: they all fell within the orbit of Queen Elizabeth herself. Benjamin Woolley’s earlier biography of Dee draws explicitly on this connection, calling him ‘the queen’s conjuror’. Yet in a real sense he was precisely that, on and off, as his fortunes waxed and waned in the fickle, treacherous Elizabethan court.
There is no way to make sense of Dee without embedding him within the magical cult of Elizabeth, just as this holds the key to Spenser’s epic poem The Faerie Queene and to the flights of fancy in A Midsummer Night’s Dream. To the English, the reign of Elizabeth heralded the dawn of a mystical Protestant awakening. In Germany that dream died in the brutal Thirty Years War; in England it spawned an empire. Dee coined the phrase ‘the British Empire’, but his vision was less colonialist than a magical yoking of Elizabeth to the Arthurian legend of Albion.
It is one of the strengths of Glyn Parry’s book that he shows how deeply woven magic and the occult sciences were into the fabric of early modern culture. Elizabeth was particularly knowledgeable about alchemy. After all, why would a monarch who had no reason to doubt that base metals could be transmuted into gold pass up the chance to fill the royal coffers? Because she believed he could make the philosopher’s stone, the queen was desperate to lure Dee’s former associate, the slippery Edward Kelley, back to England after he left with Dee for Poland and Prague in 1583. The Holy Roman Emperor Rudolf II was equally eager to keep Kelley in Bohemia, making him a baron. Even Dee’s involvement in the failed quest of the adventurer Martin Frobisher to find a northwest passage to the Pacific had an alchemical tint when it was rumoured that Frobisher had found gold-containing ore.
The relationship with Kelley is another element of the popular fascination with Dee. Kelley claimed to be able to converse with angels via Dee’s crystal ball, and Dee’s faith in Kelley’s prophecies and angelic commands never wavered even when the increasingly deranged Kelley told him that the angels had commanded them to swap wives. The inversion of the servant-master relationship as Kelley’s reputation grew in Bohemia makes Dee a pathetic figure towards the end of their ill-fated excursion on the continent – forced on them after Dee blundered in Elizabeth’s court.
He was always doing that. However brilliant his reputation as a magician and mathematician, Dee was hopeless at court politics, regularly backing the wrong horse. He ruined his chances in Prague by passing on Kelley’s angelic reprimand to Rudolf for his errant ways. But Dee can’t be held entirely to blame. Parry makes it clear just how miserable it was for any courtier trying to negotiate the subtle currents of the court, especially in England where the memory of Mary I’s brief and bloody reign still hung in the air along with a lingering fear of papist plots.
Meticulous as Parry’s account is, the details aren’t always given shape. Often the political intrigues become as baffling and Byzantine for the reader as they must have been for Dee. But what I really missed was context. It is hard to locate Dee in history without hearing about other contemporary figures who also sought to expand natural philosophy, such as Della Porta and Francis Bacon. Bacon in particular was another intellectual whose grand schemes and attempts to gain the queen’s ear were hampered by court rivalries.
But to truly understand Dee’s significance, we need more than the cradle-to-grave story. For example, although Parry patiently explains the numerological and symbolic mysticism of Dee’s Monas hieroglyphica, its preoccupation with divine and Adamic languages can seem sheer delirium if not linked to, say, the later work of the German Jesuit Athanasius Kircher (the most Dee-like figure of the early Enlightenment) or of John Wilkins, one of the Royal Society’s founders.
Likewise, it would have been easier to evaluate Dee’s mathematics if we had been told that this subject had, even until the mid-seventeenth century, a close association both with witchcraft and with mechanical ingenuity, at which Dee also excelled. Wilkins’ Mathematical Magick (1648) was a direct descendant of Dee’s famed Mathematical Preface to a new volume of Euclid. We’d never know from this book that Dee influenced the early modern scientific world via the likes of Robert Fludd, Elias Ashmole and Margaret Cavendish, nor that his works were studied by none other than Robert Boyle, and probably by Isaac Newton. Parry has assembled an important contribution to our understanding of how magic became science. It’s a shame he didn’t see it as part of his task to make that connection.
_______________________________________________________
Thursday, February 02, 2012
Democracy, huh?
Here’s my latest Muse for Nature News. But while I’m in that neck of the woods, I very much enjoyed the piece on Dickens in the latest issue. Yes, even Nature is in on that act.
____________________________________________________________
“The people who cast the votes decide nothing”, Josef Stalin is reputed to have said. “The people who count them decide everything.” Little has changed in Russia, if the findings of a new preprint are to be believed. Peter Klimek of the Medical University of Vienna in Austria and his colleagues say that the 2011 election for the Duma (the lower Federal Assembly) in Russia, won by Vladimir Putin’s United Russia party with 49 percent of the votes, shows a clear statistical signature of ballot-rigging [1].
This is not a new accusation. Some have claimed that the Russian statistics show suspicious peaks at multiples of 5 or 10 percent, as though ballot officials simply assigned rounded proportions of votes to meet pre-determined figures. And in December the Wall Street Journal conducted its own analysis of the statistics, which led political scientists at the Universities of Michigan and Chicago to concur that there were signs of fraud.
Naturally, Putin denies this. But if you suspect that neither he nor the Wall Street Journal is exactly the most neutral of sources on Russian politics, Klimek and colleagues offer a welcome alternative. They say that the statistical distribution of votes in the Duma election shows over a hundred times more skew than a normal (bell-curve or gaussian) distribution, the expected outcome of a set of independent choices.
The same is true for the contested Ugandan election of February 2011. Both of these statistical distributions are, even at a glance, profoundly different from those of recent elections in, say, Austria, Switzerland and Spain.
Breaking down the numbers into scatter plots of regional votes lays the problems bare. For both Russia and Uganda these distributions are bimodal. Distortion in the main peak suggests ballot rigging, which, for Russia, afflicts about 64 percent of districts.
But the second, smaller peaks reveal much cruder fraud. These correspond to districts showing both 100 percent turnout and 100 percent votes for the winning party. As if.
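Both fingerprints are easy to sketch numerically. Here is a toy Python illustration (the district figures are hypothetical, and this is not Klimek and colleagues’ actual method, which compares full vote distributions): a skewness measure for the vote-share statistics, and a scan for the crude 100-percent districts.

```python
import statistics

def skewness(xs):
    """Third standardized moment: zero for a symmetric (e.g. gaussian)
    distribution, large for the lopsided fingerprints of rigging."""
    n = len(xs)
    mean = sum(xs) / n
    sd = statistics.pstdev(xs)
    return sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)

def crude_fraud(districts, cutoff=0.99):
    """Flag districts reporting both near-total turnout and a
    near-total vote share for the winning party."""
    return [name for name, (turnout, share) in districts.items()
            if turnout >= cutoff and share >= cutoff]

# Hypothetical per-district (turnout, winner's share) figures:
districts = {"A": (0.62, 0.47), "B": (0.58, 0.52), "C": (1.00, 1.00)}
print(crude_fraud(districts))                    # ['C']
print(skewness([0.47, 0.52, 1.00]))              # positive: long right tail
```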
It’s good to see science expose these corruptions of democracy. Yet science also hints that democracy isn’t quite what it’s popularly sold as anyway. Take the choice of voting system. One of the most celebrated results of the branch of economics known as social choice theory is that there can be no perfectly fair means of deciding the outcome of a democratic vote. Possible voting schemes are manifold, and their relative merits hotly debated: first-past-the-post (the UK), proportional representation (Scandinavia), schemes for ranking candidates rather than simply selecting one, and so on.
But as economics Nobel laureate Kenneth Arrow showed in the 1950s, none of these systems, nor any other, can satisfy all the criteria of fairness and logic one might demand [2]. For example, a system under which candidate A would be elected from A, B and C should ideally also select A if B is the only alternative. What Arrow’s ‘impossibility theorem’ implies is that either we need to accept that democratic majority rule has some undesirable consequences or we need to find alternatives – which no one has.
Other considerations can undermine the democratic principle too, such as when a bipartisan vote falls within the margin of statistical error. As the Bush vs Gore US election of 2000 showed, the result is then not democratic but legalistic.
And analysis of voting statistics suggests that, regardless of the voting system, our political choices are not free and independent (as most definitions of democracy pretend) but partly the collective result of peer influence. That is one – although not the only – explanation of why some voting statistics don’t follow a gaussian distribution but instead a relationship called a power law [3,4]. Klimek and colleagues find less extreme but significant deviations from gaussian statistics in their analysis of ‘unrigged’ elections [1], which they assume to result from similar collectivization, or as they put it, voter mobilization.
A key premise of current models of voting and opinion formation [5,6] is that most social consensus arises from mutual influence and the spreading of opinion, not from isolated decisions. On the one hand you could say this is just how democratic societies work. On the other, it makes voting a nonlinear process in which small effects (media bias or party budgets, say) can have disproportionately big consequences. At the very least, it makes voting a more complex and less transparent process than is normally assumed.
This isn’t to invalidate Churchill’s famous dictum that democracy is the least bad political system. But let’s not fool ourselves about what it entails.
References
1. Klimek, P., Yegorov, Y., Hanel, R. & Thurner, S. preprint http://www.arxiv.org/abs/1201.3087 (2012).
2. Arrow, K. Social Choice and Individual Values (Yale University Press, New Haven, 1951).
3. Costa Filho, R. N., Almeida, M. P., Andrade, J. S. Jr & Moreira, J. E. Phys. Rev. E 60, 1067-1068 (1999).
4. Costa Filho, R. N., Almeida, M. P., Moreira, J. E. & Andrade, J. S. Jr, Physica A 322, 698-700 (2003).
5. Fortunato, S. & Castellano, C. Phys. Rev. Lett. 99, 138701 (2007).
6. Stauffer, D. ‘Opinion dynamics and sociophysics’, in Encyclopedia of Complexity & System Science (ed. Meyers, R. A.) 6380-6388 (Springer, Heidelberg, 2009).
____________________________________________________________
Sunday, January 29, 2012
Fake flakes
Tanguy Chouard at Nature has pointed out to me Google’s tribute to the snowflake today:
This is a beautiful example of the kind of bogus flake I collected for my spot in Nine Lessons and Carols for Godless People just before Christmas. Eight-pointed flakes like this are relatively common, because they are easier to draw than six-pointed ones:
(from a Prospect mailing)
(from an Amnesty Christmas card (occasionally sent by yours truly))
More rarely one sees five-pointed examples like this from some wrapping paper in 2010:
Or, more deliciously, this one from the Milibands last year:
I like to point out that the possible sighting of quasicrystalline ice should make us hesitant to be too dismissive of these inventive geometries. What’s more, there do exist claims of pentagonal flakes having been observed, though this seems extremely hard to credit. Of course, in truth quasicrystal ice, even if it exists in very rare circumstances, hardly has five- or eightfold snowflakes as its inevitable corollary. But it’s fun to think about it, especially so soon after the 2011 quadricentenary of Kepler’s classic treatise on the snowflake, De nive sexangula.
Thursday, January 26, 2012
Forbidden chemistry
I’ve just published a feature article in New Scientist on “reactions they said could never happen” (at least, that was my brief). A fair bit of the introductory discussion had to be dropped, so here’s the original full text – sorry, a long post. I’m going to put the pdf on my web site, with a few figures added.
_____________________________________________________________________
The award of the 2011 Nobel prize for chemistry to Dan Shechtman for discovering quasicrystals allowed reporters to relish tales of experts being proved wrong. For his heretical suggestion that the packing of atoms in crystals can have a kind of fivefold (quasi)symmetry, Shechtman was ridiculed and ostracized and almost lost his job. The eminent chemist Linus Pauling derided him as a “quasi-scientist”.
Pauling of all people should have known that sometimes it is worth risking being bold and wrong, as he was himself with the structure of DNA in the 1950s. As it turned out, Shechtman was bold and right: quasicrystals do exist, and they earn their ‘impossible’ fivefold symmetry at the cost of not being fully ordered: not truly crystalline in the traditional sense. But while everyone enjoys seeing experts with egg on their faces, there’s a much more illuminating way to think about apparent violations of what is ‘possible’ in chemistry.
Here are some other examples of chemical processes that seemed to break the rules – reactions that ‘shouldn’t’ happen. They demonstrate why chemistry is such a vibrant, exciting science: because it operates on the borders of predictability and certainty. The laws of physics have an air of finality: they don’t tolerate exceptions. No one except cranks expects the conservation of energy to be violated. In biology, in contrast, ‘laws’ seem destined to have exceptions: even the heresy of inheritance of acquired characteristics is permitted by epigenetics. Chemistry sits in the middle ground between the rigidity of physics and the permissiveness of biology. Its basis in physics sets some limits and constraints, but the messy diversity of the elements can often transcend or undermine them.
That’s why chemists often rely on intuition to decide what should or shouldn’t be possible. When his postdoc Xiao-Dong Wen told Nobel laureate Roald Hoffmann that his computer calculations showed graphane – puckered sheets of carbon hexagons with hydrogens attached, with a C:H ratio of 1:1 – to be more stable than familiar old benzene, Hoffmann insisted that the calculations were wrong. The superior stability of benzene, he said, “is sacrosanct – it’s hard to argue with it”. But eventually Hoffmann realized that his intuition was wrong: graphane is more stable, though no one has yet succeeded in proving definitively that it can be made.
You could say that chemistry flirts with its own law-breaking inclinations. Chemists often speak of reactions that are ‘forbidden’. For example, symmetry-forbidden reactions are ones that break the rules formulated by Hoffmann in his Nobel-winning work with organic chemist Robert Woodward in 1965 – rules governed by the mathematical symmetry properties of electron orbitals as they are rearranged or recombined by light or heat. Similarly, reactions that fail to conserve the total amount of ‘spin’, a quantum-mechanical property of electrons, are said to be spin-forbidden. And yet neither of these types of ‘forbidden’ reaction is impossible – they merely happen at slower rates. Hoffmann says that he (at Woodward’s insistence) even asserted in their 1965 paper that there were no exceptions to their rules, knowing that this would spur others into finding them.
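For electrocyclic ring closures, for instance, the Woodward-Hoffmann rules boil down to a parity count on the π electrons. The following Python snippet is my own sketch of the textbook selection rules, not something from the original article: thermal reactions of 4n-electron systems proceed conrotatory and 4n+2 systems disrotatory, with light inverting the preference.

```python
def electrocyclic_mode(pi_electrons, photochemical=False):
    """Woodward-Hoffmann selection rule for electrocyclic ring
    closure: thermal 4n systems close conrotatory, 4n+2 systems
    disrotatory; photochemical excitation inverts the preference."""
    thermal_conrotatory = pi_electrons % 4 == 0
    conrotatory = thermal_conrotatory != photochemical  # XOR
    return "conrotatory" if conrotatory else "disrotatory"

print(electrocyclic_mode(4))                      # butadiene, heat: conrotatory
print(electrocyclic_mode(6))                      # hexatriene, heat: disrotatory
print(electrocyclic_mode(6, photochemical=True))  # hexatriene, light: conrotatory
```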
So this gallery of ‘reactions they said couldn’t happen’ is not a litany of chemists’ conservatism and prejudice (although – let’s be honest – that sometimes played a part). It is a reflection of how chemistry itself exists in an unstable state, needing an intuition of right and wrong but having constantly to readjust that to the lessons of experience. That’s what makes it exciting – it’s not the case that anything might happen, but nevertheless big surprises certainly can. That’s why, however peculiar the claim, the right response in chemistry, perhaps more than any other branch of science, is not “that’s impossible”, but “prove it”.
Crazy tiling
In the early 1980s, Daniel Shechtman was bombarding metal alloys with electrons at the then National Bureau of Standards (NBS) in Gaithersburg, Maryland. Through mathematical analysis of the interference patterns formed as the beams reflected from different layers of the crystals, it was possible to determine exactly how the atoms were packed.
Among the alloys Shechtman studied, a blend of aluminium and manganese produced a beautiful pattern of sharp diffraction spots, which had always been found to be an indicator of crystalline order. But the crystal symmetry suggested by the pattern didn’t make sense. It was fivefold, like that of a pentagon. One of the basic rules of crystallography is that atoms can’t be packed into a regular, repeating arrangement with fivefold symmetry, just as pentagons can’t tile a floor in a periodic way that leaves no gaps.
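The ‘basic rule’ invoked here is the crystallographic restriction theorem. A standard argument (not spelled out in the article) shows why: any rotation symmetry of a lattice can be written as an integer matrix in a lattice basis, so its trace must be an integer.

```latex
% A rotation through \theta = 2\pi/n in the plane has trace 2\cos\theta.
% For a lattice symmetry this trace must be an integer:
\[
  2\cos\frac{2\pi}{n} \in \{-2,-1,0,1,2\}
  \quad\Longrightarrow\quad
  n \in \{1,2,3,4,6\}.
\]
% Fivefold fails: 2\cos(2\pi/5) = (\sqrt{5}-1)/2 \approx 0.618 \notin \mathbb{Z}.
```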
Pauling wasn’t the only fierce critic of Shechtman’s claims. When Shechtman persisted with them, his boss at NBS asked him to leave the group. And a paper he submitted in the summer of 1984 was rejected immediately. Only when he found some colleagues to back him up did he get the results published at the end of that year.
Yet the answer to the riddle they posed had been found already. In the 1970s the mathematician Roger Penrose had discovered that two rhombus-shaped tiles could be used to cover a flat plane without gaps and without the pattern ever repeating. In 1981, the crystallographer Alan Mackay found that if an atom were placed at every vertex of such a Penrose tiling, it would produce a diffraction pattern with fivefold symmetry, even though the tiling itself was not perfectly periodic. Shechtman’s alloy was analogous to a three-dimensional Penrose tiling. It was not a perfect crystal, because the atomic arrangement never repeated exactly; it was a quasicrystal.
Since then, many other quasicrystalline alloys have been discovered. Quasicrystals, or structures very much like them, have also turned up in polymers and in assemblies of soap-like molecules called micelles. It has even been suggested that water, when confined in very narrow slits, can freeze into quasicrystalline ice.
You can’t have it both ways
For poor Boris Belousov, vindication came too late. When he was awarded the prestigious Lenin prize by the Soviet government in 1980 for his pioneering work on oscillating chemical reactions, he had already been dead for ten years.
Still, at least Belousov lived long enough to see the scorn heaped on his initial work turn to grudging acceptance by many chemists. When he discovered oscillating chemical reactions in the 1950s, he was deemed to have violated one of the most cherished principles of science: the second law of thermodynamics.
This states that all change in the universe must be accompanied by an increase in entropy – crudely speaking, it must leave things less ordered than they were to begin with. Even processes that seem to create order, such as the freezing of water to ice, in fact promote a broader disorder – here by releasing latent heat into the surroundings. This principle is what prohibits many perpetual motion machines (others violate the first law – the conservation of energy – instead). Violations of the second law are thus something that only cranks propose.
But Belousov was no crank. He was a respectable Russian biochemist interested in the mechanisms of metabolism, and specifically in glycolysis: how enzymes break down sugars. To study this process, Belousov devised a cocktail of chemical ingredients that should act like a simplified analogue of glycolysis. He shook them up and watched as the reaction proceeded, turning from clear to yellow.
Then it did something astonishing: it went clear again. Then yellow. Then clear. It began to oscillate repeatedly between these two coloured states. The problem is that entropy can’t possibly increase in both directions. So what’s up?
Belousov wasn’t actually the first to see an oscillating reaction. In 1921 American chemist William Bray reported oscillations in the reaction of hydrogen peroxide and iodate ions. But no one believed him either, even though the ecologist Alfred Lotka had shown in 1910 how oscillations could arise in a simple, hypothetical reaction. As for Belousov, he couldn’t get his findings published anywhere, and in the end he appended them to a paper in a Soviet conference proceedings on a different topic: a Pyrrhic victory, since they then remained almost totally obscure.
But not quite. In the 1960s another Soviet chemist, Anatoly Zhabotinsky, modified Belousov’s reaction mixture so that it switched between red and blue. That was pretty hard for others to ignore. The Belousov-Zhabotinsky (BZ) reaction became recognized as one of a whole class of oscillating reactions, and after it was transmitted to the West in a meeting of Soviet and Western scientists in Prague in 1967, these processes were gradually explained.
They don’t violate the second law after all, for the simple reason that the oscillations don’t last forever. Left to their own devices, they eventually die away and the reaction settles down to an unchanging state. They exist only while the reaction approaches its equilibrium state, and are thus an out-of-equilibrium phenomenon. Since thermodynamics speaks only about equilibrium states and not what happens en route to them, it is not threatened by oscillating reactions.
The oscillations are the result of self-amplifying feedback. As the reaction proceeds, one of the intermediate products (call it A) is autocatalytic: it speeds up the rate of its own production. This makes the reaction accelerate until the reagents are exhausted. But there is a second autocatalytic process that consumes A and produces another product, B, which kicks in when the first process runs out of steam. This too quickly exhausts itself, and the system reverts to the first process. It repeatedly flips back and forth between the two reactions, over-reaching itself first in one direction and then in the other. Lotka showed that the same thing can happen in populations of predators and their prey, which can get caught in alternating cycles of boom and bust.
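The two-process feedback described above can be captured in a toy model. The sketch below integrates the Brusselator – a textbook abstract scheme for an autocatalytic oscillator, not the actual BZ mechanism (which is usually modelled by the more elaborate ‘Oregonator’) – and shows the autocatalytic species swinging back and forth indefinitely:

```python
# Toy model of an oscillating reaction: the Brusselator, an abstract
# two-variable autocatalytic scheme (illustrative only -- NOT the real
# BZ mechanism).  x is the autocatalytic intermediate:
#   dx/dt = a + x^2*y - (b + 1)*x
#   dy/dt = b*x - x^2*y
# For b > 1 + a^2 the steady state (x, y) = (a, b/a) is unstable, and the
# concentrations flip back and forth on a sustained limit cycle.

def brusselator(a=1.0, b=3.0, x0=1.0, y0=1.0, dt=0.001, steps=50_000):
    """Forward-Euler integration; returns the trajectory of x."""
    x, y = x0, y0
    xs = []
    for _ in range(steps):
        dx = a + x * x * y - (b + 1.0) * x
        dy = b * x - x * x * y
        x, y = x + dx * dt, y + dy * dt
        xs.append(x)
    return xs

xs = brusselator()
tail = xs[10_000:]  # discard the initial transient
print(f"x oscillates between {min(tail):.2f} and {max(tail):.2f}")
```

Feeding in reagents is implicit here: the parameters a and b stand for reservoir concentrations held constant, which is what keeps the system out of equilibrium.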
If the BZ reaction is constantly fed fresh reagents, while the final products are removed, the oscillations can be sustained indefinitely: it remains out of equilibrium. Such oscillations are now known to happen in many chemical processes, including some industrially important reactions on metal catalysts and even in real glycolysis and other biochemical processes. If the reaction takes place in an unstirred mixture, the BZ oscillations can spread from initiating spots as chemical waves, giving rise to complex patterns. Related patterns are the probable cause of many animal pigmentation markings. BZ chemical waves are analogues of the waves of electrical excitation that pass through heart tissue and induce regular heartbeats; if those waves are disturbed, they break up and the result can be a heart attack.
These waves might also form the basis of a novel form of computation. Andrew Adamatzky at the University of the West of England in Bristol is using their interactions to create logic gates, which he believes can be miniaturized to make a genuine “wet” chemical computer. He and collaborators in Germany and Poland have launched a project called NeuNeu to make chemical circuits that will crudely mimic the behaviour of neurons, including a capacity for self-repair.
The quantum escape clause
It’s very cold in space. So cold that molecules encountering one another in the frigid molecular clouds that pepper the interstellar void should generally lack enough energy to react. In general, reactions proceed via the formation of high-energy intermediate molecules which then reconfigure into lower-energy products. Energy (usually thermal) is needed to get the reactants over this barrier, but in space there is next to none.
In the 1970s a Soviet chemist named Vitali Goldanski challenged that dogma. He showed that, with a bit of help from high-energy radiation such as gamma-rays or electron beams, some chemicals could react even when chilled by liquid helium to just four degrees above absolute zero – just a little higher than the coldest parts of space. For example, under these conditions Goldanski found that formaldehyde, a fairly common component of molecular clouds, could link up into polymer chains several hundred molecules long. At that temperature, conventional chemical kinetic theory suggested that the reaction should be so slow as to be virtually frozen.
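To get a feel for “virtually frozen”, one can put numbers into the classical Arrhenius rate law; the barrier height below is an illustrative guess, not a measured value for formaldehyde polymerization:

```python
import math

# Classical Arrhenius picture of why reactions "freeze out" near 4 K.
# The 20 kJ/mol barrier is an illustrative guess, not a measured value
# for formaldehyde polymerization.
R = 8.314        # gas constant, J/(mol K)
EA = 20_000.0    # assumed activation energy, J/mol

def boltzmann_factor(temperature_k):
    """exp(-Ea/RT): fraction of encounters energetic enough to react."""
    return math.exp(-EA / (R * temperature_k))

room = boltzmann_factor(300.0)  # ~1e-4: reaction proceeds on lab timescales
cold = boltzmann_factor(4.0)    # ~1e-261: classically, nothing ever happens
print(room, cold)
```

Dropping the temperature from 300 K to 4 K shrinks the classical rate by hundreds of orders of magnitude, which is why Goldanski’s observation needed a non-classical explanation.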
Why was it possible? Goldanski argued that the reactions were getting help from quantum effects. It is well known that particles governed by quantum rules can get across energy barriers even if they don’t appear to have enough energy to do so. Instead of going over the top, they can pass through the barrier, a process known as tunnelling. It’s possible because of the smeared-out nature of quantum objects: they aren’t simply here or there, but have positions described by a probability distribution. A quantum particle on one side of a barrier has a small probability of suddenly and spontaneously turning up on the other side.
Goldanski saw the signature of quantum tunnelling in his ultracold experiments in the lab: the rate of formaldehyde polymerization didn’t steadily increase with temperature, as conventional kinetic theory predicts, but stayed much the same as the temperature rose.
Goldanski believed that his quantum-assisted reactions in space might have helped the molecular building blocks of life to assemble there from simple ingredients such as hydrogen cyanide, ammonia and water. He even thought they could help to explain why biological molecules such as amino acids have a preferred ‘handedness’. Most amino acids have so-called chiral carbon atoms, to which four different chemical groups are attached, permitting two mirror-image variants. In living organisms these amino acids are almost always of the left-handed variety, a long-standing and still unexplained mystery. Goldanski argued that his ultracold reactions could favour one enantiomer over the other, since the tunnelling rates might be highly sensitive to tiny biasing influences such as the polarization of the radiation inducing them.
Chemical reactions assisted by quantum tunnelling are now well established – not just in space, but in the living cell. Some enzymes are more efficient catalysts than one would expect classically, because they involve the movement of hydrogen ions – lone protons, which are light enough to experience significant quantum tunnelling.
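The mass-dependence that makes protons special can be estimated with the standard rectangular-barrier tunnelling formula; the barrier height and width below are illustrative values, not taken from any particular enzyme:

```python
import math

# Order-of-magnitude tunnelling probability through a rectangular barrier:
#   T ~ exp(-2 * kappa * L),  kappa = sqrt(2 * m * (V - E)) / hbar
# The 0.5 eV height and 0.5 angstrom width are illustrative guesses chosen
# only to show the steep mass dependence; they describe no real enzyme.
HBAR = 1.054571817e-34     # reduced Planck constant, J s
EV = 1.602176634e-19       # one electron-volt, J
M_PROTON = 1.67262192e-27  # kg

def transmission(mass_kg, barrier_ev=0.5, width_m=0.5e-10):
    kappa = math.sqrt(2.0 * mass_kg * barrier_ev * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

t_proton = transmission(M_PROTON)        # hydrogen transfer: appreciable
t_deuteron = transmission(2 * M_PROTON)  # double the mass: far rarer
t_carbon = transmission(12 * M_PROTON)   # a carbon atom: essentially never
print(t_proton, t_deuteron, t_carbon)
```

Because the mass sits under a square root inside an exponential, even doubling it (proton to deuteron) cuts the tunnelling probability by orders of magnitude – which is also why deuterium-substitution experiments are used to detect tunnelling in enzymes.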
This counter-intuitive phenomenon can also subvert conventional expectations about what the products of a reaction will be. That was demonstrated very recently by Wesley Allen of the University of Georgia and his coworkers. They trapped a highly reactive molecule called methylhydroxycarbene – a carbene, whose carbon atom carries only two bonds and a pair of non-bonding electrons, predisposing it to react fast – in an inert matrix of solid argon at 11 kelvin. This molecule can in theory rearrange its atoms to form vinyl alcohol or acetaldehyde. In practice, however, it shouldn’t have enough energy to get over the barrier to these reactions under these ultracold conditions. But the carbene was transformed nonetheless – because of tunnelling.
“Tunnelling is not specifically a low-temperature phenomenon”, Allen explains. “It occurs at all temperatures. But at low temperatures the thermal activation shuts off, so tunnelling is all that is left.”
What’s more, although the formation of vinyl alcohol has a lower energy barrier, Allen and colleagues found that most of the carbene was transformed instead to acetaldehyde. That defied kinetic theory, which says that the lower the energy barrier to the formation of a product, the faster it will be produced and so the more it dominates the resulting mixture. The researchers figured that although the barrier to formation of acetaldehyde may have been higher, it was also narrower, which meant that it was easier to tunnel through.
Tunnelling through such high barriers as these “was quite a shock to most chemists”, says Allen. He says the result shows that “tunnelling is a broader aspect of chemical kinetics than has been understood in the past”.
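Allen’s width-versus-height trade-off follows from the same rectangular-barrier estimate: the tunnelling exponent grows linearly with the barrier’s width but only as the square root of its height. A sketch with illustrative numbers (not the barriers Allen’s group computed):

```python
import math

# Rectangular-barrier estimate, T ~ exp(-2*kappa*L), kappa = sqrt(2*m*V)/hbar,
# at fixed mass (a hydrogen atom).  The exponent scales as L * sqrt(V):
# width enters linearly, height only as a square root, so a
# higher-but-narrower barrier can be the easier one to tunnel through.
HBAR = 1.054571817e-34  # J s
EV = 1.602176634e-19    # J
M_H = 1.674e-27         # mass of a hydrogen atom, kg

def transmission(height_ev, width_m):
    kappa = math.sqrt(2.0 * M_H * height_ev * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

high_narrow = transmission(1.0, 0.3e-10)  # 1.0 eV tall, 0.3 angstrom wide
low_wide = transmission(0.5, 0.8e-10)     # 0.5 eV tall, 0.8 angstrom wide
print(high_narrow > low_wide)  # the taller, narrower barrier wins
```

This is the same logic by which the higher-but-narrower route to acetaldehyde could outpace the lower-barrier route to vinyl alcohol.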
Not so noble
Dmitri Mendeleev’s first periodic table in 1869 didn’t just have some gaps for yet-undiscovered elements. It had a whole column missing: a whole family of chemical elements whose existence no one suspected. The lightest of them – helium – had been glimpsed in the spectrum of the Sun only the year before, and the others began to turn up in the 1890s, starting with argon. The reason they took so long to surface, even though they are abundant (helium is the second most abundant element in the universe), is that they don’t do anything: they are inert, “noble”, not reacting with other elements.
That supposed unreactivity was tested with every extreme chemists could devise. Just after the noble gas argon was discovered in 1894, the French chemist Henri Moissan mixed it with fluorine, the viciously reactive element that he had isolated in 1886, and sent sparks through the mixture. Result: nothing. By 1924, the Austrian chemist Friedrich Paneth pronounced the consensus: “the unreactivity of the noble gas elements belongs to the surest of all experimental results.” Theories of chemical bonding seemed to explain why that was: the noble gases had filled shells of electrons, and therefore no capacity for adding more by sharing electrons in chemical bonds.
Linus Pauling, the chief architect of those theories, didn’t give up. In the 1930s he blagged a rare sample of the noble gas xenon and persuaded his colleague Don Yost at Caltech to try to get it to react with fluorine. After more cooking and sparking, Yost had succeeded only in corroding the walls of his supposedly inert quartz flasks.
Against this intransigent background, it was either a brave or foolish soul who would still try to make compounds from noble gases. But the first person to do so, British chemist Neil Bartlett at the University of British Columbia in Vancouver, was not setting out to be an iconoclast. He was just following some wonderfully plain reasoning.
In 1961 Bartlett discovered that the compound platinum hexafluoride (PtF6), first made three years earlier by US chemists, was an eye-wateringly powerful oxidant. Oxidation – the removal of electrons from a chemical element or compound – is so named because its prototypical form is the reaction with oxygen gas, a substance almost unparalleled in its ability to grab electrons. But Bartlett found that PtF6 can out-oxidize oxygen itself.
In early 1962 Bartlett was preparing a standard undergraduate lecture on inorganic chemistry and happened to glance at a textbook graph of ‘ionization potentials’ of substances: how much energy is needed to remove an electron from them. He noticed that it takes almost exactly the same energy to ionize – that is, to oxidize – oxygen molecules as xenon atoms. He realised that if PtF6 can do it to oxygen, it should do it to xenon too.
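Bartlett’s back-of-envelope reasoning can be put in numbers. The first ionization energies below are standard tabulated values, quoted here approximately for illustration:

```python
# Bartlett's reasoning in numbers.  First ionization energies from
# standard tables (approximate values, for illustration only):
IE_O2 = 12.07  # eV: O2 -> O2+ + e-
IE_XE = 12.13  # eV: Xe -> Xe+ + e-

# PtF6 was known to tear an electron from O2.  Xenon holds its outermost
# electron only marginally more tightly, so the same oxidant should be
# able to ionize xenon too.
gap = IE_XE - IE_O2
print(f"Xe is harder to ionize by only {gap:.2f} eV ({100 * gap / IE_O2:.1f}%)")
```

A difference of a fraction of a percent is well within the oxidizing muscle of PtF6, which is what made the prediction so compelling.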
So he tried the experiment, simply mixing red gaseous PtF6 and colourless xenon. Straight away, the glass was covered with a yellow material, which Bartlett found to have the formula XePtF6: the first noble-gas compound.
Since then, many other compounds of both xenon and krypton, another noble gas, have been made. Some are explosively unstable: Bartlett nearly lost an eye studying xenon dioxide. Heavy, radioactive radon forms compounds too, although it wasn’t until 2000 that the first compound of argon was reported by a group in Finland. Even now, the noble gases continue to produce surprises. Roald Hoffmann admits to being shocked when, in that same year, a compound of xenon and gold was reported by chemists in Berlin – for gold is supposed to be a noble, unreactive metal too. You can persuade elements to do almost anything, it seems.
Improper bonds
Covalent chemical bonds form when two atoms share a pair of electrons, which act as a glue that binds the union. At least, that’s what we learn at school. But chemists have come to accept that there are plenty of other ways to form bonds.
Take the hydrogen bond – the interaction of electron ‘lone pairs’ on one atom such as oxygen or nitrogen with a hydrogen atom on another molecular group with a slight positive charge. This interaction is now acknowledged as the key to water’s unusual properties and the glue that sticks DNA’s double helix together. But the formation of a second bond by hydrogen, supposedly a one-bond atom, was initially derided in the 1920s as a fictitious kind of chemical “bigamy”.
That, however, was nothing compared to the controversy that surrounded the notion, first put forward in the 1940s, that some organic molecules, such as ‘carbocations’ in which carbon atoms are positively charged, could form short-lived structures over the course of a reaction in which a pair of electrons was dispersed over three rather than two atoms. This arrangement was considered so extraordinary that it became known as non-classical bonding.
The idea was invoked to explain some reactions involving the swapping of dangling groups attached to molecules with bridged carbon rings. In the first step of the reaction, the ‘leaving group’ falls off to create an intermediate carbocation. By rights, the replacement dangling group, with an overall negative charge, should have attached at the same place, at the positively charged atom. But it didn’t: the “reactive centre” of the carbocation seemed able to shift.
Some chemists, especially Saul Winstein at the University of California at Los Angeles, argued that the intermediate carbocation contained a non-classical bond spanning three carbon atoms in a triangular ring, with its positive charge smeared between them, giving the replacement group more than one place to dock. This bonding structure would temporarily, and rather heretically, give one of the carbon atoms five instead of the usual four bonding partners.
Such an unusual kind of bonding offended the sensibilities of other chemists, most of all Herbert Brown, who was awarded a Nobel prize in 1979 for his work on boron compounds. In 1961 he opened the “non-classical ion” war with a paper dismissing proposals for these structures as lacking “the same care and same sound experimental basis as that which is customary in other areas of experimental organic chemistry”. The ensuing arguments raged for two decades in what Brown called a “holy war”. “By the time the controversy sputtered to a halt in the early 1980s”, says philosopher of chemistry William Goodwin of Rowan University in New Jersey, “a tremendous amount of intellectual energy, resources, and invective had been invested in resolving an issue that was crucial neither to progress in physical organic chemistry generally nor to the subfield of carbocation chemistry.” Both sides accused the rival theory of being ‘soft’ – able to fit any result, and therefore not truly scientific.
Brown and his followers didn’t object in principle to the idea of electrons being smeared over more than two atomic nuclei – that happened in benzene, after all. But they considered the nonclassical ion an unnecessary and faddish imposition for an effect that could be explained by less drastic, more traditional means. The argument was really about how to interpret the experiments that bore on the matter, and it shows that, particularly in chemistry, it could and still can be very hard to apply a kind of Popperian falsification to distinguish between rival theories. Goodwin thinks that the non-classical ion dispute was provoked and sustained by ambiguities built into the way organic chemists try to understand and describe the mechanisms of their reactions. “Organic chemists have sacrificed unambiguous explanation for something much more useful – a theory that helps them make plausible, but fallible, assessments of the chemical behavior of novel, complex compounds”, he says. As a result, chemistry is naturally prone to arguments that get resolved only when one side or the other runs out of energy – or dies.
The non-classical ion argument raged for two decades, until eventually most chemists except Brown accepted that these ions were real. Ironically, in the course of the debate both Winstein and Brown implied to a young Hungarian emigré chemist, George Olah, that his claim to have isolated a relatively long-lived carbocation – a development that ultimately helped resolve the issue – was unwise. This was another ‘reaction that couldn’t happen’, they advised – the ions were too unstable. But Olah was right, and his work on carbocations earned him a Nobel prize in 1994.
_____________________________________________________________________
The award of the 2011 Nobel prize for chemistry to Dan Shechtman for discovering quasicrystals allowed reporters to relish tales of experts being proved wrong. For his heretical suggestion that the packing of atoms in crystals can have a kind of fivefold (quasi)symmetry, Shechtman was ridiculed and ostracized and almost lost his job. The eminent chemist Linus Pauling derided him as a “quasi-scientist”.
Pauling of all people should have known that sometimes it is worth risking being bold and wrong, as he was himself with the structure of DNA in the 1950s. As it turned out, Shechtman was bold and right: quasicrystals do exist, and they earn their ‘impossible’ fivefold symmetry at the cost of not being fully ordered: not truly crystalline in the traditional sense. But while everyone enjoys seeing experts with egg on their faces, there’s a much more illuminating way to think about apparent violations of what is ‘possible’ in chemistry.
Here are some other examples of chemical processes that seemed to break the rules – reactions that ‘shouldn’t’ happen. They demonstrate why chemistry is such a vibrant, exciting science: because it operates on the borders of predictability and certainty. The laws of physics have an air of finality: they don’t tolerate exceptions. No one except cranks expects the conservation of energy to be violated. In biology, in contrast, ‘laws’ seem destined to have exceptions: even the heresy of inheritance of acquired characteristics is permitted by epigenetics. Chemistry sits in the middle ground between the rigidity of physics and the permissiveness of biology. Its basis in physics sets some limits and constraints, but the messy diversity of the elements can often transcend or undermine them.
That’s why chemists often rely on intuition to decide what should or shouldn’t be possible. When his postdoc student Xiao-Dong Wen told Nobel laureate Roald Hoffmann that his computer calculations found graphane – puckered sheets of carbon hexagons with hydrogens attached, with a C:H ratio of 1:1 – was more stable than familiar old benzene, Hoffmann insisted that the calculations were wrong. The superior stability of benzene, he said, “is sacrosanct - it’s hard to argue with it”. But eventually Hoffmann realized that his intuition was wrong: graphane is more stable, though no one has yet succeeded in proving definitively that it can be made.
You could say that chemistry flirts with its own law-breaking inclinations. Chemists often speak of reactions that are ‘forbidden’. For example, symmetry-forbidden reactions are ones that break the rules formulated by Hoffmann in his Nobel-winning work with organic chemist Robert Woodward in 1965 – rules governed by the mathematical symmetry properties of electron orbitals as they are rearranged or recombined by light or heat. Similarly, reactions that fail to conserve the total amount of ‘spin’, a quantum-mechanical property of electrons, are said to be spin-forbidden. And yet neither of these types of ‘forbidden’ reaction is impossible – they merely happen at slower rates. Hoffmann says that he (at Woodward’s insistence) even asserted in their 1965 paper that there were no exceptions to their rules, knowing that this would spur others into finding them.
So this gallery of ‘reactions they said couldn’t happen’ is not a litany of chemists’ conservatism and prejudice (although – let’s be honest – that sometimes played a part). It is a reflection of how chemistry itself exists in an unstable state, needing an intuition of right and wrong but having constantly to readjust that to the lessons of experience. That’s what makes it exciting – it’s not the case that anything might happen, but nevertheless big surprises certainly can. That’s why, however peculiar the claim, the right response in chemistry, perhaps more than any other branch of science, is not “that’s impossible”, but “prove it”.
Crazy tiling
In the early 1980s, Daniel Shechtman was bombarding metal alloys with electrons at the then National Bureau of Standards (NBS) in Gaithersburg, Maryland. Through mathematical analysis of the interference patterns formed as the beams reflected from different layers of the crystals, it was possible to determine exactly how the atoms were packed.
Among the alloys Shechtman studied, a blend of aluminium and manganese produced a beautiful pattern of sharp diffraction spots, which had always been found to be an indicator of crystalline order. But the crystal symmetry suggested by the pattern didn’t make sense. It was fivefold, like that of a pentagon. One of the basic rules of crystallography is that atoms can’t be packed into a regular, repeating arrangement with fivefold symmetry, just as pentagons can’t tile a floor in a periodic way that leaves no gaps.
Pauling wasn’t the only fierce critic of Shectman’s claims. When he persisted with them, his boss at NBS asked him to leave the group. And a paper he submitted in the summer of 1984 was rejected immediately. Only when he found some colleagues to back him up did he get the results published at the end of that year.
Yet the answer to the riddle they posed had been found already. In the 1970s the mathematician Roger Penrose had discovered that two rhombus-shaped tiles could be used to cover a flat plane without gaps and without the pattern ever repeating. In 1981, the crystallographer Alan Mackay found that if an atom were placed at every vertex of such a Penrose tiling, it would produce a diffraction pattern with fivefold symmetry, even though the tiling itself was not perfectly periodic. Shechtman’s alloy was analogous to a three-dimensional Penrose tiling. It was not a perfect crystal, because the atomic arrangement never repeated exactly; it was a quasicrystal.
Since then, many other quasicrystalline alloys have been discovered. They, or structures very much like them in polymers and assemblies of soap-like molecules called micelles. It has even been suggested that water, when confined in very narrow slits, can freeze into quasicrystalline ice.
You can’t have it both ways
For poor Boris Belousov, vindication came too late. When he was awarded the prestigious Lenin prize by the Soviet government in 1980 for his pioneering work on oscillating chemical reactions, he had already been dead for ten years.
Still, at least Belousov lived long enough to see the scorn heaped on his initial work turn to grudging acceptance by many chemists. When he discovered oscillating chemical reactions in the 1950s, he was deemed to have violated one of the most cherished principles of science: the second law of thermodynamics.
This states that all change in the universe must be accompanied by an increase in entropy – crudely speaking, it must leave things less ordered than they were to begin with. Even processes that seem to create order, such as the freezing of water to ice, in fact promote a broader disorder – here by releasing latent heat into the surroundings. This principle is what prohibits many perpetual motion machines (others violate the first law – the conservation of energy – instead). Violations of the second law are thus something that only cranks propose.
But Belousov was no crank. He was a respectable Russian biochemist interested in the mechanisms of metabolism, and specifically in glycolysis: how enzymes break down sugars. To study this process, Belousov devised a cocktail of chemical ingredients that should act like a simplified analogue of glycolysis. He shook them up and watched as the reaction proceeded, turning from clear to yellow.
Then it did something astonishing: it went clear again. Then yellow. Then clear. It began to oscillate repeatedly between these two coloured states. The problem is that entropy can’t possibly increase in both directions. So what’s up?
Belousov wasn’t actually the first to see an oscillating reaction. In 1921 American chemist William Bray reported oscillations in the reaction of hydrogen peroxide and iodate ions. But no one believed him either, even though the ecologist Alfred Lotka had shown in 1910 how oscillations could arise in a simple, hypothetical reaction. As for Belousov, he couldn’t get his findings published anywhere, and in the end he appended them to a paper in a Soviet conference proceedings on a different topic: a Pyrrhic victory, since they then remained almost totally obscure.
But not quite. In the 1960s another Soviet chemist, Anatoly Zhabotinsky, modified Belousov’s reaction mixture so that it switched between red and blue. That was pretty hard for others to ignore. The Belousov-Zhabotinsky (BZ) reaction became recognized as one of a whole class of oscillating reactions, and after it was transmitted to the West in a meeting of Soviet and Western scientists in Prague in 1967, these processes were gradually explained.
They don’t violate the second law after all, for the simple reason that the oscillations don’t last forever. Left to their own devices, they eventually die away and the reaction settles down to an unchanging state. They exist only while the reaction approaches its equilibrium state, and are thus an out-of-equilibrium phenomenon. Since thermodynamics speaks only about equilibrium states and not what happens en route to them, it is not threatened by oscillating reactions.
The oscillations are the result of self-amplifying feedback. As the reaction proceeds, one of the intermediate products (call it A) is autocatalytic: it speeds up the rate of its own production. This makes the reaction accelerate until the reagents are exhausted. But there is a second autocatalytic process that consumes A and produces another product, B, which kicks in when the first process runs out of steam. This too quickly exhausts itself, and the system reverts to the first process. It repeatedly flips back and forth between the two reactions, over-reaching itself first in one direction and then in the other. Lotka showed that the same thing can happen in populations of predators and their prey, which can get caught in alternating cycles of boom and bust.
If the BZ reaction is constantly fed fresh reagents, while the final products are removed, the oscillations can be sustained indefinitely: it remains out of equilibrium. Such oscillations are now know to happen in many chemical processes, including some industrially important reactions on metal catalysts and even in real glycolysis and other biochemical processes. If it takes place in an unstirred mixture, the BZ oscillations can spread from initiating spots as chemical waves, giving rise to complex patterns. Related patterns are the probable cause of many animal pigmentation markings. BZ chemical waves are analogues of the waves of electrical excitation that pass through heart tissue and induce regular heartbeats; if they are disturbed, the waves break up and the result can be a heart attack.
These waves might also form the basis of a novel form of computation. Andrew Adamatzky at the University of the West of England in Bristol is using their interactions to create logic gates, which he believes can be miniaturized to make a genuine “wet” chemical computer. He and collaborators in Germany and Poland have launched a project called NeuNeu to make chemical circuits that will crudely mimic the behaviour of neurons, including a capacity for self-repair.
The quantum escape clause
It’s very cold in space. So cold that molecules encountering one another in the frigid molecular clouds that pepper the interstellar void should generally lack enough energy to react. In general, reactions proceed via the formation of high-energy intermediate molecules, which then reconfigure into lower-energy products. Energy (usually thermal) is needed to get the reactants over this barrier, but in space there is next to none.
In the 1970s a Soviet chemist named Vitali Goldanski challenged that dogma. He showed that, with a bit of help from high-energy radiation such as gamma-rays or electron beams, some chemicals could react even when chilled by liquid helium to just four degrees above absolute zero – just a little higher than the coldest parts of space. For example, under these conditions Goldanski found that formaldehyde, a fairly common component of molecular clouds, could link up into polymer chains several hundred molecules long. At that temperature, conventional chemical kinetic theory suggested that the reaction should be so slow as to be virtually frozen.
Why was it possible? Goldanski argued that the reactions were getting help from quantum effects. It is well known that particles governed by quantum rules can get across energy barriers even if they don’t appear to have enough energy to do so. Instead of going over the top, they can pass through the barrier, a process known as tunnelling. It’s possible because of the smeared-out nature of quantum objects: they aren’t simply here or there, but have positions described by a probability distribution. A quantum particle on one side of a barrier has a small probability of suddenly and spontaneously turning up on the other side.
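The effect can be put into numbers with the standard textbook estimate for a rectangular barrier, in which the transmission probability falls off exponentially with the barrier's width and with the square root of the particle's mass. The sketch below uses illustrative figures only (a 1 eV barrier, 0.5 nm wide), but it shows both that the probability is nonzero even when the particle lacks the energy to go over the top, and why light particles tunnel so much more readily than heavy ones:

```python
import math

# Illustrative numbers only: transmission through a rectangular energy
# barrier of height V and width L for a particle of energy E < V, using
# the standard estimate T ~ exp(-2*kappa*L), kappa = sqrt(2m(V-E))/hbar.
HBAR = 1.054571817e-34          # reduced Planck constant, J s
EV = 1.602176634e-19            # joules per electronvolt
M_ELECTRON = 9.1093837015e-31   # kg
M_PROTON = 1.67262192369e-27    # kg

def tunnelling_probability(mass, energy_ev, barrier_ev, width_m):
    """Approximate probability of passing through the barrier."""
    kappa = math.sqrt(2 * mass * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# An electron with 0.5 eV of energy facing a 1 eV barrier 0.5 nm wide
# still gets through a few per cent of the time...
t_electron = tunnelling_probability(M_ELECTRON, 0.5, 1.0, 0.5e-9)
# ...while a proton in the same situation almost never does.
t_proton = tunnelling_probability(M_PROTON, 0.5, 1.0, 0.5e-9)
```

The steep mass dependence is why tunnelling matters most for electrons and lone protons.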
Goldanski saw the signature of quantum tunnelling in his ultracold experiments in the lab: the rate of formaldehyde polymerization didn’t steadily increase with temperature, as conventional kinetic theory predicts, but stayed much the same as the temperature rose.
Goldanski believed that his quantum-assisted reactions in space might have helped the molecular building blocks of life to have assembled there from simple ingredients such as hydrogen cyanide, ammonia and water. He even thought they could help to explain why biological molecules such as amino acids have a preferred ‘handedness’. Most amino acids have so-called chiral carbon atoms, to which four different chemical groups are attached, permitting two mirror-image variants. In living organisms these amino acids are always of the left-handed variety, a long-standing and still unexplained mystery. Goldanski argued that his ultracold reactions could favour one enantiomer over the other, since the tunnelling rates might be highly sensitive to tiny biasing influences such as the polarization of radiation inducing them.
Chemical reactions assisted by quantum tunnelling are now well established – not just in space, but in the living cell. Some enzymes are more efficient catalysts than one would expect classically, because they involve the movement of hydrogen ions – lone protons, which are light enough to experience significant quantum tunnelling.
This counter-intuitive phenomenon can also subvert conventional expectations about what the products of a reaction will be. That was demonstrated very recently by Wesley Allen of the University of Georgia and his coworkers. They trapped a highly reactive species called methylhydroxycarbene, whose electron-deficient divalent carbon atom predisposes it to react fast, in an inert matrix of solid argon at 11 kelvin. This molecule can in theory rearrange its atoms to form vinyl alcohol or acetaldehyde. In practice, however, it shouldn’t have enough energy to get over the barrier to these reactions under these ultracold conditions. But the carbene was transformed nonetheless – because of tunnelling.
“Tunnelling is not specifically a low-temperature phenomenon”, Allen explains. “It occurs at all temperatures. But at low temperatures the thermal activation shuts off, so tunnelling is all that is left.”
What’s more, although the formation of vinyl alcohol has a lower energy barrier, Allen and colleagues found that most of the carbene was transformed instead to acetaldehyde. That defied kinetic theory, which says that the lower the energy barrier to the formation of a product, the faster it will be produced and so the more it dominates the resulting mixture. The researchers figured that although the barrier to formation of acetaldehyde may have been higher, it was also narrower, which meant that it was easier to tunnel through.
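That trade-off drops straight out of the same exponential estimate: the barrier's width enters the exponent linearly, but its height only as a square root, so a narrow barrier can beat a low one. The toy comparison below uses arbitrary numbers, not the real carbene energetics:

```python
import math

# Toy comparison (hypothetical numbers, not the real carbene
# energetics): in the estimate T ~ exp(-2*L*sqrt(2m*dE)/hbar), the
# width L enters the exponent linearly but the barrier height dE
# only as a square root, so narrow can beat low.
HBAR = 1.054571817e-34
EV = 1.602176634e-19
MASS = 9.1093837015e-31  # an electron mass, for illustration

def transmission(barrier_ev, width_m):
    kappa = math.sqrt(2 * MASS * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

t_low_wide = transmission(0.3, 1.0e-9)     # lower barrier, 1.0 nm wide
t_high_narrow = transmission(1.2, 0.3e-9)  # 4x higher, but 0.3 nm wide
# The high-but-narrow barrier transmits roughly ten times better.
```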
Tunnelling through such high barriers as these “was quite a shock to most chemists”, says Allen. He says the result shows that “tunnelling is a broader aspect of chemical kinetics than has been understood in the past”.
Not so noble
Dmitri Mendeleev’s first periodic table in 1869 didn’t just have some gaps for yet-undiscovered elements. It had a whole column missing: an entire family of chemical elements whose existence no one suspected. The lightest of them – helium – had been glimpsed in the Sun’s spectrum just the year before, and the others began to turn up in the 1890s, starting with argon. The reason they took so long to surface, even though they are abundant (helium is the second most abundant element in the universe), is that they don’t do anything: they are inert, “noble”, not reacting with other elements.
That supposed unreactivity was tested with every extreme chemists could devise. Just after the noble gas argon was discovered in 1894, the French chemist Henri Moissan mixed it with fluorine, the viciously reactive element that he had isolated in 1886, and sent sparks through the mixture. Result: nothing. By 1924, the Austrian chemist Friedrich Paneth pronounced the consensus: “the unreactivity of the noble gas elements belongs to the surest of all experimental results.” Theories of chemical bonding seemed to explain why that was: the noble gases had filled shells of electrons, and therefore no capacity for adding more by sharing electrons in chemical bonds.
Linus Pauling, the chief architect of those theories, didn’t give up. In the 1930s he blagged a rare sample of the noble gas xenon and persuaded his colleague Don Yost at Caltech to try to get it to react with fluorine. After more cooking and sparking, Yost had succeeded only in corroding the walls of his supposedly inert quartz flasks.
Against this intransigent background, it was either a brave or foolish soul who would still try to make compounds from noble gases. But the first person to do so, British chemist Neil Bartlett at the University of British Columbia in Vancouver, was not setting out to be an iconoclast. He was just following some wonderfully plain reasoning.
In 1961 Bartlett discovered that the compound platinum hexafluoride (PtF6), first made three years earlier by US chemists, was an eye-wateringly powerful oxidant. Oxidation – the removal of electrons from a chemical element or compound – is so named because its prototypical form is the reaction with oxygen gas, a substance almost unparalleled in its ability to grab electrons. But Bartlett found that PtF6 can out-oxidize oxygen itself.
In early 1962 Bartlett was preparing a standard undergraduate lecture on inorganic chemistry and happened to glance at a textbook graph of ‘ionization potentials’ of substances: how much energy is needed to remove an electron from them. He noticed that it takes almost exactly the same energy to ionize – that is, to oxidize – oxygen molecules as xenon atoms. He realised that if PtF6 can do it to oxygen, it should do it to xenon too.
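Bartlett's syllogism rested on a remarkably tight numerical coincidence. The figures, from standard tables, are worth seeing side by side:

```python
# The near-coincidence behind Bartlett's reasoning, in electronvolts
# (values from standard tables, quoted to two decimal places):
IONIZATION_ENERGY_EV = {
    "O2": 12.07,  # energy to strip an electron from molecular oxygen
    "Xe": 12.13,  # energy to strip an electron from atomic xenon
}
# PtF6 was known to ionize O2, so an oxidant that can extract an
# electron at 12.07 eV ought to manage 12.13 eV too.
gap = IONIZATION_ENERGY_EV["Xe"] - IONIZATION_ENERGY_EV["O2"]
```

A difference of a few hundredths of an electronvolt: close enough for the inference to hold.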
So he tried the experiment, simply mixing red gaseous PtF6 and colourless xenon. Straight away, the glass was covered with a yellow material, which Bartlett found to have the formula XePtF6: the first noble-gas compound.
Since then, many other compounds of both xenon and krypton, another noble gas, have been made. Some are explosively unstable: Bartlett nearly lost an eye studying xenon trioxide. Heavy, radioactive radon forms compounds too, although it wasn’t until 2000 that the first compound of argon was reported by a group in Finland. Even now, the noble gases continue to produce surprises. Roald Hoffmann admits to being shocked when, in that same year, a compound of xenon and gold was reported by chemists in Berlin – for gold is supposed to be a noble, unreactive metal too. You can persuade elements to do almost anything, it seems.
Improper bonds
Covalent chemical bonds form when two atoms share a pair of electrons, which act as a glue that binds the union. At least, that’s what we learn at school. But chemists have come to accept that there are plenty of other ways to form bonds.
Take the hydrogen bond – the interaction of electron ‘lone pairs’ on one atom such as oxygen or nitrogen with a hydrogen atom on another molecular group with a slight positive charge. This interaction is now acknowledged as the key to water’s unusual properties and the glue that sticks DNA’s double helix together. But the formation of a second bond by hydrogen, supposedly a one-bond atom, was initially derided in the 1920s as a fictitious kind of chemical “bigamy”.
That, however, was nothing compared to the controversy that surrounded the notion, first put forward in the 1940s, that some organic molecules, such as ‘carbocations’ in which carbon atoms are positively charged, could form short-lived structures over the course of a reaction in which a pair of electrons was dispersed over three rather than two atoms. This arrangement was considered so extraordinary that it became known as non-classical bonding.
The idea was invoked to explain some reactions involving the swapping of dangling groups attached to molecules with bridged carbon rings. In the first step of the reaction, the ‘leaving group’ falls off to create an intermediate carbocation. By rights, the replacement dangling group, with an overall negative charge, should have attached at the same place, at the positively charged atom. But it didn’t: the “reactive centre” of the carbocation seemed able to shift.
Some chemists, especially Saul Winstein at the University of California at Los Angeles, argued that the intermediate carbocation contains a non-classical bond spanning three carbon atoms in a triangular ring, with its positive charge smeared between them, giving the replacement group more than one place to dock. This bonding structure would temporarily, and rather heretically, give one of the carbon atoms five instead of the usual four bonding partners.
Such an unusual kind of bonding offended the sensibilities of other chemists, most of all Herbert Brown, who was awarded a Nobel prize in 1979 for his work on boron compounds. In 1961 he opened the “non-classical ion” war with a paper dismissing proposals for these structures as lacking “the same care and same sound experimental basis as that which is customary in other areas of experimental organic chemistry”. The ensuing arguments raged for two decades in what Brown called a “holy war”. “By the time the controversy sputtered to a halt in the early 1980s”, says philosopher of chemistry William Goodwin of Rowan University in New Jersey, “a tremendous amount of intellectual energy, resources, and invective had been invested in resolving an issue that was crucial neither to progress in physical organic chemistry generally nor to the subfield of carbocation chemistry.” Both sides accused the rival theory of being ‘soft’ – able to fit any result, and therefore not truly scientific.
Brown and his followers didn’t object in principle to the idea of electrons being smeared over more than two atomic nuclei – that happened in benzene, after all. But they considered the nonclassical ion an unnecessary and faddish imposition for an effect that could be explained by less drastic, more traditional means. The argument was really about how to interpret the experiments that bore on the matter, and it shows that, particularly in chemistry, it could and still can be very hard to apply a kind of Popperian falsification to distinguish between rival theories. Goodwin thinks that the non-classical ion dispute was provoked and sustained by ambiguities built into the way organic chemists try to understand and describe the mechanisms of their reactions. “Organic chemists have sacrificed unambiguous explanation for something much more useful – a theory that helps them make plausible, but fallible, assessments of the chemical behavior of novel, complex compounds”, he says. As a result, chemistry is naturally prone to arguments that get resolved only when one side or the other runs out of energy – or dies.
The non-classical ion argument dragged on until eventually most chemists except Brown accepted that these ions were real. Ironically, in the course of the debate both Winstein and Brown implied to a young Hungarian émigré chemist, George Olah, that his claim to have isolated a relatively long-lived carbocation – a development that ultimately helped resolve the issue – was unwise. This was another ‘reaction that couldn’t happen’, they advised – the ions were too unstable. But Olah was right, and his work on carbocations earned him a Nobel prize in 1994.
Monday, January 23, 2012
Nanotheology
Belatedly, here’s my final column for the Saturday Guardian. It’s final because, in a reshuffle to ‘consolidate’ the paper (i.e. save space because they’re losing so much money), the back page and its contents have been chopped. It was kind of fun while it lasted, though I intend shortly to post a few thoughts on being exposed to (and encouraged to engage with) the Comment is Free feedback. This piece was particularly revealing in that respect, eliciting as it did a fair bit of outrage from the transhumanists. Who’d have thought there were so many people desperately and credulously hanging out for the Singularity?
____________________________________________________________
What does God think of nanotechnology? The glib answer is that, like the rest of us, he’s only just heard of it. If you think it’s a silly question anyway, consider that a 2009 study claimed “religiosity is the dominant predictor of moral acceptance of nanotechnology.” ‘Science anthropologist’ Chris Toumey has recently surveyed this moral landscape.
Nanotechnology is a catch-all term that encompasses a host of diverse efforts to manipulate matter on the very small scales of atoms and cells. There’s no single objective. Some nanotechnologists are exploring new approaches to medicine, others want to make computer circuits or new materials.
Of the rather few explicitly religious commentaries on nanotech so far, some have focused on issues that could equally be raised by secular voices: sensible concerns about safety, commercial control and accountability, and responsible application. (None seems too bothered about the strong military interest.)
Yet much of the discussion has headed down the blind alley of transhumanism. Nanotech scientists have long sought to rescue their discipline’s public image from vocal fringe figures such as Eric Drexler and billionaire inventor Ray Kurzweil, who have painted a fantastic picture of tiny robots patching up our cells and perhaps hugely extending our longevity. Kurzweil has suggested that nanotech will play a big role in guiding us to a moment he calls the Singularity: a convergence of exponentially growing computer power and medical capability that will transform us into disembodied immortals. He has even set up a Singularity University, based on NASA’s research park in Silicon Valley, to prepare the way.
Needless to say, immortality – or its pursuit – isn’t acceptable to most religious observers of any creed, since it entails a hubristic attempt to transcend the divinely decreed limitations of the human body, and relieves us from saving our souls. But the transhumanism question isn’t unique to nanotech – it’s part of a wider debate about the ethics of human enhancement and modification.
In any case, as far as nanotech is concerned the theologians can relax. Transhumanism and Kurzweil’s Singularity are just delirious dreams and on no serious scientist’s agenda. One Christian writer admitted to being shocked by what he heard at a transhumanist conference. Quite right too: all these folks determined to freeze their heads or download their consciousness into computers are living in an infantile fantasy.
So are there any ethical issues in nanotech that really do have a religious dimension? Science-fiction writer Charles Stross has imagined the dilemmas of Muslims faced with bacon that is chemically identical to the real thing but assembled by nanotechnology rather than pigs. He wasn’t entirely serious, but some liberal Muslim scholars have debated whether the Qur’an places any constraints on the permitted rearrangements of matter. Given that chemistry was pioneered by Muslims between the eighth and twelfth centuries, this seems unlikely. Jewish scholars, meanwhile, have used the legend of the golem to think about the ethics of making life from inanimate matter, partly in reference to nanotech and artificial intelligence. In the 1960s the pre-eminent expert on the golem legends Gershom Scholem was sanguine about the idea, asking only that our digital golems “develop peacefully and don’t destroy the world.”
These academic discussions have so far been rather considered and tolerant. Toumey wonders whether they’d impinge on the views of, say, your average Southern Baptist, hinting tactfully at what we might suspect anyway: both sensible people and bigots adapt their religion to their temperament and prejudices rather than vice versa.
One British study of attitudes to nanotech made the point that religious groups were better able than secular ones to articulate their ethical concerns because they possessed a vocabulary and conceptual framework for them. The researchers suggested that religious groups might therefore take the lead in communicating public perceptions. I’m not so sure. Articulacy is useful, but it’s more important that you first understand the science. And just because you can couch your views eloquently in terms of souls and afterlives doesn’t make them more valid.
Tuesday, January 17, 2012
Forever young?
I was asked by the Guardian to write an online story about the new ‘youth cream’ from L’Oreal. I think they were anticipating a debunking job, but I guess I learnt here the difference between skepticism and cynicism. I’m not really interested in whether these things work or not (whatever ‘work’ can mean in this instance), but I had to admit that there was some kind of science behind this stuff, even if I see no proof yet that it has any lasting effect on wrinkles. So I was overcome by an attack of fairness (who said "gullibility"?). This is what resulted.
___________________________________________________
I don’t suppose I’m in the target group for Yves Saint Laurent’s new skin cream Forever Youth Liberator – but what if I did want to know whether it’s worth shelling out sixty quid for a 50 ml tub? I could be wowed by the (strangely similar) media reports. “It is likely to be one of the most sought after face creams ever”, says the Telegraph: “5,000 women have already pre-ordered a face cream using ingredients which scientists claimed would change the world.” Or as the Daily Mail puts it, the cream is “hailed as the ‘holy grail’ of anti-ageing.” (You have to read on to discover that it’s Amandine Ohayon, general manager of Yves Saint Laurent, who is doing the hailing here.)
But I’m hard to please. I want to know about the science supporting these claims. After all, cosmetics companies have been trying to blind us with science for years – perhaps ever since the white coats began to appear in the DuPont chemical company’s ads (“Better living through chemistry”) in the 1930s. Recently we’ve had skin creams loaded with nano-capsules, vitamins A, C and E, antioxidants and things with even longer names.
“The science behind the brand lies in the groundbreaking technology of Glycobiology”, one puff tells us. “It’s been noted as the future in the medical field, the fruit of more than 100 years of research and recognized by seven Nobel Prizes.” The Telegraph, meanwhile, parrots the PR that, “the cream has been 20 years in development, and has the backing of the Max Planck Institute in Germany.”
I rather wish that, as a chemist, I could say this is all tripe. But it’s not as simple as, say, claims by bottled-water companies to have a secret process that alters the molecular structure of water to assist hydration. For example, it’s true that glycobiology is a big deal. This field studies an undervalued and once unfashionable ingredient of living cells: sugars. Glycans are complicated sugar molecules that play many important biological roles. Attached to proteins at the surfaces of our cells, such sugars act as labels that distinguish different cell types – for example, they determine your blood group. Glycans and related biochemicals are an essential component of the way our cells recognise and communicate with one another.
Skin cells – essentially, tissue-generating cells called fibroblasts – produce glycans and other substances that form a surrounding extracellular matrix. Some of these glycans attract water and keep the skin plump and soft. But their production declines as fibroblasts age, and so the skin becomes dry and wrinkled. Skin creams routinely contain glycoproteins and glycans to redress this deficit.
Fine – but what’s so different about the new cream? It’s based on a combination of artificial glycans trademarked Glycanactif. Selfridges tells us that they “unlock the cells to reactivate their vital functions and liberate the youth potential at all levels of the skin”. Well, it would be nice if cells really were little boxes brimming with ‘youth potential’, just waiting to be ‘unlocked’, but this statement is basically voodoo.
So I contacted YSL. And – what do you know? – they sent me some useful science. It’s surrounded by gloss and puff (“Youth is a state of mind that cannot live without science” – meaning what, exactly?), and exposed as the source of that garbled soundbite from Selfridges. But it also shows that YSL has enlisted some serious scientists, most notably Peter Seeberger, a specialist in glycan chemistry at the Max Planck Institute of Colloids and Interfaces in Berlin. And it explains that, instead of just supplying a source of glycans in the extracellular matrix to make up for their reduced production in ageing cells, Glycanactif apparently binds to glycan receptors on the cell surface and stimulates them to start making the molecules (including other glycans and related compounds) needed for healthy skin.
Tough-skinned cynic that I am about the claims of cosmetics manufacturers, I am nonetheless emolliated, if not exactly rejuvenated. True, there’s nothing in the leaflet which proves that FYL does a better job than other skin creams. The science remains very sketchy in places. And (this is true of any claims for cosmetics) we’d reserve judgement until the long-term clinical trials, if it were a drug. But I’m offered a troupe of serious scientists ready to talk about the work. I’m open to persuasion.
Still, it puzzles me. How many of the thousands of advance orders, or no doubt the millions to come, will have been based on examination of the technical data? I know we lack the time, and usually the expertise, for such rigour. So what instead informs our decision to shell out sixty quid on a tiny tub of youthfulness? And if the science was all nonsense, would it make a difference?
___________________________________________________
I don’t suppose I’m in the target group for Yves Saint Laurent’s new skin cream Forever Youth Liberator - but what if I did want to know it’s worth shelling out sixty quid for a 50 ml tub? I could be wowed by the (strangely similar) media reports. “It is likely to be one of the most sought after face creams ever”, says the Telegraph, “5,000 women have already pre-ordered a face cream using ingredients which scientists claimed would change the world.” Or as the Daily Mail puts it, the cream is “hailed as the ‘holy grail’ of anti-ageing.” (You have to read on to discover that it’s Amandine Ohayon, general manager of Yves Saint Laurent, who is doing the hailing here.)
But I’m hard to please. I want to know about the science supporting these claims. After all, cosmetics companies have been trying to blind us with science for years – perhaps ever since the white coats began to appear in the DuPont chemical company’s ads (“Better living through chemistry”) in the 1930s. Recently we’ve had skin creams loaded with nano-capsules, vitamins A, C and E, antioxidants and things with even longer names.
“The science behind the brand lies in the groundbreaking technology of Glycobiology”, one puff tells us. “It’s been noted as the future in the medical field, the fruit of more than 100 years of research and recognized by seven Nobel Prizes.” The Telegraph, meanwhile, parrots the PR that, “the cream has been 20 years in development, and has the backing of the Max Planck Institute in Germany.”
I rather wish that, as a chemist, I could say this is all tripe. But it’s not as simple as, say, claims by bottled-water companies to have a secret process that alters the molecular structure of water to assist hydration. For example, it’s true that glycobiology is a big deal. This field studies an undervalued and once unfashionable ingredient of living cells: sugars. Glycans are complicated sugar molecules that play many important biological roles. Attached to proteins at the surfaces of our cells, such sugars act as labels that distinguish different cell types – for example, they determine your blood group. Glycans and related biochemicals are an essential component of the way our cells recognise and communicate with one another.
Skin cells – essentially, tissue-generating cells called fibroblasts – produce glycans and other substances that form a surrounding extracellular matrix, Some of these glycans attract water and keep the skin plump and soft. But their production declines as fibroblasts age, and so the skin becomes dry and wrinkled. Skin creams routinely contain glycoproteins and glycans to redress this deficit.
Fine – but what’s so different about the new cream? It’s based on a combination of artificial glycans trademarked Glycanactif. Selfridges tells us that they “unlock the cells to reactivate their vital functions and liberate the youth potential at all levels of the skin”. Well, it would be nice if cells really were little boxes brimming with ‘youth potential’, just waiting to be ‘unlocked’, but this statement is basically voodoo.
So I contact YSL. And – what do you know? – they sent me some useful science. It’s surrounded by gloss and puff (“Youth is a state of mind that cannot live without science” – meaning what, exactly?), and exposed as the source of that garbled soundbite from Selfridges. But it also shows that YSL has enlisted some serious scientists, most notably Peter Seeberger, a specialist in glycan chemistry at the Max Planck Institute of Colloids and Interfaces in Berlin. And it explains that, instead of just supplying a source of glycans in the extracellular matrix to make up for their reduced production in ageing cells, Glycanactif apparently binds to glycan receptors on the cell surface and stimulates them to start making the molecules (including other glycans and related compounds) needed for healthy skin.
Tough-skinned cynic that I am about the claims of cosmetics manufacturers, I am nonetheless emolliated, if not exactly rejuvenated. True, there’s nothing in the leaflet which proves that YSL does a better job than other skin creams. The science remains very sketchy in places. And (this is true of any claims for cosmetics) if it were a drug, we’d reserve judgement until the long-term clinical trials. But I’m offered a troupe of serious scientists ready to talk about the work. I’m open to persuasion.
Still, it puzzles me. How many of the thousands of advance orders, or no doubt the millions to come, will have been based on examination of the technical data? I know we lack the time, and usually the expertise, for such rigour. So what instead informs our decision to shell out sixty quid on a tiny tub of youthfulness? And if the science was all nonsense, would it make a difference?
Monday, January 16, 2012
The truth about Einstein's wife
Some weeks back I mentioned in passing in my Guardian column the far-fetched claim that Einstein’s first wife Mileva Marić was partly or even primarily responsible for the ideas behind his theory of relativity. Allen Esterson has written to me to point out that this claim is still widely circulated and accepted as established fact by some people. Indeed, he says that “the 2008-2009 EU Europa Diary for secondary school children (print run 3 million) had the following: ‘Did you know? Mileva Marić, Einstein's first wife, confidant and colleague – and co-developer of his Theory of Relativity – was born in what is now Serbia’”. Seems to me that this sort of thing (and the concomitant notion that this ‘truth’ has been long suppressed) ultimately doesn’t do the feminist cause any good. Allen has also posted on the web site Butterflies and Wheels a critique of an independent short film that tries to promote the myth – you can find it here.
Wednesday, January 11, 2012
How big is yours?
Here, then, is my column from last Saturday’s Guardian.
While writing this, I discovered that Google Scholar has an add-on that will tot up your citations to establish an h-index. From that, I gather that mine is around 29. One of the comments on the Guardian thread points out that Richard Feynman has an h of 23. As Nigel Tufnell famously said apropos Jimmy Page, “I think that says quite a lot.”
_________________________________________________________________
Many scientists worry that theirs isn’t big enough. Even those who sniff that size isn’t everything probably can’t resist taking a peek to see how they compare with their rivals. The truly desperate can google for dodgy techniques to make theirs bigger.
I’m talking about the h-index, a number that supposedly measures the quality of a researcher’s output. And if the schoolboy double entendres seem puerile, there does seem to be something decidedly male about the notion of a number that rates your prowess and ranks you in a league table. Given that, say, the 100 chemists with the highest h-index are all male, whereas 1 in 4 postdoctoral chemists is female, the h-index does seem to be the academic equivalent of a stag’s antlers.
Few topics excite more controversy among scientists. When I spoke about the h-index to the German Physical Society a few years back, I was astonished to find the huge auditorium packed. Some deplore it; some find it useful. Some welcome it as a defence against the subjective capriciousness of review and tenure boards.
The h-index is named after its inventor, physicist Jorge Hirsch, who proposed it in 2005 precisely as a means of bringing some rigour to the slippery question of who is most deserving of a grant or a post. The index measures how many highly cited papers a scientist has written: your value of h is the number of your papers that have each been cited by (included in the reference lists of) at least h other papers. So a researcher with an h of 10 has written 10 papers that have received at least 10 citations each.
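For the quantitatively minded, Hirsch’s definition translates directly into a few lines of code. This is just an illustrative sketch of the definition above (Hirsch defined h mathematically, not as any particular program): sort a researcher’s citation counts in descending order, and h is the largest rank at which the paper at that rank still has at least that many citations.

```python
def h_index(citations):
    """Return h: the largest h such that h of the papers
    have at least h citations each (Hirsch's definition)."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # this paper still 'counts' towards h
        else:
            break  # all later papers have even fewer citations
    return h

# A researcher whose papers have been cited 10, 8, 5, 4 and 3 times
# has h = 4: four papers with at least four citations each.
print(h_index([10, 8, 5, 4, 3]))
```

So the researcher with an h of 10 in the example above simply has ten entries near the top of that sorted list with double-digit citation counts.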
The idea is that citations are a measure of quality: if a paper reports something important, other scientists will refer to it. That’s broadly a reasonable assumption, but not airtight. There’s evidence that some papers get highly cited by chance, because of a runaway copycat effect: people cite them just because others have, in the same way that some mediocre books and songs become unaccountably popular.
But to get a big h-index, it’s not enough to write a few influential papers. You have to write a lot of them. A single paper could transform a field of science and win its author a Nobel prize, while doing little for the author’s h-index if he or she doesn’t write anything else of note. Nobel laureate chemist Harry Kroto is ranked an apparently undistinguished 264th in the h-index list of chemists because his (deserved) fame rests largely on a single breakthrough paper in 1985.
That’s one of the criticisms of the h-index – it imposes a one-size-fits-all view of scientific impact. There are many other potential faults. Young scientists with few publications score lower, however brilliant they are. The value of h can be artificially boosted – slightly but significantly – by scientists repeatedly citing their own papers. It fails to distinguish the relative contributions to the work in many-author papers. The numbers can’t be compared across disciplines, because citation habits differ.
Many variants of the h-index have been proposed to get round these problems, but there’s no perfect answer, and one great virtue of the h-index is its simplicity, which means that its pros and cons are relatively transparent. In any case, it’s here to stay. No one officially endorses the h-index for evaluation, but scientists confess that they use it all the time as an informal way of, say, assessing applicants for a job. The trouble is that it’s precisely for average scientists that the index works rather poorly: small differences in small h-indices don’t tell you very much.
The h-index is part of a wider trend in science to rely on metrics – numbers rather than opinions – for assessment. For some, that’s like assuming that book sales measure literary merit. It can distort priorities, encouraging researchers to publish all they can and follow fads (it would have served Darwin poorly). But numbers aren’t hostage to fickle whim, discrimination or favouritism. So there’s a place for the h-index, as long as we can keep it there.
Monday, January 09, 2012
No secret
Before I post my last Guardian column, here’s one that got away: I’d planned to write about a paper in PNAS (not yet online) on blind testing of new and old violins, until – as I was half-expecting – Ian Sample wrote a regular story on it. So this had to be scrapped.
Radio 4's PM programme covered the story too, but in a somewhat silly way. They got a sceptical professor from the Royal College of Music to come on and play some Bach on a new and an old instrument, and asked listeners to see if they could identify which was which. A good demonstration, I suppose, of exactly why double-blind tests were invented.
___________________________________________________________
At last we know Antonio Stradivari’s secret. Violinists and craftsmen have long speculated about what makes the legendary Italian luthier’s instruments sound so special. Does the magic lie in the forgotten recipe for the varnish, or in a chemical pre-treatment of the wood? Or perhaps it’s the sheer passage of time that mellows the tone into such richness?
Alas, none of these. A new study by French and US researchers suggests that the reason the sound of a Stradivari is so venerated is that it has never before been properly put to the test.
Twenty-one experienced violinists were asked to blind-test six violins – three new, two Stradivaris and one made by the equally esteemed eighteenth-century instrument-maker Guarneri del Gesù. Most of the players were unable to tell if an instrument was new or old, and their preferences bore no relation to cost or age. Although their opinions varied, the favourite choice was a modern instrument, and the least favourite, by a clear margin, was a Stradivari.
OK, it’s just a small-scale test – getting hold of even three old violins (combined value $10m) was no mean feat. And you’ll have to trust me that the researchers took all the right precautions. The tests were, for example, literally double-blind – both the researchers and the players wore welders’ goggles in dim lighting to make sure they couldn’t identify the type of instrument by eye. And in case you’re thinking they just hit on a dud Stradivari (which do exist), the one with the worst rating had been owned by several well-known violinists.
This is embarrassing for the experts, both scientists and musicians. In judging quality, “the opinions of different violinists would coincide absolutely”, one acoustics expert has previously said. “Any musician will tell you immediately whether an instrument he is playing on is an antique instrument or a modern one”, claimed another. And a distinguished violinist once insisted to me that the superior sound of the most expensive old instruments is “very real”.
But acoustic scientists have struggled to identify any clear differences between the tone of antique and (good) new instruments. And as for putting belief to the test, an acoustic scientist once told me that he doubted any musicians would risk exposing themselves to a blind test, preferring the safety of the myth.
That’s why the participants in the latest study deserve credit. They’re anonymous, but they must know how much fury they could bring down on their heads. If you’ve paid $3m for one of the 500 or so remaining Strads, you don’t want to be told that a modern instrument would sound as good at a hundredth of the price.
But that’s perhaps the problem in the first place. In a recent blind wine-tasting study, the ‘quality’ was deemed greater when the subjects were told that the bottle cost more.
Is there a killjoy aspect to this demonstration that the mystique of the Strad evaporates under scientific scrutiny? Is it fair to tell violinists that their rapture at these instruments’ irreplaceable tone is a neural illusion? Is this an example of Keats’ famous criticism that science will “clip an Angel’s wings/Conquer all mysteries by rule and line”?
I suspect that depends on whether you want to patronize musicians or treat them as grown-ups – as well as whether you wish to deny modern luthiers the credit they are evidently due. In fact, musicians themselves sometimes chafe at the way their instruments are revered over their own skill. The famous violinist Jascha Heifetz, who played a Guarneri del Gesù, pointedly implied that it’s the player, not the instrument, who makes the difference between the sublime and the mediocre. A female fan once breathlessly complimented him after a performance on the “beautiful tone” of his violin. Heifetz turned around and bent to put his ear close to the violin lying in its case. “I don’t hear anything”, he said.
Wednesday, January 04, 2012
Science is a joke
Belatedly, here is last Saturday’s Critical Scientist column for the Guardian.
_____________________________________________________________________
Is there something funny about science? Audiences at Robin Ince’s seasonal slice of rationalist revelry, Nine Carols and Songs for Godless People, just before Christmas seemed to think so. This annual event at the Bloomsbury Theatre in London is far more a celebration of the wonders of science than an exercise in atheistic God-baiting. In fact God gets a rather easy ride: the bad science of tabloids, fundamentalists, quacks and climate-change sceptics provides richer comic fodder.
Time was when London theatre audiences preferred to laugh at science rather than with it, most famously with Thomas Shadwell’s satire on the Royal Society, The Virtuoso, in 1676. Samuel Butler and Jonathan Swift followed suit in showering the Enlightenment rationalists with ridicule. In modern times, scientists (usually mad) remained the butt of such jokes as came their way.
They haven’t helped matters with a formerly rather feeble line in laughs. Even now there are popularizing scientists who imagine that another repetition of the ‘joke’ about spherical cows will prove them all to be jolly japers. And while allowing that much humour lies in the delivery, there are scant laughs still to be wrung from formulaic juxtapositions of the exotic with the mundane (“imagine looking for the yoghurt in an eleven-dimensional supermarket!”), or anthropomorphising the sexual habits of other animals.
Meanwhile, science has its in-jokes like any other profession. A typical example: A neutron goes into a bar and orders a drink. “How much?”, he asks the bartender, who replies: “For you, no charge”. Look, I’m just telling you. Occasionally the humour is so rarefied that its solipsism becomes virtually a part of the joke itself. Thomas Pynchon, for instance, provides a rare example of an equation gag, which I risk straining the Guardian’s typography to repeat: ∫1/cabin d(cabin) = log cabin + c = houseboat. This was the only calculus joke I’d ever seen until Matt Parker produced a better one at Nine Carols. Speaking of rates of flow (OK, it was flow of poo, d(poo)/dt – some things never fail), he admitted that this part of his material was a little derivative.
The rise of stand-up has changed everything. Not only do we now have stand-ups who specialize in science, but several, such as Timandra Harkness and Helen Keen, are women, diluting the relentless blokeishness of much science humour. Some aim to be informative as well as funny. At the Bloomsbury you could watch Dr Hula (Richard Vranch) and his assistant demonstrate atomic theory and chemical bonding with hula hoops (more fun than perhaps it sounds).
As Ben Goldacre’s readers know, good jokes often have serious intent. Perhaps the most notorious scientific example was not exactly a joke at all. Certainly, when in 1996 the physicist Alan Sokal got a completely spurious paper on ‘quantum hermeneutics’ published in the journal of postmodern criticism Social Text, the postmodernists weren’t laughing. And Sokal himself was more intent on proving a point than making us giggle. Arguably funnier was the epilogue: in the early 2000s, a group of papers on quantum cosmology published in physics journals by the French brothers Igor and Grichka Bogdanov was so incomprehensible that this was rumoured to be the postmodernists’ revenge – until the indignant Bogdanovs protested that they were perfectly serious.
But my favourite example of this sort of prank was a paper submitted by computer scientists David Mazières and Eddie Kohler to one of the ‘junk science’ conferences that plague their field with spammed solicitations. The paper had a title, abstract, text, figures and captions that all consisted solely of the phrase “Get me off your fucking email list”. Mazières was keen to present the paper at the conference but was never told if it was accepted or not. Reporting the incident made me probably the first and only person to say ‘fucking’ in the august pages of Nature* – not, I admit, the most distinguished achievement, but we must take our glory where we can find it.
*Apparently not, according to Adam Rutherford on the Guardian site...
Monday, January 02, 2012
The new history
Here is the original draft of the end-of-year essay I published in the last 2011 issue of Nature.
___________________________________________________
2011 shows that our highly networked society is ever more prone to abrupt change. The future of our complex world depends on building resilience to shocks.
In the 1990s, American political scientist Francis Fukuyama, now at Stanford, predicted that the world was approaching the ‘end of history’ [1]. Like most smart ideas that prove to be wrong, Fukuyama’s was illuminating precisely for its errors. Events this year have helped to reveal why.
Fukuyama argued that after the collapse of the Soviet Union, liberal democracy could be seen as the logical and stable end point of civilization. Yet the prospect that the world will gradually replicate the US model of liberal democracy, as Fukuyama hoped, looks more remote today than it did at the end of the twentieth century.
This year we have seen proliferating protest movements in the fallout from the financial crisis – not just the cries of the marginalized and disaffected, but genuine challenges to the legitimacy of the economic system on which recent liberal democracies have been based. In the face of the grave debt crisis in Greece, the wisdom of deploying democracy’s ultimate tool – the national referendum – to solve it was questioned. The political situation in Russia and Turkey suggests that there is nothing inexorable or irreversible about a process of democratization, while North Africa and the Middle East demonstrate to politicians what political scientists could already have told them: that democratization can itself inflame conflict, especially when it is imposed in the absence of a strong pre-existing state [2,3]. Meanwhile, China continues to show that aggressive capitalism depends on neither liberalism nor democracy. As a recent report of the US National Intelligence Council admits, in the coming years “the Western model of economic liberalism, democracy, and secularism, which many assumed to be inevitable, may lose its luster” [4].
The real shortcoming behind Fukuyama’s thesis, however, was not his faith in democracy but that he considered history to be gradualist: tomorrow’s history is more (or less) of the same. The common talk among political analysts now is of ‘discontinuous change’, a notion raised by Irish philosopher Charles Handy 20 years ago [5], and alluded to by President Obama in his speech at the West Point Military Academy last year, when he spoke of ‘moments of change’. Sudden disruptive events, particularly wars, have of course always been a part of history. But they would come and go against a slowly evolving social, cultural and political backdrop. Now the potential for discontinuous social and political change is woven into the very fabric of global affairs.
Take the terrorist attack on the World Trade Centre’s twin towers in 2001. This was said by many to have proved Fukuyama wrong – but on this tenth anniversary of that event we can now see more clearly in what sense that was so. It was not simply that this was a significant historical event – Fukuyama was never claiming that those would cease. Rather, it was a harbinger of the new world order, which the subsequent ‘war on terror’ failed catastrophically to acknowledge. That was a war waged in the old way, by sending armies to battlegrounds (in Afghanistan and Iraq) according to Carl von Clausewitz’s old definition, in his classic 1832 work On War, of a continuation of international politics by other means. But not only were those wars in no sense ‘won’, they were barely wars at all – illustrating the remark of American strategic analyst Anthony Cordesman that “one of the lessons of modern war is that war can no longer be called war” [6]. Rather, armed conflict is a diffuse, nebulous affair, no longer corralled from peacetime by declarations and treaties, no longer recognizing generals or even statehood. In its place is a network of insurgents, militias, terrorist cells, suicide bombers, overlapping and sometimes competing ‘enemy’ organizations [7]. Somewhere in this web we have had to say farewell to war and peace.
Network revolutions
The nature of discontinuous change is often misunderstood. It is sometimes said – this is literally the defence of traditional economists in their failure to predict the on-going financial and national-debt crises – that no one can be expected to foresee such radical departures from the previous quotidian. They come, like a hijacked aircraft, out of a clear blue sky. Yet social and political discontinuities are rarely if ever random in that sense, even if there is a certain arbitrary character to their immediate triggers. Rather, they are abrupt in the same way, and for the same reasons, that phase transitions are abrupt in physics. In complex systems, including social ones, discontinuities don’t reflect profound changes in the governing forces but instead derive from the interactions and feedbacks between the component parts. Thus, discontinuities in history are precisely what you'd expect if you start considering social phenomena from a complex-systems perspective.
Experience with natural and technological complex systems teaches us, for example, that highly connected networks of strong interactions create a propensity for avalanches, catastrophic failures, and systemic ruptures [8,9]: in short, for discontinuous change.
So it should come as no surprise that today’s highly networked, interconnected world, replete with cell phones, ipads and social media, is prone to abrupt changes in course. It is much more than idle analogy that connects the cascade of minor failures leading to the 2003 power blackout of eastern North America with the freezing of liquidity in the global banking network in 2007-8.
Some see the revolts in Tunisia and Egypt in this way too, dubbing them ‘Twitter revolutions’ because of the way unrest and news of demonstration were spread on social networks. Although this is an over-simplification, it is abundantly clear that networking supplied the possibility for a random event to trigger a major one. The Tunisian revolt was set in motion by the self-immolation of a street vendor, Mohammed Bouazizi, in Sidi Bouzid, in protest at harsh treatment by officials. Three months earlier there was a similar case in the city of Monastir – but no one knew about it because the news was not spread on Facebook.
It was surely not without reasons that Twitter and Facebook were shut down by both the Tunisian and Egyptian authorities. The issue is not so much whether they ‘caused’ the revolutions, but that their existence – and the concomitant potential for mobilizing the young, educated populations of these countries – can alter the way things happen in the Middle East and beyond. These same tools are now vital to the Occupy protests disrupting complacent financial districts worldwide, from New York to Taipei, drawing attention to issues of social and economic inequality.
Social media seem also to have the potential to facilitate qualitatively new collective behaviours, such as the riots during the summer in the UK. These brief, destructive paroxysms are still an enigma. Unlike previous riots, they were not confined either to particular demographic subsets of the population or to areas of serious social deprivation. They had no obvious agenda, not even a release of suppressed communal fury – although there was surely a link to post-financial-crash austerity policies. One might almost call them events that grew simply because they could. Some British politicians suggested that Twitter should be disabled in such circumstances, displaying not only a loss of perspective (some of the same people celebrated the power of networking in the Arab Spring) but also a failure to understand the new order. After all, police monitoring of Twitter in some UK cities provided information that helped suppress rioting.
What all these events really point towards is the profound impact of globalization. They show how deep and dense the interdependence of economies, cultures and institutions has become, in large part thanks to the pervasive nature of information and communication technologies. And with this transformation come new, spontaneous modes of social and political organization, from terrorist and protest networks to online consumerism – modes that are especially prone to discontinuous change. Nothing will work that fails to take this new interconnectedness into account: not the economy, not policing, not democracy.
The path forwards
Such extreme interdependence makes it hard to find, or even to meaningfully define, the causes of major events. The US subprime mortgage problem caused the financial collapse only in the way Bouazizi’s immolation caused the Arab Spring – it could equally have been something else that set events in motion. The real vulnerabilities were systemic: webs of dependence that became destabilized by, say, runaway profits in the US banking industry, or rising food prices in North Africa. This means that potential solutions must lie there too.
Complex systems can rarely if ever be controlled by top-down measures. Instead, they must be managed by guiding the trajectories from the bottom up [10]. In a much simpler but instructive example, traffic lights may direct flows more efficiently if they are given adaptive autonomy and allowed to self-organize their switching, rather than imposing a rigid, supposedly optimal sequence [11]. The robustness of the Internet to random server failures is precisely due to the fact that no one designed it – it grew its ‘small world’ topology spontaneously.
This does not imply that political interventions are doomed to fail, but just that they must take other forms from those often advanced today. “Complex systems cannot be steered like a bus”, says Dirk Helbing of the Swiss Federal Institute of Technology (ETH) in Zurich, a specialist on the understanding and management of complex social systems. “Attempts to control the systems from the top down may be strong enough to disturb its intrinsic self-organization but not strong enough to re-establish order. The result would be chaos and inefficiency. Modern governance typically changes the institutional framework too quickly to allow individuals and companies to adapt. This destroys the hierarchy of time scales needed to establish stable order.”
But these systems are nevertheless manageable, Helbing insists – not by imposing structures but by creating the rules needed to allow the system to find its own stable organization. “This can’t be ensured by a regulatory authority that monitors the system and tries to enforce specific individual action”, he says.
That’s why theories or ideologies are likely to be less effective at predicting or averting crises than scenario modelling. It’s why problems need to be considered at several hierarchical levels, probably with multiple, overlapping models, and why solutions must have scope for adaptation and flexibility. And although cascading crises and discontinuous changes may be unpredictable, the connections and vulnerabilities that permit them are not. Planning for the future, then, might not be so much a matter of foreseeing what could go wrong as of making our systems and institutions robust enough to withstand a variety of shocks. This is how the new history will work.
References
1. Fukuyama, F. The End of History and the Last Man (Penguin, London, 1992).
2. E. D. Mansfield & Snyder, J. Int. Secur. 20, 5–38 (1995).
3. Cederman, L.-E., Hug, S. & Wenger, A., in Democritization (eds Grimm, S. & Merkel, W.), 15, 509-524 (Routledge, London, 2008).
4. National Intelligence Council, Global Trends 2025: A Transformed World (US Government Printing Office, Washington DC, 2008).
5. Handy, C., The Age of Unreason (Harvard Business School Press, Boston, 1990).
6. In H. Strachan, Europaeum Lecture, Geneva, 9 November 2006, p. 12.
7. J. C. Bohorquez, S. Gourley, A. R. Dixon, M. Spagat & N. F. Johnson, Nature 462, 911-914 (2009).
8. Barabási, A.-L. IEEE Control Syst. Mag. 27(4), 33-42 (2007).
9. Vespignani, A. Nature 464, 984-985 (2010).
10. Helbing, D. (ed.), Managing Complexity: Insights, Concepts, Applications (Springer, Berlin, 2008).
11. Lämmer, S. & Helbing, D., J. Stat. Mech. P04019 (2008).
___________________________________________________
2011 shows that our highly networked society is ever more prone to abrupt change. The future of our complex world depends on building resilience to shocks.
In the 1990s, American political scientist Francis Fukuyama, now at Stanford, predicted that the world was approaching the ‘end of history’ [1]. Like most smart ideas that prove to be wrong, Fukuyama’s was illuminating precisely for its errors. Events this year have helped to reveal why.
Fukuyama argued that after the collapse of the Soviet Union, liberal democracy could be seen as the logical and stable end point of civilization. Yet the prospect that the world will gradually replicate the US model of liberal democracy, as Fukuyama hoped, looks more remote today than it did at the end of the twentieth century.
This year we have seen proliferating protest movements in the fallout from the financial crisis – not just the cries of the marginalized and disaffected, but genuine challenges to the legitimacy of the economic system on which modern liberal democracies have been based. In the face of the grave debt crisis in Greece, the wisdom of deploying democracy’s ultimate tool – the national referendum – to solve it was questioned. The political situation in Russia and Turkey suggests that there is nothing inexorable or irreversible about a process of democratization, while North Africa and the Middle East demonstrate to politicians what political scientists could already have told them: that democratization can itself inflame conflict, especially when it is imposed in the absence of a strong pre-existing state [2,3]. Meanwhile, China continues to show that aggressive capitalism depends on neither liberalism nor democracy. As a recent report of the US National Intelligence Council admits, in the coming years “the Western model of economic liberalism, democracy, and secularism, which many assumed to be inevitable, may lose its luster” [4].
The real shortcoming behind Fukuyama’s thesis, however, was not his faith in democracy but that he considered history to be gradualist: tomorrow’s history is more (or less) of the same. The common talk among political analysts now is of ‘discontinuous change’, a notion raised by Irish philosopher Charles Handy 20 years ago [5], and alluded to by President Obama in his speech at the West Point Military Academy last year, when he spoke of ‘moments of change’. Sudden disruptive events, particularly wars, have of course always been a part of history. But they would come and go against a slowly evolving social, cultural and political backdrop. Now the potential for discontinuous social and political change is woven into the very fabric of global affairs.
Take the terrorist attack on the World Trade Center’s twin towers in 2001. This was said by many to have proved Fukuyama wrong – but on this tenth anniversary of that event we can now see more clearly in what sense that was so. It was not simply that this was a significant historical event – Fukuyama was never claiming that those would cease. Rather, it was a harbinger of the new world order, which the subsequent ‘war on terror’ failed catastrophically to acknowledge. That was a war waged in the old way, by sending armies to battlegrounds (in Afghanistan and Iraq) according to Carl von Clausewitz’s old definition, in his classic 1832 work On War, of a continuation of international politics by other means. But not only were those wars in no sense ‘won’, they were barely wars at all – illustrating the remark of American strategic analyst Anthony Cordesman that “one of the lessons of modern war is that war can no longer be called war” [6]. Rather, armed conflict is a diffuse, nebulous affair, no longer corralled from peacetime by declarations and treaties, no longer recognizing generals or even statehood. In its place is a network of insurgents, militias, terrorist cells, suicide bombers, overlapping and sometimes competing ‘enemy’ organizations [7]. Somewhere in this web we have had to say farewell to war and peace.
Network revolutions
The nature of discontinuous change is often misunderstood. It is sometimes said – this is literally the defence of traditional economists in their failure to predict the ongoing financial and national-debt crises – that no one can be expected to foresee such radical departures from the previous quotidian. They come, like a hijacked aircraft, out of a clear blue sky. Yet social and political discontinuities are rarely if ever random in that sense, even if there is a certain arbitrary character to their immediate triggers. Rather, they are abrupt in the same way, and for the same reasons, that phase transitions are abrupt in physics. In complex systems, including social ones, discontinuities don’t reflect profound changes in the governing forces but instead derive from the interactions and feedbacks between the component parts. Thus, discontinuities in history are precisely what you’d expect if you start considering social phenomena from a complex-systems perspective.
Experience with natural and technological complex systems teaches us, for example, that highly connected networks of strong interactions create a propensity for avalanches, catastrophic failures, and systemic ruptures [8,9]: in short, for discontinuous change.
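This propensity for avalanches can be illustrated with a minimal sketch – an assumption of mine, not a model from the article, though it is in the spirit of standard threshold-cascade models of networks. Each node fails once the fraction of its failed neighbours exceeds a threshold; a single random failure can then snowball through the whole system:

```python
# Minimal threshold-cascade sketch (illustrative, not from the article):
# a node fails when the failed fraction of its neighbours exceeds its
# threshold, so one local failure can trigger a system-wide avalanche.
import random

def make_network(n, k, rng):
    """Random graph with roughly k links per node."""
    nodes = list(range(n))
    edges = set()
    while len(edges) < n * k // 2:
        a, b = rng.sample(nodes, 2)
        edges.add((min(a, b), max(a, b)))
    nbrs = {i: set() for i in nodes}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    return nbrs

def cascade(nbrs, threshold, seed_node):
    """Fail one node, then propagate until no further node tips over."""
    failed = {seed_node}
    changed = True
    while changed:
        changed = False
        for node, ns in nbrs.items():
            if node in failed or not ns:
                continue
            if len(ns & failed) / len(ns) > threshold:
                failed.add(node)
                changed = True
    return len(failed)

rng = random.Random(42)
net = make_network(200, 4, rng)
size = cascade(net, threshold=0.2, seed_node=0)
print(f"avalanche size from one failure: {size} of 200 nodes")
```

With fragile nodes (low threshold) the avalanche typically engulfs most of the network, while robust nodes confine the same initial failure to its neighbourhood – the discontinuity lies in the interactions, not in the size of the trigger.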
So it should come as no surprise that today’s highly networked, interconnected world, replete with cell phones, iPads and social media, is prone to abrupt changes in course. It is much more than idle analogy that connects the cascade of minor failures leading to the 2003 power blackout of eastern North America with the freezing of liquidity in the global banking network in 2007–08.
Some see the revolts in Tunisia and Egypt in this way too, dubbing them ‘Twitter revolutions’ because of the way unrest and news of demonstrations were spread on social networks. Although this is an over-simplification, it is abundantly clear that networking supplied the possibility for a random event to trigger a major one. The Tunisian revolt was set in motion by the self-immolation of a street vendor, Mohammed Bouazizi, in Sidi Bouzid, in protest at harsh treatment by officials. Three months earlier there was a similar case in the city of Monastir – but no one knew about it because the news was not spread on Facebook.
It was surely not without reason that Twitter and Facebook were shut down by both the Tunisian and Egyptian authorities. The issue is not so much whether they ‘caused’ the revolutions, but that their existence – and the concomitant potential for mobilizing the young, educated populations of these countries – can alter the way things happen in the Middle East and beyond. These same tools are now vital to the Occupy protests disrupting complacent financial districts worldwide, from New York to Taipei, drawing attention to issues of social and economic inequality.
Social media seem also to have the potential to facilitate qualitatively new collective behaviours, such as the riots during the summer in the UK. These brief, destructive paroxysms are still an enigma. Unlike previous riots, they were not confined either to particular demographic subsets of the population or to areas of serious social deprivation. They had no obvious agenda, not even a release of suppressed communal fury – although there was surely a link to post-financial-crash austerity policies. One might almost call them events that grew simply because they could. Some British politicians suggested that Twitter should be disabled in such circumstances, displaying not only a loss of perspective (some of the same people celebrated the power of networking in the Arab Spring) but also a failure to understand the new order. After all, police monitoring of Twitter in some UK cities provided information that helped suppress rioting.
What all these events really point towards is the profound impact of globalization. They show how deep and dense the interdependence of economies, cultures and institutions has become, in large part thanks to the pervasive nature of information and communication technologies. And with this transformation come new, spontaneous modes of social and political organization, from terrorist and protest networks to online consumerism – modes that are especially prone to discontinuous change. Nothing will work that fails to take this new interconnectedness into account: not the economy, not policing, not democracy.
The path forwards
Such extreme interdependence makes it hard to find, or even to meaningfully define, the causes of major events. The US subprime mortgage problem caused the financial collapse only in the way Bouazizi’s immolation caused the Arab Spring – it could equally have been something else that set events in motion. The real vulnerabilities were systemic: webs of dependence that became destabilized by, say, runaway profits in the US banking industry, or rising food prices in North Africa. This means that potential solutions must lie there too.
Complex systems can rarely if ever be controlled by top-down measures. Instead, they must be managed by guiding the trajectories from the bottom up [10]. In a much simpler but instructive example, traffic lights may direct flows more efficiently if they are given adaptive autonomy and allowed to self-organize their switching, rather than imposing a rigid, supposedly optimal sequence [11]. The robustness of the Internet to random server failures is precisely due to the fact that no one designed it – it grew its ‘small world’ topology spontaneously.
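The traffic-light example can be sketched in a toy simulation – my own illustration, not the actual model of ref. [11]. A rigid alternating cycle is compared with a simple self-organizing rule that gives green to whichever approach has the longer queue; with asymmetric traffic, the adaptive rule allocates green time where it is needed:

```python
# Toy comparison (illustrative; NOT the Laemmer-Helbing model): a rigid
# alternating signal versus a self-organizing rule that serves whichever
# approach has the longer queue.
import random

def simulate(policy, steps=5000, seed=1):
    rng = random.Random(seed)
    queues = [0, 0]      # cars waiting on the two approaches
    green = 0            # approach currently shown green
    total_wait = 0
    for t in range(steps):
        # Asymmetric arrivals: approach 0 carries most of the traffic.
        if rng.random() < 0.4: queues[0] += 1
        if rng.random() < 0.1: queues[1] += 1
        green = policy(t, green, queues)
        if queues[green] > 0:
            queues[green] -= 1   # one car clears per step on green
        total_wait += sum(queues)  # every waiting car costs one step
    return total_wait

def fixed_cycle(t, green, queues):
    return (t // 20) % 2         # rigid 20-step alternation

def self_organizing(t, green, queues):
    # Local rule: switch only if the other queue is strictly longer.
    # (Real signals add hysteresis to offset switching costs.)
    other = 1 - green
    return other if queues[other] > queues[green] else green

rigid = simulate(fixed_cycle)
adaptive = simulate(self_organizing)
print(f"total waiting time, fixed cycle: {rigid}")
print(f"total waiting time, adaptive:    {adaptive}")
```

The rigid cycle wastes green time on the quiet approach while queues build on the busy one; the self-organizing rule, using only local queue information, adapts its split automatically – a miniature of the bottom-up management the text describes.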
This does not imply that political interventions are doomed to fail, but just that they must take different forms from those often advanced today. “Complex systems cannot be steered like a bus”, says Dirk Helbing of the Swiss Federal Institute of Technology (ETH) in Zurich, a specialist on the understanding and management of complex social systems. “Attempts to control the systems from the top down may be strong enough to disturb its intrinsic self-organization but not strong enough to re-establish order. The result would be chaos and inefficiency. Modern governance typically changes the institutional framework too quickly to allow individuals and companies to adapt. This destroys the hierarchy of time scales needed to establish stable order.”
But these systems are nevertheless manageable, Helbing insists – not by imposing structures but by creating the rules needed to allow the system to find its own stable organization. “This can’t be ensured by a regulatory authority that monitors the system and tries to enforce specific individual action”, he says.
That’s why theories or ideologies are likely to be less effective at predicting or averting crises than scenario modelling. It’s why problems need to be considered at several hierarchical levels, probably with multiple, overlapping models, and why solutions must have scope for adaptation and flexibility. And although cascading crises and discontinuous changes may be unpredictable, the connections and vulnerabilities that permit them are not. Planning for the future, then, might not be so much a matter of foreseeing what could go wrong as of making our systems and institutions robust enough to withstand a variety of shocks. This is how the new history will work.
References
1. Fukuyama, F. The End of History and the Last Man (Penguin, London, 1992).
2. Mansfield, E. D. & Snyder, J. Int. Secur. 20, 5–38 (1995).
3. Cederman, L.-E., Hug, S. & Wenger, A., in Democratization (eds Grimm, S. & Merkel, W.), 15, 509–524 (Routledge, London, 2008).
4. National Intelligence Council, Global Trends 2025: A Transformed World (US Government Printing Office, Washington DC, 2008).
5. Handy, C., The Age of Unreason (Harvard Business School Press, Boston, 1990).
6. Quoted in Strachan, H., Europaeum Lecture, Geneva, 9 November 2006, p. 12.
7. Bohorquez, J. C., Gourley, S., Dixon, A. R., Spagat, M. & Johnson, N. F. Nature 462, 911–914 (2009).
8. Barabási, A.-L. IEEE Control Syst. Mag. 27(4), 33-42 (2007).
9. Vespignani, A. Nature 464, 984-985 (2010).
10. Helbing, D. (ed.), Managing Complexity: Insights, Concepts, Applications (Springer, Berlin, 2008).
11. Lämmer, S. & Helbing, D., J. Stat. Mech. P04019 (2008).