Saturday, March 24, 2012
On looking good
I seem to have persuaded Charlotte Raven, and doubtless now many others too, that Yves Saint Laurent’s Forever Youth Liberator skin cream is the real deal. I don’t know if my ageing shoulders can bear the weight of this responsibility. All I can say in my defence is that it would have been unfair to YSL to allow my instinct to scoff to override my judgement of the science – on which, more here. But as Ms Raven points out, ultimately I reserve judgement. Where I do not reserve judgement is in saying, with what I hope does not seem like cloying gallantry, that it is hard to see why she would feel the need to consider shelling out for this gloop even if it does what it says on the tin. Or rather, it is very easy to understand why she feels the pressure to do so, given the ways of the world, but also evident that she has every cause to ignore it. No, look, I’m not saying that if she didn’t look so good without makeup and rejuvenating creams then she’d be well advised to start slapping them on, it’s just that… Oh dear, this is a minefield.
Thursday, March 22, 2012
Wonders of New York
Here’s a piece about an event in NYC in which I took part at the end of last month.
______________________________________________________________________
It was fitting that the ‘Wonder Cabinet’, a public event at a trendy arthouse cinema in New York at the end of February, should have been opened by Lawrence Weschler, director of New York University’s Institute for the Humanities, under whose auspices the affair was staged. For Weschler’s Pulitzer-shortlisted Mr Wilson’s Cabinet of Wonder (1995) tells the tale of David Wilson’s bizarre Museum of Jurassic Technology in Los Angeles, in which you can never quite be sure whether the exhibits are factual or not (they usually are, after a fashion). And that was very much the nature of what followed in the ten-hour marathon that Weschler introduced.
Was legendary performance artist Laurie Anderson telling the truth, for instance, about her letter to Thomas Pynchon requesting permission to create an opera based on Gravity’s Rainbow? Expecting no reply from the famously reclusive author, she was surprised to receive a long epistle in which Pynchon proclaimed his admiration for her work and offered his enthusiastic endorsement of her plan. There was just one catch: he insisted that it be scored for solo banjo. Anderson took this to be a uniquely gracious way of saying no. I didn’t doubt her for a second.
The day had a remarkably high quota of such head-scratching and jaw-dropping revelations, the intellectual equivalent of those celluloid rushes in the movies of Coppola, Kubrick and early Spielberg. Even if you thought, as I did, that you knew a smattering about bowerbirds – the male of which constructs an elaborate ‘bower’ of twigs and decorates it with scavenged objects to lure a female – seeing them in action during a talk by ornithologist Gail Patricelli of the University of California at Davis was spectacular. Each species has its own architectural style and, most strikingly, its own colour scheme: blue for the Satin Bowerbird (which went to great lengths to steal the researchers’ blue toothbrushes), bone-white and green for the Great Bowerbird. Some of these constructions are exquisite, around a metre in diameter. But all that labour is only the precursor to an elaborate mating ritual in which the most successful males exhibit an enticing boldness without tipping into scary aggression. This means that female choice selects for a wide and subtle range of male social behaviour, among which are sensitivity and responsiveness to the potential mate’s own behavioural signals. And all this for the most anti-climactic of climaxes, an act of copulation that lasts barely two seconds.
Or take the octopus described by virtual-reality visionary (and multi-instrumentalist) Jaron Lanier, which was revealed by secretly installed CCTV to be the mysterious culprit stealing the rare crabs from the aquarium in which it was kept. The octopus would climb out of its tank (it could survive out of water for short periods), clamber into the crabs’ container, help itself and return home to devour the spoils and bury the evidence. And get this: it closed the crab-tank lid behind it to hide its tracks – and in doing so, offered what might be interpreted as evidence for what developmental psychologists call a ‘theory of mind’, an ability to ascribe autonomy and intention to other beings. Octopi and squid would rule the world, Lanier claimed, if it were not for the fact that they have no childhood: abandoned by the mother at birth, the youngsters are passed on none of the learned culture of the elders, and so (unlike bower birds, say) must always begin from scratch.
All this orbited now close to, now more distant from the raison d’être of the event, which was philosopher David Rothenberg’s new book Survival of the Beautiful, an erudite argument for why we should take seriously the notion that non-human creatures have an aesthetic sense that exceeds the austere exigencies of Darwinian adaptation. It’s not just that the bowerbird does more than seems strictly necessary to get a mate (although what is ‘necessary’ is open to debate); the expression of preferences by the female seems as elaborately ineffable and multi-valent as anything in human culture. Such reasoning of course stands at risk of becoming anthropomorphic, a danger fully appreciated by Rothenberg and the others who discussed instances of apparent creativity in animals. But it’s conceivable that this question could be turned into hard science. Psychologist Ofer Tchernichovski of the City University of New York hopes, for example, to examine whether birdsong uses the same musical tricks as human music (basically, the creation and subsequent violation of expectation) to elicit emotion, say by measuring in songbirds the physiological indicators of a change in arousal – such as heartbeat and release of the ‘pleasure’ neurotransmitter dopamine – that betray an emotional response in humans. Even if you want to quibble over what this will say about the bird’s ‘state of mind’, the question is definitely worth asking.
But it was the peripheral delights of the event, as much as the exploration of Rothenberg’s thesis, that made it a true cabinet of wonders. Lanier elicited another ‘Whoa!’ moment by explaining that the use of non-human avatars in virtual reality – putting people in charge of a lobster’s body, say – amounts to an exploration of the pre-adaptation of the human brain: the kinds of somatic embodiments that it is already adapted to handle, some of which might conceivably be the shapes into which we will evolve. This is a crucial aspect of evolution: it’s not so much that a mutation introduces new shapes and functions, but that it releases a potential that is already latent in the ancestral organism. Beyond this, said Lanier, putting individuals into more abstract avatars can be a wonderful educational tool by engaging mental resources beyond abstract reasoning, just as the fingers of an improvising pianist effortlessly navigate a route through harmonic space that would baffle the logical mind. Children might learn trigonometry much faster by becoming triangles; chemists will discover what it means to be a molecule.
Meanwhile, the iPad apps devised by engagingly modest media artist Scott Snibbe provided the best argument I’ve so far seen for why this device is not simply a different computer interface but a qualitatively new form of information technology, in both cognitive and creative terms. No wonder it was to Snibbe that Björk (“an angel”, he confided) went to realise her multimedia ‘album’ Biophilia. Whether this interactive project represents the future of music or an elaborate game remains to be seen; for me, Snibbe’s guided tour of its possibilities evoked a sensation of tectonic shift akin to the one I vaguely recall on first being told that there was this thing on the internet called a ‘search engine’.
But the prize for the most arresting shaggy dog story went again to Anderson. Her attempts to teach her dog to communicate and play the piano were already raised beyond the status of the endearingly kooky by the profound respect in which she evidently held the animal. But when she recounted the dog’s perplexed discovery, during a mountain hike, that death, in the form of vultures, could descend from above – another 180 degrees of danger to consider – we were all suddenly reminded that we were in downtown Manhattan, just a few blocks from the decade-old hole in the financial district. And we felt not so far removed from these creatures at all.
______________________________________________________________________
Wednesday, March 21, 2012
The beauty of an irregular mind
Here’s the news story on this year’s Abel Prize that I’ve just written for Nature. You’ve always got to take a deep breath before diving into the Abel. But it is fun to attempt it.
___________________________________________________________
Maths prize awarded for elucidating the links between numbers and information.
An ‘irregular mind’ is what has won this year’s Abel Prize, one of the most prestigious awards in mathematics, for Endre Szemerédi of the Alfred Rényi Institute of Mathematics in Budapest, Hungary.
This is how Szemerédi was described in a book published two years ago to mark his 70th birthday, which added that “his brain is wired differently than for most mathematicians.”
Szemerédi has been awarded the prize, worth 6m Norwegian krone (about US$1m), “for his fundamental contributions to discrete mathematics and theoretical computer science, and in recognition of the profound and lasting impact of these contributions on additive number theory and ergodic theory”, according to the Norwegian Academy of Science and Letters, which instituted the prize as a kind of ‘mathematics Nobel’ in 2003.
Mathematician Timothy Gowers of Cambridge University, who has worked in some of the same areas as Szemerédi, says that he has “absolutely no doubt that the award is extremely appropriate.”
Nils Stenseth, president of the Norwegian Academy of Science and Letters, who announced the award today, says that Szemerédi’s work shows how research that is purely curiosity-driven can turn out to have important practical applications. “Szemerédi’s work supplies some of the basis for the whole development of informatics and the internet”, he says. “He showed how number theory can be used to organize large amounts of information in efficient ways.”
Discrete mathematics deals with mathematical structures that are made up of discrete entities rather than smoothly varying ones: for example, integers, graphs (networks), permutations and logic operations. Crudely speaking, it entails a kind of digital rather than analogue maths, which helps to explain its relationship to aspects of computer theory.
Szemerédi was spotted and mentored by another Hungarian pioneer in this field, Paul Erdös, who is widely regarded as one of the greatest mathematicians of the 20th century – even though Szemerédi began his training not in maths at all, but at medical school.
One of his first successes was a proof of a conjecture made in 1936 by Erdös and his colleague Paul Turán concerning the properties of integers. They aimed to establish criteria for whether a series of integers contains arithmetic progressions – sequences of integers that differ by the same amount, such as 3, 6, 9…
In 1975 Szemerédi showed that any sufficiently dense subset of the integers must contain arithmetic progressions of any given length [1]. In other words, if you had to pick, say, 1 percent of all the numbers between 1 and some very large number N, you couldn’t avoid selecting some arithmetic progressions. This result, formerly the Erdös–Turán conjecture, is now known as Szemerédi’s theorem.
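To get a feel for what the result says, here is a toy sketch in Python – my own illustration, nothing to do with Szemerédi’s actual proof, and the value of N is arbitrary. It samples roughly 1 percent of the integers up to N and hunts by brute force for a three-term arithmetic progression. A random sample of this size contains one essentially every time; Szemerédi’s theorem makes the far stronger promise that, once N is large enough, no subset of that density – however carefully constructed – can avoid them.

```python
import random

def find_three_term_ap(numbers):
    """Brute-force search for a progression a, a+d, a+2d (d > 0) in the collection."""
    s = set(numbers)
    for a in s:
        for b in s:
            d = b - a
            if d > 0 and (b + d) in s:
                return (a, b, b + d)
    return None

# Sample roughly 1 percent of the integers from 1 to N at random.
N = 100_000
subset = random.sample(range(1, N + 1), N // 100)

print(find_three_term_ap(subset))
```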
The result connected work on number theory to graph theory, the mathematics of networks of connected points, which Erdös had also studied. The relationship between graphs and permutations of numbers is most famously revealed by the four-colour theorem, which states that it is possible to colour any map (considered as a network of boundaries) with four colours such that no two regions sharing a border have the same colour. The problem of arithmetic progressions becomes analogous if one imagines giving numbers in a progression the same colour.
Meanwhile, relationships between number sequences become relevant to computer science via so-called sorting networks, which are hypothetical networks of wires, like parallel train tracks, that sort strings of numbers into numerical sequence by making pairwise comparisons and then shunting them from one wire to another. Szemerédi and his Hungarian collaborators Miklós Ajtai and János Komlós discovered an optimal sorting network for parallel processing in 1983 [2], one of several of Szemerédi’s contributions to theoretical computer science.
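By way of illustration only – this is a standard textbook four-input network, not the vastly more intricate Ajtai–Komlós–Szemerédi construction – a sorting network is just a fixed sequence of compare-and-swap steps between pairs of ‘wires’, sketched here in Python:

```python
def apply_network(values, comparators):
    """Run a fixed sequence of compare-and-swap steps over the 'wires'."""
    wires = list(values)
    for i, j in comparators:
        if wires[i] > wires[j]:
            wires[i], wires[j] = wires[j], wires[i]
    return wires

# A classic five-comparator network that sorts any four inputs.
FOUR_INPUT_NETWORK = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

print(apply_network([3, 1, 4, 2], FOUR_INPUT_NETWORK))  # -> [1, 2, 3, 4]
```

Comparators acting on disjoint pairs of wires can fire simultaneously, which is what makes such networks suited to parallel processing.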
When mathematicians discuss Szemerédi’s work, the word ‘deep’ often emerges – a reflection of the connections it often makes between apparently different fields. “He shows the advantages of working on a whole spectrum of problems”, says Stenseth.
For example, Szemerédi’s theorem brings number theory in contact with the theory of dynamical systems: physical systems that evolve in time, such as a pendulum or a solar system. As Israeli-American mathematician Hillel Furstenberg demonstrated soon after the theorem was published [3], it can be derived in a different way by considering how often a dynamical system returns to a particular state: an aspect of so-called ergodic behaviour, which relates to how thoroughly a dynamical system explores the space of possible states available to it.
Gowers says that many of Szemerédi’s results, including his celebrated theorem, are significant not so much for what they prove as for the fertile ideas developed in the course of the proof. For example, Szemerédi’s theorem made use of another of his key results, called the Szemerédi regularity lemma, which has proved central to the analysis of certain types of graphs.
References
1. E. Szemerédi, "On sets of integers containing no k elements in arithmetic progression", Acta Arithmetica 27: 199–245 (1975).
2. M. Ajtai, J. Komlós & E. Szemerédi, "An O(n log n) sorting network", Proceedings of the 15th Annual ACM Symposium on Theory of Computing, pp. 1–9 (1983).
3. H. Furstenberg, "Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions", J. d'Analyse Math. 31: 204–256 (1977).
___________________________________________________________
Friday, March 16, 2012
Genetic origami
Here’s another piece from BBC Future. Again, for non-UK readers the final version is here.
_______________________________________________________________
What shape is your genome? It sounds like an odd question, for what has shape got to do with genes? And therein lies the problem. Popular discourse in this age of genetics, when the option of having your own genome sequenced seems just round the corner, has focused relentlessly on the image of information imprinted into DNA as a linear, four-letter code of chemical building blocks. Just as no one thinks about how the data in your computer is physically arranged in its microchips, so our view of genetics is largely blind to the way the DNA strands that hold our genes are folded up.
But here’s an instance where an older analogy with computers might serve us better. In the days when data was stored on magnetic tape, you had to worry about whether the tape could actually be fed over the read-out head: if it got tangled, you couldn’t get at the information.
In living cells, DNA certainly is tangled – otherwise the metre or two of it in each of our cells could never be crammed into a nucleus just a few thousandths of a millimetre across. In humans and other higher organisms, from insects to elephants, the genetic material is packaged up in several chromosomes.
The issue isn’t, however, simply whether or not this folding leaves genes accessible for reading. For the fact is that there is a kind of information encoded in the packaging itself. Because genes can be effectively switched off by tucking them away, cells have evolved highly sophisticated molecular machinery for organizing and altering the shape of chromosomes. A cell’s behaviour is controlled by manipulations of this shape, as much as by what the genes themselves ‘say’. That’s clear from the fact that genetically identical cells in our body carry out completely different roles – some in the liver, some in the brain or skin.
The fact that these specialized cells can be returned to a non-specialized state that performs any function – as shown, for example, by the cloning of Dolly the sheep from a mammary cell – indicates that the genetic switching induced by shape changes and other modifications of our chromosomes is at least partly reversible. The medical potential of getting cells to re-commit to new types of behaviour – in cloning, stem-cell therapies and tissue engineering – is one of the prime reasons why it’s important to understand the principles behind the organization of folding and shape in our chromosomes.
In shooting at that goal, Tom Sexton, Giacomo Cavalli and their colleagues at the Institute of Human Genetics in Montpellier, France, in collaboration with a team led by Amos Tanay of the Weizmann Institute of Science in Israel, have started by looking at the fruitfly genome. That’s because it is smaller and simpler than the human genome (but not too small or simple to be irrelevant to it), and also because the fly is genetically the best studied and understood of higher creatures. A new paper unveiling a three-dimensional map of the fly’s genome is therefore far from the arcane exercise it might seem – it’s a significant step in revealing how genes really work.
Scientists usually explore the shapes of molecules using techniques for taking microscopic snapshots: electron microscopy, as well as crystallography, which deduces structures from the way beams of X-rays, electrons or neutrons are diffracted by molecules stacked into crystals. But these methods are hard or impossible to apply to molecular structures as complex as chromosomes. Sexton and colleagues use a different approach: a method that reveals which parts of a genome sit close together. This allows the entire map to be patched together piece by piece.
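Purely as a cartoon of the general idea – the numbers and the threshold below are invented, and the analysis in the paper itself is far more sophisticated – pairwise proximity counts can be assembled into a matrix and then segmented into physical domains, as in this Python sketch:

```python
# Toy symmetric 'contact map': entry [i][j] counts how often genome segments
# i and j were found close together (made-up numbers, for illustration only).
contacts = [
    [0, 9, 8, 1, 1, 0],
    [9, 0, 7, 1, 0, 1],
    [8, 7, 0, 2, 1, 1],
    [1, 1, 2, 0, 9, 8],
    [1, 0, 1, 9, 0, 7],
    [0, 1, 1, 8, 7, 0],
]

def segment_into_domains(matrix, threshold=5):
    """Greedy segmentation: start a new domain wherever a segment's contact
    with its immediate neighbour along the genome drops below the threshold."""
    domains, current = [], [0]
    for i in range(1, len(matrix)):
        if matrix[i - 1][i] >= threshold:
            current.append(i)
        else:
            domains.append(current)
            current = [i]
    domains.append(current)
    return domains

print(segment_into_domains(contacts))  # -> [[0, 1, 2], [3, 4, 5]]
```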
It’s no surprise that the results show the fruitfly genome to be carefully folded and organized, rather than just scrunched up any old how. But the findings put flesh on this skeletal picture. The chromosomes are organized on many levels, rather like a building or a city. There are ‘departments’ – clusters of genes – that do particular jobs, sharply demarcated from one another by boundaries somewhat equivalent to gates or stairwells, where ‘insulator’ proteins clinging to the DNA serve to separate one domain from the next. And inactive genes are often grouped together, like disused shops clustered in a run-down corner of town.
What’s more, the distinct physical domains tend to correspond with parts of the genome that are tagged with chemical ‘marker’ groups, which can modify the activity of genes, rather as if buildings in a particular district of a city all have yellow-painted doors. There’s evidently some benefit for the smooth running of the cell in having a physical arrangement that reflects and reinforces this chemical coding.
It will take a lot more work to figure out how this three-dimensional organization controls the activity of the genes. But the better we can get to grips with the rules, the more chance we will have of imposing our own plans on the genome – silencing or reawakening genes not, as in current genetic engineering, by cutting, pasting and editing the genetic text, but by using origami to hide or reveal it.
Reference: T. Sexton et al., Cell 148, 458-472 (2012); doi:10.1016/j.cell.2012.01.010
_______________________________________________________________
Tuesday, March 13, 2012
Under the radar
I have begun to write a regular column for a new BBC sci/tech website called BBC Future. The catch is that, as it is funded (not for profit) by a source other than the licence fee, you can’t view it from the UK. If you’re not in the UK, you should be able to see the column here. It is called Under the Radar, and will aim to highlight papers/work that, for one reason or another (as described below), would be likely to be entirely ignored by most science reporters. The introductory column, the pre-edited version of which is below, starts off by setting out the stall. I have in fact 3 or 4 pieces published here so far, but will space them out a little over the next few posts.
_____________________________________________________________________
Reading science journalism isn’t, in general, an ideal way to learn about what goes on in science. Almost by definition, science news dwells on the exceptional, on the rare advances that promise (even if they don’t succeed) to make a difference to our lives or our view of the universe. But while it’s always fair to confront research with the question ‘so what?’, and while you can hardly expect anyone to be interested in the mundane or the obscure, the fact is that behind much if not most of what is done by scientists lies a good, often extraordinary, story. Yet unless they happen to stumble upon some big advance (or at least, an advance that can be packaged and sold as such), most of those stories are never told.
They languish beneath the forbidding surface of papers published by specialized journals, and you’d often never guess, to glance at them, that they have any connection to anything useful, or that they harbour anything to spark the interest of more than half a dozen specialists in the world. What’s more, science then becomes presented as a succession of breakthroughs, with little indication of the difficulties that intervene between fundamental research and viable applications, or between a smart idea and a proof that it’s correct. In contrast, this column will aim to unearth some of those buried treasures and explain why they’re worth polishing.
Another reason why much of the interesting stuff gets overlooked is that good ideas rarely succeed all at once. Many projects get passed over because at first they haven’t got far enough to cross a reporter’s ‘significance threshold’, and then when the work finally gets to a useful point, it’s deemed no longer news because much of it has been published already.
Take a recent report by Shaoyi Jiang, a chemical engineer at the University of Washington in Seattle, and his colleagues in the Germany-based chemistry journal Angewandte Chemie. They’ve made an antimicrobial polymer coating which can be switched between a state in which it kills bacteria (eliminating 99.9% of sprayed-on E. coli) and one where it shrugs off the dead cells and resists the attachment of new ones. That second trick is a valuable asset for a bug-killing film, since even dead bacteria can trigger inflammation.
The thing is, they did this already three years ago. But there’s a key difference now. Before, the switching was a one-shot affair: once the bacteria were killed and removed, you couldn’t get the bactericidal film back. So if more bacteria do slowly get a foothold, you’re stuffed.
That’s why the researchers have laboured to make their films fully reversible, which they’ve achieved with some clever chemistry. They make a polymer layer sporting dangling molecular ‘hairs’ like a carpet, each hair ending in a ring-shaped molecule deadly to bacteria. If the surface is moistened with water, the ring springs open, transformed into a molecular group to which bacteria can’t easily stick. Just add a weak acid – acetic acid, basically vinegar – and the ring snaps closed again, regenerating a bactericidal surface as potent as before.
This work fits with a growing trend to make materials ‘smart’ – able to respond to changes in their environment. Time was when a single function was all you got: a non-adhesive ‘anti-fouling’ film, say, or one that resists corrosion or reduces light reflection (handy for solar cells). But increasingly, we want materials that do different things at different times or under different conditions. Now there’s a host of such protean substances: materials that can be switched between transparent and mirror-like, say, or between water-wettable and water-repelling.
Another attraction of Jiang’s coating is that these switchable molecular carpets can in principle be coated onto a wide variety of different surfaces – metal, glass, plastics. The researchers say that it might be used on hospital walls or on the fabric of military uniforms to combat biological weapons. That sort of promise is generally where the journalism stops and the hard work begins, to turn (or not) this neat idea into mass-produced materials that are reliable, safe and affordable.
Reference: Z. Cao et al., Angewandte Chemie International Edition online publication doi:10.1002/anie.201106466.
_____________________________________________________________________
Thursday, March 08, 2012
Science and politics cannot be unmixed
One of the leaders in this week’s Nature is mine; here’s the original draft.
____________________________________________________
Paul Nurse will not treat his presidency of the Royal Society as an ivory tower. He has made it clear that he considers that scientists have duties to fulfil and battles to fight beyond the strictly scientific, for example to “expose the bunkum” of politicians who abuse and distort science. This social engagement was evident last week when Nurse delivered the prestigious Dimbleby Lecture, instituted in memory of the British broadcaster Richard Dimbleby. Previous scientific incumbents have included George Porter, Richard Dawkins and Craig Venter.
Nurse identified support for the National Health Service, the need for an immigration policy that attracts foreign scientists, and inspirational science teaching in primary education as some of the priorities for British scientists. These and many of the other issues that he raised, such as increasing scientists’ interactions with industry, commerce and the media, and resisting politicization of climate-change research, are relevant around the globe.
All the more reason not to misinterpret Nurse’s insistence on a separation of science and politics: as he put it more than once, “first science, then politics”. What Nurse rightly warned against here is the intrusion of ideology into the interpretation and acceptance of scientific knowledge, as for example with the Soviet Union’s support of the anti-Mendelian biology of Trofim Lysenko. Given recent accounts of political (and politically endorsed commercial) interference in climate research in the US (see Nature 465, 686; 2010), this is a timely reminder.
But it is all too easy to apply this formula too simplistically. For example, Nurse also cited the rejection of Einstein’s “Jewish” relativistic physics by Hitler. But that is not quite how it was. “Jewish physics” was a straw man invented by the anti-Semitic and pro-Nazi physicists Johannes Stark and Philipp Lenard, partly because of professional jealousies and grudges. The Nazi leaders were, however, largely indifferent to what looked like an academic squabble, and in the end lost interest in Stark and Lenard’s risible “Aryan physics” because they needed a physics that actually worked.
Therein lies one reason to be sceptical of the common claim, repeated by Nurse, that science can only flourish in a free society. Historians of science in Nazi Germany such as Kristie Macrakis (in Surviving the Swastika; 1993) have challenged this assertion, which is not made true simply because we would like it to be so. Authoritarian regimes are perfectly capable of putting pragmatism before ideology. The scientific process itself is not impeded by state control in China – quite the contrary – and the old canard that Chinese science lacks innovation and daring is now transparently nonsense. During the Cold War, some Soviet science was vibrant and bold. Even the most notorious example of state repression of science – the trial of Galileo – is apt to be portrayed too simplistically as a conflict of faith and reason rather than a collision of personalities and circumstances (none of which excuses Galileo’s scandalous persecution).
There is a more edifying lesson to be drawn from Nazi Germany that bears on Nurse’s themes. This is that, while political (and religious) ideology has no place in deciding scientific questions, the practice of doing science is inherently political. In that sense, science can never come before politics. Scientists enter into a social contract, not least because they are not their own paymasters. Much if not most scientific research has social and political implications, often broadly visible from the outset. In times of economic and political crisis (like these), scientists must respond intellectually and professionally, and not merely by safeguarding their funding, important though that is.
The consequences of imagining that science can remain aloof from politics became acutely apparent in Germany in 1933, when the consensus view that politics was, as Heisenberg put it, an unseemly “money business” meant that most scientists saw no reason to mount concerted resistance to the expulsion of Jewish colleagues – regarded as a political rather than a moral matter. This ‘apolitical’ attitude can now be seen as a convenient myth that led to acquiescence and made it easy for the German scientists to be manipulated. It would be naïve to imagine that only totalitarianism could create such a situation.
The rare and most prominent exception to ‘apolitical’ behaviour was Einstein, whose outspokenness dismayed even his principled friends Max Planck and Max von Laue. “I do not share your view that the scientist should observe silence in political matters”, he told them. “Does not such restraint signify a lack of responsibility?” There was no hint of such a lack in Nurse’s talk. But we must take care to distinguish the political immunity of scientific reasoning from the political dimensions and obligations of doing science.
____________________________________________________
Wednesday, March 07, 2012
The unavoidable cost of computation
Here’s the pre-edited version of my latest news story for Nature. I really liked this work. I was lucky to meet Rolf Landauer before he died, and discovered him to be one of those people who are so genial, wry and unaffected that you aren’t awed by how phenomenally clever they are. He was also extremely helpful when I was preparing The Self-Made Tapestry, setting me straight on the genesis of notions about dissipative structures, for which the credit is sometimes assigned in the wrong places. Quite aside from that, it is worth making clear that this is in essence the first experimental proof of why Maxwell’s demon can’t do its stuff.
________________________________________________
Physicists have proved that forgetting is the undoing of Maxwell’s demon.
Forgetting always takes a little energy. A team of scientists in France and Germany has now demonstrated exactly how little.
Eric Lutz of the University of Augsburg and his colleagues have found experimental proof of a long-standing claim that erasing information can never be done for free. They present their result in Nature today [1].
In 1961, physicist Rolf Landauer argued that resetting one bit of information – say, setting a binary digit to zero regardless of whether it is initially 1 or 0 – must release at least a certain minimum amount of heat, proportional to the temperature, into the environment.
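In modern notation, the minimum Landauer derived is usually quoted as k_BT ln 2 per erased bit – a standard result rather than anything specific to the new paper – which at room temperature amounts to a vanishingly small quantity of heat:

\[
Q_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\ \mathrm{J\,K^{-1}})\times(300\ \mathrm{K})\times 0.693 \approx 2.9\times10^{-21}\ \mathrm{J},
\]

or roughly 0.02 electronvolts per bit.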
“Erasing information compresses two states into one”, explains Lutz, currently at the Free University of Berlin. “It is this compression that leads to heat dissipation.”
Landauer’s principle implies a limit on how low the energy dissipation – and thus consumption – of a computer can be. Resetting bits, or equivalent processes that erase information, are essential for operating logic circuits. In effect, these circuits can only work if they can forget – for how else could they perform a second calculation once they have done a first?
The work of Lutz and colleagues now appears to confirm that Landauer’s theory was right. “It is an elegant laboratory realization of Landauer's thought experiments”, says Charles Bennett, an information theorist at IBM Research in Yorktown Heights, New York, and Landauer’s former colleague.
“Landauer's principle has been kicked about by theorists for half a century, but to the best of my knowledge this paper describes the first experimental illustration of it”, agrees Christopher Jarzynski, a chemical physicist at the University of Maryland.
The result doesn’t just verify a practical limit on the energy requirement of computers. It also confirms the theory that safeguards one of the most cherished principles of physical science: the second law of thermodynamics.
This law states that heat will always move from hot to cold. A cup of coffee on your desk always gets cooler, never hotter. It’s equivalent to saying that entropy – the amount of disorder in the universe – always increases.
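In the standard symbols (nothing here beyond the textbook statement): the total entropy of an isolated system can only stay constant or grow, and dumping an amount of heat Q into surroundings at temperature T raises their entropy by Q/T:

\[
\Delta S_{\rm total} \ge 0, \qquad \Delta S_{\rm surroundings} = \frac{Q}{T}.
\]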
In the nineteenth century, the Scottish scientist James Clerk Maxwell proposed a scenario that seemed to violate this law. In a gas, hot molecules move faster. Maxwell imagined a microscopic intelligent being, later dubbed a demon, that would open and shut a trapdoor between two compartments to selectively trap ‘hot’ molecules in one of them and cool ones in the other, defying the tendency for heat to spread out and entropy to increase.
Landauer’s theory offered the first compelling reason why Maxwell’s demon couldn’t do its job. The demon would need to erase (‘forget’) the information it used to select the molecules after each operation, and this would release heat and increase entropy, more than counterbalancing the entropy lost by the demon’s legerdemain.
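The bookkeeping, per bit, goes roughly as follows – this is the standard textbook resolution rather than a claim about the new experiment. Each bit of information the demon gathers lets it reduce the entropy of the gas by at most k_B ln 2, while erasing that bit afterwards raises the entropy of the surroundings by at least k_B ln 2, so the total never falls:

\[
\Delta S_{\rm gas} \ge -k_B\ln 2, \qquad \Delta S_{\rm erasure} \ge +k_B\ln 2 \quad\Rightarrow\quad \Delta S_{\rm total} \ge 0.
\]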
In 2010, physicists in Japan showed that information can indeed be converted to energy by selectively exploiting random thermal fluctuations, just as Maxwell’s demon uses its ‘knowledge’ of molecular motions to build up a reservoir of heat [2]. But Jarzynski points out that the work also demonstrated that selectivity requires the information about fluctuations to be stored.
He says that the experiment of Lutz and colleagues now completes the argument against using Maxwell’s demon to violate the second law, because it shows that “the eventual erasure of this stored information carries a thermodynamic penalty” – which is Landauer's principle.
To test this principle, the researchers created a simple two-state bit: a single microscopic silica particle, 2 micrometres across, held in a ‘light trap’ by a laser beam. The trap contains two ‘valleys’ where the particle can rest, one representing a 1 and the other a 0. It could jump between the two if the energy ‘hill’ separating them is not too high.
The researchers could control this height by the power of the laser. And they could ‘tilt’ the two valleys to tip the bead into one of them, resetting the bit, by moving the physical cell containing the bead slightly out of the laser’s focus.
By very accurately monitoring the position and speed of the particle during a cycle of switching and resetting the bit, they could calculate how much energy was dissipated. Landauer’s limit applies only when the resetting is done infinitely slowly; otherwise, the energy dissipation is greater.
Lutz and colleagues found that, as they used longer switching cycles, the dissipation got smaller, but that it levelled off at a plateau equal to the amount predicted by Landauer.
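To get a feel for what such a measurement involves, here is a minimal numerical sketch in Python – a toy under assumed parameters (the double-well shape, the 5 kT barrier, the 4 kT tilt and the protocol durations are illustrative guesses, not the values used in the real experiment) – of a Brownian particle being reset in a double-well trap while the work done on it by the changing potential is tallied. Averaged over many runs the work should stay above Landauer’s k_BT ln 2 and should fall as the protocol is slowed, though this crude protocol will not get close to the bound itself.

import numpy as np

# Toy sketch of 'erasing' a one-bit memory held by a Brownian particle in a
# double-well potential, loosely in the spirit of the experiment described
# above. Reduced units: k_B*T = 1, friction coefficient = 1. The potential
# shape, protocol and parameters are illustrative assumptions.

kT = 1.0
dt = 2e-3   # integration time step

def protocol(t, tau):
    """Barrier height and tilt at time t for a protocol of duration tau:
    the barrier is lowered while a tilt pushes the particle towards x > 0,
    then both are restored to their initial values."""
    s = min(t / tau, 1.0)
    barrier = 5.0 * (1.0 - 0.9 * np.sin(np.pi * s))
    tilt = -4.0 * np.sin(np.pi * s)
    return barrier, tilt

def potential(x, barrier, tilt):
    return barrier * (x**2 - 1.0)**2 + tilt * x

def erase_once(tau, rng):
    """One reset run; returns the work done on the particle by the protocol."""
    x = rng.choice([-1.0, 1.0]) + 0.1 * rng.standard_normal()  # random initial bit
    work, t = 0.0, 0.0
    while t < tau:
        b0, a0 = protocol(t, tau)
        b1, a1 = protocol(t + dt, tau)
        # work increment: change in potential at fixed x as the protocol advances
        work += potential(x, b1, a1) - potential(x, b0, a0)
        # overdamped Langevin (Euler-Maruyama) step
        force = -4.0 * b0 * x * (x**2 - 1.0) - a0
        x += force * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
        t += dt
    return work

rng = np.random.default_rng(1)
for tau in (5.0, 20.0, 50.0):
    works = [erase_once(tau, rng) for _ in range(100)]
    print(f"protocol duration {tau:5.1f}: mean work = {np.mean(works):.2f} kT"
          f"  (Landauer bound: ln 2 = {np.log(2):.3f} kT)")

The work here is tallied in the usual stochastic-thermodynamics fashion, as the change in potential energy at fixed particle position each time the protocol is advanced; once the potential has returned to its starting shape, that work is, on average, essentially the heat handed to the surroundings.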
At present, other inefficiencies mean that computers dissipate at least a thousand times more energy per logic operation than the Landauer limit. This energy dissipation heats up the circuits, and imposes a limit on how small and densely packed they can be without melting. “Heat dissipation in computer chips is one of the major problems hindering their miniaturization”, says Lutz.
But this energy consumption is getting ever lower, and Lutz and colleagues say that it’ll be approaching the Landauer limit within the next couple of decades. Their experiment confirms that, at that point, further improvements in energy efficiency will be prohibited by the laws of physics. “Our experiment clearly shows that you cannot go below Landauer’s limit”, says Lutz. “Engineers will soon have to face that”.
Meanwhile, in fledgling quantum computers, which exploit the rules of quantum physics to achieve greater processing power, this limitation is already being confronted. “Logic processing in quantum computers already is well within the Landauer regime, and one has to worry about Landauer's principle all the time”, says physicist Seth Lloyd of the Massachusetts Institute of Technology.
References
1. Bérut, A. et al., Nature 483, 187-189 (2012).
2. Toyabe, S., Sagawa, T., Ueda, M., Muneyuki, E. & Sano, M. Nat. Phys. 6, 988-992 (2010).
Monday, February 27, 2012
Unmaking history
For Francophones, I have a piece in the February issue of La Recherche on spacetime cloaking, part of a special feature on invisibility. For some reason it’s not included in the online material. But here in any case is how it began in my mother tongue.
_______________________________________________________________
We all have experiences that we’d rather never happened – or perhaps that we just wish no one else had seen. Now researchers have shown how to carry out this kind of editing of history. They use the principles behind invisibility cloaks, which have already been shown to hide objects from light. But instead of hiding objects, we can hide events. In other words, we can apparently carve out a hole in spacetime so that no one on the outside can tell that whatever goes on inside it has ever taken place.
“Such speculations are not fantasy”, insist physicist Martin McCall of Imperial College in London and his colleagues, who came up with the idea last July [1]. They imagine a safe-cracker casting a spacetime cloak over the scene of the crime, so that he can open the safe and remove the contents while a security camera would see just a continuously empty room.
Suppose the cloak was used to conceal someone’s journey from one place to another. Because the device splices together the spacetime on either side of the ‘hole’, it would look as though the person vanished from the starting point and, in the blink of an eye, appeared at her destination. This would then create “the illusion of a Star Trek transporter”, the researchers say.
“It’s definitely a cool idea”, says Ulf Leonhardt, a specialist in invisibility cloaking at the University of St Andrews in Scotland. “Altering the history has been the metier of undemocratic politicians”, he adds, pointing to the way Soviet leaders would doctor photographs to remove individuals who had fallen from favour. “Now altering history has become a subject of physics.”
Lost in spacetime
Conventional invisibility cloaks hide objects by bending light rays around them and then bringing the rays back onto their original trajectory on the far side. That way, it looks to an observer as though the light has passed through an empty space where the hidden object resides. In contrast, the spacetime cloak would manipulate not the path of the rays but their speed. It would be made of materials that slow down light or speed it up. This means that some of the light that would have been scattered by the hidden event is ushered forward to pass before the event happens, while the rest is held back until after it.
These slowed and accelerated rays are then rejoined seamlessly so that there seems to be no gap in spacetime. It’s like bending rays in invisibility cloaks, except that they are bent not in space but in spacetime.
How do you slow down or speed up light? Both have been demonstrated already in some exotic substances such as ultracold gases of alkali metals: light has been both brought to a standstill and speeded up by a factor of 300, so that, bizarrely, a pulse seems to exit the system before it has even arrived. But the spacetime cloak needs to manipulate light in ways that are both simpler and more profound. Light is slowed down in any medium relative to its speed in a vacuum – that is precisely why it bends when it enters water or glass from air, causing the phenomenon of refraction. The amount of slowing down is measured by the refractive index: the bigger this value, the slower the speed relative to a vacuum.
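In symbols, and with the familiar textbook numbers (nothing here is specific to the cloak itself):

\[
v = \frac{c}{n}, \qquad n_{\rm water}\approx 1.33 \;\Rightarrow\; v\approx 0.75\,c, \qquad n_{\rm glass}\approx 1.5 \;\Rightarrow\; v\approx 0.67\,c.
\]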
In a spacetime cloak, the light must simply be slowed or speeded up relative to its speed before it entered the cloak. If the cloak itself is surrounded by some cladding material, then the light must be speeded up or retarded only relative to this – there’s no need for fancy tricks that seem to make light travel faster than its speed in a vacuum.
But to obtain perfect and versatile cloaking demands some sophisticated manipulation of the light, for which you need more than just any old transparent material. For one thing, you need to alter both the electric and the magnetic components of the electromagnetic wave. Most materials (such as glass), being non-magnetic, don’t affect the latter. What’s more, the effects on the electric and magnetic components must be the same, since otherwise some light will be reflected as it enters the material – in this case, making the cloak itself visible. When the electric and magnetic effects are equalized, the material is said to be “impedance matched”. “For a perfect device, we need to modulate the refractive index while also keeping it impedance matched”, explains Paul Kinsler, McCall’s colleague at Imperial.
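To make the matching condition explicit – this is standard electromagnetism rather than anything peculiar to the cloak – a material’s refractive index and impedance are both set by its relative permittivity ε_r and permeability μ_r:

\[
n = \sqrt{\varepsilon_r\,\mu_r}, \qquad Z = Z_0\sqrt{\frac{\mu_r}{\varepsilon_r}}.
\]

Reflections at the interface with air or vacuum (impedance Z_0) vanish only when μ_r = ε_r, a condition that still leaves the index n free to be modulated.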
Hidden recipe
There aren’t really any ordinary materials that would satisfy all these requirements. But they can be met using the same substances that have been used already to make invisibility shields: so-called metamaterials. These are materials made from individual components that interact with electromagnetic radiation in unusual ways. Invisibility cloaks for microwaves have been built in which the metamaterial ‘atoms’ are little electrical circuits etched into copper film, which can pick up the electromagnetic waves like antennae, resonate with them, and re-radiate the energy. Because the precise response of these circuits can be tailored by altering their size and shape, metamaterials can be designed with a range of curious behaviours. For example, they can be given a negative refractive index, so that light rays are bent the wrong way. “Metamaterials that work by resonance offer a large range of strong responses that allow more design freedom”, says Kinsler. “They are also usually designed to have both electric and magnetic responses, which will in general be different from one another.”
Using a combination of these materials, McCall and colleagues offer a prescription for how to put together a spacetime cloak. It’s a tricky business: to divert light around the spacetime hole, one needs to change the optical properties of the cloaking material over time in a particular sequence, switching each layer of material by the right amount at the right moment. “The exact theory requires a perfectly matched and perfectly timed set of changes to both the electric and magnetic properties of the cloak”, says Kinsler.
The result, however, is a sleight of hand more profound than any that normal invisibility shields can offer. “If you turn an ordinary invisibility cloak on and off, you will see a cloaked object disappear and reappear”, explains Kinsler. “With our concept, you never see anything change at all.” At least, not from one side. The spacetime hole opened up by the cloak is not symmetrical – it operates from one side but not the other (although the cloak itself would be invisible from both directions). So an observer on one side might see an event that an observer on the other side will swear never took place.
Could such a device really be used to hide events in the macroscopic world? Physicist John Pendry, also at Imperial (but not part of McCall’s group) and one of the pioneers of invisibility cloaks, considers that unlikely. But he agrees with McCall and colleagues that there might well be more immediate and more practical applications for the technique. “Possible uses might be in a telecommunications switching station, where several packets of information might be competing for the same channel”, he says. “The time cloak could engineer a seamless flow in all channels” – by cloaking interruptions of one signal by another, it would seem as though all had simultaneously flowed unbroken down the same channel.
There could be some more fundamental implications of the work too. This manipulation of spacetime is analogous to what happens at a black hole. Here, light coming from the region near the hole is effectively brought to a standstill at the event horizon, so that time itself seems to be arrested there: an object falling into the hole seems, to an outside observer, to be stopped forever at the event horizon. The parallel between transformation optics and black-hole physics has been pointed out by Leonhardt and his coworkers, who in 2008 revealed an optical analogue of a black hole made from optical fibres. Leonhardt says that the analogy exists for spacetime cloaks also, and that therefore these systems might be used to create the analogue of Hawking radiation: the radiation predicted by Stephen Hawking to be emitted from black holes as a result of the quantum effects of the distortion of spacetime. Such radiation has not yet been detected in astronomical observations of real black holes, but its production at the edge of a spacetime ‘hole’ made by cloaking would provide strong support for Hawking’s idea.
Unlike black holes, however, a spacetime cloak doesn’t really distort spacetime – it just looks as though it does. “I can certainly imagine a transformation device that gives the illusion that causal relationships are distorted or even reversed – a causality editor, rather than our history editor”, says Kinsler. “But the effects generated are only an illusion.”
In the pipeline
In order to manipulate visible light, the component ‘atoms’ of a metamaterial have to be about the same size as the wavelength of the light – less than a micrometre. This means that, while microwave invisibility cloaks have been put together from macroscale components, optical metamaterials are much harder to make.
There’s an easier way, however. Some researchers have realised that another way to perform the necessary light gymnastics is to use transparent substances with unusual optical properties, such as birefringent minerals in which light travels at different speeds in different directions. Objects have been cloaked from visible light in this way using carefully shaped blocks of the mineral calcite (Iceland spar).
In the same spirit, McCall and colleagues realised that sandwiches of existing materials with ‘tunable’ refractive indices might be used to make ‘approximate’ spacetime cloaks. For example, one could use optical fibres whose refractive indices depend on the intensity of the light passing through them. A control beam would manipulate these properties, opening and closing a spacetime cloak for a second beam.
However, as with the ‘simple’ invisibility cloaks made from calcite, the result is that although the object or event can be fully hidden, the cloak itself is not: light is still reflected from it. “Although the event itself can in principle be undetectable, the cloaking process itself isn't”, Kinsler says.
This idea of manipulating the optical properties of optical fibres for spacetime cloaking has already been demonstrated by Moti Fridman and colleagues at Cornell University [2]. Stimulated by the Imperial team’s proposal, they figured out how to put the idea into practice. They use so-called ‘time lenses’ which modify how a light wave propagates not in space, like an ordinary lens, but in time. Just as an ordinary lens can separate different light frequencies in space, and can thus be used to spread out or focus a beam, so a time lens uses the phenomenon of dispersion (the frequency dependence of the speed at which light travels through a medium) to separate frequencies in time, slowing some of them down relative to others.
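The space-time analogy behind the time lens can be made a little more explicit – this is the standard ‘temporal imaging’ picture rather than anything particular to the Cornell set-up. An ordinary thin lens imprints a phase that varies quadratically across the beam, a time lens imprints a phase that varies quadratically in time, and dispersion in the fibre then plays the role that diffraction plays for the spatial lens:

\[
\phi_{\rm lens}(x) \propto x^{2} \quad\longleftrightarrow\quad \phi_{\rm time\ lens}(t) \propto t^{2}.
\]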
Because of this equivalence of space and time in the two types of lens, a two-part ‘split time-lens’ can bend a probe beam around a spacetime hole in the same way as two ordinary lenses could bend a light beam around either side of an object to cloak it in space. In the Cornell experiment, a second split time-lens then restored the probe to its original state. In this way, the researchers could temporarily hide the interaction between the probe beam and a second short light pulse, which would otherwise cause the probe signal to be amplified. Fridman and colleagues presented their findings at a Californian meeting of the Optical Society of America in October. “It's a nice experiment, and achieved results remarkably quickly”, says Kinsler. “We were surprised to see it – we were expecting it might take years to do.”
But the spacetime cloaking in this experiment lasts only for a fleeting moment – about 15 picoseconds (trillionths of a second). And Fridman and colleagues admit that the material properties of the optical fibres themselves will make it impossible to extend the gap beyond a little over one millionth of a second. So there’s much work to be done to create a more complete and longer-lasting cloak. In the meantime, McCall and Kinsler have their eye on other possibilities. Perhaps, they say, we could also edit sound this way by applying the same principles to acoustic waves. As well as hiding things you wish you’d never done, might you be able to literally take back things you wish you’d never said?
1. M. W. McCall, A. Favaro, P. Kinsler & A. Boardman, Journal of Optics 13, 024003 (2011).
2. M. Fridman, A. Farsi, Y. Okawachi & A. L. Gaeta, Nature 481, 62-65 (2012).
Friday, February 24, 2012
Survival in New York
Well, I'm here and just thought it possible that someone in NYC might see this before tomorrow (25 Feb) is up. I'm taking part in this event, linked to David Rothenberg's excellent new book. It's free (the event, not the book), and promises to be great fun. If you're in Manhattan - see you tomorrow?
Thursday, February 16, 2012
Call to arms
I wrote a leader for this week’s Nature on the forthcoming talks for an international Arms Trade Treaty. Here’s the original version.
________________________________________________________________
Scientists have always been some of the strongest voices among those trying to make the world a safer place. Albert Einstein’s commitment to international peace is well known; Andrei Sakharov and Linus Pauling are among the scientists who have been awarded the Nobel Peace prize, as is Joseph Rotblat – the subject of a new biography (see Nature 481, 438; 2012) – who shared the award with the Pugwash organization that he helped to found. This accords not only with the internationalism of scientific endeavour but with the humanitarian goals that mostly motivate it.
At the same time, the military applications of science and technology are never far from view, and defence funding supports a great deal of research (much of it excellent). There need be no contradiction here. Nations have a right to self-defence, and increasingly armed forces are deployed for peace-keeping rather than aggression. But what constitutes responsible use of military might is delicate and controversial, and peace-keeping is generally necessary only because aggressors have been supplied with military hardware in the first place.
Arms control is a thorny subject for scientists. When, at a session on human rights at a physics conference several years ago, Nature asked if the evident link between the arms trade and human-rights abuses might raise ethical concerns about research on offensive weaponry, the panel shuffled their feet and became tongue-tied.
There are no easy answers to the question of where the ethical boundaries of defence research lie. But all responsible scientists should surely welcome the progress in the United Nations towards an international Arms Trade Treaty (ATT), for which a preparatory meeting in New York next week presages the final negotiations in July. The sale of weapons, from small arms to high-tech missile systems, hinders sustainable development and progress towards the UN’s Millennium Development Goals, and undermines democracy.
Yet there are dangers. Some nations will attempt to have the treaty watered down. That the sole vote against the principle at the UN General Assembly in October 2009 was from Zimbabwe speaks volumes about likely reasons for opposition. But let’s not overlook the fact that in the previous vote a year earlier, Zimbabwe was joined by one other dissenter: the United States, still at that point governed by George W. Bush’s administration. Would any of the current leading US Republican candidates be better disposed towards an ATT?
Paradoxical as it might seem, however, a binding international treaty on the arms trade is not necessarily a step forward anyway. Most of the military technology used for recent human-rights abuses was obtained by legal routes. Such sales from the UK, for example, helped Libya’s former leaders to suppress ‘rebels’ in 2011 and enabled Zimbabwe to launch assaults in the Democratic Republic of Congo in the 1990s.
The British government admits that it anticipates that the Arms Trade Treaty, which it supports, will not reduce arms exports. It says that the criteria for exports “would be based on existing obligations and commitments to prevent human rights abuse” – which have not been notably effective. According to the UK’s Foreign and Commonwealth Office (FCO), the ATT aims “to prevent weapons reaching the hands of terrorists, insurgents and human rights abusers”. But as Libya demonstrated, one person’s insurgents are another’s democratizers, while today’s legitimate rulers can become tomorrow’s human-rights abusers.
The FCO says that the treaty “will be good for business, both manufacturing and export sales.” Indeed, arms manufacturers support it as a way of levelling the market playing field. The ATT could simply legitimize business as usual by more clearly demarcating it from a black market, and will not cover peripheral military hardware such as surveillance and IT systems. Some have argued that the treaty will be a mere distraction from the real problem of preventing arms reaching human-rights violators (D. P. Kopel et al., Penn State Law Rev. 114, 101-163; 2010).
So while there are good reasons to call for a strong ATT, it is no panacea. The real question is what a “responsible” arms trade could look like, if this isn’t merely oxymoronic. That would benefit from some hard research on how existing, ‘above-board’ sales have affected governance, political stability and socioeconomic conditions worldwide. Such quantification is challenging and contentious, but several starts have been made (for example, www.unidir.org and www.prio.no/nisat). We need more.
Tuesday, February 14, 2012
... but I just want to say this
With the shrill cries of new atheists ringing in my ears (you would not believe some of that stuff, but I won’t go there), I read John Gray’s review of Alain de Botton’s book Religion for Atheists in the New Statesman and it is as though someone has opened a window and let in some air – not because of the book, which I’ve not read, but because of what John says. Sadly you can’t get it online: the nearest thing is here.
Sunday, February 12, 2012
Moving swiftly on
This piece in the Guardian has caused a little storm, and I’m not so naive as to be totally surprised by that. There’s much I could say about it, but frankly it never helps. I’m tired of how little productive dialogue ever seems to stem from these things and figure I will just leave the damned business alone (no doubt to the delight of the more rabid detractors). I will say here only a few things about this pre-edited version, which was necessarily slimmed down to fit the slot in the printed paper: (1) to those who thought I was saying “hey, wouldn’t it be a great idea if sociologists studied religion”, note the reference to Durkheim as a shorthand way of acknowledging that this notion goes back a long, long way; (2) note that I’m not against everything Dawkins stands for on this subject – I agree with him on more than just the matter of faith schools mentioned below, although of course I do disagree with other things. The “for us or against us” attitude that one seems to see so much of in online discussions is the kind of infantilism that I figure we should be leaving to the likes of George W. Bush.
______________________________________________________________
The research reported this week showing that American Christians adjust their concept of Jesus to match their own sociopolitical persuasion will surely surprise nobody. Liberals regard Christ primarily as someone who promoted fellowship and caring, say psychologist Lee Ross of Stanford University in California and his colleagues, while conservatives see him as a firm moralist. In other words, he’s like me, only more so.
Yes, it’s pointing out the blindingly obvious. Yet the work offers a timely reminder of something about how religious thinking operates that some strident “new atheists” have so far resolutely resisted.
You might imagine that it’s uncontentious to suggest that religion is essentially a social phenomenon, not least because particular varieties of it – fundamentalist, tolerant, mystical – tend to develop within specific communities united by geography or cultural ties rather than arising at random throughout society. Without entering the speculative debate about whether religiosity has become hardwired by evolution, it seems clear enough that specific types of religious behaviour are as prone to be transmitted through social networks as are, say, obesity and smoking.
Bizarrely, this is ignored by some of the most prominent opponents of religion today. Arguments about science and religion are mostly conducted as if Emile Durkheim had never existed, and all that matters is whether or not religious belief is testable. Many atheists prefer to regard religion as a virus that jumps from one hapless individual to another, or a misdirection of evolutionary instincts – in any case, curable only with a strong shot of reason. These epidemiological and Darwinian models have an elegant simplicity that contamination with broader social and cultural factors would spoil. Yet the result is akin to imagining that, to solve Africa’s AIDS crisis, there is no point in trying to understand African societies.
Thus arch new atheist Sam Harris swatted away my suggestion that we might approach religious belief as a social construct with the contemptuous comment that I was saying something “either trivially true or obscurantist”. I find it equally peculiar that chemist Harry Kroto should insist that “I am not interested in why religion continues” while so devoutly wishing that it would not.
At face value, this apparent lack of interest in how religion actually manifests and propagates in society is odd coming from people who so loudly deplore its prevalence. But I think it may not be so hard to explain.
For one thing, regarding religion as a social phenomenon would force us to see it as something real, like governments or book groups, and not just a self-propagating delusion. It is so much safer and easier to ridicule a literal belief in miracles, virgin births and other supernatural agencies than to consider religion as (among other things) one of the ways that human societies have long chosen to organize their structures of authority and status, for better or worse.
It also means that one might feel compelled to abandon the heroic goal of dislodging God from his status as Creator in favour of asking such questions as whether particular socioeconomic conditions tend to promote intolerant fundamentalism over liberal pluralism. It turns a Manichean conflict between truth and ignorance into a mundane question of why some people are kind or beastly towards others. Yet to suggest that we can relax about some forms of religious belief – that they need offer no obstacle to an acceptance of scientific inquiry and discovery, and will not demand the stoning of infidels – is already, for some new atheists, to have conceded defeat. They will not have been pleased with David Attenborough’s gentle agnosticism on Desert Island Discs, although I doubt that they will dare say so.
The worst of it is that to reject an anthropological approach to religion is, in the end, unscientific. To decide to be uninterested in questions of how and why societies have religion, of why it has the many complexions that it does and how these compete, is a matter of personal taste. But to insist that these are pointless questions is to deny that this important aspect of human behaviour warrants scientific study. Harris’s preference to look to neuroscience – to the individual, not society – will only get you so far, unless you want to argue that brains evolved differently in Kansas (tempting, I admit).
Richard Dawkins is right to worry that faith schools can potentially become training grounds for intolerance, and that daily indoctrination into a particular faith should have no place in education. But I’m sure he’d agree that how people formulate their specific religious beliefs is a much wider question than that. The Stanford research reinforces the fact that a single holy book can provide the basis both for a permissive, enquiring and pro-scientific outlook (think tea and biscuits with Richard Coles) or for apocalyptic, bigoted ignorance (think a Tea Party with Sarah Palin). Might we then, as good scientists alert to the principles of cause and effect, suspect that the real ills of religion originate not in the book itself, but elsewhere?
Friday, February 10, 2012
Impractical magic
I have a review of a book about John Dee in the latest issue of Nature. Here's how it started.
_______________________________________________________
The Arch-Conjuror of England: John Dee
by Glyn Parry
Yale University Press, 2011
ISBN 978-0-300-11719-6
335 pages
The late sixteenth-century mathematician and alchemist John Dee exerts a powerful grip on the public imagination. In recent times, he has been the subject of several novels, including The House of Doctor Dee by Peter Ackroyd, and inspired the pop opera Doctor Dee by Damon Albarn of the group Blur. Now, in The Arch-Conjuror of England, historian Glyn Parry gives us probably the most meticulous account of Dee’s career to date.
In some ways, all this attention seems disproportionate. Dee was less important in the philosophy of natural magic than such lesser-known individuals as Giambattista Della Porta and Cornelius Agrippa, and less significant as a transitional figure between magic and science than Della Porta or his contemporaries Bernardino Telesio and Tommaso Campanella, both anti-Aristotelian empiricists from Calabria. Dee’s works, such as the notoriously opaque Monas hieroglyphica, in which the unity of the cosmos was represented in a mystical symbol, were widely deemed impenetrable even in his own day.
There’s no doubt that Dee was prominent during the Elizabethan age – he probably provided the model for both Shakespeare’s Prospero and Ben Jonson’s Subtle in the satire The Alchemist. Yet what surely gives Dee his allure more than anything else is the same thing that lends glamour to Walter Raleigh, Francis Drake and Philip Sidney: they all fell within the orbit of Queen Elizabeth herself. Benjamin Woolley’s earlier biography of Dee draws explicitly on this connection, calling him ‘the queen’s conjuror’. Yet in a real sense he was precisely that, on and off, as his fortunes waxed and waned in the fickle, treacherous Elizabethan court.
There is no way to make sense of Dee without embedding him within the magical cult of Elizabeth, just as this holds the key to Spenser’s epic poem The Faerie Queene and to the flights of fancy in A Midsummer Night’s Dream. To the English, the reign of Elizabeth heralded the dawn of a mystical Protestant awakening. In Germany that dream died in the brutal Thirty Years War; in England it spawned an empire. Dee coined the phrase ‘the British Empire’, but his vision was less colonialist than a magical yoking of Elizabeth to the Arthurian legend of Albion.
It is one of the strengths of Glyn Parry’s book that he shows how deeply woven magic and the occult sciences were into the fabric of early modern culture. Elizabeth was particularly knowledgeable about alchemy. After all, why would a monarch who had no reason to doubt the possibility of transmuting base metals into gold pass up the chance to fill the royal coffers? Because she believed that Dee’s former associate, the slippery Edward Kelley, could make the philosopher’s stone, the queen was desperate to lure him back to England after he left with Dee for Poland and Prague in 1583. The Holy Roman Emperor Rudolf II was equally eager to keep Kelley in Bohemia, making him a baron. Even Dee’s involvement in the failed quest of the adventurer Martin Frobisher to find a northwest passage to the Pacific took on an alchemical tinge when it was rumoured that Frobisher had found gold-bearing ore.
The relationship with Kelley is another element of the popular fascination with Dee. Kelley claimed to be able to converse with angels via Dee’s crystal ball, and Dee’s faith in Kelley’s prophecies and angelic commands never wavered even when the increasingly deranged Kelley told him that the angels had commanded them to swap wives. The inversion of the servant-master relationship as Kelley’s reputation grew in Bohemia makes Dee a pathetic figure towards the end of their ill-fated excursion on the continent – forced on them after Dee blundered in Elizabeth’s court.
He was always doing that. However brilliant his reputation as a magician and mathematician, Dee was hopeless at court politics, regularly backing the wrong horse. He ruined his chances in Prague by passing on Kelley’s angelic reprimand to Rudolf for his errant ways. But Dee can’t be held entirely to blame. Parry makes it clear just how miserable it was for any courtier trying to negotiate the subtle currents of the court, especially in England where the memory of Mary I’s brief and bloody reign still hung in the air along with a lingering fear of papist plots.
For all its meticulousness, the book doesn’t always give the details shape. Often the political intrigues become as baffling and Byzantine for the reader as they must have been for Dee. But what I really missed was context. It is hard to locate Dee in history when we hear so little about other contemporary figures who also sought to expand natural philosophy, such as Della Porta and Francis Bacon. Bacon in particular was another intellectual whose grand schemes and attempts to gain the queen’s ear were hampered by court rivalries.
But to truly understand Dee’s significance, we need more than the cradle-to-grave story. For example, although Parry patiently explains the numerological and symbolic mysticism of Dee’s Monas hieroglyphica, its preoccupation with divine and Adamic languages can seem sheer delirium if not linked to, say, the later work of the German Jesuit Athanasius Kircher (the most Dee-like figure of the early Enlightenment) or of John Wilkins, one of the Royal Society’s founders.
Likewise, it would have been easier to evaluate Dee’s mathematics if we had been told that this subject had, even until the mid-seventeenth century, a close association both with witchcraft and with mechanical ingenuity, at which Dee also excelled. Wilkins’ Mathematical Magick (1648) was a direct descendant of Dee’s famed Mathematical Preface to the first English translation of Euclid’s Elements (1570). We’d never know from this book that Dee influenced the early modern scientific world via the likes of Robert Fludd, Elias Ashmole and Margaret Cavendish, nor that his works were studied by none other than Robert Boyle, and probably by Isaac Newton. Parry has assembled an important contribution to our understanding of how magic became science. It’s a shame he didn’t see it as part of his task to make that connection.
Thursday, February 02, 2012
Democracy, huh?
Here’s my latest Muse for Nature News. But while I’m in that neck of the woods, I very much enjoyed the piece on Dickens in the latest issue. Yes, even Nature is in on that act.
____________________________________________________________
“The people who cast the votes decide nothing”, Josef Stalin is reputed to have said. “The people who count them decide everything.” Little has changed in Russia, if the findings of a new preprint are to be believed. Peter Klimek of the Medical University of Vienna in Austria and his colleagues say that the 2011 election for the Duma (the lower Federal Assembly) in Russia, won by Vladimir Putin’s United Russia party with 49 percent of the votes, shows a clear statistical signature of ballot-rigging [1].
This is not a new accusation. Some have claimed that the Russian statistics show suspicious peaks at multiples of 5 or 10 percent, as though ballot officials simply assigned rounded proportions of votes to meet pre-determined figures. And in December the Wall Street Journal conducted its own analysis of the statistics which led political scientists at the Universities of Michigan and Chicago to concur that there were signs of fraud.
Naturally, Putin denies this. But if you suspect that neither he nor the Wall Street Journal is exactly the most neutral of sources on Russian politics, Klimek and colleagues offer a welcome alternative. They say that the statistical distribution of votes in the Duma election shows over a hundred times more skew than a normal (bell-curve, or gaussian) distribution, the expected outcome of a set of independent choices.
The same is true for the contested Ugandan election of February 2011. Both of these statistical distributions are, even at a glance, profoundly different from those of recent elections in, say, Austria, Switzerland and Spain.
Breaking down the numbers into scatter plots of regional votes lays the problems bare. For both Russia and Uganda these distributions are bimodal. Distortion in the main peak suggests ballot rigging which, for Russia, afflicts about 64 percent of districts.
But the second, smaller peaks reveal much cruder fraud. These correspond to districts showing both 100 percent turnout and 100 percent votes for the winning party. As if.
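For anyone who wants to see how such fingerprints are computed, here is a minimal sketch in Python – run on made-up numbers, and nothing like Klimek and colleagues’ actual analysis – of the two signatures just described: the skew of the winner’s vote-share distribution, and the tally of districts reporting close to 100 percent turnout and 100 percent support.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'clean' districts: the winner's share scattered around 40 percent.
clean_share = rng.normal(0.40, 0.08, size=9000).clip(0, 1)
clean_turnout = rng.normal(0.55, 0.10, size=9000).clip(0, 1)

# Synthetic 'stuffed' districts: implausibly near-unanimous results.
stuffed_share = rng.uniform(0.97, 1.00, size=1000)
stuffed_turnout = rng.uniform(0.97, 1.00, size=1000)

share = np.concatenate([clean_share, stuffed_share])
turnout = np.concatenate([clean_turnout, stuffed_turnout])

def skewness(x):
    """Third standardized moment: roughly zero for a gaussian sample."""
    return np.mean((x - x.mean())**3) / x.std()**3

print("skewness without fraud:", round(skewness(clean_share), 2))
print("skewness with fraud   :", round(skewness(share), 2))

# The cruder fingerprint: the cluster near (100% turnout, 100% support).
suspect = np.sum((turnout > 0.95) & (share > 0.95))
print("districts near the 100/100 corner:", int(suspect))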
It’s good to see science expose these corruptions of democracy. Yet science also hints that democracy isn’t quite what it’s popularly sold as anyway. Take the choice of voting system. One of the most celebrated results of the branch of economics known as social choice theory is that there can be no perfectly fair means of deciding the outcome of a democratic vote. Possible voting schemes are manifold, and their relative merits hotly debated: first-past-the-post (the UK), proportional representation (Scandinavia), schemes for ranking candidates rather than simply selecting one, and so on.
But as economics Nobel laureate Kenneth Arrow showed in the 1950s, none of these systems, nor any other, can satisfy all the criteria of fairness and logic one might demand [2]. For example, a system under which candidate A would be elected from A, B and C should ideally also select A if B is the only alternative. What Arrow’s ‘impossibility theorem’ implies is that either we need to accept that democratic majority rule has some undesirable consequences or we need to find alternatives – which no one has.
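Here is a toy illustration of that example, with hypothetical ballot numbers of my own and simple first-past-the-post counting: A wins when all three candidates stand, yet loses to B as soon as C withdraws.

from collections import Counter

# Hypothetical ranked ballots, listed from first to last preference.
ballots = (
    [("A", "B", "C")] * 40 +
    [("B", "C", "A")] * 35 +
    [("C", "B", "A")] * 25
)

def plurality_winner(ballots, standing):
    """Each ballot counts for its highest-ranked candidate still standing."""
    tally = Counter(next(c for c in ranking if c in standing) for ranking in ballots)
    return tally.most_common(1)[0][0], dict(tally)

print(plurality_winner(ballots, {"A", "B", "C"}))   # A wins on 40 first preferences
print(plurality_winner(ballots, {"A", "B"}))        # with C gone, B wins 60 to 40

First-past-the-post is not uniquely at fault here: Arrow’s theorem guarantees that whatever rule you put in its place will violate some other, equally reasonable criterion.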
Other considerations can undermine the democratic principle too, such as when the vote in a two-party contest falls within the margin of statistical error. As the Bush vs Gore US election of 2000 showed, the result is then not democratic but legalistic.
And analysis of voting statistics suggests that, regardless of the voting system, our political choices are not free and independent (as most definitions of democracy pretend) but partly the collective result of peer influence. That is one – although not the only – explanation of why some voting statistics don’t follow a gaussian distribution but instead a relationship called a power law [3,4]. Klimek and colleagues find less extreme but significant deviations from gaussian statistics in their analysis of ‘unrigged’ elections [1], which they assume to result from similar collectivization, or as they put it, voter mobilization.
A key premise of current models of voting and opinion formation [5,6] is that most social consensus arises from mutual influence and the spreading of opinion, not from isolated decisions. On the one hand you could say this is just how democratic societies work. On the other, it makes voting a nonlinear process in which small effects (media bias or party budgets, say) can have disproportionately big consequences. At the very least, it makes voting a more complex and less transparent process than is normally assumed.
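To give a flavour of what such models look like, here is a bare-bones sketch of the textbook ‘voter model’ – one of the simplest of this class, and not the specific models of refs [5,6] – in which each voter simply copies the opinion of a randomly chosen neighbour, so that blocs of opinion grow by imitation alone.

import random

random.seed(1)

N = 100                                     # voters arranged on a ring
opinions = [random.choice([0, 1]) for _ in range(N)]

for step in range(200_000):
    i = random.randrange(N)
    j = (i + random.choice([-1, 1])) % N    # a random nearest neighbour
    opinions[i] = opinions[j]               # pure imitation, no deliberation
    if step % 40_000 == 0:
        print(f"step {step:6d}: share holding opinion 1 = {sum(opinions) / N:.2f}")
    if sum(opinions) in (0, N):
        print(f"complete consensus reached at step {step}")
        break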
This isn’t to invalidate Churchill’s famous dictum that democracy is the least bad political system. But let’s not fool ourselves about what it entails.
References
1. Klimek, P., Yegorov, Y., Hanel, R. & Thurner, S. preprint http://www.arxiv.org/abs/1201.3087 (2012).
2. Arrow, K. Social Choice and Individual Values (Yale University Press, New Haven, 1951).
3. Costa Filho, R. N., Almeida, M. P., Andrade, J. S. Jr & Moreira, J. E. Phys. Rev. E 60, 1067-1068 (1999).
4. Costa Filho, R. N., Almeida, M. P., Moreira, J. E. & Andrade, J. S. Jr, Physica A 322, 698-700 (2003).
5. Fortunato S. & Castellano, C. Phys. Rev. Lett. 99, 138701 (2007).
6. Stauffer, D. ‘Opinion dynamics and sociophysics’, in Encyclopedia of Complexity and Systems Science (ed. Meyers, R. A.) 6380-6388 (Springer, Heidelberg, 2009).
Sunday, January 29, 2012
Fake flakes
Tanguy Chouard at Nature has pointed out to me Google’s tribute to the snowflake today:
This is a beautiful example of the kind of bogus flake I collected for my spot in Nine Lessons and Carols for Godless People just before Christmas. Eight-pointed flakes like this are relatively common, because they are easier to draw than six-pointed ones:
(from a Prospect mailing)
(from an Amnesty Christmas card (occasionally sent by yours truly))
More rarely one sees five-pointed examples like this from some wrapping paper in 2010:
Or, more deliciously, this one from the Milibands last year:
I like to point out that the possible sighting of quasicrystalline ice should make us hesitant to be too dismissive of these inventive geometries. What’s more, there do exist claims of pentagonal flakes having been observed, though this seems extremely hard to credit. Of course, in truth quasicrystal ice, even if it exists in very rare circumstances, hardly has five- or eightfold snowflakes as its inevitable corollary. But it’s fun to think about, especially so close to the 400th anniversary of Kepler’s classic treatise on the snowflake, De nive sexangula (1611).
Thursday, January 26, 2012
Forbidden chemistry
I’ve just published a feature article in New Scientist on “reactions they said could never happen” (at least, that was my brief). A fair bit of the introductory discussion had to be dropped, so here’s the original full text – sorry, a long post. I’m going to put the pdf on my web site, with a few figures added.
_____________________________________________________________________
The award of the 2011 Nobel prize for chemistry to Dan Shechtman for discovering quasicrystals allowed reporters to relish tales of experts being proved wrong. For his heretical suggestion that the packing of atoms in crystals can have a kind of fivefold (quasi)symmetry, Shechtman was ridiculed and ostracized and almost lost his job. The eminent chemist Linus Pauling derided him as a “quasi-scientist”.
Pauling of all people should have known that sometimes it is worth risking being bold and wrong, as he was himself with the structure of DNA in the 1950s. As it turned out, Shechtman was bold and right: quasicrystals do exist, and they earn their ‘impossible’ fivefold symmetry at the cost of not being fully ordered: not truly crystalline in the traditional sense. But while everyone enjoys seeing experts with egg on their faces, there’s a much more illuminating way to think about apparent violations of what is ‘possible’ in chemistry.
Here are some other examples of chemical processes that seemed to break the rules – reactions that ‘shouldn’t’ happen. They demonstrate why chemistry is such a vibrant, exciting science: because it operates on the borders of predictability and certainty. The laws of physics have an air of finality: they don’t tolerate exceptions. No one except cranks expects the conservation of energy to be violated. In biology, in contrast, ‘laws’ seem destined to have exceptions: even the heresy of inheritance of acquired characteristics is permitted by epigenetics. Chemistry sits in the middle ground between the rigidity of physics and the permissiveness of biology. Its basis in physics sets some limits and constraints, but the messy diversity of the elements can often transcend or undermine them.
That’s why chemists often rely on intuition to decide what should or shouldn’t be possible. When his postdoc Xiao-Dong Wen told Nobel laureate Roald Hoffmann that his computer calculations showed graphane – puckered sheets of carbon hexagons with hydrogens attached, in a 1:1 C:H ratio – to be more stable than familiar old benzene, Hoffmann insisted that the calculations were wrong. The superior stability of benzene, he said, “is sacrosanct - it’s hard to argue with it”. But eventually Hoffmann realized that his intuition was wrong: graphane is more stable, though no one has yet succeeded in proving definitively that it can be made.
You could say that chemistry flirts with its own law-breaking inclinations. Chemists often speak of reactions that are ‘forbidden’. For example, symmetry-forbidden reactions are ones that break the rules formulated by Hoffmann in his Nobel-winning work with organic chemist Robert Woodward in 1965 – rules governed by the mathematical symmetry properties of electron orbitals as they are rearranged or recombined by light or heat. Similarly, reactions that fail to conserve the total amount of ‘spin’, a quantum-mechanical property of electrons, are said to be spin-forbidden. And yet neither of these types of ‘forbidden’ reaction is impossible – they merely happen at slower rates. Hoffmann says that he (at Woodward’s insistence) even asserted in their 1965 paper that there were no exceptions to their rules, knowing that this would spur others into finding them.
So this gallery of ‘reactions they said couldn’t happen’ is not a litany of chemists’ conservatism and prejudice (although – let’s be honest – that sometimes played a part). It is a reflection of how chemistry itself exists in an unstable state, needing an intuition of right and wrong but having constantly to readjust that to the lessons of experience. That’s what makes it exciting – it’s not the case that anything might happen, but nevertheless big surprises certainly can. That’s why, however peculiar the claim, the right response in chemistry, perhaps more than any other branch of science, is not “that’s impossible”, but “prove it”.
Crazy tiling
In the early 1980s, Daniel Shechtman was bombarding metal alloys with electrons at the then National Bureau of Standards (NBS) in Gaithersburg, Maryland. Through mathematical analysis of the interference patterns formed as the beams reflected from different layers of the crystals, it was possible to determine exactly how the atoms were packed.
Among the alloys Shechtman studied, a blend of aluminium and manganese produced a beautiful pattern of sharp diffraction spots, which had always been found to be an indicator of crystalline order. But the crystal symmetry suggested by the pattern didn’t make sense. It was fivefold, like that of a pentagon. One of the basic rules of crystallography is that atoms can’t be packed into a regular, repeating arrangement with fivefold symmetry, just as pentagons can’t tile a floor in a periodic way that leaves no gaps.
Pauling wasn’t the only fierce critic of Shechtman’s claims. When he persisted with them, his boss at NBS asked him to leave the group. And a paper he submitted in the summer of 1984 was rejected immediately. Only when he found some colleagues to back him up did he get the results published at the end of that year.
Yet the answer to the riddle they posed had been found already. In the 1970s the mathematician Roger Penrose had discovered that two rhombus-shaped tiles could be used to cover a flat plane without gaps and without the pattern ever repeating. In 1981, the crystallographer Alan Mackay found that if an atom were placed at every vertex of such a Penrose tiling, it would produce a diffraction pattern with fivefold symmetry, even though the tiling itself was not perfectly periodic. Shechtman’s alloy was analogous to a three-dimensional Penrose tiling. It was not a perfect crystal, because the atomic arrangement never repeated exactly; it was a quasicrystal.
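Incidentally, you don’t need a full Penrose tiling to see that sharp diffraction does not require periodicity. Here is a quick numerical sketch for a one-dimensional stand-in, the Fibonacci chain of long and short spacings – a toy analogue of Mackay’s calculation, not a reconstruction of it – whose diffraction pattern shows sharp peaks even though the sequence of spacings never repeats.

import numpy as np

# Build a Fibonacci chain of two spacings by repeated substitution:
# L -> LS, S -> L (deterministic, but never periodic).
word = "L"
for _ in range(12):
    word = "".join("LS" if c == "L" else "L" for c in word)

tau = (1 + 5**0.5) / 2                              # the golden ratio
spacings = [1.0 if c == "L" else 1.0 / tau for c in word]
positions = np.concatenate([[0.0], np.cumsum(spacings)])   # the 'atom' sites

# Diffraction intensity I(q) = |sum_n exp(i q x_n)|^2, normalized so that 1.0
# means all the scattered waves add up perfectly in phase.
q = np.linspace(0.1, 20.0, 4000)
amplitude = np.exp(1j * np.outer(q, positions)).sum(axis=1)
intensity = np.abs(amplitude)**2 / len(positions)**2

strongest = np.argsort(intensity)[::-1][:10]        # adjacent grid points may straddle one peak
print("sharp peaks near q =", np.round(np.sort(q[strongest]), 2))
print("maximum normalized intensity:", round(float(intensity.max()), 2))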
Since then, many other quasicrystalline alloys have been discovered, and structures very much like them have turned up in polymers and in assemblies of soap-like molecules called micelles. It has even been suggested that water, when confined in very narrow slits, can freeze into quasicrystalline ice.
You can’t have it both ways
For poor Boris Belousov, vindication came too late. When he was awarded the prestigious Lenin prize by the Soviet government in 1980 for his pioneering work on oscillating chemical reactions, he had already been dead for ten years.
Still, at least Belousov lived long enough to see the scorn heaped on his initial work turn to grudging acceptance by many chemists. When he discovered oscillating chemical reactions in the 1950s, he was deemed to have violated one of the most cherished principles of science: the second law of thermodynamics.
This states that all change in the universe must be accompanied by an increase in entropy – crudely speaking, it must leave things less ordered than they were to begin with. Even processes that seem to create order, such as the freezing of water to ice, in fact promote a broader disorder – here by releasing latent heat into the surroundings. This principle is what prohibits many perpetual motion machines (others violate the first law – the conservation of energy – instead). Violations of the second law are thus something that only cranks propose.
But Belousov was no crank. He was a respectable Russian biochemist interested in the mechanisms of metabolism, and specifically in glycolysis: how enzymes break down sugars. To study this process, Belousov devised a cocktail of chemical ingredients that should act like a simplified analogue of glycolysis. He shook them up and watched as the reaction proceeded, turning from clear to yellow.
Then it did something astonishing: it went clear again. Then yellow. Then clear. It began to oscillate repeatedly between these two coloured states. The problem is that entropy can’t possibly increase in both directions: if the switch from clear to yellow raises it, the switch back must lower it. So what’s up?
Belousov wasn’t actually the first to see an oscillating reaction. In 1921 American chemist William Bray reported oscillations in the reaction of hydrogen peroxide and iodate ions. But no one believed him either, even though the ecologist Alfred Lotka had shown in 1910 how oscillations could arise in a simple, hypothetical reaction. As for Belousov, he couldn’t get his findings published anywhere, and in the end he appended them to a paper in a Soviet conference proceedings on a different topic: a Pyrrhic victory, since they then remained almost totally obscure.
But not quite. In the 1960s another Soviet chemist, Anatoly Zhabotinsky, modified Belousov’s reaction mixture so that it switched between red and blue. That was pretty hard for others to ignore. The Belousov-Zhabotinsky (BZ) reaction became recognized as one of a whole class of oscillating reactions, and after it was transmitted to the West in a meeting of Soviet and Western scientists in Prague in 1967, these processes were gradually explained.
They don’t violate the second law after all, for the simple reason that the oscillations don’t last forever. Left to their own devices, they eventually die away and the reaction settles down to an unchanging state. They exist only while the reaction approaches its equilibrium state, and are thus an out-of-equilibrium phenomenon. Since thermodynamics speaks only about equilibrium states and not what happens en route to them, it is not threatened by oscillating reactions.
The oscillations are the result of self-amplifying feedback. As the reaction proceeds, one of the intermediate products (call it A) is autocatalytic: it speeds up the rate of its own production. This makes the reaction accelerate until the reagents are exhausted. But there is a second autocatalytic process that consumes A and produces another product, B, which kicks in when the first process runs out of steam. This too quickly exhausts itself, and the system reverts to the first process. It repeatedly flips back and forth between the two reactions, over-reaching itself first in one direction and then in the other. Lotka showed that the same thing can happen in populations of predators and their prey, which can get caught in alternating cycles of boom and bust.
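The logic is easy to see in a toy model. The sketch below integrates the ‘Brusselator’, a textbook two-variable scheme built around exactly this kind of competing autocatalysis – a classroom stand-in, I should stress, not Belousov’s actual chemistry – and the concentrations of the two intermediates duly keep cycling instead of settling down.

import numpy as np
from scipy.integrate import solve_ivp

# Brusselator scheme: A -> X, B + X -> Y + D, 2X + Y -> 3X (autocatalysis), X -> E.
# The feedstocks A and B are held constant, as if continually replenished.
A, B = 1.0, 3.0

def brusselator(t, conc):
    x, y = conc
    dxdt = A + x**2 * y - (B + 1.0) * x
    dydt = B * x - x**2 * y
    return [dxdt, dydt]

sol = solve_ivp(brusselator, (0.0, 50.0), [1.0, 1.0], max_step=0.01)

late = sol.y[0][sol.t > 25.0]      # concentration of X, after transients die away
print(f"X keeps swinging between about {late.min():.2f} and {late.max():.2f}")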
If the BZ reaction is constantly fed fresh reagents, while the final products are removed, the oscillations can be sustained indefinitely: it remains out of equilibrium. Such oscillations are now known to happen in many chemical processes, including some industrially important reactions on metal catalysts, and even in real glycolysis and other biochemical processes. If the reaction takes place in an unstirred mixture, the oscillations can spread from initiating spots as chemical waves, giving rise to complex patterns. Related patterns are the probable cause of many animal pigmentation markings. BZ chemical waves are analogues of the waves of electrical excitation that pass through heart tissue and induce regular heartbeats; if they are disturbed, the waves break up and the result can be a heart attack.
These waves might also form the basis of a novel form of computation. Andrew Adamatzky at the University of the West of England in Bristol is using their interactions to create logic gates, which he believes can be miniaturized to make a genuine ‘wet’ chemical computer. He and collaborators in Germany and Poland have launched a project called NeuNeu to make chemical circuits that will crudely mimic the behaviour of neurons, including a capacity for self-repair.
The quantum escape clause
It’s very cold in space. So cold that molecules encountering one another in the frigid molecular clouds that pepper the interstellar void should generally lack enough energy to react. In general, reactions proceed via the formation of high-energy intermediate molecules which then reconfigure into lower-energy products. Energy (usually thermal) is needed to get the reactants over this barrier, but in space there is next to none.
In the 1970s a Soviet chemist named Vitali Goldanski challenged that dogma. He showed that, with a bit of help from high-energy radiation such as gamma-rays or electron beams, some chemicals could react even when chilled by liquid helium to just four degrees above absolute zero – just a little higher than the coldest parts of space. For example, under these conditions Goldanski found that formaldehyde, a fairly common component of molecular clouds, could link up into polymer chains several hundred molecules long. At that temperature, conventional chemical kinetic theory suggested that the reaction should be so slow as to be virtually frozen.
Why was it possible? Goldanski argued that the reactions were getting help from quantum effects. It is well known that particles governed by quantum rules can get across energy barriers even if they don’t appear to have enough energy to do so. Instead of going over the top, they can pass through the barrier, a process known as tunnelling. It’s possible because of the smeared-out nature of quantum objects: they aren’t simply here or there, but have positions described by a probability distribution. A quantum particle on one side of a barrier has a small probability of suddenly and spontaneously turning up on the other side.
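The standard textbook estimate for tunnelling through a rectangular barrier gives a feel for the numbers (a generic illustration, not Goldanski’s own calculation): the transmission probability falls off roughly as exp(-2κL), where κ = √(2m(V-E))/ħ, so it is exquisitely sensitive to the particle’s mass m and to the barrier’s width L – which is why light particles such as protons tunnel readily while whole atoms barely do, and why a thin barrier can beat a low one.

import numpy as np
from scipy.constants import hbar, electron_volt, proton_mass

def tunnelling_probability(mass_kg, barrier_eV, width_m):
    """Crude rectangular-barrier estimate T ~ exp(-2*kappa*L); barrier_eV is V - E."""
    kappa = np.sqrt(2.0 * mass_kg * barrier_eV * electron_volt) / hbar
    return np.exp(-2.0 * kappa * width_m)

angstrom = 1e-10
cases = [("low, wide barrier (0.5 eV, 1.0 A)", 0.5, 1.0 * angstrom),
         ("high, narrow barrier (1.0 eV, 0.5 A)", 1.0, 0.5 * angstrom)]

for label, mass in [("proton", proton_mass), ("carbon atom (roughly)", 12 * proton_mass)]:
    for case, V, L in cases:
        T = tunnelling_probability(mass, V, L)
        print(f"{label:>22}, {case}: T ~ {T:.1e}")

With these illustrative numbers the higher but narrower barrier transmits many orders of magnitude more readily than the lower, wider one – essentially the effect invoked below for methylhydroxycarbene – and swapping the proton for a whole atom all but shuts the process down.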
Goldanski saw the signature of quantum tunnelling in his ultracold experiments in the lab: the rate of formaldehyde polymerization didn’t steadily increase with temperature, as conventional kinetic theory predicts, but stayed much the same as the temperature rose.
Goldanski believed that his quantum-assisted reactions in space might have helped the molecular building blocks of life to have assembled there from simple ingredients such as hydrogen cyanide, ammonia and water. He even thought they could help to explain why biological molecules such as amino acids have a preferred ‘handedness’. Most amino acids have so-called chiral carbon atoms, to which four different chemical groups are attached, permitting two mirror-image variants. In living organisms these amino acids are almost always of the left-handed variety, a long-standing and still unexplained mystery. Goldanski argued that his ultracold reactions could favour one enantiomer over the other, since the tunnelling rates might be highly sensitive to tiny biasing influences such as the polarization of radiation inducing them.
Chemical reactions assisted by quantum tunnelling are now well established – not just in space, but in the living cell. Some enzymes are more efficient catalysts than one would expect classically, because they involve the movement of hydrogen ions – lone protons, which are light enough to experience significant quantum tunnelling.
This counter-intuitive phenomenon can also subvert conventional expectations about what the products of a reaction will be. That was demonstrated very recently by Wesley Allen of the University of Georgia and his coworkers. They trapped a highly reactive molecule called methylhydroxycarbene – a carbene, whose carbon atom carries only six valence electrons rather than the usual eight, predisposing it to react fast – in an inert matrix of solid argon at 11 kelvin. This molecule can in theory rearrange its atoms to form vinyl alcohol or acetaldehyde. In practice, however, it shouldn’t have enough energy to get over the barrier to these reactions under these ultracold conditions. But the carbene was transformed nonetheless – because of tunnelling.
“Tunnelling is not specifically a low-temperature phenomenon”, Allen explains. “It occurs at all temperatures. But at low temperatures the thermal activation shuts off, so tunnelling is all that is left.”
What’s more, although the formation of vinyl alcohol has a lower energy barrier, Allen and colleagues found that most of the carbene was transformed instead to acetaldehyde. That defied kinetic theory, which says that the lower the energy barrier to the formation of a product, the faster it will be produced and so the more it dominates the resulting mixture. The researchers figured that although the barrier to formation of acetaldehyde may have been higher, it was also narrower, which meant that it was easier to tunnel through.
Tunnelling through such high barriers as these “was quite a shock to most chemists”, says Allen. He says the result shows that “tunnelling is a broader aspect of chemical kinetics than has been understood in the past”.
Not so noble
Dmitri Mendeleev’s first periodic table in 1869 didn’t just have some gaps for yet-undiscovered elements. It had an entire column missing: a whole family of chemical elements whose existence no one suspected. The lightest of them – helium – had been detected in the Sun’s spectrum only the previous year, and the others began to turn up in the 1890s, starting with argon. The reason they took so long to surface, even though they are abundant (helium is the second most abundant element in the universe), is that they don’t do anything: they are inert, “noble”, not reacting with other elements.
That supposed unreactivity was tested with every extreme chemists could devise. Just after the noble gas argon was discovered in 1894, the French chemist Henri Moissan mixed it with fluorine, the viciously reactive element that he had isolated in 1886, and sent sparks through the mixture. Result: nothing. By 1924, the Austrian chemist Friedrich Paneth pronounced the consensus: “the unreactivity of the noble gas elements belongs to the surest of all experimental results.” Theories of chemical bonding seemed to explain why that was: the noble gases had filled shells of electrons, and therefore no capacity for adding more by sharing electrons in chemical bonds.
Linus Pauling, the chief architect of those theories, didn’t give up. In the 1930s he blagged a rare sample of the noble gas xenon and persuaded his colleague Don Yost at Caltech to try to get it to react with fluorine. After more cooking and sparking, Yost had succeeded only in corroding the walls of his supposedly inert quartz flasks.
Against this intransigent background, it was either a brave or foolish soul who would still try to make compounds from noble gases. But the first person to do so, British chemist Neil Bartlett at the University of British Columbia in Vancouver, was not setting out to be an iconoclast. He was just following some wonderfully plain reasoning.
In 1961 Bartlett discovered that the compound platinum hexafluoride (PtF6), first made three years earlier by US chemists, was an eye-wateringly powerful oxidant. Oxidation – the removal of electrons from a chemical element or compound – is so named because its prototypical form is the reaction with oxygen gas, a substance almost unparalleled in its ability to grab electrons. But Bartlett found that PtF6 can out-oxidize oxygen itself.
In early 1962 Bartlett was preparing a standard undergraduate lecture on inorganic chemistry and happened to glance at a textbook graph of ‘ionization potentials’ of substances: how much energy is needed to remove an electron from them. He noticed that it takes almost exactly the same energy (around 12.1 electronvolts) to ionize – that is, to oxidize – oxygen molecules as xenon atoms. He realized that if PtF6 can do it to oxygen, it should do it to xenon too.
So he tried the experiment, simply mixing red gaseous PtF6 and colourless xenon. Straight away, the glass was covered with a yellow material, which Bartlett found to have the formula XePtF6: the first noble-gas compound.
Since then, many other compounds of both xenon and krypton, another noble gas, have been made. Some are explosively unstable: Bartlett nearly lost an eye studying xenon dioxide. Heavy, radioactive radon forms compounds too, although it wasn’t until 2000 that the first compound of argon was reported by a group in Finland. Even now, the noble gases continue to produce surprises. Roald Hoffmann admits to being shocked when, in that same year, a compound of xenon and gold was reported by chemists in Berlin – for gold is supposed to be a noble, unreactive metal too. You can persuade elements to do almost anything, it seems.
Improper bonds
Covalent chemical bonds form when two atoms share a pair of electrons, which act as a glue that binds the union. At least, that’s what we learn at school. But chemists have come to accept that there are plenty of other ways to form bonds.
Take the hydrogen bond – the interaction of electron ‘lone pairs’ on one atom such as oxygen or nitrogen with a hydrogen atom on another molecular group with a slight positive charge. This interaction is now acknowledged as the key to water’s unusual properties and the glue that sticks DNA’s double helix together. But the formation of a second bond by hydrogen, supposedly a one-bond atom, was initially derided in the 1920s as a fictitious kind of chemical “bigamy”.
That, however, was nothing compared to the controversy that surrounded the notion, first put forward in the 1940s, that some organic molecules, such as ‘carbocations’ in which carbon atoms are positively charged, could form short-lived structures over the course of a reaction in which a pair of electrons was dispersed over three rather than two atoms. This arrangement was considered so extraordinary that it became known as non-classical bonding.
The idea was invoked to explain some reactions involving the swapping of dangling groups attached to molecules with bridged carbon rings. In the first step of the reaction, the ‘leaving group’ falls off to create an intermediate carbocation. By rights, the replacement dangling group, with an overall negative charge, should have attached at the same place, at the positively charged atom. But it didn’t: the “reactive centre” of the carbocation seemed able to shift.
Some chemists, especially Saul Winstein at the University of California at Los Angeles, argued that the intermediate carbocation contains a non-classical bond that bridges three carbon atoms in a triangular ring, with its positive charge smeared between them, giving the replacement group more than one place to dock. This bonding structure would temporarily, and rather heretically, give one of the carbon atoms five instead of the usual four bonding partners.
Such an unusual kind of bonding offended the sensibilities of other chemists, most of all Herbert Brown, who was awarded a Nobel prize in 1979 for his work on boron compounds. In 1961 he opened the “non-classical ion” war with a paper dismissing proposals for these structures as lacking “the same care and same sound experimental basis as that which is customary in other areas of experimental organic chemistry”. The ensuing arguments raged for two decades in what Brown called a “holy war”. “By the time the controversy sputtered to a halt in the early 1980s”, says philosopher of chemistry William Goodwin of Rowan University in New Jersey, “a tremendous amount of intellectual energy, resources, and invective had been invested in resolving an issue that was crucial neither to progress in physical organic chemistry generally nor to the subfield of carbocation chemistry.” Both sides accused the rival theory of being ‘soft’ – able to fit any result, and therefore not truly scientific.
Brown and his followers didn’t object in principle to the idea of electrons being smeared over more than two atomic nuclei – that happened in benzene, after all. But they considered the nonclassical ion an unnecessary and faddish imposition for an effect that could be explained by less drastic, more traditional means. The argument was really about how to interpret the experiments that bore on the matter, and it shows that, particularly in chemistry, it could and still can be very hard to apply a kind of Popperian falsification to distinguish between rival theories. Goodwin thinks that the non-classical ion dispute was provoked and sustained by ambiguities built into the way organic chemists try to understand and describe the mechanisms of their reactions. “Organic chemists have sacrificed unambiguous explanation for something much more useful – a theory that helps them make plausible, but fallible, assessments of the chemical behavior of novel, complex compounds”, he says. As a result, chemistry is naturally prone to arguments that get resolved only when one side or the other runs out of energy – or dies.
The argument dragged on until eventually most chemists except Brown accepted that these ions were real. Ironically, in the course of the debate both Winstein and Brown implied to a young Hungarian émigré chemist, George Olah, that his claim to have isolated a relatively long-lived carbocation – a development that ultimately helped resolve the issue – was unwise. This was another ‘reaction that couldn’t happen’, they advised – the ions were too unstable. But Olah was right, and his work on carbocations earned him a Nobel prize in 1994.
_____________________________________________________________________
The award of the 2011 Nobel prize for chemistry to Dan Shechtman for discovering quasicrystals allowed reporters to relish tales of experts being proved wrong. For his heretical suggestion that the packing of atoms in crystals can have a kind of fivefold (quasi)symmetry, Shechtman was ridiculed and ostracized and almost lost his job. The eminent chemist Linus Pauling derided him as a “quasi-scientist”.
Pauling of all people should have known that sometimes it is worth risking being bold and wrong, as he was himself with the structure of DNA in the 1950s. As it turned out, Shechtman was bold and right: quasicrystals do exist, and they earn their ‘impossible’ fivefold symmetry at the cost of not being fully ordered: not truly crystalline in the traditional sense. But while everyone enjoys seeing experts with egg on their faces, there’s a much more illuminating way to think about apparent violations of what is ‘possible’ in chemistry.
Here are some other examples of chemical processes that seemed to break the rules – reactions that ‘shouldn’t’ happen. They demonstrate why chemistry is such a vibrant, exciting science: because it operates on the borders of predictability and certainty. The laws of physics have an air of finality: they don’t tolerate exceptions. No one except cranks expects the conservation of energy to be violated. In biology, in contrast, ‘laws’ seem destined to have exceptions: even the heresy of inheritance of acquired characteristics is permitted by epigenetics. Chemistry sits in the middle ground between the rigidity of physics and the permissiveness of biology. Its basis in physics sets some limits and constraints, but the messy diversity of the elements can often transcend or undermine them.
That’s why chemists often rely on intuition to decide what should or shouldn’t be possible. When his postdoc student Xiao-Dong Wen told Nobel laureate Roald Hoffmann that his computer calculations found graphane – puckered sheets of carbon hexagons with hydrogens attached, with a C:H ratio of 1:1 – was more stable than familiar old benzene, Hoffmann insisted that the calculations were wrong. The superior stability of benzene, he said, “is sacrosanct - it’s hard to argue with it”. But eventually Hoffmann realized that his intuition was wrong: graphane is more stable, though no one has yet succeeded in proving definitively that it can be made.
You could say that chemistry flirts with its own law-breaking inclinations. Chemists often speak of reactions that are ‘forbidden’. For example, symmetry-forbidden reactions are ones that break the rules formulated by Hoffmann in his Nobel-winning work with organic chemist Robert Woodward in 1965 – rules governed by the mathematical symmetry properties of electron orbitals as they are rearranged or recombined by light or heat. Similarly, reactions that fail to conserve the total amount of ‘spin’, a quantum-mechanical property of electrons, are said to be spin-forbidden. And yet neither of these types of ‘forbidden’ reaction is impossible – they merely happen at slower rates. Hoffmann says that he (at Woodward’s insistence) even asserted in their 1965 paper that there were no exceptions to their rules, knowing that this would spur others into finding them.
So this gallery of ‘reactions they said couldn’t happen’ is not a litany of chemists’ conservatism and prejudice (although – let’s be honest – that sometimes played a part). It is a reflection of how chemistry itself exists in an unstable state, needing an intuition of right and wrong but having constantly to readjust that to the lessons of experience. That’s what makes it exciting – it’s not the case that anything might happen, but nevertheless big surprises certainly can. That’s why, however peculiar the claim, the right response in chemistry, perhaps more than any other branch of science, is not “that’s impossible”, but “prove it”.
Crazy tiling
In the early 1980s, Daniel Shechtman was bombarding metal alloys with electrons at the then National Bureau of Standards (NBS) in Gaithersburg, Maryland. Through mathematical analysis of the interference patterns formed as the beams reflected from different layers of the crystals, it was possible to determine exactly how the atoms were packed.
Among the alloys Shechtman studied, a blend of aluminium and manganese produced a beautiful pattern of sharp diffraction spots, which had always been found to be an indicator of crystalline order. But the crystal symmetry suggested by the pattern didn’t make sense. It was fivefold, like that of a pentagon. One of the basic rules of crystallography is that atoms can’t be packed into a regular, repeating arrangement with fivefold symmetry, just as pentagons can’t tile a floor in a periodic way that leaves no gaps.
Pauling wasn’t the only fierce critic of Shechtman’s claims. When he persisted with them, his boss at NBS asked him to leave the group. And a paper he submitted in the summer of 1984 was rejected immediately. Only when he found some colleagues to back him up did he get the results published at the end of that year.
Yet the answer to the riddle they posed had been found already. In the 1970s the mathematician Roger Penrose had discovered that two rhombus-shaped tiles could be used to cover a flat plane without gaps and without the pattern ever repeating. In 1981, the crystallographer Alan Mackay found that if an atom were placed at every vertex of such a Penrose tiling, it would produce a diffraction pattern with fivefold symmetry, even though the tiling itself was not perfectly periodic. Shechtman’s alloy was analogous to a three-dimensional Penrose tiling. It was not a perfect crystal, because the atomic arrangement never repeated exactly; it was a quasicrystal.
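Mackay’s point – that sharp diffraction peaks do not require a periodically repeating structure – is easy to check numerically. The sketch below is an illustration of the general principle rather than of Shechtman’s alloy: it builds the simplest one-dimensional cousin of a Penrose tiling, the so-called Fibonacci chain of long and short segments, and computes its diffraction pattern. Despite never repeating, the chain produces sharp, crystal-like peaks.

```python
import numpy as np

# Build a Fibonacci chain of 'L' (long) and 'S' (short) segments by repeated
# substitution: L -> LS, S -> L. The resulting sequence never repeats periodically.
seq = "L"
for _ in range(12):
    seq = "".join("LS" if c == "L" else "L" for c in seq)

tau = (1 + 5 ** 0.5) / 2            # golden ratio sets the ratio of segment lengths
lengths = {"L": tau, "S": 1.0}
positions = np.cumsum([0.0] + [lengths[c] for c in seq[:-1]])

# 'Diffraction pattern': squared modulus of the structure factor on a grid of
# wavevectors k. Sharp peaks appear even though the chain is aperiodic.
k = np.linspace(0.1, 10, 4000)
intensity = np.abs(np.exp(1j * np.outer(k, positions)).sum(axis=1)) ** 2 / len(positions)

# Print the five strongest peaks (simple local-maximum test).
peaks = [(k[i], intensity[i]) for i in range(1, len(k) - 1)
         if intensity[i] > intensity[i - 1] and intensity[i] > intensity[i + 1]]
for kv, iv in sorted(peaks, key=lambda p: -p[1])[:5]:
    print(f"k = {kv:.3f}, relative intensity = {iv:.1f}")
```

The three-dimensional analogue of this quasiperiodic order is what gives Shechtman’s alloy its sharp, fivefold-symmetric spots.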
Since then, many other quasicrystalline alloys have been discovered, and structures very much like them have turned up in polymers and in assemblies of soap-like molecules called micelles. It has even been suggested that water, when confined in very narrow slits, can freeze into quasicrystalline ice.
You can’t have it both ways
For poor Boris Belousov, vindication came too late. When he was awarded the prestigious Lenin prize by the Soviet government in 1980 for his pioneering work on oscillating chemical reactions, he had already been dead for ten years.
Still, at least Belousov lived long enough to see the scorn heaped on his initial work turn to grudging acceptance by many chemists. When he discovered oscillating chemical reactions in the 1950s, he was deemed to have violated one of the most cherished principles of science: the second law of thermodynamics.
This states that all change in the universe must be accompanied by an increase in entropy – crudely speaking, it must leave things less ordered than they were to begin with. Even processes that seem to create order, such as the freezing of water to ice, in fact promote a broader disorder – here by releasing latent heat into the surroundings. This principle is what prohibits many perpetual motion machines (others violate the first law – the conservation of energy – instead). Violations of the second law are thus something that only cranks propose.
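To see why the freezing example is no violation, here is a rough back-of-envelope bookkeeping, using standard textbook values for water (a molar latent heat of fusion of about 6.0 kJ/mol and a melting point of 273 K); the numbers are approximate and meant only to show how the ledger balances.

```latex
% Water freezing at a temperature T just below the melting point T_m = 273 K
\Delta S_{\text{water}} \approx -\frac{\Delta H_{\text{fus}}}{T_m}
  \approx -\frac{6000\ \mathrm{J\,mol^{-1}}}{273\ \mathrm{K}}
  \approx -22\ \mathrm{J\,K^{-1}\,mol^{-1}},
\qquad
\Delta S_{\text{surroundings}} = +\frac{\Delta H_{\text{fus}}}{T},

\Delta S_{\text{total}} = \Delta H_{\text{fus}}\left(\frac{1}{T} - \frac{1}{T_m}\right) > 0
\quad \text{whenever } T < T_m .
```

The ice is more ordered than the liquid, but the latent heat dumped into colder surroundings creates more than enough entropy to compensate – so a reaction that repeatedly reverses itself looked, on the face of it, like something only a crank would report.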
But Belousov was no crank. He was a respectable Russian biochemist interested in the mechanisms of metabolism, and specifically in glycolysis: how enzymes break down sugars. To study this process, Belousov devised a cocktail of chemical ingredients that should act like a simplified analogue of glycolysis. He shook them up and watched as the reaction proceeded, turning from clear to yellow.
Then it did something astonishing: it went clear again. Then yellow. Then clear. It began to oscillate repeatedly between these two coloured states. The problem is that entropy can’t possibly increase in both directions. So what’s up?
Belousov wasn’t actually the first to see an oscillating reaction. In 1921 American chemist William Bray reported oscillations in the reaction of hydrogen peroxide and iodate ions. But no one believed him either, even though the ecologist Alfred Lotka had shown in 1910 how oscillations could arise in a simple, hypothetical reaction. As for Belousov, he couldn’t get his findings published anywhere, and in the end he appended them to a paper in a Soviet conference proceedings on a different topic: a Pyrrhic victory, since they then remained almost totally obscure.
But not quite. In the 1960s another Soviet chemist, Anatoly Zhabotinsky, modified Belousov’s reaction mixture so that it switched between red and blue. That was pretty hard for others to ignore. The Belousov-Zhabotinsky (BZ) reaction became recognized as one of a whole class of oscillating reactions, and after it was transmitted to the West in a meeting of Soviet and Western scientists in Prague in 1967, these processes were gradually explained.
They don’t violate the second law after all, for the simple reason that the oscillations don’t last forever. Left to their own devices, they eventually die away and the reaction settles down to an unchanging state. They exist only while the reaction approaches its equilibrium state, and are thus an out-of-equilibrium phenomenon. Since thermodynamics speaks only about equilibrium states and not what happens en route to them, it is not threatened by oscillating reactions.
The oscillations are the result of self-amplifying feedback. As the reaction proceeds, one of the intermediate products (call it A) is autocatalytic: it speeds up the rate of its own production. This makes the reaction accelerate until the reagents are exhausted. But there is a second autocatalytic process that consumes A and produces another product, B, which kicks in when the first process runs out of steam. This too quickly exhausts itself, and the system reverts to the first process. It repeatedly flips back and forth between the two reactions, over-reaching itself first in one direction and then in the other. Lotka showed that the same thing can happen in populations of predators and their prey, which can get caught in alternating cycles of boom and bust.
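The flip-flop between two autocatalytic processes is easy to mimic with a toy kinetic model. The sketch below integrates Lotka’s classic scheme numerically; the rate constants and starting concentrations are invented purely for illustration, and the model is far simpler than the real BZ mechanism.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy Lotka-type scheme with two coupled autocatalytic steps (made-up rate
# constants, not the real BZ chemistry):
#   feedstock + A -> 2A   (A catalyses its own production)
#   A + B      -> 2B      (B consumes A and catalyses its own production)
#   B          -> inert product
k1, k2, k3 = 1.0, 1.0, 1.0

def rates(t, y):
    a, b = y
    return [k1 * a - k2 * a * b,      # d[A]/dt
            k2 * a * b - k3 * b]      # d[B]/dt

sol = solve_ivp(rates, (0, 50), [1.5, 0.5],
                t_eval=np.linspace(0, 50, 2000), rtol=1e-8)

# The concentrations of A and B rise and fall out of phase: each autocatalytic
# step overshoots, exhausts its fuel, and hands over to the other.
for t, a, b in zip(sol.t[::200], sol.y[0][::200], sol.y[1][::200]):
    print(f"t = {t:5.1f}   [A] = {a:.3f}   [B] = {b:.3f}")
```

Because the feedstock is treated as constant, this toy system keeps oscillating indefinitely – essentially the continuously fed situation described next; in a closed flask the reagents run down and the oscillations eventually die away.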
If the BZ reaction is constantly fed fresh reagents while the final products are removed, the oscillations can be sustained indefinitely: the system remains out of equilibrium. Such oscillations are now known to happen in many chemical processes, including some industrially important reactions on metal catalysts, and even in real glycolysis and other biochemical processes. If the reaction takes place in an unstirred mixture, the oscillations can spread from initiating spots as chemical waves, giving rise to complex patterns. Related patterns are the probable cause of many animal pigmentation markings. BZ chemical waves are analogues of the waves of electrical excitation that pass through heart tissue and induce regular heartbeats; if those waves are disturbed, they break up, and the result can be a dangerously irregular heartbeat.
These waves might also form the basis of a novel form of computation. Andrew Adamatzky at the University of the West of England in Bristol is using their interactions to create logic gates, which he believes can be miniaturized to make a genuine “wet” chemical computer. He and collaborators in Germany and Poland have launched a project called NeuNeu to make chemical circuits that will crudely mimic the behaviour of neurons, including a capacity for self-repair.
The quantum escape clause
It’s very cold in space. So cold that molecules encountering one another in the frigid molecular clouds that pepper the interstellar void should generally lack the energy to react. Reactions usually proceed via the formation of high-energy intermediate molecules, which then reconfigure into lower-energy products. Energy (usually thermal) is needed to get the reactants over this barrier, but in space there is next to none.
In the 1970s a Soviet chemist named Vitali Goldanski challenged that dogma. He showed that, with a bit of help from high-energy radiation such as gamma-rays or electron beams, some chemicals could react even when chilled by liquid helium to just four degrees above absolute zero – just a little higher than the coldest parts of space. For example, under these conditions Goldanski found that formaldehyde, a fairly common component of molecular clouds, could link up into polymer chains several hundred molecules long. At that temperature, conventional chemical kinetic theory suggested that the reaction should be so slow as to be virtually frozen.
Why was it possible? Goldanski argued that the reactions were getting help from quantum effects. It is well known that particles governed by quantum rules can get across energy barriers even if they don’t appear to have enough energy to do so. Instead of going over the top, they can pass through the barrier, a process known as tunnelling. It’s possible because of the smeared-out nature of quantum objects: they aren’t simply here or there, but have positions described by a probability distribution. A quantum particle on one side of a barrier has a small probability of suddenly and spontaneously turning up on the other side.
Goldanski saw the signature of quantum tunnelling in his ultracold experiments in the lab: the rate of formaldehyde polymerization didn’t steadily increase with temperature, as conventional kinetic theory predicts, but stayed much the same as the temperature rose.
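That flattening-out is the tell-tale signature, and a couple of lines of arithmetic show why it is so striking. Classically, an activated reaction follows the Arrhenius law k = A·exp(−Ea/RT), which collapses catastrophically at cryogenic temperatures, whereas a tunnelling contribution, to a first approximation, does not care about temperature at all. The barrier height, prefactor and tunnelling rate below are invented for illustration – they are not Goldanski’s numbers.

```python
import numpy as np

R = 8.314            # gas constant, J mol^-1 K^-1
A = 1e12             # illustrative Arrhenius prefactor, s^-1 (made up)
Ea = 30e3            # illustrative activation barrier, J mol^-1 (made up)
k_tunnel = 1e-3      # illustrative temperature-independent tunnelling rate, s^-1

for T in (300, 150, 77, 20, 4):
    k_classical = A * np.exp(-Ea / (R * T))
    # Crude picture: total rate = thermally activated route + tunnelling route
    k_total = k_classical + k_tunnel
    print(f"T = {T:3d} K   classical k = {k_classical:9.2e} s^-1   "
          f"with tunnelling = {k_total:9.2e} s^-1")
```

At room temperature the thermal route swamps everything; near absolute zero it has effectively switched off, leaving only the roughly constant tunnelling rate – hence a reaction rate that barely changes as the temperature rises.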
Goldanski believed that his quantum-assisted reactions in space might have helped the molecular building blocks of life to assemble there from simple ingredients such as hydrogen cyanide, ammonia and water. He even thought they could help to explain why biological molecules such as amino acids have a preferred ‘handedness’. Most amino acids have so-called chiral carbon atoms, to which four different chemical groups are attached, permitting two mirror-image variants. In living organisms these amino acids are almost always of the left-handed variety – a long-standing and still unexplained mystery. Goldanski argued that his ultracold reactions could favour one of the mirror-image forms over the other, since the tunnelling rates might be highly sensitive to tiny biasing influences such as the polarization of the radiation inducing them.
Chemical reactions assisted by quantum tunnelling are now well established – not just in space, but in the living cell. Some enzymes are more efficient catalysts than one would expect classically, because they involve the movement of hydrogen ions – lone protons, which are light enough to experience significant quantum tunnelling.
This counter-intuitive phenomenon can also subvert conventional expectations about what the products of a reaction will be. That was demonstrated very recently by Wesley Allen of the University of Georgia and his coworkers. They trapped a highly reactive molecule called methylhydroxycarbene – a carbene, whose carbon atom carries only six valence electrons and is therefore primed to react – in an inert matrix of solid argon at 11 kelvin. In theory this molecule can rearrange its atoms to form either vinyl alcohol or acetaldehyde. In practice, it shouldn’t have enough energy to get over the barrier to either reaction under such ultracold conditions. But the carbene was transformed nonetheless – because of tunnelling.
“Tunnelling is not specifically a low-temperature phenomenon”, Allen explains. “It occurs at all temperatures. But at low temperatures the thermal activation shuts off, so tunnelling is all that is left.”
What’s more, although the formation of vinyl alcohol has a lower energy barrier, Allen and colleagues found that most of the carbene was transformed instead to acetaldehyde. That defied kinetic theory, which says that the lower the energy barrier to the formation of a product, the faster it will be produced and so the more it dominates the resulting mixture. The researchers figured that although the barrier to formation of acetaldehyde may have been higher, it was also narrower, which meant that it was easier to tunnel through.
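The width-versus-height trade-off follows from the simplest tunnelling estimate there is. For a rectangular barrier of height V and width w, the WKB transmission factor falls off roughly as exp(−2w√(2m(V−E))/ħ): the width enters the exponent linearly, the height only under a square root. The sketch below plugs in a hydrogen-mass particle and invented barrier dimensions – not the barriers actually computed for methylhydroxycarbene – just to show how a higher but narrower barrier can win.

```python
import numpy as np

hbar = 1.054571817e-34      # J s
m = 1.67262192e-27          # kg, roughly the mass of a hydrogen atom
eV = 1.602176634e-19        # J per electronvolt

def wkb_transmission(V_eV, w_nm, E_eV=0.0):
    """Rough WKB tunnelling factor through a rectangular barrier."""
    kappa = np.sqrt(2 * m * (V_eV - E_eV) * eV) / hbar   # decay constant inside barrier
    return np.exp(-2 * kappa * w_nm * 1e-9)

# A higher but narrower barrier can be far easier to tunnel through
# than a lower but wider one (illustrative numbers only).
print("lower, wider barrier  :", wkb_transmission(V_eV=1.0, w_nm=0.10))
print("higher, narrower one  :", wkb_transmission(V_eV=1.5, w_nm=0.05))
```

In this made-up comparison the taller, thinner barrier comes out many orders of magnitude easier to tunnel through, which is the qualitative point behind the acetaldehyde result.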
Tunnelling through such high barriers as these “was quite a shock to most chemists”, says Allen. The result shows, he says, that “tunnelling is a broader aspect of chemical kinetics than has been understood in the past”.
Not so noble
Dmitri Mendeleev’s first periodic table of 1869 didn’t just have gaps for as-yet-undiscovered elements. It had an entire column missing: a whole family of chemical elements whose existence no one suspected. The lightest of them, helium, had been glimpsed in the spectrum of the Sun only the year before, and the others began to turn up in the 1890s, starting with argon. The reason they took so long to surface, even though they are abundant (helium is the second most abundant element in the universe), is that they don’t do anything: they are inert, “noble”, refusing to react with other elements.
That supposed unreactivity was tested with every extreme chemists could devise. Just after the noble gas argon was discovered in 1894, the French chemist Henri Moissan mixed it with fluorine, the viciously reactive element that he had isolated in 1886, and sent sparks through the mixture. Result: nothing. By 1924, the Austrian chemist Friedrich Paneth pronounced the consensus: “the unreactivity of the noble gas elements belongs to the surest of all experimental results.” Theories of chemical bonding seemed to explain why that was: the noble gases had filled shells of electrons, and therefore no capacity for adding more by sharing electrons in chemical bonds.
Linus Pauling, the chief architect of those theories, didn’t give up. In the 1930s he blagged a rare sample of the noble gas xenon and persuaded his colleague Don Yost at Caltech to try to get it to react with fluorine. After more cooking and sparking, Yost had succeeded only in corroding the walls of his supposedly inert quartz flasks.
Against this intransigent background, it was either a brave or foolish soul who would still try to make compounds from noble gases. But the first person to do so, British chemist Neil Bartlett at the University of British Columbia in Vancouver, was not setting out to be an iconoclast. He was just following some wonderfully plain reasoning.
In 1961 Bartlett discovered that the compound platinum hexafluoride (PtF6), first made three years earlier by US chemists, was an eye-wateringly powerful oxidant. Oxidation – the removal of electrons from a chemical element or compound – is so named because its prototypical form is the reaction with oxygen gas, a substance almost unparalleled in its ability to grab electrons. But Bartlett found that PtF6 can out-oxidize oxygen itself.
In early 1962 Bartlett was preparing a standard undergraduate lecture on inorganic chemistry and happened to glance at a textbook graph of ‘ionization potentials’ of substances: how much energy is needed to remove an electron from them. He noticed that it takes almost exactly the same energy to ionize – that is, to oxidize – oxygen molecules as xenon atoms. He realised that if PtF6 can do it to oxygen, it should do it to xenon too.
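The figures behind that glance at the textbook graph are worth spelling out; the values quoted here are approximate literature numbers rather than anything taken from Bartlett’s lecture notes.

```latex
% Approximate first ionization energies
\mathrm{O_2 \rightarrow O_2^+ + e^-}: \ \approx 12.07\ \mathrm{eV}
\qquad
\mathrm{Xe \rightarrow Xe^+ + e^-}: \ \approx 12.13\ \mathrm{eV}
```

Since PtF6 was already known to pull an electron off O2, a difference of a few hundredths of an electronvolt suggested it should be able to do the same to xenon.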
So he tried the experiment, simply mixing red gaseous PtF6 and colourless xenon. Straight away, the glass was covered with a yellow material, which Bartlett found to have the formula XePtF6: the first noble-gas compound.
Since then, many other compounds of both xenon and krypton, another noble gas, have been made. Some are explosively unstable: Bartlett nearly lost an eye studying xenon dioxide. Heavy, radioactive radon forms compounds too, although it wasn’t until 2000 that the first compound of argon was reported by a group in Finland. Even now, the noble gases continue to produce surprises. Roald Hoffmann admits to being shocked when, in that same year, a compound of xenon and gold was reported by chemists in Berlin – for gold is supposed to be a noble, unreactive metal too. You can persuade elements to do almost anything, it seems.
Improper bonds
Covalent chemical bonds form when two atoms share a pair of electrons, which act as a glue that binds the union. At least, that’s what we learn at school. But chemists have come to accept that there are plenty of other ways to form bonds.
Take the hydrogen bond – the interaction of electron ‘lone pairs’ on one atom such as oxygen or nitrogen with a hydrogen atom on another molecular group with a slight positive charge. This interaction is now acknowledged as the key to water’s unusual properties and the glue that sticks DNA’s double helix together. But the formation of a second bond by hydrogen, supposedly a one-bond atom, was initially derided in the 1920s as a fictitious kind of chemical “bigamy”.
That, however, was nothing compared to the controversy that surrounded the notion, first put forward in the 1940s, that some organic molecules, such as ‘carbocations’ in which carbon atoms are positively charged, could form short-lived structures over the course of a reaction in which a pair of electrons was dispersed over three rather than two atoms. This arrangement was considered so extraordinary that it became known as non-classical bonding.
The idea was invoked to explain some reactions involving the swapping of dangling groups attached to molecules with bridged carbon rings. In the first step of the reaction, the ‘leaving group’ falls off to create an intermediate carbocation. By rights, the replacement dangling group, with an overall negative charge, should have attached at the same place, at the positively charged atom. But it didn’t: the “reactive centre” of the carbocation seemed able to shift.
Some chemists, especially Saul Winstein at the University of California at Los Angeles, argued that the intermediate carbocation contains a non-classical bond that spans three carbon atoms in a triangular ring, with the positive charge smeared between them, giving the replacement group more than one place to dock. This bonding arrangement would temporarily, and rather heretically, give one of the carbon atoms five bonding partners instead of the usual four.
Such an unusual kind of bonding offended the sensibilities of other chemists, most of all Herbert Brown, who was awarded a Nobel prize in 1979 for his work on boron compounds. In 1961 he opened the “non-classical ion” war with a paper dismissing proposals for these structures as lacking “the same care and same sound experimental basis as that which is customary in other areas of experimental organic chemistry”. The ensuing arguments raged for two decades in what Brown called a “holy war”. “By the time the controversy sputtered to a halt in the early 1980s”, says philosopher of chemistry William Goodwin of Rowan University in New Jersey, “a tremendous amount of intellectual energy, resources, and invective had been invested in resolving an issue that was crucial neither to progress in physical organic chemistry generally nor to the subfield of carbocation chemistry.” Both sides accused the rival theory of being ‘soft’ – able to fit any result, and therefore not truly scientific.
Brown and his followers didn’t object in principle to the idea of electrons being smeared over more than two atomic nuclei – that happens in benzene, after all. But they considered the non-classical ion an unnecessary and faddish imposition for an effect that could be explained by less drastic, more traditional means. The argument was really about how to interpret the experiments that bore on the matter, and it shows that, particularly in chemistry, it could – and still can – be very hard to apply a kind of Popperian falsification to distinguish between rival theories. Goodwin thinks that the non-classical ion dispute was provoked and sustained by ambiguities built into the way organic chemists try to understand and describe the mechanisms of their reactions. “Organic chemists have sacrificed unambiguous explanation for something much more useful – a theory that helps them make plausible, but fallible, assessments of the chemical behavior of novel, complex compounds”, he says. As a result, chemistry is naturally prone to arguments that get resolved only when one side or the other runs out of energy – or dies.
The argument dragged on until eventually most chemists – though never Brown – accepted that these ions were real. Ironically, in the course of the debate both Winstein and Brown implied to a young Hungarian émigré chemist, George Olah, that his claim to have isolated a relatively long-lived carbocation – a development that ultimately helped resolve the issue – was unwise. This was another ‘reaction that couldn’t happen’, they advised: the ions were too unstable. But Olah was right, and his work on carbocations earned him a Nobel prize in 1994.