Saturday, March 24, 2012
On looking good
I seem to have persuaded Charlotte Raven, and doubtless now many others too, that Yves Saint Laurent’s Forever Youth Liberator skin cream is the real deal. I don’t know if my ageing shoulders can bear the weight of this responsibility. All I can say in my defence is that it would have been unfair to YSL to allow my instinct to scoff to override my judgement of the science – on which, more here. But as Ms Raven points out, ultimately I reserve judgement. Where I do not reserve judgement is in saying, with what I hope does not seem like cloying gallantry, that it is hard to see why she would feel the need to consider shelling out for this gloop even if it does what it says on the tin. Or rather, it is very easy to understand why she feels the pressure to do so, given the ways of the world, but also evident that she has every cause to ignore it. No, look, I’m not saying that if she didn’t look so good without makeup and rejuvenating creams then she’d be well advised to start slapping them on, it’s just that… Oh dear, this is a minefield.
Thursday, March 22, 2012
Wonders of New York
Here’s a piece about an event in NYC in which I took part at the end of last month.
______________________________________________________________________
It was fitting that the ‘Wonder Cabinet’, a public event at a trendy arthouse cinema in New York at the end of February, should have been opened by Lawrence Weschler, director of New York University’s Institute for the Humanities, under whose auspices the affair was staged. For Weschler’s Pulitzer-shortlisted Mr Wilson’s Cabinet of Wonder (1995) tells the tale of David Wilson’s bizarre Museum of Jurassic Technology in Los Angeles, where you can never quite be sure if the exhibits are factual or not (they usually are, after a fashion). And that was very much the nature of what followed in the ten-hour marathon that Weschler introduced.
Was legendary performance artist Laurie Anderson telling the truth, for instance, about her letter to Thomas Pynchon requesting permission to create an opera based on Gravity’s Rainbow? Expecting no reply from the famously reclusive author, she was surprised to receive a long epistle in which Pynchon proclaimed his admiration for her work and offered his enthusiastic endorsement of her plan. There was just one catch: he insisted that it be scored for solo banjo. Anderson took this to be a uniquely gracious way of saying no. I didn’t doubt her for a second.
The day had a remarkably high quota of such head-scratching and jaw-dropping revelations, the intellectual equivalent of those celluloid rushes in the movies of Coppola, Kubrick and early Spielberg. Even if you thought, as I did, that you knew a smattering about bowerbirds – the male of which constructs an elaborate ‘bower’ of twigs and decorates it with scavenged objects to lure a female – seeing them in action during a talk by ornithologist Gail Patricelli of the University of California at Davis was spectacular. Each species has its own architectural style and, most strikingly, its own colour scheme: blue for the Satin Bowerbird (which went to great lengths to steal the researchers’ blue toothbrushes), bone-white and green for the Great Bowerbird. Some of these constructions are exquisite, around a metre in diameter. But all that labour is only the precursor to an elaborate mating ritual in which the most successful males exhibit an enticing boldness without tipping into scary aggression. This means that female choice selects for a wide and subtle range of male social behaviour, among which are sensitivity and responsiveness to the potential mate’s own behavioural signals. And all this for the most anti-climactic of climaxes, an act of copulation that lasts barely two seconds.
Or take the octopus described by virtual-reality visionary (and multi-instrumentalist) Jaron Lanier, which was revealed by secretly installed CCTV to be the mysterious culprit stealing the rare crabs from the aquarium in which it was kept. The octopus would climb out of its tank (it could survive out of water for short periods), clamber into the crabs’ container, help itself and return home to devour the spoils and bury the evidence. And get this: it closed the crab-tank lid behind it to hide its tracks – and in doing so, offered what might be interpreted as evidence for what developmental psychologists call a ‘theory of mind’, an ability to ascribe autonomy and intention to other beings. Octopuses and squid would rule the world, Lanier claimed, if it were not for the fact that they have no childhood: abandoned by the mother at birth, the youngsters are passed on none of the learned culture of the elders, and so (unlike bowerbirds, say) must always begin from scratch.
All this orbited now close to, now more distant from the raison d’être of the event, which was philosopher David Rothenberg’s new book Survival of the Beautiful, an erudite argument for why we should take seriously the notion that non-human creatures have an aesthetic sense that exceeds the austere exigencies of Darwinian adaptation. It’s not just that the bowerbird does more than seems strictly necessary to get a mate (although what is ‘necessary’ is open to debate); the expression of preferences by the female seems as elaborately ineffable and multi-valent as anything in human culture. Such reasoning of course stands at risk of becoming anthropomorphic, a danger fully appreciated by Rothenberg and the others who discussed instances of apparent creativity in animals. But it’s conceivable that this question could be turned into hard science. Psychologist Ofer Tchernichovski of the City University of New York hopes, for example, to examine whether birdsong uses the same musical tricks as human music (basically, the creation and subsequent violation of expectation) to elicit emotion – say, by measuring in the songbirds the physiological indicators of a change in arousal, such as heartbeat and release of the ‘pleasure’ neurotransmitter dopamine, that betray an emotional response in humans. Even if you want to quibble over what this will say about the bird’s ‘state of mind’, the question is definitely worth asking.
But it was the peripheral delights of the event, as much as the exploration of Rothenberg’s thesis, that made it a true cabinet of wonders. Lanier elicited another ‘Whoa!’ moment by explaining that the use of non-human avatars in virtual reality – putting people in charge of a lobster’s body, say – amounts to an exploration of the pre-adaptation of the human brain: the kinds of somatic embodiments that it is already adapted to handle, some of which might conceivably be the shapes into which we will evolve. This is a crucial aspect of evolution: it’s not so much that a mutation introduces new shapes and functions, but that it releases a potential that is already latent in the ancestral organism. Beyond this, said Lanier, putting individuals into more abstract avatars can be a wonderful educational tool by engaging mental resources beyond abstract reasoning, just as the fingers of an improvising pianist effortlessly navigate a route through harmonic space that would baffle the logical mind. Children might learn trigonometry much faster by becoming triangles; chemists will discover what it means to be a molecule.
Meanwhile, the iPad apps devised by engagingly modest media artist Scott Snibbe provided the best argument I’ve so far seen for why this device is not simply a different computer interface but a qualitatively new form of information technology, in both cognitive and creative terms. No wonder it was to Snibbe that Björk (“an angel”, he confided) went to realise her multimedia ‘album’ Biophilia. Whether this interactive project represents the future of music or an elaborate game remains to be seen; for me, Snibbe’s guided tour of its possibilities evoked a sensation of tectonic shift akin to the one I vaguely recall on first being told that there was this thing on the internet called a ‘search engine’.
But the prize for the most arresting shaggy dog story went again to Anderson. Her attempts to teach her dog to communicate and play the piano were already raised beyond the status of the endearingly kooky by the profound respect in which she evidently held the animal. But in the course of her recounting the dog’s perplexed discovery during a mountain hike that death, in the form of vultures, could descend from above – another 180 degrees of danger to consider – we were all suddenly reminded that we were in downtown Manhattan, just a few blocks from the decade-old hole in the financial district. And we felt not so far removed from these creatures at all.
______________________________________________________________________
Wednesday, March 21, 2012
The beauty of an irregular mind
Here’s the news story on this year’s Abel Prize that I’ve just written for Nature. You’ve always got to take a deep breath before diving into the Abel. But it is fun to attempt it.
___________________________________________________________
Maths prize awarded for elucidating the links between numbers and information.
An ‘irregular mind’ is what has won this year’s Abel Prize, one of the most prestigious awards in mathematics, for Endre Szemerédi of the Alfred Rényi Institute of Mathematics in Budapest, Hungary.
This is how Szemerédi was described in a book published two years ago to mark his 70th birthday, which added that “his brain is wired differently than for most mathematicians.”
Szemerédi has been awarded the prize, worth 6 million Norwegian kroner (about US$1m), “for his fundamental contributions to discrete mathematics and theoretical computer science, and in recognition of the profound and lasting impact of these contributions on additive number theory and ergodic theory”, according to the Norwegian Academy of Science and Letters, which instituted the prize as a kind of ‘mathematics Nobel’ in 2003.
Mathematician Timothy Gowers of Cambridge University, who has worked in some of the same areas as Szemerédi, says that he has “absolutely no doubt that the award is extremely appropriate.”
Nils Stenseth, president of the Norwegian Academy of Science, who announced the award today, says that Szemerédi’s work shows how research that is purely curiosity-driven can turn out to have important practical applications. “Szemerédi’s work supplies some of the basis for the whole development of informatics and the internet”, he says. “He showed how number theory can be used to organize large amounts of information in efficient ways.”
Discrete mathematics deals with mathematical structures that are made up of discrete entities rather than smoothly varying ones: for example, integers, graphs (networks), permutations and logic operations. Crudely speaking, it entails a kind of digital rather than analogue maths, which helps to explain its relationship to aspects of computer theory.
Szemerédi was spotted and mentored by another Hungarian pioneer in this field, Paul Erdös, who is widely regarded as one of the greatest mathematicians of the 20th century – even though Szemerédi began his training not in maths at all, but at medical school.
One of his first successes was a proof of a conjecture made in 1936 by Erdös and his colleague Paul Turán concerning the properties of integers. They aimed to establish criteria for whether a series of integers contains arithmetic progressions – sequences of integers that differ by the same amount, such as 3, 6, 9…
In 1975 Szemerédi showed that any fixed fraction of a long enough string of integers must contain arithmetic progressions of any given length [1]. In other words, if you had to pick, say, 1 percent of all the numbers between 1 and some very large number N, you can’t avoid selecting some arithmetic progressions. This was the Erdös-Turán conjecture, now known as Szemerédi’s theorem.
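For anyone who wants the formal version – this is the standard textbook statement rather than anything from the prize citation – the theorem says that any set A of positive integers with positive ‘upper density’ contains arithmetic progressions of every finite length:

$$\limsup_{N \to \infty} \frac{|A \cap \{1, \dots, N\}|}{N} > 0
\;\;\Longrightarrow\;\;
\text{for every } k \text{ there exist } a, d \geq 1 \text{ such that } a,\, a+d,\, a+2d,\, \dots,\, a+(k-1)d \in A.$$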
The result connected work on number theory to graph theory, the mathematics of networks of connected points, which Erdös had also studied. The relationship between graphs and permutations of numbers is most famously revealed by the four-colour theorem, which states that it is possible to colour any map (considered as a network of boundaries) with four colours such that no two regions with the same colour share a border. The problem of arithmetic progressions becomes analogous if one imagines giving the numbers in a progression the same colour.
Meanwhile, relationships between number sequences become relevant to computer science via so-called sorting networks, which are hypothetical networks of wires, like parallel train tracks, that sort strings of numbers into numerical sequence by making pairwise comparisons and then shunting them from one wire to another. Szemerédi and his Hungarian collaborators Miklós Ajtai and János Komlós discovered an optimal sorting network for parallel processing in 1983 [2], one of several of Szemerédi’s contributions to theoretical computer science.
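To make the idea of a sorting network concrete, here is a deliberately simple sketch of my own (an ‘odd-even transposition’ network, not the far more efficient Ajtai-Komlós-Szemerédi construction): the comparators are fixed in advance, independent of the data, which is what suits such networks to parallel hardware. This toy version needs on the order of n² comparators, whereas the 1983 construction gets by with O(n log n).

    # Toy comparator network: odd-even transposition sort on n wires.
    # Each comparator compares two fixed wire positions and swaps them if out of order.

    def odd_even_transposition_network(n):
        """Return the comparators (pairs of wire indices) of the network."""
        comparators = []
        for round_number in range(n):
            start = round_number % 2            # alternate 'even' and 'odd' rounds
            comparators.extend((i, i + 1) for i in range(start, n - 1, 2))
        return comparators

    def run_network(values, comparators):
        """Apply each fixed comparator in turn: the wiring never looks at the data."""
        values = list(values)
        for i, j in comparators:
            if values[i] > values[j]:
                values[i], values[j] = values[j], values[i]
        return values

    data = [5, 2, 9, 1, 7, 3]
    print(run_network(data, odd_even_transposition_network(len(data))))   # [1, 2, 3, 5, 7, 9]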
When mathematicians discuss Szemerédi’s work, the word ‘deep’ often emerges – a reflection of the connections it often makes between apparently different fields. “He shows the advantages of working on a whole spectrum of problems”, says Stenseth.
For example, Szemerédi’s theorem brings number theory in contact with the theory of dynamical systems: physical systems that evolve in time, such as a pendulum or a solar system. As Israeli-American mathematician Hillel Furstenberg demonstrated soon after the theorem was published [3], it can be derived in a different way by considering how often a dynamical system returns to a particular state: an aspect of so-called ergodic behaviour, which relates to how thoroughly a dynamical system explores the space of possible states available to it.
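Furstenberg’s ‘multiple recurrence’ result can be stated quite compactly (again, this is the standard formulation rather than anything particular to this story): if a transformation T preserves a probability measure μ, then any set of states A with μ(A) > 0 keeps returning to itself along evenly spaced times, i.e. for every k there is some n ≥ 1 with

$$\mu\big(A \,\cap\, T^{-n}A \,\cap\, T^{-2n}A \,\cap\, \cdots \,\cap\, T^{-(k-1)n}A\big) > 0,$$

and this turns out to be equivalent to Szemerédi’s theorem about arithmetic progressions.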
Gowers says that many of Szemerédi’s results, including his celebrated theorem, are significant not so much for what they prove as for the fertile ideas developed in the course of the proof. For example, Szemerédi’s theorem made use of another of his key results, called the Szemerédi regularity lemma, which has proved central to the analysis of certain types of graphs.
References
1. E. Szemerédi, "On sets of integers containing no k elements in arithmetic progression", Acta Arithmetica 27: 199–245 (1975).
2. M. Ajtai, J. Komlós & E. Szemerédi, "An O(n log n) sorting network", Proceedings of the 15th Annual ACM Symposium on Theory of Computing, pp. 1–9 (1983).
3. H. Fürstenberg, "Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions", J. D’Analyse Math. 31: 204–256 (1977).
___________________________________________________________
Friday, March 16, 2012
Genetic origami
Here’s another piece from BBC Future. Again, for non-UK readers the final version is here.
_______________________________________________________________
What shape is your genome? It sounds like an odd question, for what has shape got to do with genes? And therein lies the problem. Popular discourse in this age of genetics, when the option of having your own genome sequenced seems just round the corner, has focused relentlessly on the image of information imprinted into DNA as a linear, four-letter code of chemical building blocks. Just as no one thinks about how the data in your computer is physically arranged in its microchips, so our view of genetics is largely blind to the way the DNA strands that hold our genes are folded up.
But here’s an instance where an older analogy with computers might serve us better. In the days when data was stored on magnetic tape, you had to worry about whether the tape could actually be fed over the read-out head: if it got tangled, you couldn’t get at the information.
In living cells, DNA certainly is tangled – otherwise the genome couldn’t be crammed inside. In humans and other higher organisms, from insects to elephants, the genetic material is packaged up in several chromosomes.
The issue isn’t, however, simply whether or not this folding leaves genes accessible for reading. For the fact is that there is a kind of information encoded in the packaging itself. Because genes can be effectively switched off by tucking them away, cells have evolved highly sophisticated molecular machinery for organizing and altering the shape of chromosomes. A cell’s behaviour is controlled by manipulations of this shape, as much as by what the genes themselves ‘say’. That’s clear from the fact that genetically identical cells in our body carry out completely different roles – some in the liver, some in the brain or skin.
The fact that these specialized cells can be returned to a non-specialized state that performs any function – as shown, for example, by the cloning of Dolly the sheep from a mammary cell – indicates that the genetic switching induced by shape changes and other modifications of our chromosomes is at least partly reversible. The medical potential of getting cells to re-commit to new types of behaviour – in cloning, stem-cell therapies and tissue engineering – is one of the prime reasons why it’s important to understand the principles behind the organization of folding and shape in our chromosomes.
In shooting at that goal, Tom Sexton, Giacomo Cavalli and their colleagues at the Institute of Human Genetics in Montpellier, France, in collaboration with a team led by Amos Tanay of the Weizmann Institute of Science in Israel, have started by looking at the fruitfly genome. That’s because it is smaller and simpler than the human genome (but not too small or simple to be irrelevant to it), and also because the fly is genetically the best studied and understood of higher creatures. A new paper unveiling a three-dimensional map of the fly’s genome is therefore far from the arcane exercise it might seem – it’s a significant step in revealing how genes really work.
Scientists usually explore the shapes of molecules using techniques for taking microscopic snapshots: electron microscopes themselves, as well as crystallography, which considers how beams of X-rays, electrons or neutrons are reflected by molecules stacked into crystals. But these methods are hard or impossible to apply to molecular structures as complex as chromosomes. Sexton and colleagues use a different approach: a method that reveals which parts of a genome sit close together. This allows the entire map to be patched together piece by piece.
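To give a flavour of what such proximity data look like – and this is a toy sketch of my own, not the Montpellier team’s actual analysis – imagine a table counting how often each pair of genome segments is caught in contact. Physical domains then appear as blocks of high counts, and candidate boundaries as the positions where contacts between the upstream and downstream neighbourhoods drop away:

    # Toy illustration (not the authors' method): locate a domain boundary in a
    # symmetric contact-frequency matrix by finding where cross-neighbourhood
    # contacts are weakest.
    import numpy as np

    def boundary_scores(contacts, window=5):
        """Lower cross-window contact means a more likely domain boundary."""
        n = contacts.shape[0]
        scores = np.full(n, np.nan)
        for i in range(window, n - window):
            scores[i] = contacts[i - window:i, i:i + window].mean()
        return scores

    # Fake data: two 50-bin domains with strong internal contacts, weak between.
    rng = np.random.default_rng(0)
    contacts = rng.poisson(2, size=(100, 100)).astype(float)
    contacts[:50, :50] += 20
    contacts[50:, 50:] += 20
    contacts = (contacts + contacts.T) / 2     # contact maps are symmetric

    print(np.nanargmin(boundary_scores(contacts)))   # ~50: the domain boundary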
It’s no surprise that the results show the fruitfly genome to be carefully folded and organized, rather than just scrunched up any old how. But the findings put flesh on this skeletal picture. The chromosomes are organized on many levels, rather like a building or a city. There are ‘departments’ – clusters of genes – that do particular jobs, sharply demarcated from one another by boundaries somewhat equivalent to gates or stairwells, where ‘insulator’ proteins clinging to the DNA serve to separate one domain from the next. And inactive genes are often grouped together, like disused shops clustered in a run-down corner of town.
What’s more, the distinct physical domains tend to correspond with parts of the genome that are tagged with chemical ‘marker’ groups, which can modify the activity of genes, rather as if buildings in a particular district of a city all have yellow-painted doors. There’s evidently some benefit for the smooth running of the cell in having a physical arrangement that reflects and reinforces this chemical coding.
It will take a lot more work to figure out how this three-dimensional organization controls the activity of the genes. But the better we can get to grips with the rules, the more chance we will have of imposing our own plans on the genome – silencing or reawakening genes not, as in current genetic engineering, by cutting, pasting and editing the genetic text, but by using origami to hide or reveal it.
Reference: T. Sexton et al., Cell 148, 458-472 (2012); doi:10.1016/j.cell.2012.01.010
_______________________________________________________________
Tuesday, March 13, 2012
Under the radar
I have begun to write a regular column for a new BBC sci/tech website called BBC Future. The catch is that, as it is funded (not for profit) by a source other than the licence fee, you can’t view it from the UK. If you’re not in the UK, you should be able to see the column here. It is called Under the Radar, and will aim to highlight papers/work that, for one reason or another (as described below), would be likely to be entirely ignored by most science reporters. The introductory column, the pre-edited version of which is below, starts off by setting out the stall. I have in fact had three or four pieces published there so far, but will space them out a little over the next few posts.
_____________________________________________________________________
Reading science journalism isn’t, in general, an ideal way to learn about what goes on in science. Almost by definition, science news dwells on the exceptional, on the rare advances that promise (even if they don’t succeed) to make a difference to our lives or our view of the universe. But while it’s always fair to confront research with the question ‘so what?’, and while you can hardly expect anyone to be interested in the mundane or the obscure, the fact is that behind much if not most of what is done by scientists lies a good, often extraordinary, story. Yet unless they happen to stumble upon some big advance (or at least, an advance that can be packaged and sold as such), most of those stories are never told.
They languish beneath the forbidding surface of papers published by specialized journals, and you’d often never guess, to glance at them, that they have any connection to anything useful, or that they harbour anything to spark the interest of more than half a dozen specialists in the world. What’s more, science is then presented as a succession of breakthroughs, with little indication of the difficulties that intervene between fundamental research and viable applications, or between a smart idea and a proof that it’s correct. In contrast, this column will aim to unearth some of those buried treasures and explain why they’re worth polishing.
Another reason why much of the interesting stuff gets overlooked is that good ideas rarely succeed all at once. Many projects get passed over because at first they haven’t got far enough to cross a reporter’s ‘significance threshold’, and then when the work finally gets to a useful point, it’s deemed no longer news because much of it has been published already.
Take a recent report in the Germany-based chemistry journal Angewandte Chemie by Shaoyi Jiang, a chemical engineer at the University of Washington in Seattle, and his colleagues. They’ve made an antimicrobial polymer coating which can be switched between a state in which it kills bacteria (eliminating 99.9% of sprayed-on E. coli) and one where it shrugs off the dead cells and resists the attachment of new ones. That second trick is a valuable asset for a bug-killing film, since even dead bacteria can trigger inflammation.
The thing is, they did this already three years ago. But there’s a key difference now. Before, the switching was a one-shot affair: once the bacteria were killed and removed, you couldn’t get the bactericidal film back. So if more bacteria do slowly get a foothold, you’re stuffed.
That’s why the researchers have laboured to make their films fully reversible, which they’ve achieved with some clever chemistry. They make a polymer layer sporting dangling molecular ‘hairs’ like a carpet, each hair ending in a ring-shaped molecule deadly to bacteria. If the surface is moistened with water, the ring springs open, transformed into a molecular group to which bacteria can’t easily stick. Just add a weak acid – acetic acid, basically vinegar – and the ring snaps closed again, regenerating a bactericidal surface as potent as before.
This work fits with a growing trend to make materials ‘smart’ – able to respond to changes in their environment. Time was when a single function was all you got: a non-adhesive ‘anti-fouling’ film, say, or one that resists corrosion or reduces light reflection (handy for solar cells). But increasingly, we want materials that do different things at different times or under different conditions. Now there’s a host of such protean substances: materials that can be switched between transparent and mirror-like, say, or between water-wettable and water-repelling.
Another attraction of Jiang’s coating is that these switchable molecular carpets can in principle be coated onto a wide variety of different surfaces – metal, glass, plastics. The researchers say that it might be used on hospital walls or on the fabric of military uniforms to combat biological weapons. That sort of promise is generally where the journalism stops and the hard work begins, to turn (or not) this neat idea into mass-produced materials that are reliable, safe and affordable.
Reference: Z. Cao et al., Angewandte Chemie International Edition online publication doi:10.1002/anie.201106466.
_____________________________________________________________________
Thursday, March 08, 2012
Science and politics cannot be unmixed
One of the leaders in this week’s Nature is mine; here’s the original draft.
____________________________________________________
Paul Nurse will not treat his presidency of the Royal Society as an ivory tower. He has made it clear that he considers that scientists have duties to fulfil and battles to fight beyond the strictly scientific, for example to “expose the bunkum” of politicians who abuse and distort science. This social engagement was evident last week when Nurse delivered the prestigious Dimbleby Lecture, instituted in memory of the British broadcaster Richard Dimbleby. Previous scientific incumbents have included George Porter, Richard Dawkins and Craig Venter.
Nurse identified support for the National Health Service, the need for an immigration policy that attracts foreign scientists, and inspirational science teaching in primary education as some of the priorities for British scientists. These and many of the other issues that he raised, such as increasing scientists’ interactions with industry, commerce and the media, and resisting politicization of climate-change research, are relevant around the globe.
All the more reason not to misinterpret Nurse’s insistence on a separation of science and politics: as he put it more than once, “first science, then politics”. What Nurse rightly warned against here is the intrusion of ideology into the interpretation and acceptance of scientific knowledge, as for example with the Soviet Union’s support of the anti-Mendelian biology of Trofim Lysenko. Given recent accounts of political (and politically endorsed commercial) interference in climate research in the US (see Nature 465, 686; 2010), this is a timely reminder.
But it is all too easy to apply this formula too simplistically. For example, Nurse also cited the rejection of Einstein’s “Jewish” relativistic physics by Hitler. But that is not quite how it was. “Jewish physics” was a straw man invented by the anti-Semitic and pro-Nazi physicists Johannes Stark and Philipp Lenard, partly because of professional jealousies and grudges. The Nazi leaders were, however, largely indifferent to what looked like an academic squabble, and in the end lost interest in Stark and Lenard’s risible “Aryan physics” because they needed a physics that actually worked.
Therein lies one reason to be sceptical of the common claim, repeated by Nurse, that science can only flourish in a free society. Historians of science in Nazi Germany such as Kristie Macrakis (in Surviving the Swastika; 1993) have challenged this assertion, which is not made true simply because we would like it to be so. Authoritarian regimes are perfectly capable of putting pragmatism before ideology. The scientific process itself is not impeded by state control in China – quite the contrary – and the old canard that Chinese science lacks innovation and daring is now transparently nonsense. During the Cold War, some Soviet science was vibrant and bold. Even the most notorious example of state repression of science – the trial of Galileo – is apt to be portrayed too simplistically as a conflict of faith and reason rather than a collision of personalities and circumstances (none of which excuses Galileo’s scandalous persecution).
There is a more edifying lesson to be drawn from Nazi Germany that bears on Nurse’s themes. This is that, while political (and religious) ideology has no place in deciding scientific questions, the practice of doing science is inherently political. In that sense, science can never come before politics. Scientists enter into a social contract, not least because they are not their own paymasters. Much if not most scientific research has social and political implications, often broadly visible from the outset. In times of economic and political crisis (like these), scientists must respond intellectually and professionally, and not merely by safeguarding their funding, important though that is.
The consequences of imagining that science can remain aloof from politics became acutely apparent in Germany in 1933, when the consensus view that politics was, as Heisenberg put it, an unseemly “money business” meant that most scientists saw no reason to mount concerted resistance to the expulsion of Jewish colleagues – regarded as a political rather than a moral matter. This ‘apolitical’ attitude can now be seen as a convenient myth that led to acquiescence and made it easy for the German scientists to be manipulated. It would be naïve to imagine that only totalitarianism could create such a situation.
The rare and most prominent exception to ‘apolitical’ behaviour was Einstein, whose outspokenness dismayed even his principled friends Max Planck and Max von Laue. “I do not share your view that the scientist should observe silence in political matters”, he told them. “Does not such restraint signify a lack of responsibility?” There was no hint of such a lack in Nurse’s talk. But we must take care to distinguish the political immunity of scientific reasoning from the political dimensions and obligations of doing science.
____________________________________________________
Wednesday, March 07, 2012
The unavoidable cost of computation
Here’s the pre-edited version of my latest news story for Nature. I really liked this work. I was lucky to meet Rolf Landauer before he died, and discovered him to be one of those people who are so genial, wry and unaffected that you aren’t awed by how phenomenally clever they are. He was also extremely helpful when I was preparing The Self-Made Tapestry, setting me straight on how notions about dissipative structures arose, a history in which the credit is sometimes assigned in the wrong places. Quite aside from that, it is worth making clear that this is in essence the first experimental proof of why Maxwell’s demon can’t do its stuff.
________________________________________________
Physicists have proved that forgetting is the undoing of Maxwell’s demon.
Forgetting always takes a little energy. A team of scientists in France and Germany has now demonstrated exactly how little.
Eric Lutz of the University of Augsburg and his colleagues have found experimental proof of a long-standing claim that erasing information can never be done for free. They present their result in Nature today [1].
In 1961, physicist Rolf Landauer argued that resetting one bit of information – say, setting a binary digit to zero regardless of whether it is initially 1 or 0 – must release at least a certain minimum amount of heat, proportional to the temperature, into the environment.
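For concreteness, that minimum works out as kT ln 2 per erased bit, or roughly 3 × 10⁻²¹ joules at room temperature. A back-of-envelope check, in illustrative Python of my own rather than anything from the paper:

import math

k_B = 1.380649e-23      # Boltzmann constant, joules per kelvin
T = 300.0               # an assumed room temperature, kelvin
landauer = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer:.2e} joules per bit")   # about 2.9e-21 J
print(f"...which is about {landauer / 1.602e-19:.3f} eV")              # about 0.018 eV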
“Erasing information compresses two states into one”, explains Lutz, currently at the Free University of Berlin. “It is this compression that leads to heat dissipation.”
Landauer’s principle implies a limit on how low the energy dissipation – and thus consumption – of a computer can be. Resetting bits, or equivalent processes that erase information, are essential for operating logic circuits. In effect, these circuits can only work if they can forget – for how else could they perform a second calculation once they have done a first?
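One way to see the forgetting at work: a conventional AND gate maps four input states onto only two outputs, so the inputs cannot in general be reconstructed from the result. A toy illustration of my own, not taken from the paper:

# Several distinct inputs collapse onto the same output, so the gate
# cannot be run backwards: information has been erased.
for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a}, {b}) = {a & b}")
# Three of the four input pairs give 0; that lost distinction is the
# erased information which, by Landauer's argument, must cost heat.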
The work of Lutz and colleagues now appears to confirm that Landauer’s theory was right. “It is an elegant laboratory realization of Landauer's thought experiments”, says Charles Bennett, an information theorist at IBM Research in Yorktown Heights, New York, and Landauer’s former colleague.
“Landauer's principle has been kicked about by theorists for half a century, but to the best of my knowledge this paper describes the first experimental illustration of it”, agrees Christopher Jarzynski, a chemical physicist at the University of Maryland.
The result doesn’t just verify a practical limit on the energy requirement of computers. It also confirms the theory that safeguards one of the most cherished principles of physical science: the second law of thermodynamics.
This law states that heat will always move from hot to cold. A cup of coffee on your desk always gets cooler, never hotter. It’s equivalent to saying that entropy – the amount of disorder in the universe – always increases.
In the nineteenth century, the Scottish scientist James Clerk Maxwell proposed a scenario that seemed to violate this law. In a gas, hot molecules move faster. Maxwell imagined a microscopic intelligent being, later dubbed a demon, that would open and shut a trapdoor between two compartments to selectively trap ‘hot’ molecules in one of them and cool ones in the other, defying the tendency for heat to spread out and entropy to increase.
Landauer’s theory offered the first compelling reason why Maxwell’s demon couldn’t do its job. The demon would need to erase (‘forget’) the information it used to select the molecules after each operation, and this would release heat and increase entropy, more than counterbalancing the entropy lost by the demon’s legerdemain.
In 2010, physicists in Japan showed that information can indeed be converted to energy by selectively exploiting random thermal fluctuations, just as Maxwell’s demon uses its ‘knowledge’ of molecular motions to build up a reservoir of heat [2]. But Jarzynski points out that the work also demonstrated that selectivity requires the information about fluctuations to be stored.
He says that the experiment of Lutz and colleagues now completes the argument against using Maxwell’s demon to violate the second law, because it shows that “the eventual erasure of this stored information carries a thermodynamic penalty” – which is Landauer's principle.
To test this principle, the researchers created a simple two-state bit: a single microscopic silica particle, 2 micrometres across, held in a ‘light trap’ by a laser beam. The trap contained two ‘valleys’ where the particle could rest, one representing a 1 and the other a 0. The particle could jump between the two if the energy ‘hill’ separating them was not too high.
The researchers could control the height of this hill by adjusting the power of the laser. And they could ‘tilt’ the two valleys to tip the bead into one of them, resetting the bit, by moving the physical cell containing the bead slightly out of the laser’s focus.
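In other words, the bead sits in a double-well energy landscape whose central barrier and tilt can be dialled up and down. A generic toy potential of that kind, purely illustrative and not the actual optical potential reported in the paper:

import numpy as np

def double_well(x, barrier=1.0, tilt=0.0):
    # Two minima near x = -1 and x = +1, separated by a central 'hill'
    # of height ~barrier; a nonzero tilt favours one well over the other.
    return barrier * (x**2 - 1.0)**2 + tilt * x

x = np.linspace(-1.5, 1.5, 7)
print(np.round(double_well(x, barrier=1.0, tilt=0.3), 3))
# Lowering 'barrier' lets the bead hop between wells (the bit can flip);
# applying a 'tilt' pushes it into one chosen well (the bit is reset).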
By very accurately monitoring the position and speed of the particle during a cycle of switching and resetting the bit, they could calculate how much energy was dissipated. Landauer’s limit applies only when the resetting is done infinitely slowly; otherwise, the energy dissipation is greater.
Lutz and colleagues found that, as they used longer switching cycles, the dissipation got smaller, but that rather than falling to zero it levelled off at a plateau equal to the amount predicted by Landauer.
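So the heat per erasure falls as the protocol slows, but it bottoms out at the bound rather than at zero. A schematic sketch of that behaviour, in which the 1/τ form of the finite-time overhead is my own assumption for illustration rather than the paper’s fitted curve:

import math

k_B, T = 1.380649e-23, 300.0
landauer = k_B * T * math.log(2)

def heat_per_erasure(tau, overhead=5e-21):
    # Assumed schematic form: the Landauer bound plus an excess that
    # shrinks as the cycle time tau (arbitrary units) gets longer.
    return landauer + overhead / tau

for tau in (1, 4, 16, 64):
    print(f"tau = {tau:3d}: {heat_per_erasure(tau):.2e} J  (bound {landauer:.2e} J)")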
At present, other inefficiencies mean that computers dissipate at least a thousand times more energy per logic operation than the Landauer limit. This energy dissipation heats up the circuits, and imposes a limit on how small and densely packed they can be without melting. “Heat dissipation in computer chips is one of the major problems hindering their miniaturization”, says Lutz.
But this energy consumption is getting ever lower, and Lutz and colleagues say that it’ll be approaching the Landauer limit within the next couple of decades. Their experiment confirms that, at that point, further improvements in energy efficiency will be prohibited by the laws of physics. “Our experiment clearly shows that you cannot go below Landauer’s limit”, says Lutz. “Engineers will soon have to face that”.
Meanwhile, in fledgling quantum computers, which exploit the rules of quantum physics to achieve greater processing power, this limitation is already being confronted. “Logic processing in quantum computers already is well within the Landauer regime, and one has to worry about Landauer's principle all the time”, says physicist Seth Lloyd of the Massachusetts Institute of Technology.
References
1. Bérut, A. et al. Nature 483, 187-189 (2012).
2. Toyabe, S., Sagawa, T., Ueda, M., Muneyuki, E. & Sano, M. Nat. Phys. 6, 988-992 (2010).