Here’s my latest news story for Nature.
______________________________________________________________
A tiny island in the Pacific was already using a kind of binary arithmetic in the Middle Ages
Binary arithmetic, the basis of all digital computation today, is usually said to have been invented at the start of the eighteenth century by the German mathematician Gottfried Leibniz. But a new study shows that a kind of binary was already in use three hundred years earlier among the people of the tiny Pacific island of Mangareva in French Polynesia.
The discovery, made by consulting historical records of the now almost wholly assimilated Mangarevan culture and language and reported in the Proceedings of the National Academy of Sciences [1], suggests that some of the advantages of the binary system adduced by Leibniz might create a cognitive motivation for this system to arise spontaneously even in a society without advanced science and technology.
Pure binary arithmetic works in base 2 rather than the conventional base 10 (the latter quite possibly a consequence of counting on ten fingers). This means that numbers are enumerated as powers of 2: instead of units, tens, hundreds (10^2) and thousands (10^3), the digits of a binary number refer to 1 (2^0), 2 (2^1), 4 (2^2), 8 (2^3) and so on.
Every whole number can be represented in this way using just 1s and 0s, which is why they can be encoded in computers in a system of on-off electrical pulses or switches. The number 13 in binary is 1101 (2^3 + 2^2 + 0×2 + 1), for example.
Leibniz pointed out in 1703 that to do simple arithmetic in binary, such as addition and multiplication, you don’t need to remember a whole lot of ‘facts’ about numbers, such as 5+4=9 or 6x7=42. Instead, you need only apply a few simple rules. For addition, say, you just add the 1s and 0s, remembering that 1+1 gives 0, carry 1 to the next position: 100+101=1001.
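Not part of the published story, but for anyone who wants to play with this: here is a minimal Python sketch of both ideas – writing a number in binary and adding two binary numbers using nothing but the carry rule. The code and function names are my own illustration, not anything from the paper.

```python
# Illustrative sketch only: decimal-to-binary conversion and binary addition
# using nothing but the digit-by-digit carry rule (1 + 1 = 0, carry 1).

def to_binary(n):
    """Write a whole number as a string of binary digits (powers of 2)."""
    digits = ""
    while n > 0:
        digits = str(n % 2) + digits   # remainder gives the current binary digit
        n //= 2                        # move on to the next power of 2
    return digits or "0"

def add_binary(a, b):
    """Add two binary strings using only the simple rules Leibniz described."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = "", 0
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result = str(total % 2) + result   # the digit to write down
        carry = total // 2                 # 1 + 1 = 0, carry 1
    return ("1" + result) if carry else result

print(to_binary(13))             # -> 1101, i.e. 8 + 4 + 0 + 1
print(add_binary("100", "101"))  # -> 1001, i.e. 4 + 5 = 9
```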
The downside to binary is that large numbers require lots of digits. But according to psychologists Andrea Bender and Sieghard Beller of the University of Bergen in Norway, the Mangarevan people found an ingenious answer to that, which they were apparently using even before 1450 AD.
Mangareva is a volcanic island first settled around 500-800 AD, which probably had a population of several thousand before substantial interactions with Europeans began in the eighteenth century. Its highly stratified society survived mostly on seafood and root crops, and needed a number system to quantify large transactions in trade and tributes to chieftains.
Only about 600 Mangarevan speakers now remain on the island, and in any case its indigenous number system has long been superseded by Arabic digits owing to French colonialism. But Bender and Beller have reconstructed it from descriptions written by (mostly European) authors in the nineteenth and early twentieth centuries [2].
They find that these earlier Mangarevans combined a base-10 system with a binary one. They had number words for 1 to 10, and then for 10 multiplied by the first few powers of 2: 10 (takau, denoted K in the new work), 20 (denoted P), 40 (T) and 80 (V). In this notation, for example, 70 is TPK and 57 is TK7.
Bender and Beller show that this system retains the key arithmetical simplifications of true binary, in that you don’t need to memorize lots of number facts but just to enact a few simple rules, such as 2xK=P and 2xP=T.
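To see how the notation and the doubling rules fit together, here is a small sketch of my own. It is a deliberate simplification – it covers only numbers below 160 and assumes each of K, P, T and V appears at most once – and is not the full system reconstructed in the paper.

```python
# Illustrative sketch of the mixed decimal-binary notation described above:
# K = 10, P = 20, T = 40, V = 80, plus a units digit from 1 to 9.
# My simplifying assumption: numbers below 160, each symbol used at most once.

STEPS = [(80, "V"), (40, "T"), (20, "P"), (10, "K")]

def to_mangarevan(n):
    assert 0 < n < 160, "this sketch only covers that range"
    out = ""
    for value, symbol in STEPS:       # binary-style decomposition of the tens
        if n >= value:
            out += symbol
            n -= value
    if n:                             # any remaining units, 1 to 9
        out += str(n)
    return out

print(to_mangarevan(70))  # -> TPK  (40 + 20 + 10)
print(to_mangarevan(57))  # -> TK7  (40 + 10 + 7)

# Doubling uses the simple rules mentioned in the text, e.g. 2xK = P and 2xP = T:
DOUBLE = {"K": "P", "P": "T", "T": "V"}
print("".join(DOUBLE[s] for s in "PK"))  # doubling 30 (PK) gives TP, i.e. 60
```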
There are complications with the system too, but the authors argue that “the advantages outweigh the disadvantages.”
Cognitive scientist Rafael Nuñez of the University of California at San Diego points out that some notion of binary systems is actually older than Mangarevan culture. “It can be traced back to at least ancient China, around the 9th century BC”, he says – it can be found in the I Ching, which inspired Leibniz. Nuñez adds that “other ancient groups, such as the Maya, used sophisticated combinations of binary and decimal systems to keep track of time and astronomical phenomena. Thus, the cognitive advantages underlying the Mangarevan counting system may not be unique.”
All the same, say Bender and Beller, a ‘mixed’ system like this isn’t easy or obvious to create. “It’s puzzling that anybody would come up with such a solution, especially on a tiny island with a small population”, Bender and Beller say. “But this very fact also demonstrates just how important culture is for the development of numerical cognition”, they add – for example, how in this case dealing with big numbers can motivate inventive solutions.
Nuñez agrees that the study shows “the primacy of cultural factors underlying the invention of number systems, and the diversity in human numerical cognition.”
References
1. Bender, A. & Beller, S. Proc. Natl. Acad. Sci. USA doi:10.1073/pnas.1309160110 (2013).
2. Bender, A., J. Polynesian Soc. 122, in press (2013).
Wednesday, December 18, 2013
Mining black holes
Here’s something that boggled my mind, and which I wrote up for BBC Future.
__________________________________________________________
It’s a staple of science fiction: highly advanced civilizations getting their energy by mining black holes, extracting it from collapsed stars or making artificial mini-holes that power spaceships. These aren’t idle or quasi-magical speculations, for physicists have believed for at least 30 years that it might be possible. However, sci-fi writers wishing to draw on this technological miracle are going to have to get more inventive, for a paper published in the premier physics journal Physical Review Letters now argues that mining black holes would not be as productive as was thought.
The classical view of black holes as stars that have burnt out and collapsed under their own gravity to an infinitesimally small point in space – a singularity – offered little prospect that they were anything other than dead, barren light traps. Inside the so-called event horizon around the hole’s absurdly dense centre, nothing can escape from the hole’s gravity, and it just sits there forever like a blot on spacetime.
But that changed once Stephen Hawking and others brought quantum physics to bear on this picture. Hawking showed in the 1970s that black holes don’t last forever, and that to the world outside the event horizon they are not black at all. He argued that black holes emit energy from their boundaries in the form of radiation produced by quantum fluctuations of empty space itself. Eventually this Hawking radiation leads to evaporation of the black hole itself.
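To get a feel for just how feeble this process is for an astrophysical black hole, here is a rough back-of-the-envelope sketch. The numbers are my own, computed from the standard textbook formulas for the Hawking temperature and evaporation time, not figures from Brown’s paper.

```python
# Rough figures for a solar-mass black hole, using the standard formulas
# T = hbar*c^3 / (8*pi*G*M*k_B) for the Hawking temperature and
# t ~ 5120*pi*G^2*M^3 / (hbar*c^4) for the evaporation time.
import math

hbar, c, G, k_B = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23  # SI units
M_sun = 1.989e30  # kg

T = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
t_evap = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)
wavelength = 2.898e-3 / T   # Wien's law: peak wavelength of the radiation

print(f"Hawking temperature: {T:.1e} K")               # ~6e-8 K, far colder than space
print(f"Peak wavelength: {wavelength/1e3:.0f} km")      # tens of kilometres
print(f"Evaporation time: {t_evap/3.15e7:.1e} years")   # ~2e67 years
```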
That happens so slowly that black holes with the mass of a star are still hardly less than eternal. But might it be possible to induce a black hole to release all its Hawking radiation sooner, so that in effect it becomes like a ball of fuel? In 1983 physicists George Unruh and Robert Wald suggested how to do that. One could lower a box down close to the hole’s event horizon, let it fill up with Hawking radiation, and then bring it back up again, just like filling a bucket with water from a well. Performed repeatedly, this manoeuvre would gradually strip the black hole of its ‘hot atmosphere’ of radiation. True, you’d need a mighty rope and winding mechanism to prevent the box from being tugged beyond the event horizon and swallowed, but in principle it could be done.
Or could it? Adam Brown of the Princeton Center for Theoretical Science says that it would take far longer than Unruh and Wald anticipated. He shows that the attempt would cause the black hole to swell and engulf the box. “Rather than using the box to rob the black hole of its radiation”, he writes, “the black hole instead robs us of our box.”
The problem, says Brown, lies with the plain old mechanics of the rope holding the box. Because it would be in a gravitational field, the rope would be subject to the inevitable constraint that it can’t be heavier than its own strength can support. This is true even for exotic ‘ropes’ that aren’t material at all, such as electric or magnetic fields: they too have an energy density and thus (via E=mc^2) an effective mass.
For an ordinary rope hanging down in the Earth’s gravity, the tension in the rope increases with height, because it is carrying more of its own weight. But weirdly, in a very strong gravitational field, where spacetime itself is highly curved, the tension remains the same all along the length. However, for the rope to be stable, it turns out that this tension must exactly equal the mass per unit length of the rope: the rope has to be in effect at breaking point purely to support its own weight, so that there is no strength left over to support the box that will collect Hawking radiation.
Another constraint on the rope is that it mustn’t disintegrate. Close to a black hole, the intense Hawking radiation creates a hot environment. If the rope is lowered too close to the event horizon, where the radiation is most plentiful, there’s a danger that the temperature will exceed that at which all ordinary matter – in other words, atoms themselves – melt into a gloop of their constituent quarks. If you make the rope too light, it’s more likely to melt. But if you make it too heavy, the rope itself is in danger of collapsing under its own gravity.
There’s another complication too. Brown shows that the box itself can’t be wider than a single wavelength of the Hawking radiation it is collecting, since otherwise the effects of relativity will pull it awry and cause the rope to break. That would make the collection process very cumbersome in any case – it would have to happen one photon (‘light particle’) at a time. To collect Hawking radiation of the wavelength of light, the boxes could be no bigger than typical bacteria, and to collect X-rays you’d need atom-sized boxes.
So here’s the deal. If you get too close to the black hole, the rope might melt or snap – or, if it’s made too massive to avoid that, it might collapse into itself. But if you try mining at a more cautious distance, there isn’t so much Hawking radiation there to collect. And Brown shows that even the best compromise makes energy extraction much slower than Unruh and Wald suggested.
Yet there is a better way, he says: do away with boxes altogether. In 1994, Albion Lawrence and Emil Martinec of the University of Chicago proposed that one could simply dip strings into a black hole and let Hawking radiation run up them like oil up the wick of an oil lamp. This was thought to be a slower process than hauling up boxes full of Hawking radiation, because each string carries up only one photon at a time. But Brown’s analysis shows that they would in fact both mine the hole at the same (slow) rate. Since dangling boxes introduce more potential for malfunction, Brown therefore argues that the preferable way to draw the energy from black holes is to puncture the event horizon with lots of photon-wicking strings, and let them drain it out of existence.
Reference: A. R. Brown, Physical Review Letters 111, 211301 (2013)
Tuesday, December 10, 2013
Who are you calling selfish?
There’s a bit of a fracas going on about David Dobbs’ article in Aeon on the obsolescence of the ‘selfish gene’: see here and here. In the first of these, Jerry Coyne has criticised the article as woefully misinformed; the second is Richard Dawkins’ response to it. There’s a crucial distinction here (though I’m not sure Coyne really wants to acknowledge it) between the accuracy of Dobbs’ scientific claims and the appropriateness of his objections to the selfish-gene metaphor. Steven Pinker apparently considers the problem here to be the fact that it “seems to be a congenital problem with science journalists [that] they think that it's a profound and revolutionary discovery that genes are regulated”. It might be nice if it were really that simple. But it isn’t. The real issue is whether the fact that genes are regulated (and perhaps more crucially, networked) means that “selfishness” is still an illuminating way to describe how they operate.
Take, for example, Coyne’s point that the polyphenism that Dobbs talks about – the fact that the same genes can create radically different phenotypes in a single organism – is triggered by a regulatory gene. (Coyne doesn’t say whether such a gene has yet been identified for grasshoppers or caterpillar/butterflies, but I’m happy to believe that this is indeed the probable origin of the morphological switch, regardless of whether we know the details.) In this sense, then, the transformation is certainly still under ‘genetic control’ and therefore adaptive in the same sense as any other genetic trait.
But does it mean that the regulatory gene in question – let’s call it gene A – is ‘selfish’? It’s hard to see any meaningful way in which this can be true. Coyne offers his own view of what it means: “during the process of natural selection, genes ‘act’ as if they were selfish.” In other words, it’s a metaphor. You know what, I think we got that already. We didn’t imagine it meant that genes habitually push to the front of queues and steal other genes’ wallets. What we need, though, is some notion of what he thinks “selfish” itself means in this context. That the gene plays a part in its own replication? I guess that’s what Coyne means, because later he says a selfish gene “promotes the reproduction of itself or its carrier.” But hang on – so it’s selfish if it promotes the reproduction of its carrier, meaning all those other genes too? So it’s not behaving in a way that is actually at the expense of other genes, but in fact benefits them? Like, say, the way we might play an active role in society so that it doesn’t collapse and we get shot up by looters? Sorry, so where exactly does the selfishness come in – or do you mean that the gene acts “as if with enlightened self-interest” – which, behaviourists and indeed linguists will tell you, is not the same as selfishness?
Let’s see if we can figure out which of these versions of the pathetic fallacy we’re talking about here. (I fear I might be patronising if I point out that ‘pathetic fallacy’ is not a term of abuse, as though to say that the idea of the selfish gene is pathetically fallacious, but on past experience I’ve found it’s best not to underestimate some folks’ unfamiliarity with figures of speech, particularly if they can otherwise extract offence from them.) This adaptation of gene A relies on the other genes whose expression is modified by the switch doing what they need to do in response to the signal from A. If they don’t ‘comply’, A gets no advantage. Likewise, the adaptation that enables the other genes to realise these alternative phenotypes in response to A’s signal relies on A actually giving that signal at the appropriate time. In other words, there is an intimate cooperativity required here between the way the genes operate, if any of them is to enjoy the mutual benefit.
Now, selfish geneticists might say “But A doesn’t care about those other genes, it is only working for its own benefit!” Well, they might say that, but I hope they won’t, for then they’d be showing that they have fallen for their own metaphor. Gene A doesn’t of course care about its own survival either. A gene doesn’t care about anything; it’s just a bit of a molecule. To use its ‘indifference’ to its fellow genes as an argument for why it is ‘selfish’ is absurd. You could of course say “isn’t it equally spurious to call this behaviour cooperative?” But cooperative behaviour in inanimate particles has a clear meaning in chemical physics: it means that the result depends on the collective interactions between the particles: it can’t result from the behaviour of any one of them acting alone. It doesn’t mean the particles are ‘nice’, or even that they act as if they were ‘nice’.
This sort of argument for why genes can be better regarded as cooperative than selfish is well rehearsed. It is a key aspect of the objections to the selfish-gene metaphor raised by people like Gabriel Dover, Denis Noble and Steven Rose. Noble’s argument in The Music of Life is particularly compelling, and the fact that it is seldom addressed by selfish geneticists, who prefer to imply that it’s just ignorant journalists who get this stuff wrong, is I think something of a backhanded compliment to Denis. (Let me, for the record, point out that Jerry Coyne has certainly laid into Noble in no uncertain terms – but I haven’t seen a good refutation of his specific criticisms of the selfish-gene metaphor.)
To his credit, Richard Dawkins himself does acknowledge some of this. In the 30th anniversary edition of The Selfish Gene he says, for example, “Another good alternative to The Selfish Gene would have been The Cooperative Gene.” That’s because, he says, genes sometimes act in mutually supportive gangs. “Natural selection therefore sees to it that gangs of mutually compatible—which is almost to say cooperating— genes are favoured in the presence of each other.” The genes are, however, individually still “selfish”, Dawkins says, because they are not cooperating for the benefit of the others. But that assertion only makes sense if you ascribe intentions to the genes – in other words, if you fall for the metaphor (I guess it is for reasons like this that Steven Rose thinks Dawkins doesn’t really understand what a metaphor is). All you can say is that a mutual operation of genes works to their collective benefit. It is simply meaningless to say that in such a circumstance they are acting “as if” they are selfish, just as it is meaningless to say that they are acting “as if” they are altruistic. It is, in effect, implanting a value judgement where none is warranted. Why do that? Well, I’ll come to that shortly.
There’s a deeper level to this debate, however, which I don’t see Coyne taking on board at all. It is about causality. The argument for selfish geneism seems to be that if a gene’s activity results in a change in phenotype, the gene is responsible for it – that it is the ‘cause’. This is equivalent to the old argument that the assassination of Archduke Ferdinand caused World War I. I like to think of it another way. Suppose Howard Webb referees Chelsea vs Manchester United, and Chelsea win 1-0 (I’m going to trust US readers to make the necessary changes mutatis mutandis). Webb has obviously ‘caused’ that result, as well as all the moves that led to it, because he blew the whistle that began the game (and indeed, intervened several times during the match too). The next season Webb referees the same game, but this time Man U triumph 2-0. That’s weird, because the teams have identical players, the pitch is the same, and so on. But Webb did a few things differently this time – he awarded Man U a penalty, say – so he’s obviously the cause of the difference.
The fact is that, of course, the course and outcome of both matches relies on all the players knowing what is required of them, and doing it. There’s already other crucial information in the system. Howard Webb wasn’t the cause of any of it, except in the important sense that without him either chaos would have ensued or the matches would never have started.
If this seems like a fatuous example, or a thin analogy (and sure, best not to push it too far), take a look at Hoel et al., PNAS 110, 19790; 2013 (here). This makes it clear that there are some complex systems in which causality must be seen as a property of higher-level modes of organization, and can’t be meaningfully ascribed to a microscopic event. If that is true in genetics, then neither evo nor devo can necessarily be considered to be under the causal control of specific genes. I don’t mean that the genes don’t underlie the processes, but just that causality does not reside therein. Or to be clear (because there’s a pathological inclination for words to be twisted in some of these disputes), there are of course plenty of cases where specific adaptive phenotypes can be attributed to specific genes (and so can be considered the result of selection at the genetic level), but there’s no reason to think that this is the generic or universal picture, and plenty of reason not to. That doesn’t deny the crucial importance of genes in evo/devo, any more than one would deny the importance of individual actions and decisions in the outbreak of World War I.
One might want to say that if the ‘selfish gene’ metaphor works for Coyne, why not let him have it – it’s only a metaphor, after all. And I’m not unsympathetic to that. But it is of course not just Coyne – this metaphor has powerfully affected the way genetics and evolution have been presented to the public. And I don’t think it is at all unlikely (nor does Gabby Dover) that it has contributed in a major way to the prevailing notion of the “one-gene one-trait” picture that now even geneticists are finding an albatross: how can genes be operating in cooperative networks if each is only looking out for itself? I’m not saying that the selfish geneticists deny that they do, only that one of the many problems with the selfish gene picture is that it implies relentless individualism.
We should probably be honest about this too: it is surely no coincidence that the most vocal adherents of the selfish gene are the same folks who are most vocally anti-religious. It’s hard not to suspect that one of the attractions of this picture is its very harshness: not only does the universe not care in the slightest about your welfare (and I agree with that) but the most fundamental principles of life are positively ‘unkind’ and antagonistic – nasty if you like – and thus as far as it’s possible to get from your fluffy divine benevolence. Can’t you sense a gleeful “take that!” in the way Richard Dawkins serves up this stuff?
I might be unfair here, but I guess I’m searching for a reason why these smart folks are so reluctant to relinquish what is demonstrably a bad metaphor. After all, as Larry Moran (who is no slouch when it comes to beating on religion, although he picks his targets – creationists and ID-ers – rather more selectively) has pointed out, the selfish gene has been largely dead for decades in evolutionary biology.
One final point, since it seems to be a common trope in cases like this for scientists to decry journalists’ ignorance of their subject’s history. Forgive me if I’m wrong, but I have never seen selfish geneticists acknowledge that or explain why their definition of ‘selfish gene’ is different from that typically used in the 1980s by leading thinkers such as Francis Crick, Leslie Orgel, Gabriel Dover, and Ford Doolittle (Nature 284, 601 & 604; 1980). Those guys used selfishness specifically to refer to that subset of genes or genetic elements that have a propensity to proliferate in multiple copies throughout the genome – it was not a property of all genes that enabled them to benefit from natural selection. Indeed, this kind of selfish DNA, said Orgel and Crick, makes no specific contribution to the phenotype. Dawkins mentioned such genetic elements in The Selfish Gene, but selfish geneticists have subsequently been quite happy to see this ‘selfishness’ become a universal attribute of genes. That is evidently not how Crick saw it: he and Orgel make the distinction with ‘business as usual’ genetic selection very explicit.
Theirs seems to be a much more viable idea of selfishness, for the multiple copies of genes don’t benefit the organism. At best this accumulation of ‘junk’ is neutral to the organism, but it is potentially detrimental in the long term, providing a good illustration of the short-termism of natural selection. In this sense, then, selfishness is not a property that enables evolution to happen, but an inevitable by-product caused by its difficulty in dealing with parasitic freeloaders (for a modern view, see J. H. Werren, PNAS 108 (supplement 2), 10863; 2011). I’d much rather see selfishness reserved for this kind of situation. And so would many others. It seems to me that Coyne does a disservice by not acknowledging that the ‘selfish’ metaphor has a long and distinguished history of being applied only in this very restrictive and particular context.
I don’t want to be unnecessarily confrontational. Coyne has done a fine and important job in the past of defending evolution against idiotic attacks, and arguably this is just a debate about the packaging of a process whose basic details are not in doubt. But it’s because I do what I do that I think that packaging is important.
Monday, December 09, 2013
Birds reveal a new facet of their personality
Here’s the original of my latest news story for Nature.
_____________________________________________________________
Some birds are predisposed to signal their intentions more clearly than others.
Some animals, like some people, are more aggressive than others - it's just the way they are. But new research suggests that, for birds at least, this personality is more subtle. Some are inclined to give out exaggerated signs of their aggressiveness, others to underplay it.
It's rather like the menacing biker who turns out to be a pussy-cat, or the wimpy geek who will break a bottle over your head. But the analogy with humans goes only so far, because many birds announce their aggression about mating and territory not by appearance but by song and gesture.
For example, the song sparrow indicates its intention to attack a dummy bird or a loudspeaker playing back its songs by either vocalizing distinctive ‘soft songs’ or by fluttering its wings (so-called wing waves), both of which are perceived as threatening [2].
Both aggressive signaling and the ensuing aggressive behaviour vary from one bird to another, in a way that correlates with other personality traits such as boldness [1]. But these attributes also vary within a single individual from one time to another – a bird can have particularly grouchy or placid days. The degree of aggression implied by the precursory signals generally reflects the actual behaviour – it is what evolutionary biologists call an “honest signal”.
But not entirely honest. Earlier this year Michael Beecher and colleagues at the University of Washington in Seattle showed that there’s some variability in aggressive signaling that doesn’t match the behaviour: a bird might act stroppy but not follow through with an attack [2].
This variability could be just random, an imponderable quirk of bird-brains. But now Beecher and colleagues say it isn’t [3].
The researchers studied 69 labelled male song sparrows in their natural habitat during autumn and spring. They played the birds their own songs (which elicit aggression just as ‘stranger songs’ do) and watched how they responded – whether they displayed the aggression signals of soft songs and wing waves, and whether they followed through by attacking the loudspeakers or a dummy bird.
They found that, after allowing for variations that provide an honest signal of a bird’s fluctuations in aggressive mood, the remaining variability – if you like, the dishonest part of it – seems to be consistently displayed by particular birds.
Some have a predisposition – consistent from one season to the next – to give out false signals of how aggressive they intend to be, suggesting either too much or too little. Others are more consistently ‘honest’. Beecher and colleagues say that this behaviour too seems to be a robust characteristic of an individual bird’s personality, which the researchers call “communicativeness”.
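One way to picture what ‘consistent’ means here is a toy calculation – emphatically my own sketch, not the authors’ analysis: give each simulated bird a fixed signalling bias plus day-to-day noise, and ask how much of the variation in signalling deviations is repeatable across trials. The number of trials and the variances below are invented for illustration.

```python
# Toy illustration (not the authors' analysis): each bird's signalling deviation
# from 'honest' = a fixed individual bias + day-to-day noise. Repeatability is the
# share of the total variance due to consistent differences between birds.
import numpy as np

rng = np.random.default_rng(0)
n_birds, n_trials = 69, 4                              # 69 birds as in the study; 4 trials assumed
bird_bias = rng.normal(0, 1.0, n_birds)                # consistent over- or under-signalling
noise = rng.normal(0, 0.7, (n_birds, n_trials))        # trial-to-trial fluctuation
deviation = bird_bias[:, None] + noise                 # observed signalling deviations

between = deviation.mean(axis=1).var(ddof=1)           # variance among bird means
within = deviation.var(axis=1, ddof=1).mean()          # average variance within a bird
sigma_b = between - within / n_trials                  # estimated between-bird variance
repeatability = sigma_b / (sigma_b + within)

print(f"estimated repeatability: {repeatability:.2f}") # roughly 0.6-0.7 with these settings
```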
“This is an important and novel paper”, says William Searcy, a behavioural ecologist at the University of Miami. “I think it’s highly likely that behaviours one can define in song sparrows can be identified in other birds, and other animals as well”, adds Jeremy Hyman of Western Carolina University, a specialist in bird behaviour.
Habitual ‘over-signallers’ may be good bluffers, who gamble on scaring away rivals that they won’t actually dare fight. ‘Under-signallers’, who attack without much warning, are harder to explain. “This behaviour is intriguing, and hasn't really been discussed in theory”, says Beecher. “There are benefits to signaling – a fight is avoided, potentially beneficial to both parties – so why not do it?”
One possibility is that under-signallers are genuine tough guys, so likely to win a bout that it’s not worth their while bothering with scare tactics. In this case the behaviour could be a beneficial adaptation. But another possibility is that some individuals just aren’t very good at getting the signaling codes right – it’s not an adaptation but a mistake.
“I don’t think there is enough evidence yet to know whether individual adaptive or error-based theories are more correct”, says Hyman. He adds that why personality traits exist at all is still a big question, but says “I think there’s enough evidence of links between personality and fitness to conclude that behavioural variation isn’t [adaptively] neutral.”
References
1. Bell, A., Hankison, S. J. & Laskowski, K. L. Anim. Behav. 77, 771-783 (2009).
2. Akçay, Ç., Campbell, S. E., Tom, M. E. & Beecher, M. D., Proc. R. Soc. B 280, 20122517 (2013).
3. Akçay, Ç., Campbell, S. E. & Beecher, M. D., Proc. R. Soc. B 281, 20132496 (2014).
Sunday, December 08, 2013
Quantum computers: when, what, who and why
I have a piece in December’s Prospect on quantum computing – here’s the original draft.
__________________________________________________
When people first hear about quantum computers, a common response is “where and when can I get one?” But that’s the wrong question, and not just because you’ll be disappointed with the answer. Quantum computers are often said to promise faster, bigger, more multi-layered computation – but they are not, and might never be, an upgrade of your laptop. They’re just not that sort of machine. So what are they, and why do we want them?
You could argue that your laptop is already a quantum computer, because the laws of quantum physics govern the ways electrical currents pass through its ultra-small transistors and wires. Partly that’s just saying that ultimately quantum physics governs all the properties of materials. Increasingly, however, strange quantum effects that don’t usually manifest in the everyday world, such as the ability of electrons to leap through walls, are becoming important as the scale of microelectronics shrinks. This ‘quantum tunnelling’, for example, is the basis of flash memory, and also threatens to make transistors ‘leaky’ as they get ever smaller.
Real quantum computers go far beyond any of that, however. In the end, all of today’s computers work using old-fashioned binary logic: by encoding information in strings of 1’s and 0’s, represented for example by electrical pulses in circuits or by flashes of light in optical fibres. These so-called bits are manipulated in logic gates, built from electronic components such as transistors. Here a particular set of input bits prompt the gate to produce another set of output bits. That’s what computation is; the rest is a question of building software and interfaces that turn these bits into a letter to Mum glowing on the screen.
Quantum computers will also use 1’s and 0’s, but with a crucial difference. As well as having one or the other of these values, a quantum bit (qubit) could have any mixture of them. Counter-intuitively, it can be simultaneously a 1 and a 0, or 1 with a tiny bit of 0, and so on. These mixtures are called superpositions, and they are a fundamental feature of objects that obey quantum rules. A photon of light, for example, can be polarized either vertically or horizontally, or can be in a superposition of both polarizations.
That gives qubits access to a vast range of states, so you can encode much more information in them. [OK, I’m keeping this in for now in the interests of honesty to the moment – but watch this space for an explanation of why this is far too simplistic, and perhaps even too erroneous, a way to describe quantum computing…] In short, it enables quantum computers to perform very many calculations simultaneously where a classical computer can do only one at a time with any given set of bits. It is this that provides the quantum computer with its tremendous speed-up. To factorize a big number classically (to find the prime numbers that multiply together to make it), a computer plods through all the possible answers, while a quantum computer can assess them all, encoded in superpositions of qubits, at basically the same time.
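For readers who like nuts and bolts, here is a toy sketch of a single qubit as a pair of complex amplitudes, prepared in an equal superposition and ‘measured’ a thousand times. This is purely my own illustration; it has nothing to do with how any real machine is programmed.

```python
# Toy illustration of a single qubit as two complex amplitudes (alpha, beta),
# where |alpha|^2 and |beta|^2 are the probabilities of reading out 0 or 1.
import numpy as np

zero = np.array([1, 0], dtype=complex)          # the state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate: makes an equal superposition

state = H @ zero                                 # equal mixture of |0> and |1>
probs = np.abs(state) ** 2                       # [0.5, 0.5]

rng = np.random.default_rng(1)
samples = rng.choice([0, 1], size=1000, p=probs) # simulated measurements
print(probs, samples.mean())                     # about half the readings come out as 1
```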
So where’s the catch? It is that quantum phenomena such as superpositions are generally very delicate. They get easily disrupted or destroyed by disturbances from the surrounding environment, particularly the randomizing effects of heat. So to make such states usually requires very low temperatures. This fragility of quantum effects means that, while the question of what you could do with a quantum computer has been explored extensively already by physicists and mathematicians, actually building a device that can do any of it is taxing electrical engineers and applied physicists to the limit.
Now there are signs of real progress. The community was set buzzing two years ago when a Canadian company called D-Wave (“the world’s first commercial quantum computing company”) announced that it had created the first practical quantum computer: a black box, if you will, that could actually solve stuff. But several researchers questioned whether D-Wave’s device was really a true quantum computer at all, or just a fancy box of tricks that made token nods towards quantum effects. It employs an approach called ‘quantum annealing’, which is different from most theories of quantum computing and for which any real advantages over classical computing have yet to be shown.
At Raytheon BBN Technologies, based in Cambridge, Massachusetts, researchers are convinced that they are closing in on the real thing. Conveniently close to Harvard and the Massachusetts Institute of Technology, BBN was founded in 1948 and was intimately involved in the development of the earliest military networks that became the Internet. In 2009 the company became a subsidiary of the US defence contractor Raytheon. It has been seeking to develop so-called quantum information technologies since 2001, when the company’s researchers devised an optical telecommunications network, linking its headquarters with nearby Harvard and Boston Universities, that exchanged light signals encoding information in superpositions of photons. Such networks, which could be immune to eavesdropping, have now been developed in many places around the world.
But the quantum computer, which actually does number-crunching, is a bigger challenge. To make qubits, Raytheon BBN uses the same fundamental circuit components as D-Wave does. Called superconducting Josephson junctions, these are metal contacts cooled so deeply that they have become superconductors (that is, they have no electrical resistance), electrically connected to each other via a thin barrier of insulating material. Superconductivity is itself a quantum-mechanical effect, which is why it requires low temperatures, and the superconducting current can flow in distinct quantum states. A Josephson junction helps to filter out all but two states, which correspond to the binary 1’s and 0’s. It is possible to manipulate these states, for example creating specific superpositions, using pulses of microwave radiation. That’s the physical basis of BBN’s qubit circuits, which have to be cooled to within a daunting 50 thousandths of a degree of absolute zero.
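In the standard textbook idealization (and it is only that; this is not a description of BBN’s actual control electronics), a resonant microwave pulse acts as a rotation of the qubit’s state: a ‘half’ pulse creates an equal superposition, and a full ‘pi’ pulse flips 0 to 1.

import numpy as np

def x_rotation(theta):
    # idealized effect of a resonant microwave pulse of 'angle' theta
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

state = np.array([1, 0], dtype=complex)        # start in 0
half_pulse = x_rotation(np.pi / 2) @ state     # equal superposition
full_pulse = x_rotation(np.pi) @ state         # flipped to 1

print(np.round(np.abs(half_pulse) ** 2, 2))    # [0.5 0.5]
print(np.round(np.abs(full_pulse) ** 2, 2))    # [0. 1.]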
Even then, the superpositions don’t last long. Yet for practical quantum computing they need to survive only for as long as it takes to juggle with them in quantum logic gates. In recent years, says Zachary Dutton, lead scientist of Raytheon BBN’s Quantum Information Processing group, these so-called coherence times have increased dramatically, and are now at a level – tens to hundreds of microseconds – where the devices can actually perform logic processing.
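As a rough back-of-envelope illustration (the gate time below is an assumed figure of mine, not one quoted by BBN), a coherence time of a hundred microseconds with gates lasting tens of nanoseconds allows a few thousand operations before the quantum state falls apart:

coherence_time_us = 100.0   # 'tens to hundreds of microseconds', per Dutton
gate_time_ns = 50.0         # assumed, illustrative gate duration

gates_per_coherence = coherence_time_us * 1000 / gate_time_ns
print(f"roughly {gates_per_coherence:.0f} gate operations before coherence is lost")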
Another critical issue for these quantum gates is the so-called error rate: how accurately they can be switched between states by the microwave signal. If you get this a little wrong – say, by making too much of one state in the superposition – the errors accumulate until, even if one stores the same information several times for cross-checking, too many mistakes derail the whole computation. Getting the error rate small enough to avoid this remains one of the key tasks.
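The crude arithmetic behind that worry: if each gate goes wrong with probability p, the chance of at least one error in n gates is 1 - (1 - p)^n, which climbs alarmingly fast. Real schemes for quantum error correction are far subtler than this, but the sketch shows why the per-gate error rate matters so much.

def p_any_error(p, n):
    # chance of at least one error in n gates, each failing independently with probability p
    return 1 - (1 - p) ** n

for p in (0.01, 0.001):
    print(p, [round(p_any_error(p, n), 3) for n in (10, 100, 1000)])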
At present the Raytheon BBN team, which is collaborating with the computer giant IBM, doesn’t have anything even vaguely like a quantum computer. Rather, the researchers are focusing on getting very small systems – currently three qubits, but soon to be eight – to work well enough that they can be assembled into large-scale circuits. “If you looked at a circuit diagram of a quantum computer”, says Dutton, “this would be a little piece of it.” The extreme cooling “needn’t be a showstopper”, he adds, because refrigeration technologies have advanced so much in recent years, for example so that they don’t need constant refilling with a coolant such as liquid helium.
Exotic quantum states in ultracold superconducting wires might sound like a complicated basis for making qubits. But the same approach is being taken by several of the leading academic centres of quantum computing, including MIT, Yale and the University of California at Santa Barbara. It’s by no means the only option. Another popular approach, for example, is to encode information in the quantum-mechanical energy states of individual atoms or ions suspended in free space using electromagnetic fields to trap them there. The information can be programmed, manipulated and read out using lasers to probe and alter the states of the trapped ions. Christopher Monroe, who is using this approach at the University of Maryland, feels that “there will be some interesting results in the next several years in both Josephson junction and [ion-trap] atomic machines”. He concurs that, unlike the 512-qubit D-Wave devices, those under development at Raytheon BBN are “legitimately quantum”.
What would you use a quantum computer for? Monroe says that the first demonstrations of quantum computing will probably be solving “some esoteric physics problem”, not providing a general-purpose computer. There are, however, some important possible uses that anyone can appreciate. Fast factorizing of huge numbers is one such, since the most widely used data encryption methods rely on the difficulty of doing this with classical computers. Quantum computers would change the whole game in data security.
For basic science, one of the most appealing applications would be to perform computer simulations of molecules and materials. These are governed by quantum rules, and classical computers are forced to solve the equations by laborious and merely approximate mathematical methods. Quantum computers, in contrast, could map such quantum behaviour directly and exactly into their algorithms, so that simulations that currently take days might be possible in seconds, helping to make better predictions of the properties of new drugs and materials.
Currently, the most taxing computational problems are tackled by massive, expensive supercomputers housed in a few specialized centres and leased to users. That’s what the initial market for quantum computers will look like too, says Dutton – not really a market at all, but a highly centralized oligopoly. But of course all computers used to be like this: huge mainframes dedicated to recondite problems. Mindful of IBM founder Thomas Watson’s (possibly apocryphal) prediction in 1943 that this is what computers would always be – Watson is said to have forecast a world market for perhaps five of them in total – it would be an unwise prophet who forecasts where quantum computers might be decades down the line.
Tuesday, December 03, 2013
What innovation really is
Here is my current Crucible column for Chemistry World. The plot above shows how chemistry’s ‘connectedness’ to other disciplines falls out in this analysis of citations – the size of the circles reflects the number of papers included in the analysis, the colours show the interdisciplinarity: the bluer, the more so.
_____________________________________________________________
How do you write a hit paper? The rise of bibliometrics and citation data-mining threatens to turn the answer into a reductive prescription: have many coauthors, make the paper longer, choose an assertive, catchy title. Yet the truth is that we have always known what generates the best chance of success: have a really interesting and productive idea, report it clearly and carefully, and publish it in a good journal.
That’s why a new paper analysing the ingredients of high-impact scientific papers (as defined by their citation counts) is best not viewed as another ‘how to’ formula. Rather, what Brian Uzzi and Ben Jones of Northwestern University in Illinois and their colleagues have supplied [B. Uzzi et al., Science 342, 468 (2013)] is a retrospective account of why some papers made their mark. It’s a bit like examining why the Beatles’ songs are so popular – it’s one thing to explain it, quite another to use that knowledge to write another “Eleanor Rigby”.
The real value of this work is in underlining the importance of innovative thinking – as well as clarifying what genuine novelty consists of. The idea is ingenious in itself (my guess is that if the researchers trained their lens on their own paper, it would predict considerable impact). While it is hard to quantify the intrinsic novelty of the ideas expressed in a paper, the reference list generally gives a fair indication of the intellectual heritage on which they draw.
So if the references are all taken from a narrow body of highly specialized and specific work, the chances are that the paper itself represents just another incremental advance in that area, and is going to have limited appeal outside a tiny circle. But a paper with a bizarrely diverse selection of references – here the Journal of Natural Products, there Kierkegaard’s Fear and Trembling – all too probably indicates a comparable incoherence in the authors’ minds.
What Uzzi and colleagues consider, then, is the balance between ‘typical’ and ‘atypical’ in the reference list. Using a database of 17.9 million papers in the Web of Science spanning all scientific fields (in fact they have ventured into the humanities too), the researchers looked at how often all possible pairs of papers (or journals) in a given year were cited together. A comparison against purely random pairings then reveals how ‘conventional’ such a pairing is, enabling an enumeration of the conventionality of any paper’s entire reference list.
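The published analysis benchmarks each pairing against randomized citation networks; as a much cruder stand-in (a toy of my own, with made-up reference lists), one can compare how often two journals are cited together with how often chance alone would pair them:

from collections import Counter
from itertools import combinations

# toy reference lists: which journals each (imaginary) paper cites
ref_lists = [
    ["JACS", "Angewandte", "ChemComm"],
    ["JACS", "Angewandte", "PhysRevB"],
    ["JACS", "ChemComm", "Langmuir"],
]

pair_counts = Counter()
journal_counts = Counter()
for refs in ref_lists:
    journals = sorted(set(refs))
    journal_counts.update(journals)
    pair_counts.update(combinations(journals, 2))

n = len(ref_lists)

def conventionality(j1, j2):
    observed = pair_counts[tuple(sorted((j1, j2)))] / n
    expected = (journal_counts[j1] / n) * (journal_counts[j2] / n)  # chance baseline
    return observed / expected

print(conventionality("JACS", "Angewandte"))    # 1.0: about as often as chance predicts
print(conventionality("ChemComm", "PhysRevB"))  # 0.0: never co-cited in this toy set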
It will surprise no one to hear that scientific papers are on the whole highly conservative by this measure. But Uzzi and colleagues figured that the relatively unusual combinations of citations – those in the tail of the distributions – might be particularly revealing. They found that even these tended to be ‘typical’: the less-common pairings of journal A, say, tend to be with journals G and K rather than with a more random assortment.
The real story emerges when these citation patterns are compared between high-impact and low-impact papers. The former are no less firmly embedded in convention – except, crucially, for the unusual reference combinations in the tail of their distributions, which show a strong degree of novelty. In other words, these papers anchor themselves to a substantial body of related, specialized work, but inject into it ideas and results from farther afield than lower-impact papers tend to reach. “Thus, novelty and conventionality are not opposing factors in the production of science”, Uzzi and colleagues say. As one might imagine, novelty in this sense seems to appear more often in papers written by collaborating teams, which can mine insights from different disciplines.
What does all this mean in chemistry? The paper itself gives no breakdown by discipline, but Brian Uzzi has kindly supplied me with a few indicators. On the basis of how often papers within the discipline cite ones from outside, chemistry scores highly as an interdisciplinary subject – second only to biology, comparable to medical research, and better than, say, physics or earth sciences. Moreover, its cross-disciplinary handshakes are very diverse, although the affinities with medicine and biology are evident. On this basis, the common claim that chemistry is the “central science” seems well justified.
No doubt individual case histories of high-impact chemical papers would tell instructive stories. Two papers from Chemical Communications in 1994 offer a representative snapshot. One, on polymer synthesis, reaches out only to other journals of polymer and organic chemistry but without even the benefit of conventional pairings therein. It had 12 citations. Another, on the synthesis of derivatized gold nanoparticles, combines popular pairings such as JACS-Angewandte Chemie with novel links to the literature on clusters; it had nearly 4,000 citations.
If you want a moral, it is surely to talk to people outside your group, and ideally outside your department, and if possible work with them. But at the same time don’t neglect the core of your own subject. Easily said, I know – but the best advice usually is.
Monday, December 02, 2013
Ome sweet ome?
Here’s my latest piece for the Prospect blog.
________________________________________________________________
Chances are that every biologist now has an ome to go to. This suffix, first introduced in the genome (the sum total of all an organism’s genes), can now be found attached to just about every aspect of life’s molecular basis. There is the proteome (the full complement of protein molecules in an organism), the glycome (all the sugars), the epigenome (all the non-genetically encoded regulation of gene activity), the lipidome (all the fatty-acid lipids of cell membranes). Omes embrace wider concepts too. The metabolome comprises all the molecules involved in metabolism; the interactome is the network of interactions between genes and other molecules; the phenome is the total of all distinct observable traits (phenotypes), and so on. The integrome is the ome of all the omes: an ome from ome, you might say.
The proliferation of these neologisms has understandably attracted criticisms and ridicule, and even the founding editor of a new journal called Omics told Nature that “most of them will not make sense.” Some researchers suggest that they are just a way of investing an established field – such as the study of metabolic biochemical processes – with the kudos that has become attached to genomics. They are also a marketing ploy: if you have an ome, you surely need your own distinct funding stream.
Geneticist Jonathan Eisen of the University of California at Davis talks about “badomics”, and sees the spread of omes as a pernicious meme that adds clutter and confusion, as well as implying a sometimes misleading analogy to the aims and concepts of genomics. He compares it with the indiscriminate appending of -gate to every political blunder post-Watergate. “Some of the omes I have the most trouble with are not even remotely comprehensive, but are simply collections of a small set of some facts about one minor entity”, says Eisen, citing for example the nascentosome (incompletely assembled protein molecules) and the predatosome (genes involved in bacterial predation).
This scepticism is valid, but it doesn’t necessarily get to the core of what is both bad and potentially constructive in the omics fad. An ome is basically a list of parts, whether those are physical entities such as molecules or more abstract ones such as connections or properties. There is great potential value in such a list, provided that it is comprehensive. If one can consult the proteome to look up the chemical structure of a protein associated with a disease-linked gene, say, then one might be able to design a drug molecule that intervenes in the protein’s behaviour. But a list of parts is not an explanation for their collective function, as any electrical engineer or car mechanic will tell you.
Omes are in fact the modern equivalent of what Francis Bacon in the seventeenth century called ‘histories’ – exhaustive collections of all possible facts about a given phenomenon, such as cold or comets. Bacon was convinced that preparing histories was the essential first step in natural philosophy, and he set about devising a scheme for distilling these heaps of facts into real knowledge and insight. But that scheme was absurdly elaborate and never even completed, let alone put into practice. The early scientists found, in spite of their Baconian convictions, that this could never be the way to do science – they were compelled to draw up hypotheses and theories, even before all the ‘facts’ were in, for otherwise there is no way to prioritize or organize what you are looking for.
This is another way of saying that omics will not be science until it works within a framework that allows for hypothesis-testing. Merely searching vast databases for correlations is worse than futile, because it will inevitably produce false positives – spurious relationships between events or entities – while remaining silent about the root mechanisms. There’s a difference between knowing which parts work together and knowing how they do so.
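A quick simulation makes the point: sift a thousand columns of pure noise for associations with an equally meaningless outcome and, at the usual statistical threshold, a few dozen ‘hits’ appear anyway. The code below is my illustration, not anyone’s omics pipeline.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_variables = 50, 1000
data = rng.normal(size=(n_samples, n_variables))   # pure noise 'measurements'
outcome = rng.normal(size=n_samples)               # pure noise 'trait'

correlations = [np.corrcoef(data[:, j], outcome)[0, 1] for j in range(n_variables)]
spurious = sum(abs(r) > 0.28 for r in correlations)  # ~ the p < 0.05 cut-off for 50 samples
print(f"{spurious} of {n_variables} noise variables look 'associated' with the outcome")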
It seems that the converse is also true: causative principles might not announce themselves at the level of the basic components. This has become embarrassingly clear in genomics: for many traits or diseases that are evidently inheritable, it has proved possible to identify only a small fraction of the genes responsible, even with the whole human genome at our fingertips. Causation might stem instead from higher levels of organization.
But that leads to one of the positive aspects of the omics craze. It was largely stimulated in the first place by the anticlimactic realization of how much was left unsaid by the human genome projects. We need to know not just what genes we have, but what protein molecules they encode (for these are ultimately the cell’s primary machinery), and how much the gene is actually used, or ‘transcribed’. Enter the proteome and transcriptome. Then we need to know how genes and proteins act together – the interactome, metabolome and so forth – and what other molecules are crucially involved – the glycome, lipidome and so on. What’s more, because some of these sets of molecules are closer to the physiological end of an organism’s functioning, it seems likely that we might find clearer, less ambiguous and more immediate markers of disease and pathology in these other omes than in the genome. Profiling of lipids, for example, might point to incipient diet-related disease.
In other words, the proliferation of omes marks a recognition – never doubted, but long sidelined by the glamour of genomics – that there is much more to life than genes, many of which are better regarded not as ruthless dictators of the cell but as referees that keep the game on track. Omics could thus represent the start – even if clumsy and too overtly list-obsessed – of a return to a more integrated view of what life is.
Friday, November 29, 2013
Open season on dark matter
Here’s my last story for BBC Future.
_______________________________________________________________
Who will find dark matter first? We’re looking everywhere for this elusive stuff: deep underground, out in space, in the tunnels of particle colliders. After the Higgs boson, this is the next Big Hunt for modern physics, and arguably there’s even more at stake, since dark matter is thought to outweigh all the stuff we can actually see more than four times over.
And you can join the hunt. It’s probably not worth turning out your cupboards to see if there’s any dark matter lurking at the back, but there is a different way that all comers – at least, those with mathematical skills – can contribute. A team of astronomers has reported that crowdsourcing has improved the computational methods they will use to map out the dark matter dispersed through distant galaxies – which is where it was discovered in the first place.
The hypothesis of dark matter is needed to explain why galaxies hold together. Without its gravitational effects, rotating galaxies would fly apart, something that has been known since the 1930s. Yet although this stuff is inferred from its gravity, there’s nothing visible to astronomers – it doesn’t seem to absorb or emit light of any sort. That seems to make it a kind of matter different from any of the fundamental particles currently known. There are several theories for what dark matter might be, but they all have to start from negative clues: what we don’t know or what it doesn’t do.
The current favourite invokes a new fundamental particle called a WIMP: a weakly interacting massive particle. “Weakly interacting” means that it barely feels ordinary matter at all, but can just pass straight through it. However, the idea is that those feeble interactions are just enough to make a WIMP occasionally collide with a particle of ordinary matter and generate an observable effect in the form of a little burst of light that has no other discernible cause. Such flashes would be a telltale signature of dark matter.
To see them, it’s necessary to mask out all other possible causes – in particular, to exclude collisions involving cosmic rays, which are ordinary particles such as electrons and protons streaming through space after being generated in violent astrophysical processes such as supernovae. Cosmic rays are eventually soaked up by rock as they penetrate the earth, and so several dark-matter detectors are situated far underground, at the bottom of deep mineshafts. They comprise sensitive light detectors that surround a reservoir of fluid and look for inordinately rare dark-matter flashes.
One such experiment, called LUX and located in a mine in South Dakota, has recently reported the results of its first several months of operation. LUX looks for collisions of WIMPs within a tank of liquid xenon. So far, it hasn’t seen any. That wouldn’t be such a big deal if it weren’t for the fact that some earlier experiments have reported a few unexplained events that could possibly have been caused by WIMPs. LUX is one of the most sensitive dark-matter experiments now running, and if those earlier signals were genuinely caused by dark matter, LUX would have been expected to see such things too. So the new results suggest that the earlier, enticing findings were a false alarm.
Another experiment, called the Alpha Magnetic Spectrometer (AMS) and carried on board the International Space Station, looks for signals from the mutual annihilation of colliding WIMPs. And there are hopes that the Large Hadron Collider at CERN in Geneva might, once it resumes operation in 2014, be able to conduct particle smashes at the energies where some theories suggest that WIMPs might actually be produced from scratch, and so put these theories to the test.
In the meantime, the more information we can collect about dark matter in the cosmos, the better placed we are to figure out where and how to look for it. That’s the motivation for making more detailed astronomical observations of galaxies where dark matter is thought to reside. The largest concentrations of the stuff are thought to be in gravitationally attracting groups of galaxies called galaxy clusters, where dark matter can apparently outweigh ordinary matter by a factor of up to a hundred. By mapping out where the dark matter sits in these clusters relative to their visible matter, it should be possible to deduce some of the basic properties of its mysterious particles, such as whether they are ‘cold’ and easily slowed down by gravity, or ‘hot’ and thus less easily retarded.
One way of doing this mapping is to look for dark matter via its so-called gravitational lensing effect. As Einstein’s theory of general relativity predicted, gravitational fields can bend light. This means that dark matter (and ordinary matter too) can act like a lens: the light coming from distant objects can be distorted when it passes by a dense clump of matter. David Harvey of the University of Edinburgh, Thomas Kitching of University College London, and their coworkers are using this lensing effect to find out how dark matter is distributed in galaxy clusters.
To do that, they need an efficient computational method that can convert observations of gravitational lensing by a cluster into its inferred dark-matter distribution. Such methods exist, but the researchers suspected they could do better. Or rather, someone else could.
Crowd-sourcing as a way of gathering and analysing large bodies of data is already well established in astronomy, most notably in the Zooniverse scheme, in which participants volunteer their services to classify data into different categories: to sort galaxies or lunar craters into their fundamental shape classes, for example. Humans are still often better at making these judgements than automated methods, and Zooniverse provides a platform for distributing and collating their efforts.
What Harvey and colleagues needed was rather more sophisticated than sorting data into boxes. To create an algorithm for actually analysing such data, you need to have some expertise. So they turned to Kaggle, a web platform that (for a time-based fee) connects people with a large data set to data analysts who might be able to crunch it for them. Last year Kitching and his international collaborators used Kaggle to generate the basic gravitational-lensing data for dark-matter mapping. Now he and his colleagues have shown that even the analysis of the data can be effectively ‘outsourced’ this way.
The researchers presented the challenge in the form of a competition called “Observing Dark Worlds”, in which the authors of the three best algorithms would receive cash prizes totalling $20,000 donated by the financial company Winton Capital Management. They found that the three winning entries could improve significantly on the performance of a standard, public algorithm for this problem, pinpointing the dark matter clumps with an accuracy around 30% better. Winton Capital benefitted too: Kitching says that “they managed to find some new recruits from the winners, at a fraction of the ordinary recruiting costs.”
It’s not clear that the ordinary citizen can quite compete at this level – the overall winner of Dark Worlds was Tim Salimans, who this year gained a PhD in the analysis of “big data” at Erasmus University Rotterdam. The other two winners were professionals too. But that is part of the point of the exercise: crowd-sourcing is not just about soliciting routine, low-level effort from an untrained army of volunteers, but also about connecting skilled individuals to problems that would benefit from their expertise. And the search for dark matter needs all the help it can get.
Happy birthday MRS
Of all the regular meetings that I used to attend as a Nature editor, the one I enjoyed most was the annual Fall meeting of the US Materials Research Society. Partly because it was in Boston, but also because it was always full of diverse and interesting stuff, as well as being of a just about manageable scale. So I have a fondness for the MRS and was glad to be asked to write a series of portraits of areas in materials science for the MRS Bulletin to mark the society’s 40th anniversary. The result is a piece too long to set down here, but the kind folks at MRS Bulletin seem to have made the article freely available online here.
Tuesday, November 26, 2013
Shape-shifting
Oh, here’s one from BBC Future that I almost missed – the latest in ‘illusion optics’. I have a little video discussion of this too.
__________________________________________________________
In the tradition whereby science mines myth and legend for metaphors to describe its innovations, you might call this shape-shifting. Admittedly, the device reported in the journal Physical Review Letters by researchers in China is not going to equal Actaeon’s transformation into a stag, Metis into a fly, or Proteus into whatever he pleased. But it offers an experimental proof-of-principle that, using ideas and techniques related to invisibility cloaking, one object can be given the appearance of another. Oh, and the device does invisibility too.
This versatility is what marks out the ‘cloak’ made by Tie Jun Cui of the Southeast University in Nanjing, China, and his coworkers at Lanzhou University as distinct from the now considerable body of work on invisibility cloaks and other types of “transformation optics”. Surprisingly, perhaps, this versatility comes from a design that is actually easier to fabricate than many of the ‘invisibility cloaks’ made previously. The catch is that these shape-changes are not something you can actually see, but are apparent only when the transformed object is being detected from the effect it has on the electrical conductivity of the medium in which it is embedded.
The most sophisticated ‘invisibility cloaks’ made so far use structures called metamaterials to bend light around the hidden object, rather like water flowing around an obstacle in a stream. If the light rays from behind the object are brought back together again at the front, then to an observer they seem not to have deviated at all, but simply to have passed through empty space.
Researchers have also shown that, by rerouting light in other ways, a metamaterial cloak can enable so-called ‘illusion optics’ that gives one thing the appearance of another. However, with metamaterials this is a one-shot trick: the cloak would produce the same, single visual illusion regardless of what is hidden within it. What’s more, genuine invisibility and illusion optics are tremendously challenging to achieve with metamaterials, which no one really yet knows how to make in a way that will work with visible light for all the wavelengths we see. So at present, invisibility cloaks have been limited either to microwave frequencies or to simplified, partial cloaks in which an object may be hidden but the cloak itself is visible.
What’s more, each cloak only does one sort of transformation, for which it is designed at the outset. Cui and colleagues say that a multi-purpose shape-shifting cloak could be produced by making the components active rather than passive. That’s to say, rather than redirecting light along specified routes, they might be switchable so that the light can take different paths when the device is configured differently. You might compare it to a fixed rail track (passive), where there’s only one route, and a track with sets of points (active) for rerouting.
Active cloaks have not been much explored so far beyond the theory. Now Cui and his coworkers have made one. It hides or transforms objects that are sensed electrically, in a process that the researchers compare to the medical technology called electrical impedance tomography. Here, electrical currents or voltages measured on the surface of an object or region are used to infer the conductivity within it, and thereby to deduce the hidden structure. A similar technique is used in geophysics to look at buried rock structures using electrodes at the surface or down boreholes, and in industrial processes to look for buried pipes. It’s a little like using radar to reconstruct the shape of an object from the way it reflects and reshapes the echo.
Here, hiding an object would mean constructing a cloak to manipulate the electrical conductivity around it so that it seems as though the object isn’t there. And transforming its appearance involves rejigging the electric field so that the measurements made at a distance would infer an embedded object of a different shape. Cui and colleagues have built a two-dimensional version of such an illusionistic cloak, consisting of a network of resistors joined in a concentric ‘spider’s web’ pattern on an electrically conducting disk, with the cloaked region in a space at their centre.
To detect the object, an electrode at one position on the plate sets up an electric field, and this is measured around the periphery of the plate. Last year Cui and his colleagues made a passive version of an invisibility cloak, in which the resistor network guided electric currents around the central hole so as to give the impression, when the field was measured at the edges of the disk, that the cloak and its core were just part of the uniform background medium. Now they have wired up such a resistor network so that the voltage across each component, and thus the current passing through it, can be altered in a way that changes the apparent shape of the cloaked region, as inferred from measurements made at the disk’s edge.
In this way, the researchers could alter the ‘appearance’ of the central region to look invisible, or like a perfectly conducting material, or like a hole with zero conductivity. And all that’s needed is some nifty soldering to create the network from standard resistors, without any of the complications of metamaterials. That means it should be relatively easy to make the cloaks bigger, or indeed smaller.
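The flavour of the trick can be caught in a one-dimensional cartoon (mine, and nothing like the two-dimensional network in the paper): in a simple chain of resistors, the voltage measured at the ‘boundary’ betrays the resistance hidden in the middle, unless an adjustable, powered element is retuned to compensate.

V_SOURCE = 1.0
R_PROBE = 100.0   # fixed resistor at the measurement end

def boundary_voltage(r_interior, r_active):
    # voltage across the probe resistor in a series chain: source -> active -> interior -> probe
    total = R_PROBE + r_interior + r_active
    return V_SOURCE * R_PROBE / total

baseline = boundary_voltage(r_interior=200.0, r_active=100.0)   # 0.25 V
changed = boundary_voltage(r_interior=350.0, r_active=100.0)    # ~0.18 V: the interior shows up
masked = boundary_voltage(r_interior=350.0, r_active=-50.0)     # back to 0.25 V
# (a negative value here stands in for a powered, active element supplying the difference)

print(baseline, changed, masked)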
In theory this device could sustain the illusion even if the probe signal changes in some way (such as its position), by using a rapid feedback mechanism to recalculate how the voltages across the resistors need to be altered to keep the same appearance. The researchers say that it might even work for oscillating electrical fields, as long as their frequency is not too high – in other words, perhaps to mask or transform objects being sensed by radio waves. Here the resistor network would be constantly tuned to cancel out distortions in the probe signal. And because resistors warm up, the device could also be used to manipulate appearances as sensed by changes in the heat flow through the cloaked region.
Reference: Q. Ma et al., Physical Review Letters 111, 173901 (2013).
__________________________________________________________
In the tradition whereby science mines myth and legend for metaphors to describe its innovations, you might call this shape-shifting. Admittedly, the device reported in the journal Physical Review Letters by researchers in China is not going to equal Actaeon’s transformation into a stag, Metis into a fly, or Proteus into whatever he pleased. But it offers an experimental proof-of-principle that, using ideas and techniques related to invisibility cloaking, one object can be given the appearance of another. Oh, and the device does invisibility too.
This versatility is what marks out the ‘cloak’ made by Tie Jun Cui of the Southeast University in Nanjing, China, and his coworkers at Lanzhou University as distinct from the now considerable body of work on invisibility cloaks and other types of “transformation optics”. Surprisingly, perhaps, this versatility comes from a design that is actually easier to fabricate than many of the ‘invisibility cloaks’ made previously. The catch is that these shape-changes are not something you can actually see, but are apparent only when the transformed object is being detected from the effect it has on the electrical conductivity of the medium in which it is embedded.
The most sophisticated ‘invisibility cloaks’ made so far use structures called metamaterials to bend light around the hidden object, rather like water flowing around an obstacle in a stream. If the light rays from behind the object are brought back together again at the front, then to an observer they seem not to have deviated at all, but simply to have passed through empty space.
Researchers have also shown that, by rerouting light in other ways, a metamaterial cloak can enable so-called ‘illusion optics’ that gives one thing the appearance of another. However, with metamaterials this is a one-shot trick: the cloak would produce the same, single visual illusion regardless of what is hidden within it. What’s more, genuine invisibility and illusion optics are tremendously challenging to achieve with metamaterials, which no one really yet knows how to make in a way that will work with visible light for all the wavelengths we see. So at present, invisibility cloaks have been limited either to microwave frequencies or to simplified, partial cloaks in which an object may be hidden but the cloak itself is visible.
What’s more, each cloak only does one sort of transformation, for which it is designed at the outset. Cui and colleagues say that a multi-purpose shape-shifting cloak could be produced by making the components active rather than passive. That’s to say, rather than redirecting light along specified routes, they might be switchable so that the light can take different paths when the device is configured differently. You might compare it to a fixed rail track (passive), where there’s only one route, and a track with sets of points (active) for rerouting.
Active cloaks have not been much explored so far beyond the theory. Now Cui and his coworkers have made one. It hides or transforms objects that are sensed electrically, in a process that the researchers compare to the medical technology called electrical impedance tomography. Here, electrical currents or voltages measured on the surface of an object or region are used to infer the conductivity within it, and thereby to deduce the hidden structure. A similar technique is used in geophysics to look at buried rock structures using electrodes at the surface or down boreholes, and in industrial processes to look for buried pipes. It’s a little like using radar to reconstruct the shape of an object from the way it reflects and reshapes the echo.
Here, hiding an object would mean constructing a cloak to manipulate the electrical conductivity around it so that it seems as though the object isn’t there. And transforming its appearance involves rejigging the electric field so that measurements made at a distance would imply an embedded object of a different shape. Cui and colleagues have built a two-dimensional version of such an illusionistic cloak, consisting of a network of resistors joined in a concentric ‘spider’s web’ pattern on an electrically conducting disk, with the cloaked region in a space at their centre.
To detect the object, an electrode at one position on the plate sets up an electric field, and this is measured around the periphery of the plate. Last year Cui and his colleagues made a passive version of an invisibility cloak, in which the resistor network guided electric currents around the central hole so as to give the impression, when the field was measured at the edges of the disk, that the cloak and its core were just part of the uniform background medium. Now they have wired up such a resistor network so that the voltage across each component, and thus the current passing through it, can be altered in a way that changes the apparent shape of the cloaked region, as inferred from measurements made at the disk’s edge.
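To get a feel for why measurements at the rim reveal what sits in the middle – and why retuning the resistors can forge them – here is a minimal toy model of a resistor grid, solved with Kirchhoff’s current law. It is only an illustrative sketch: the grid size, conductance values and probe positions are invented for demonstration and bear no relation to the circuit Cui’s team actually built.

import numpy as np

def grid_edges(n):
    # Nearest-neighbour edges of an n x n grid of nodes, indexed row-major.
    edges = []
    for r in range(n):
        for c in range(n):
            i = r * n + c
            if c + 1 < n:
                edges.append((i, i + 1))
            if r + 1 < n:
                edges.append((i, i + n))
    return edges

def boundary_voltages(n, centre_g, background_g=1.0):
    # Build the conductance (weighted Laplacian) matrix of the network.
    N = n * n
    centre = (n // 2) * n + (n // 2)
    L = np.zeros((N, N))
    for i, j in grid_edges(n):
        g = centre_g if centre in (i, j) else background_g
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    # Probe: inject 1 A at one corner, ground the opposite corner,
    # then solve Kirchhoff's current law L V = I for the node voltages.
    src, gnd = 0, N - 1
    I = np.zeros(N); I[src] = 1.0
    keep = [k for k in range(N) if k != gnd]
    V = np.zeros(N)
    V[keep] = np.linalg.solve(L[np.ix_(keep, keep)], I[keep])
    # Return only the voltages a detector at the edge of the disk could see.
    edge = [r * n + c for r in range(n) for c in range(n)
            if r in (0, n - 1) or c in (0, n - 1)]
    return V[edge]

uniform = boundary_voltages(5, centre_g=1.0)    # featureless background
hole = boundary_voltages(5, centre_g=1e-6)      # near-insulating core
print(np.max(np.abs(hole - uniform)))           # the tell-tale edge signature

Running something like this shows that a poorly conducting core leaves a measurable fingerprint in the edge voltages; the active cloak’s job, in effect, is to keep retuning its resistors so that this fingerprint is erased or replaced with the fingerprint of something else.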
In this way, the researchers could alter the ‘appearance’ of the central region to look invisible, or like a perfectly conducting material, or like a hole with zero conductivity. And all that’s needed is some nifty soldering to create the network from standard resistors, without any of the complications of metamaterials. That means it should be relatively easy to make the cloaks bigger, or indeed smaller.
In theory this device could sustain the illusion even if the probe signal changes in some way (such as its position), by using a rapid feedback mechanism to recalculate how the voltages across the resistors need to be altered to keep the same appearance. The researchers say that it might even work for oscillating electrical fields, as long as their frequency is not too high – in other words, perhaps to mask or transform objects being sensed by radio waves. Here the resistor network would be constantly tuned to cancel out distortions in the probe signal. And because resistors warm up, the device could also be used to manipulate appearances as sensed by changes in the heat flow through the cloaked region.
Reference: Q. Ma et al., Physical Review Letters 111, 173901 (2013).
Thursday, November 21, 2013
The LHC comes to London
Here’s my latest piece for the Prospect blog. I also have a piece in the latest issue of the magazine on quantum computing, but I’ll post that shortly.
______________________________________________________________________
It may come as a surprise that not all physicists are thrilled by the excitement about the Higgs boson, now boosted further by the award of the physics Nobel prize to Peter Higgs and François Englert, who first postulated its existence. Some of them feel twinges of resentment at the way CERN, the European centre for particle physics in Switzerland where the discovery was made with the Large Hadron Collider (LHC), has managed to engineer public perception to imply that the LHC itself, and particle physics generally, is at the centre of gravity of modern physics. In fact most physicists don’t work on the questions that predominate at CERN, and the key concepts of the discipline are merely exemplified by, and not defined by, those issues.
I have shared some of this frustration at the skewed view that wants to make all physicists into particle-smashers. But after taking a preview tour of the new exhibition Collider just opening at London’s Science Museum, I am persuaded that griping is not the proper response. It is true that CERN has enviable public-relations resources, but the transformation of an important scientific result (the Higgs discovery) into an extraordinary cultural event isn’t a triumph of style over substance. It marks a shift in science communication that other disciplines can usefully learn from. Collider reflects this.
The exhibition has ambitions beyond the usual pedagogical display of facts and figures, evident from the way that the creative team behind it brought in theatrical expertise: video designer Finn Ross, who worked on the stage play of Mark Haddon’s The Curious Incident of the Dog in the Night-Time, and playwright Michael Wynne. They have helped to recreate a sense of what it is like to actually work at CERN. The exhibits, many of them lumps of hardware from the LHC, are displayed in a mock-up of the centre’s offices (with somewhat over-generous proportions) and corridors, complete with poster ads for recondite conferences and the “CERN choir”. Faux whiteboards and blackboards – some with explanatory notes, others just covered with decorative maths – abound. Actors in a video presentation aim to convince us of the ordinariness of the men and women who work here, as well as of their passionate engagement and excitement with the questions they are exploring.
The result is that the findings of the LHC’s experiments so far – which are difficult to explain at the best of times, although most interested folks have probably gathered by now that the Higgs boson is a particle responsible for giving some other fundamental particles their mass – are not, as in the traditional science-museum model, spruced up and served up to the public, as it were, on a plate, in the form of carefully honed metaphors. The makeshift feel of the environment, a work-in-progress with spanners and bits of kit still lying around, is an excellent metaphor for the science itself: still under construction, making use of what is to hand, its final shape as yet undetermined. The experience is as much about what it means to do science as it is about what the science tells us.
This is a good thing, and the fact that CERN itself has become a kind of living exhibition – with more than 100,000 visitors a year and an active outreach programme with strong involvement of schools – is worth celebrating. The short presentations at the preview event also made it clear why scientists need help in thinking about public engagement. It has never been a secret that Peter Higgs himself has little interest in the hoopla and celebrity that his Nobel award has sent stratospheric. In a rare appearance here, he admitted to being concerned that all the emphasis on the particle now named after him might eclipse the other exciting questions the LHC will explore. Those are what will take us truly into uncharted territory; the Higgs boson is the last, expected part in the puzzle we have already assembled (the so-called Standard Model), whereas questions about whether all known particles have “supersymmetric” partners, and what dark matter is, demand hitherto untested physics.
Higgs is the classic scientist’s scientist, interested only in the work. When asked how he visualized the Higgs boson himself, he didn’t launch into the stock image of Margaret Thatcher moving through a cocktail party and “accreting mass” in the form of hangers-on, but just said that he didn’t visualize it at all, since he considers it impossible to visualize fundamental particles. He said he had little idea why an earlier apparent lack of public interest in science has now become a hunger for it.
All this is not uncommon in scientists, who are not interested in developing pretty pictures and fancy words to communicate their thoughts. That no doubt helps them get on with the job, but it is why they need leaders such as CERN’s current director general Rolf-Dieter Heuer, who can step back and think about the message and about the role of their science in society. Hearteningly, Heuer asserted that “the interest in society was always there – we scientists just made the mistake of not satisfying it.”
As Heuer pointed out, the bigger picture is mind-boggling. “It took us fifty years to complete the Standard Model”, he said. “But ninety-five percent of the universe is still unknown. It’s time to enter the dark universe.”
Wednesday, November 13, 2013
Sceptical chemists
Here’s my latest Crucible column for the November issue of Chemistry World. It’s something that’s always puzzled me. I suppose I could lazily claim that the Comments section below the piece proves my point, but obviously the voices there are self-selecting. (All the same, enlisting Boyle to the cause of climate scepticism is laughable. And Boyle was, among other things, determined to keep politics out of his science.)
_______________________________________________________________________
“While global warming is recognised, I am not sure that all the reasons have been fully explored. Carbon dioxide is a contributor, but what about cyclic changes caused by the Earth’s relationship in distance to the Sun?”
“While climate change is occurring, the drivers of change are less clear.”
It’s those pesky climate sceptics again, right? Well yes – but ones who read Chemistry and Industry, and who are therefore likely to be chemists of some description. When the magazine ran a survey in 2007 on its readers’ attitudes to climate change, it felt compelled to admit that “there are still some readers who remain deeply sceptical of the role of carbon dioxide in global warming, or of the need to take action.”
“Our survey revealed there remain those who question whether the problem exists or if reducing carbon dioxide emissions will have any effect at all,” wrote C&I’s Cath O’Driscoll. The respondents who felt that “the industry should be doing more to help tackle climate change” were in a clear majority of 72% – but that left 28% who didn’t. This is even more than the one in five members of the general population who, as the IPCC releases its Fifth Assessment Report, now seem to doubt that global warming is real.
This squares with my subjective impression, on seeing the Letters pages of Chemistry World (and its predecessor) over the years, that the proportion of this magazine’s readers who are climate sceptics is rather higher than the 3% of the world’s climate scientists apparently still undecided about the causes (or reality) of global warming. A letter from 2007 complaining about “the enormous resources being put into the campaign to bring down carbon emissions on the debatable belief that atmospheric carbon dioxide level is the main driver of climate change rather than the result of it” seemed fairly representative of this subset.
Could it be that chemists are somehow more prone to climate scepticism than other scientists? I believe there is reason to think so, although I’m of course aware that this means some of you might already be sharpening your quills.
One of the most prominent sceptics has been Jack Barrett, formerly a well-respected chemical spectroscopist at Imperial College whose tutorial texts were published by the RSC. Barrett now runs the campaigning group Barrett Bellamy Climate with another famous sceptic, naturalist David Bellamy. Several other high-profile merchants of doubt, such as Nicholas Drapela (fired by Oregon State University last year) and Andrew Montford, trained as chemists. It’s not clear if there is strong chemical expertise in the Australian climate-sceptic Lavoisier Group, but they choose to identify themselves with Lavoisier’s challenge to the mistaken “orthodoxy” of phlogiston.
If, as I suspect, a chemical training confers no real insulation against the misapprehensions evident in the non-scientific public, why should that be? One possible reason is that anyone who has spent a lifetime in the chemical industry (especially in petrochemicals), assailed by the antipathy of some eco-campaigners to anything that smacks of chemistry, will be likely to develop an instinctive aversion to, and distrust of, scare stories about environmental issues. That would be understandable, even if it were motivated more by heart than mind.
But I wonder if there’s another factor too. (Given that I’ve already dug a hole with some readers, I might as well jump in it.) If I were asked to make gross generalizations about the character of different fields of science, I would suggest that physicists are idealistic, biologists are conservative, and chemists are best described by that useful rustic Americanism, “ornery”. None of these are negative judgements – they all have pros as well as cons. But there does seem to be a contrarian streak that runs through the chemically trained, from William Crookes and Henry Armstrong to James Lovelock, Kary Mullis, Martin Fleischmann and of course the king of them all, Linus Pauling (who I’d have put money on being some kind of climate sceptic). This is part of what makes chemistry fun, but it is not without its complications.
In any event, it could be important for chemists to consider whether (and if so, why) there is an unusually high proportion of climate-change doubters in their ranks. Of course, it’s equally true that chemists have made major contributions to the understanding of climate, beginning with Svante Arrhenius’s calculation of carbon dioxide’s warming effect in 1896 and continuing through to the work of atmospheric chemists such as Paul Crutzen. Spectroscopists, indeed, have played a vital role in understanding the issues in the planet’s radiative balance, and chemists have been foremost in identifying and tackling other environmental problems such as ozone depletion and acid rain. Chemistry has a huge part to play in finding solutions to the daunting problems that the IPCC report documents. A vocal contingent of contrarians won’t alter that.
Saturday, November 09, 2013
Reviewing the Reich
Time to catch up a little with what has been happening with my new book Serving The Reich. It has had some nice reviews in the Observer, the Guardian, and Nature. I have also talked about the issues on the Nature podcast, of which there is now an extended version. I’ve also discussed it for the Guardian science podcast, although that’s apparently not yet online. It seems I’ll be talking about the book next year at the Brighton Science Festival, the Oxford Literary Festival (probably in tandem with Graham Farmelo, who has written a nicely complementary book on Churchill and the bomb) and the Hay Festival – I hope to have dates and details soon.
Friday, November 01, 2013
WIMPs are the new Higgs
Here’s a blog posting for Prospect. You can see a little video podcast about it too.
________________________________________________________________
So with the Higgs particle sighted and the gongs distributed, physics seems finally ready to move on. Unless the Higgs had remained elusive, or had turned out to have much more mass than theories predicted, it was always going to be the end of a story: the final piece of a puzzle assembled over the past several decades. But now the hope is that the Large Hadron Collider, and several other big machines and experiments worldwide, will be able to open a new book, containing physics that we don’t yet understand at all. And the first chapter seems likely to be all about dark matter.
Depending on how you look at it, this is one of the most exciting or the most frightening problems facing physicists today. We have ‘known’ about dark matter for around 80 years, and yet we still don’t have a clue what it is. And this is a pretty big problem, because there seems to be more than five times as much dark matter as there is ordinary matter in the universe.
It’s necessary to invoke dark matter to explain why rotating galaxies don’t fly apart: there’s not enough visible matter to hold them together by gravity, and so some additional, unseen mass appears to be essential to fulfil that role. But it must be deeply strange stuff – since it apparently doesn’t emit or absorb light or any other electromagnetic radiation (whence ‘dark’), it can’t be composed of any of the fundamental subatomic particles known so far. There are several other astronomical observations that support the existence of dark matter, but so far theories about what it might consist of are pretty much ad hoc guesses.
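The back-of-the-envelope version of that argument is straightforward. For material orbiting at radius r around the galactic centre, gravity must supply the centripetal acceleration (treating the mass distribution as roughly spherical for simplicity):

\[
\frac{v_c^2}{r} = \frac{G\,M(<r)}{r^2}
\quad\Longrightarrow\quad
M(<r) = \frac{v_c^2\,r}{G}.
\]

Measured rotation speeds v_c stay roughly flat far beyond the visible disk of a galaxy, so the enclosed mass M(<r) must keep growing in proportion to r – well beyond anything the visible stars and gas can supply.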
Take the current favourite: particles called WIMPs, which stands for weakly interacting massive particles. Pull that technical moniker apart and you’re left with little more than a tautology, a bland restatement of the fact that we know dark matter must have mass but interacts barely, if at all, with light or regular matter.
It’s that “barely” on which hopes are pinned for detecting the stuff. Perhaps, just once in a blue moon, a WIMP careening through space does bump into common-or-garden atoms, and so discloses clues about its identity. The idea here is that, as well as gravity, WIMPs might also respond to another of the four fundamental forces of nature, called the weak nuclear force – the most exotic and hardest to explain of the forces. An atom knocked by a WIMP should emit light, which could be detected by sensitive cameras. To hope to see such a rare event in an experiment on earth, it’s necessary to exclude all other kinds of colliding cosmic particles, such as cosmic rays, which is why detectors hoping to spot a WIMP are typically housed deep underground.
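The nudge in question is feeble. For a WIMP of mass m_χ travelling at speed v that scatters elastically off a nucleus of mass m_N, simple collision kinematics caps the recoil energy at

\[
E_R \le \frac{2\,\mu^2 v^2}{m_N},
\qquad \mu = \frac{m_\chi\, m_N}{m_\chi + m_N},
\]

so for illustrative numbers – a WIMP of roughly 100 GeV moving at typical galactic speeds of a couple of hundred kilometres per second, hitting a xenon nucleus – the deposit is at most a few tens of kiloelectronvolts. That is why the detectors must be buried and shielded from anything else capable of dumping similar amounts of energy.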
One such, called LUX, sits at the foot of a 1500m mineshaft in the Black Hills of South Dakota, and has just announced the results of its first three months of WIMP-hunting. LUX stands for Large Underground Xenon experiment, because it seeks WIMP collisions within a cylinder filled with liquid xenon, and it is the most sensitive of the dark-matter detectors currently operating.
The result? Nothing. Not a single glimmer of a dark-matter atom-crash. But this tells us something worth knowing, which is that previous claims by other experiments, such as the Cryogenic Dark Matter Search in a Minnesota mine, to have seen potential dark-matter events probably now have to be rejected. What’s more, every time a dark-matter experiment fails to see anything, we discover more about where not to look: the possibilities are narrowed.
The LUX results are the highest-profile in a flurry of recent reports from dark-matter experiments. An experiment called DAMIC has just described early test runs at the underground SNOLAB laboratory in a mine near Sudbury in Canada, which hosts a variety of detectors for exotic particles, although the full experiment won’t be operating until next year. And a detector called the Alpha Magnetic Spectrometer (AMS) carried on board the International Space Station can spot the antimatter particles called positrons that should be produced when two WIMPs collide and annihilate. In April AMS reported a mysterious signal that might have – possibly, just about – been “consistent” (as they say) with positrons from dark-matter annihilation, but could also have more mundane explanations. LUX now makes the latter interpretation by far the most likely, although an international group of researchers has just clarified the constraints the AMS data place on what dark matter can and can’t be like.
What now? LUX has plenty of searching still to do over the next two years. It’s even possible that dark-matter particles might be produced in the high-energy collisions of the LHC. But it is also possible that we’ve been barking up the wrong tree after all – for example, that what we think is dark matter is in fact a symptom of some other, unguessed physical principle. We’re still literally groping around in the dark.
Uncertainty about uncertainty
Here’s a news story I have written for Physics World. It makes me realize I still don’t understand the uncertainty principle, or at least not in the way I thought I did – so it doesn’t, then, apply to successive measurements on an individual quantum particle?!
But while on the topic of Heisenberg, I discuss my new book Serving the Reich on the latest Nature podcast, following a very nice review in the magazine from Robert Crease. I’m told there will be an extended version of the interview put up on the Nature site soon. I’ve also discussed the book and its context for the Guardian science podcast, which I guess will also appear soon.
____________________________________________________________
How well did Werner Heisenberg understand the uncertainty principle for which he is best known? When he proposed this central notion of quantum theory in 1927 [1], he offered a physical picture to help it make intuitive sense, based on the idea that it’s hard to measure a quantum particle without disturbing it. Over the past ten years an argument has been unfolding about whether Heisenberg’s original analogy was right or wrong. Some researchers have argued that Heisenberg’s ‘thought experiment’ isn’t in fact restricted by the uncertainty relation – and several groups recently claimed to have proved that experimentally.
But now another team of theorists has defended Heisenberg’s original intuition. And the argument shows no sign of abating, with each side sticking to their guns. The discrepancy might boil down to the irresolvable issue of what Heisenberg actually meant.
Heisenberg’s principle states that we can’t measure certain pairs of variables for a quantum object – position and momentum, say – both with arbitrary accuracy. The better we know one, the fuzzier the other becomes. The uncertainty principle says that the product of the uncertainties in position and momentum can be no smaller than a simple fraction of Planck’s constant h.
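In its standard modern (Kennard) form, the relation reads

\[
\sigma_x\,\sigma_p \ge \frac{\hbar}{2} = \frac{h}{4\pi},
\]

where σ_x and σ_p are the spreads (standard deviations) of position and momentum.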
Heisenberg explained this by imagining a microscope that tries to image a particle like an electron [1]. If photons bounce off it, we can “see” and locate it, but at the expense of imparting energy and changing its momentum. The more gently it is probed, the less the momentum is perturbed but then the less clearly it can be “seen.” He presented this idea in terms of a tradeoff between the ‘error’ of a position measurement (Δx), owing to instrumental limitations, and the resulting ‘disturbance’ in the momentum (Δp).
Subsequent work by others showed that the uncertainty principle does not rely on this disturbance argument – it applies to a whole ensemble of identically prepared particles, even if every particle is measured only once to obtain either its position or its momentum. As a result, Heisenberg abandoned the argument based on his thought experiment. But this didn’t mean it was wrong.
In 1988, however, Masanao Ozawa, now at Nagoya University in Japan, argued that Heisenberg’s original relationship between error and disturbance doesn’t represent a fundamental limit of uncertainty [2]. In 2003 he proposed an alternative relationship in which, although the two quantities remain related, their product can be arbitrarily small [3].
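The relation at issue is the error–disturbance reading of the microscope argument, ε(x)η(p) ≥ ħ/2, where ε(x) is the error of the position measurement and η(p) the disturbance it imparts to the momentum. In the form usually quoted, Ozawa’s 2003 alternative [3] keeps this product but adds two further terms involving the intrinsic spreads σ(x) and σ(p) of the measured state:

\[
\varepsilon(x)\,\eta(p) + \varepsilon(x)\,\sigma(p) + \sigma(x)\,\eta(p) \ge \frac{\hbar}{2}.
\]

Because the extra terms can carry the burden, the product ε(x)η(p) on its own can in principle be made arbitrarily small.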
Last year Ozawa teamed up with Yuji Hasegawa at the Vienna University of Technology and coworkers to see if his revised formulation of the uncertainty principle held up experimentally. Looking at the position and momentum of spin-polarized neutrons, they found that, as Ozawa predicted, error and disturbance still involve a tradeoff but with a product that can be smaller than Heisenberg’s limit [4].
At much the same time, Aephraim Steinberg and coworkers at the University of Toronto conducted an optical test of Ozawa’s relationship, which also seemed to bear out his prediction [5]. Ozawa has since collaborated with researchers at Tohoku University in another optical study, with the same result [6].
Despite all this, Paul Busch at the University of York in England and coworkers now defend Heisenberg’s position, saying that Ozawa’s argument does not apply to the situation Heisenberg described [7]. “Ozawa's inequality allows arbitrarily small error products for a joint approximate measurement of position and momentum, while ours doesn’t”, says Busch. “Ours says if the error is kept small, the disturbance must be large.”
“The two approaches differ in their definition of Δx and Δp, and there is the freedom to make these different choices”, explains quantum theorist Johannes Kofler of the Max Planck Institute of Quantum Optics in Garching, Germany. “Busch et al. claim to have the proper definition, and they prove that their uncertainty relation always holds, with no chance for experimental violation.”
The disagreement, then, is all about which definition is best. Ozawa’s is based on the variance in two measurements made sequentially on a particular quantum state, whereas that of Busch and colleagues considers the fundamental performance limits of a particular measuring device, and thus is independent of the initial quantum state. “We think that must have been Heisenberg's intention”, says Busch.
But Ozawa feels Busch and colleagues are focusing on instrumental limitations that have little relevance to the way devices are actually used. “My theory suggest if you use your measuring apparatus as suggested by the maker, you can make better measurement than Heisenberg's relation”, he says. “They now prove that if you use it very badly – if, say, you use a microscope instead of telescope to see the moon – you cannot violate Heisenberg's relation. Thus, their formulation is not interesting.”
Steinberg and colleagues have already responded to Busch et al. in a preprint [8] that tries to clarify the differences between their definition and Ozawa’s. What Busch and colleagues quantify, they say, “is not how much the state that one measures is disturbed, but rather how much ‘disturbing power’ the measuring apparatus has.”
“Heisenberg's original formula holds if you ask about ‘disturbing power’, but the less restrictive inequalities of Ozawa hold if you ask about the disturbance to particular states”, says Steinberg. “I personally think these are two different but both interesting questions.” But he feels Ozawa’s formulation is closer to the spirit of Heisenberg’s.
In any case, all sides agree that the uncertainty principle is not, as some popular accounts imply, about the mechanical effects of measurement – the ‘kick’ to the system. “It is not the mechanical kick but the quantum nature of the interaction and of the measuring probes, such as a photon, that are responsible for the uncontrollable quantum disturbance”, says Busch.
In part the argument comes down to what Heisenberg had in mind. “I cannot exactly say how much Heisenberg understood about the uncertainty principle”, Ozawa says. “But”, he adds, “I can say we know much more than Heisenberg.”
References
1. W. Heisenberg, Z. Phys. 43, 172 (1927).
2. M. Ozawa, Phys. Rev. Lett. 60, 385 (1988).
3. M. Ozawa, Phys. Rev. A 67, 042105 (2003).
4. J. Erhart et al., Nat. Phys. 8, 185 (2012).
5. L. A. Rozema et al., Phys. Rev. Lett. 109, 100404 (2012).
6. S.-Y. Baek, F. Kaneda, M. Ozawa & K. Edamatsu, Sci. Rep. 3, 2221 (2013).
7. P. Busch, P. Lahti & R. F. Werner, Phys. Rev. Lett. 111, 160405 (2013).
8. L. A. Rozema, D. H. Mahler, A. Hayat & A. M. Steinberg, http://www.arxiv.org/1307.3604 (2013).
On the edge
In working on my next book (details soon), I have recently been in touch with a well-known science-fiction author, who very understandably felt he should take the precaution of saying that our correspondence was private and not intended for blurb-mining. He said he’d had a bad experience of providing a blurb years back and had vowed to have a blanket ban on that henceforth.
That’s fair enough, but I’m glad I’m able to remain open to the idea. I often have to decline (not that my opinion is likely to shift many copies), but if I never did it at all then I’d miss out on seeing some interesting material. I certainly had no hesitation in offering quotes for a book just published by OUP, Aid on the Edge of Chaos by Ben Ramalingam. Having seen the rather stunning list of endorsements on Amazon, I’m inclined to say I’m not worthy anyway, but there’s no doubt that Ben’s book deserves it (along with the glowing reader reviews so far). Quite aside from the whole perspective on aid, the book provides one of the best concise summaries I have seen of complexity science and its relation to human affairs generally – it is worth reading for that alone.
The book’s primary thesis is that these ideas should inform a rethinking of the entire basis of international aid. In particular, aid needs to be adaptive, interconnected and bottom-up, rather than being governed by lumbering international bodies with fixed objectives and templates. But Ben is able to present this idea in a way that does not offer some glib, vague panacea, but is closely tied in with the practical realities of the matter. It is a view wholly in accord with the kind of thinking that was (and hopefully still is) being fostered by the FuturICT project, although aid was one of the social systems that I don’t think they had considered in any real detail – I certainly had not.
I very much hope this book gets seen and, more importantly, acted on. There are plans afoot for its ideas to be debated at the Wellcome Trust's centre in London in January, which is sure to be an interesting event.