Thursday, December 19, 2013

Binary used in Polynesia 600 years ago

Here’s my latest news story for Nature.

______________________________________________________________

A tiny island in the Pacific was already using a kind of binary arithmetic in the Middle Ages

Binary arithmetic, the basis of all digital computation today, is usually said to have been invented at the start of the eighteenth century by the German mathematician Gottfried Leibniz. But a new study shows that a kind of binary was already in use three hundred years earlier among the people of the tiny Pacific island of Mangareva in French Polynesia.

The discovery, made by consulting historical records of the now almost wholly assimilated Mangarevan culture and language and reported in the Proceedings of the National Academy of Sciences [1], suggests that some of the advantages of the binary system adduced by Leibniz might create a cognitive motivation for this system to arise spontaneously even in a society without advanced science and technology.

Pure binary arithmetic works in base 2 rather than the conventional base 10 (the latter quite possibly a consequence of counting on ten fingers). This means that numbers are built up from powers of 2: instead of units, tens, hundreds (10**2) and thousands (10**3), the digits of a binary number refer to 1 (2**0), 2 (2**1), 4 (2**2), 8 (2**3) and so on.

Every whole number can be represented in this way using just 1s and 0s, which is why they can be encoded in computers in a system of on-off electrical pulses or switches. The number 13 in binary is 1101 (1x2**3 + 1x2**2 + 0x2**1 + 1x2**0 = 8+4+0+1), for example.

Leibniz pointed out in 1703 that to do simple arithmetic in binary, such as addition and multiplication, you don’t need to remember a whole lot of ‘facts’ about numbers, such as 5+4=9 or 6x7=42. Instead, you need only apply a few simple rules. For addition, say, you just add the 1s and 0s column by column, remembering that 1+1=0 with a 1 carried into the next position: 100+101=1001 (that is, 4+5=9).
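To see these rules in action, here is a minimal Python sketch (my own illustration, not part of the study or of Leibniz’s treatment) that builds the binary digits of a number from remainders on division by 2, and adds two binary strings using nothing but the carry rule described above.

    def to_binary(n):
        """Write n as a string of binary digits, e.g. 13 -> '1101'."""
        digits = ""
        while n > 0:
            digits = str(n % 2) + digits  # remainder on division by 2 gives the next digit
            n //= 2
        return digits or "0"

    def add_binary(a, b):
        """Add two binary strings using only the rules 0+0=0, 0+1=1, 1+1=0 carry 1."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        result, carry = "", 0
        for da, db in zip(reversed(a), reversed(b)):
            total = int(da) + int(db) + carry
            result = str(total % 2) + result
            carry = total // 2
        return ("1" + result) if carry else result

    print(to_binary(13))             # '1101'
    print(add_binary("100", "101"))  # '1001', i.e. 4 + 5 = 9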

The downside to binary is that large numbers require lots of digits. But according to psychologists Andrea Bender and Sieghard Beller of the University of Bergen in Norway, the Mangarevan people found an ingenious answer to that, which they were apparently using even before 1450 AD.

Mangareva is a volcanic island first settled around 500-800 AD, which probably had a population of several thousand before substantial interactions with Europeans began in the eighteenth century. Its highly stratified society survived mostly on seafood and root crops, and needed a number system to quantify large transactions in trade and tributes to chieftains.

Only about 600 Mangarevan speakers now remain on the island, and in any case its indigenous number system has long been superseded by Arabic digits owing to French colonialism. But Bender and Beller have reconstructed it from descriptions written by (mostly European) authors in the nineteenth and early twentieth centuries [2].

They find that the former Mangarevans combined a base 10 with a binary system. They had number words for 1 to 10, and then for 10 multiplied by several powers of 2: 10 (takau, denoted K in the new work), 20 (denoted P), 40 (T) and 80 (V). In this notation, for example, 70 is TPK and 57 is TK7.

Bender and Beller show that this system retains the key arithmetical simplifications of true binary, in that you don’t need to memorize lots of number facts but just to enact a few simple rules, such as 2xK=P and 2xP=T.
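To make the rules concrete, here is a toy Python sketch of how such a decomposition might work, based only on the description above (it covers numbers up to 159 and ignores the refinements of the real system, which the paper describes in full).

    # Toy decomposition into the mixed decimal-binary notation described above:
    # V = 80, T = 40, P = 20, K = 10, followed by an ordinary units digit 1-9.
    STEPS = [("V", 80), ("T", 40), ("P", 20), ("K", 10)]

    def mangarevan(n):
        assert 0 < n < 160, "toy sketch only covers 1 to 159"
        out = ""
        for name, value in STEPS:
            if n >= value:
                out += name
                n -= value
        if n:
            out += str(n)  # whatever is left over is a plain digit
        return out

    print(mangarevan(70))  # 'TPK' (40 + 20 + 10)
    print(mangarevan(57))  # 'TK7' (40 + 10 + 7)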

There are complications with the system too, but the authors argue that “the advantages outweigh the disadvantages.”

Cognitive scientist Rafael Nuñez of the University of California at San Diego points out that some notion of binary systems is actually older than Mangarevan culture. “It can be traced back to at least ancient China, around the 9th century BC”, he says – it can be found in the I Ching, which inspired Leibniz. Nuñez adds that “other ancient groups, such as the Maya, used sophisticated combinations of binary and decimal systems to keep track of time and astronomical phenomena. Thus, the cognitive advantages underlying the Mangarevan counting system may not be unique.”

All the same, say Bender and Beller, a ‘mixed’ system like this isn’t easy or obvious to create. “It’s puzzling that anybody would come up with such a solution, especially on a tiny island with a small population”, Bender and Beller say. “But this very fact also demonstrates just how important culture is for the development of numerical cognition”, they add – for example, how in this case dealing with big numbers can motivate inventive solutions.

Nuñez agrees that the study shows “the primacy of cultural factors underlying the invention of number systems, and the diversity in human numerical cognition.”

References
1. Bender, A. & Beller, S. Proc. Natl. Acad. Sci. USA doi:10.1073/pnas.1309160110 (2013).
2. Bender, A., J. Polynesian Soc. 122, in press (2013).

Wednesday, December 18, 2013

Mining black holes

Here’s something that boggled my mind, and which I wrote up for BBC Future.

__________________________________________________________

It’s a staple of science fiction: highly advanced civilizations getting their energy by mining black holes, extracting it from collapsed stars or making artificial mini-holes that power spaceships. These aren’t idle or quasi-magical speculations, for physicists have believed for at least 30 years that it might be possible. However, sci-fi writers wishing to draw on this technological miracle are going to have to get more inventive, for a paper published in the premier physics journal Physical Review Letters now argues that mining black holes would not be as productive as was thought.

The classical view of black holes as stars that have burnt out and collapsed under their own gravity to an infinitesimally small point in space – a singularity – offered little prospect that they were anything other than dead, barren light traps. Inside the so-called event horizon around the hole’s absurdly dense centre, nothing can escape from the hole’s gravity, and it just sits there forever like a blot on spacetime.

But that changed once Stephen Hawking and others brought quantum physics to bear on this picture. Hawking showed in the 1970s that black holes don’t last forever, and that to the world outside the event horizon they are not black at all. He argued that black holes emit energy from their boundaries in the form of radiation produced by quantum fluctuations of empty space itself. Eventually this Hawking radiation leads to evaporation of the black hole itself.

That happens so slowly that black holes with the mass of a star are still hardly less than eternal. But might it be possible to induce a black hole to release all its Hawking radiation sooner, so that in effect it becomes like a ball of fuel? In 1983 the physicists William Unruh and Robert Wald suggested how to do that. One could lower a box down close to the hole’s event horizon, let it fill up with Hawking radiation, and then bring it back up again, just like filling a bucket with water from a well. Performed repeatedly, this manoeuvre would gradually strip the black hole of its ‘hot atmosphere’ of radiation. True, you’d need a mighty rope and winding mechanism to prevent the box from being tugged beyond the event horizon and swallowed, but in principle it could be done.

Or could it? Adam Brown of the Princeton Center for Theoretical Science says that it would take far longer than Unruh and Wald anticipated. He shows that the attempt would cause the black hole to swell and engulf the box. “Rather than using the box to rob the black hole of its radiation”, he writes, “the black hole instead robs us of our box.”

The problem, says Brown, lies with the plain old mechanics of the rope holding the box. Because it would be in a gravitational field, the rope would be subject to the inevitable constraint that it can’t be heavier than its own strength can support. This is true even for exotic ‘ropes’ that aren’t material at all, such as electric or magnetic fields: they too have an energy density and thus (via E=mc**2) an effective mass.

For an ordinary rope hanging down in the Earth’s gravity, the tension in the rope increases with height, because it is carrying more of its own weight. But weirdly, in a very strong gravitational field, where spacetime itself is highly curved, the tension remains the same all along the length. However, for the rope to be stable, it turns out that this tension must exactly equal the mass per unit length of the rope: the rope has to be in effect at breaking point purely to support its own weight, so that there is no strength left over to support the box that will collect Hawking radiation.
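To put that in symbols (my paraphrase of the reasoning, not Brown’s own notation): a rope with mass per unit length m cannot sustain a tension greater than mc**2, because beyond that limit waves would travel along the rope faster than light. Near the horizon a static rope must carry a tension approaching exactly mc**2 just to support its own length, which is why there is nothing to spare for a payload.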

Another constraint on the rope is that it mustn’t disintegrate. Close to a black hole, the intense Hawking radiation creates a hot environment. If the rope is lowered too close to the event horizon, where the radiation is most plentiful, there’s a danger that the temperature will exceed that at which all ordinary matter – in other words, atoms themselves – melt into a gloop of their constituent quarks. If you make the rope too light, it’s more likely to melt. But if you make it too heavy, the rope itself is in danger of collapsing under its own gravity.

There’s another complication too. Brown shows that the box itself can’t be wider than a single wavelength of the Hawking radiation it is collecting, since otherwise the effects of relativity will pull it awry and cause the rope to break. That would make the collection process very cumbersome in any case – it would have to happen one photon (‘light particle’) at a time. To collect Hawking radiation of the wavelength of light, the boxes could be no bigger than typical bacteria, and to collect X-rays you’d need atom-sized boxes.

So here’s the deal. If you get too close to the black hole, the rope might melt or snap – or, if it’s made too massive to avoid that, it might collapse into itself. But if you try mining at a more cautious distance, there isn’t so much Hawking radiation there to collect. And Brown shows that even the best compromise makes energy extraction much slower than Unruh and Wald suggested.

Yet there is a better way, he says: do away with boxes altogether. In 1994, Albion Lawrence and Emil Martinec of the University of Chicago proposed that one could simply dip strings into a black hole and let Hawking radiation run up them like oil up the wick of an oil lamp. This was thought to be a slower process than hauling up boxes full of Hawking radiation, because each string carries up only one photon at a time. But Brown’s analysis shows that they would in fact both mine the hole at the same (slow) rate. Since dangling boxes introduce more potential for malfunction, Brown therefore argues that the preferable way to draw the energy from black holes is to puncture the event horizon with lots of photon-wicking strings, and let them drain it out of existence.

Reference: A. R. Brown, Physical Review Letters 111, 211301 (2013)

Tuesday, December 10, 2013

Who are you calling selfish?

There’s a bit of a fracas going on about David Dobbs’ article in Aeon on the obsolescence of the ‘selfish gene’: see here and here. In the first of these, Jerry Coyne has criticised the article as woefully misinformed; the second is Richard Dawkins’ response to it. There’s a crucial distinction here (though I’m not sure Coyne really wants to acknowledge it) between the accuracy of Dobbs’ scientific claims and the appropriateness of his objections to the selfish-gene metaphor. Steven Pinker apparently considers the problem here to be the fact that it “seems to be a congenital problem with science journalists [that] they think that it's a profound and revolutionary discovery that genes are regulated”. It might be nice if it were really that simple. But it isn’t. The real issue is whether the fact that genes are regulated (and perhaps more crucially, networked) means that “selfishness” is still an illuminating way to describe how they operate.

Take, for example, Coyne’s point that the polyphenism that Dobbs talks about – the fact that the same genes can create radically different phenotypes in a single organism – is triggered by a regulatory gene. (Coyne doesn’t say whether such a gene has yet been identified for grasshoppers or caterpillar/butterflies, but I’m happy to believe that this is indeed the probable origin of the morphological switch, regardless of whether we know the details.) In this sense, then, the transformation is certainly still under ‘genetic control’ and therefore adaptive in the same sense as any other genetic trait.

But does it mean that the regulatory gene in question – let’s call it gene A – is ‘selfish’? It’s hard to see any meaningful way in which this can be true. Coyne offers his own view of what it means: “during the process of natural selection, genes ‘act’ as if they were selfish.” In other words, it’s a metaphor. You know what, I think we got that already. We didn’t imagine it meant that genes habitually push to the front of queues and steal other genes’ wallets. What we need, though, is some notion of what he thinks “selfish” itself means in this context. That the gene plays a part in its own replication? I guess that’s what Coyne means, because later he says a selfish gene “promotes the reproduction of itself or its carrier.” But hang on – so it’s selfish if it promotes the reproduction of its carrier, meaning all those other genes too? So it’s not behaving in a way that is actually at the expense of other genes, but in fact benefits them? Like, say, the way we might play an active role in society so that it doesn’t collapse and we don’t get shot up by looters? Sorry, so where exactly does the selfishness come in – or do you mean that the gene acts “as if with enlightened self-interest” – which, behaviourists and indeed linguists will tell you, is not the same as selfishness?

Let’s see if we can figure out which of these versions of the pathetic fallacy we’re talking about here. (I fear I might be patronising if I point out that ‘pathetic fallacy’ is not a term of abuse, as though to say that the idea of the selfish gene is pathetically fallacious, but on past experience I’ve found it’s best not to underestimate some folks’ unfamiliarity with figures of speech, particularly if they can otherwise extract offence from them.) This adaptation of gene A relies on the other genes whose expression is modified by the switch doing what they need to do in response to the signal from A. If they don’t ‘comply’, A gets no advantage. Likewise, the adaptation that enables the other genes to realise these alternative phenotypes in response to A’s signal relies on A actually giving that signal at the appropriate time. In other words, there is an intimate cooperativity required here between the way the genes operate, if any of them is to enjoy the mutual benefit.

Now, selfish geneticists might say “But A doesn’t care about those other genes, it is only working for its own benefit!” Well, they might say that, but I hope they won’t, for then they’d be showing that they have fallen for their own metaphor. Gene A doesn’t of course care about its own survival either. A gene doesn’t care about anything; it’s just a bit of a molecule. To use its ‘indifference’ to its fellow genes as an argument for why it is ‘selfish’ is absurd. You could of course say “isn’t it equally spurious to call this behaviour cooperative?” But cooperative behaviour in inanimate particles has a clear meaning in chemical physics: it means that the result depends on the collective interactions between the particles: it can’t result from the behaviour of any one of them acting alone. It doesn’t mean the particles are ‘nice’, or even that they act as if they were ‘nice’.

This sort of argument for why genes can be better regarded as cooperative than selfish is well rehearsed. It is a key aspect of the objections to the selfish-gene metaphor raised by people like Gabriel Dover, Denis Noble and Steven Rose. Noble’s argument in The Music of Life is particularly compelling, and the fact that it is seldom addressed by selfish geneticists, who prefer to imply that it’s just ignorant journalists who get this stuff wrong, is I think something of a backhanded compliment to Denis. (Let me, for the record, point out that Jerry Coyne has certainly laid into Noble in no uncertain terms – but I haven’t seen a good refutation of his specific criticisms of the selfish-gene metaphor.)

To his credit, Richard Dawkins himself does acknowledge some of this. In the 30th anniversary edition of The Selfish Gene he says, for example, “Another good alternative to The Selfish Gene would have been The Cooperative Gene.” That’s because, he says, genes sometimes act in mutually supportive gangs. “Natural selection therefore sees to it that gangs of mutually compatible—which is almost to say cooperating— genes are favoured in the presence of each other.” The genes are, however, individually still “selfish”, Dawkins says, because they are not cooperating for the benefit of the others. But that assertion only makes sense if you ascribe intentions to the genes – in other words, if you fall for the metaphor (I guess it is for reasons like this that Steven Rose thinks Dawkins doesn’t really understand what a metaphor is). All you can say is that a mutual operation of genes works to their collective benefit. It is simply meaningless to say that in such a circumstance they are acting “as if” they are selfish, just as it is meaningless to say that they are acting “as if” they are altruistic. It is, in effect, implanting a value judgement where none is warranted. Why do that? Well, I’ll come to that shortly.

There’s a deeper level to this debate, however, which I don’t see Coyne taking on board at all. It is about causality. The argument for selfish geneism seems to be that if a gene’s activity results in a change in phenotype, the gene is responsible for it – that it is the ‘cause’. This is equivalent to the old argument that the assassination of Archduke Ferdinand caused World War I. I like to think of it another way. Suppose Howard Webb referees Chelsea vs Manchester United, and Chelsea win 1-0 (I’m going to trust US readers to make the necessary changes mutatis mutandis). Webb has obviously ‘caused’ that result, as well as all the moves that led to it, because he blew the whistle that began the game (and indeed, intervened several times during the match too). The next season Webb referees the same game, but this time Man U triumph 2-0. That’s weird, because the teams have identical players, the pitch is the same, and so on. But Webb did a few things differently this time – he awarded Man U a penalty, say – so he’s obviously the cause of the difference.

The fact is that, of course, the course and outcome of both matches relies on all the players knowing what is required of them, and doing it. There’s already other crucial information in the system. Howard Webb wasn’t the cause of any of it, except in the important sense that without him either chaos would have ensued or the matches would never have started.

If this seems like a fatuous example, or a thin analogy (and sure, best not to push it too far), take a look at Hoel et al., PNAS 110, 19790; 2013 (here). This makes it clear that there are some complex systems in which causality must be seen as a property of higher-level modes of organization, and can’t be meaningfully ascribed to a microscopic event. If that is true in genetics, then neither evo nor devo can necessarily be considered to be under the causal control of specific genes. I don’t mean that the genes don’t underlie the processes, but just that causality does not reside therein. Or to be clear (because there’s a pathological inclination for words to be twisted in some of these disputes), there are of course plenty of cases where specific adaptive phenotypes can be attributed to specific genes (and so can be considered the result of selection at the genetic level), but there’s no reason to think that this is the generic or universal picture, and plenty of reason not to. That doesn’t deny the crucial importance of genes in evo/devo, any more than one would deny the importance of individual actions and decisions in the outbreak of World War I.

One might want to say that if the ‘selfish gene’ metaphor works for Coyne, why not let him have it – it’s only a metaphor, after all. And I’m not unsympathetic to that. But it is of course not just Coyne – this metaphor has powerfully affected the way genetics and evolution have been presented to the public. And I don’t think it is at all unlikely (nor does Gabby Dover) that it has contributed in a major way to the prevailing notion of the “one-gene one-trait” picture that now even geneticists are finding an albatross: how can genes be operating in cooperative networks if each is only looking out for itself? I’m not saying that the selfish geneticists deny that they do, only that one of the many problems with the selfish gene picture is that it implies relentless individualism.

We should probably be honest about this too: it is surely no coincidence that the most vocal adherents of the selfish gene are the same folks who are most vocally anti-religious. It’s hard not to suspect that one of the attractions of this picture is its very harshness: not only does the universe not care in the slightest about your welfare (and I agree with that) but the most fundamental principles of life are positively ‘unkind’ and antagonistic – nasty if you like – and thus as far as it’s possible to get from your fluffy divine benevolence. Can’t you sense a gleeful “take that!” in the way Richard Dawkins serves up this stuff?

I might be unfair here, but I guess I’m searching for a reason why these smart folks are so reluctant to relinquish what is demonstrably a bad metaphor. After all, as Larry Moran (who is no slouch when it comes to beating on religion, although he picks his targets – creationists and ID-ers – rather more selectively) has pointed out, the selfish gene has been largely dead for decades in evolutionary biology.

One final point, since it seems to be a common trope in cases like this for scientists to decry journalists’ ignorance of their subject’s history. Forgive me if I’m wrong, but I have never seen selfish geneticists acknowledge that or explain why their definition of ‘selfish gene’ is different from that typically used in the 1980s by leading thinkers such as Francis Crick, Leslie Orgel, Gabriel Dover, and Ford Doolittle (Nature 284, 601 & 604; 1980). Those guys used selfishness specifically to refer to that subset of genes or genetic elements that have a propensity to proliferate in multiple copies throughout the genome – it was not a property of all genes that enabled them to benefit from natural selection. Indeed, this kind of selfish DNA, said Orgel and Crick, makes no specific contribution to the phenotype. Dawkins mentioned such genetic elements in The Selfish Gene, but selfish geneticists have subsequently been quite happy to see this ‘selfishness’ become a universal attribute of genes. That is evidently not how Crick saw it: he and Orgel make the distinction with ‘business as usual’ genetic selection very explicit.

Theirs seems to be a much more viable idea of selfishness, for the multiple copies of genes don’t benefit the organism. At best this accumulation of ‘junk’ is neutral to the organism, but it is potentially detrimental in the long term, providing a good illustration of the short-termism of natural selection. In this sense, then, selfishness is not a property that enables evolution to happen, but an inevitable by-product caused by its difficulty in dealing with parasitic freeloaders (for a modern view, see J. H. Werren, PNAS 108 (supplement 2), 10863; 2011). I’d much rather see selfishness reserved for this kind of situation. And so would many others. It seems to me that Coyne does a disservice by not acknowledging that the ‘selfish’ metaphor has a long and distinguished history of being applied only in this very restrictive and particular context.

I don’t want to be unnecessarily confrontational. Coyne has done a fine and important job in the past of defending evolution against idiotic attacks, and arguably this is just a debate about the packaging of a process whose basic details are not in doubt. But it’s because I do what I do that I think that packaging is important.

Monday, December 09, 2013

Birds reveal a new facet of their personality

Here’s the original of my latest news story for Nature.

_____________________________________________________________

Some birds are predisposed to signal their intentions more clearly than others.

Some animals, like some people, are more aggressive than others – it's just the way they are. But new research suggests that, for birds at least, personality is more subtle than that. Some are inclined to give out exaggerated signs of their aggressiveness, others to underplay it.

It's rather like the menacing biker who turns out to be a pussy-cat, or the wimpy geek who will break a bottle over your head. But the analogy with humans goes only so far, because many birds announce their aggression about mating and territory not by appearance but by song and gesture.

For example, the song sparrow indicates its intention to attack a dummy bird or a loudspeaker playing back its songs either by vocalizing distinctive ‘soft songs’ or by fluttering its wings (so-called wing waves), both of which are perceived as threatening [2].

Both aggressive signaling and the ensuing aggressive behaviour vary from one bird to another, in a way that correlates with other personality traits such as boldness [1]. But these attributes also vary for a single individual at different times – birds can have particularly grouchy or placid days. The degree of aggression implied by the precursory signals generally reflects the actual behaviour – it is what evolutionary biologists call an “honest signal”.

But not entirely honest. Earlier this year Michael Beecher and colleagues at the University of Washington in Seattle showed that there’s some variability in aggressive signaling that doesn’t match the behaviour: a bird might act stroppy but not follow through with an attack [2].

This variability could be just random, an imponderable quirk of bird-brains. But now Beecher and colleagues say it isn’t [3].

The researchers studied 69 labelled male song sparrows in their natural habitat during autumn and spring. They played the birds their own songs (which elicit aggression just as ‘stranger songs’ do) and watched how they responded – whether they displayed the aggression signals of soft songs and wing waves, and whether they followed through by attacking the loudspeakers or a dummy bird.

They found that, after allowing for variations that provide an honest signal of a bird’s fluctuations in aggressive mood, the remaining variability – if you like, the dishonest part of it – seems to be consistently displayed by particular birds.

Some have a predisposition – consistent from one season to the next – to give out false signals of how aggressive they intend to be, suggesting either too much or too little. Others are more consistently ‘honest’. Beecher and colleagues say that this behaviour too seems to be a robust characteristic of an individual bird’s personality, which the researchers call “communicativeness”.

“This is an important and novel paper”, says William Searcy, a behavioural ecologist at the University of Miami. “I think it’s highly likely that behaviours one can define in song sparrows can be identified in other birds, and other animals as well”, adds Jeremy Hyman of Western Carolina University, a specialist in bird behaviour.

Habitual ‘over-signallers’ may be good bluffers, who gamble on scaring away rivals that they won’t actually dare fight. ‘Under-signallers’, who attack without much warning, are harder to explain. “This behaviour is intriguing, and hasn't really been discussed in theory”, says Beecher. “There are benefits to signaling – a fight is avoided, potentially beneficial to both parties – so why not do it?”

One possibility is that under-signallers are genuine tough guys, so likely to win a bout that it’s not worth their while bothering with scare tactics. In this case the behaviour could be a beneficial adaptation. But another possibility is that some individuals just aren’t very good at getting the signaling codes right – it’s not an adaptation but a mistake.

“I don’t think there is enough evidence yet to know whether individual adaptive or error-based theories are more correct”, says Hyman. He adds that why personality traits exist at all is still a big question, but says “I think there’s enough evidence of links between personality and fitness to conclude that behavioural variation isn’t [adaptively] neutral.”

References
1. Bell, A., Hankison, S. J. & Laskowski, K. L. Anim. Behav. 77, 771-783 (2009).
2. Akçay, Ç., Campbell, S. E., Tom, M. E. & Beecher, M. D., Proc. R. Soc. B 280, 20122517 (2013).
3. Akçay, Ç., Campbell, S. E. & Beecher, M. D., Proc. R. Soc. B 281, 20132496 (2014).

Sunday, December 08, 2013

Quantum computers: when, what, who and why

I have a piece in December’s Prospect on quantum computing – here’s the original draft.

__________________________________________________

When people first hear about quantum computers, a common response is “where and when can I get one?” But that’s the wrong question, and not just because you’ll be disappointed with the answer. Quantum computers are often said to promise faster, bigger, more multi-layered computation – but they are not, and might never be, an upgrade of your laptop. They’re just not that sort of machine. So what are they, and why do we want them?

You could argue that your laptop is already a quantum computer, because the laws of quantum physics govern the ways electrical currents pass through its ultra-small transistors and wires. Partly that’s just saying that ultimately quantum physics governs all the properties of materials. Increasingly, however, strange quantum effects that don’t usually manifest in the everyday world, such as the ability of electrons to leap through walls, are becoming important as the scale of microelectronics shrinks. This ‘quantum tunnelling’, for example, is the basis of flash memory, and also threatens to make transistors ‘leaky’ as they get ever smaller.

Real quantum computers go far beyond any of that, however. In the end, all of today’s computers work using old-fashioned binary logic: by encoding information in strings of 1’s and 0’s, represented for example by electrical pulses in circuits or by flashes of light in optical fibres. These so-called bits are manipulated in logic gates, built from electronic components such as transistors. Here a particular set of input bits prompts the gate to produce another set of output bits. That’s what computation is; the rest is a question of building software and interfaces that turn these bits into a letter to Mum glowing on the screen.

Quantum computers will also use 1’s and 0’s, but with a crucial difference. As well as having one or the other of these values, a quantum bit (qubit) could have any mixture of them. Counter-intuitively, it can be simultaneously a 1 and a 0, or 1 with a tiny bit of 0, and so on. These mixtures are called superpositions, and they are a fundamental feature of objects that obey quantum rules. A photon of light, for example, can be polarized either vertically or horizontally, or can be in a superposition of both polarizations.

That gives qubits access to a vast range of states, so you can encode much more information in them. [OK, I’m keeping this in for now in the interests of honesty to the moment – but watch this space for an explanation of why this is far too simplistic, and perhaps even too erroneous, a way to describe quantum computing…] In short, it enables quantum computers to perform very many calculations simultaneously where a classical computer can do only one at a time with any given set of bits. It is this that provides the quantum computer with its tremendous speed-up. To factorize a big number classically (to find the prime numbers that multiply together to give it), a computer plods through all the possible answers, while a quantum computer can assess them all, encoded in superpositions of qubits, at basically the same time.
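For readers who want to see the bookkeeping, here is a minimal Python sketch (my own illustration, using NumPy, and not tied to any of the machines described below) of how a quantum state is represented: n qubits require 2**n complex amplitudes, and the squared magnitude of each amplitude gives the probability of reading out that particular string of 1’s and 0’s.

    import numpy as np

    # A single qubit is a pair of amplitudes, one for 0 and one for 1.
    # This is an equal superposition of the two:
    qubit = np.array([1, 1]) / np.sqrt(2)

    # Three qubits together: the joint state is the tensor (Kronecker) product,
    # giving 2**3 = 8 amplitudes, one for every possible 3-bit string.
    state = np.kron(np.kron(qubit, qubit), qubit)

    # Measurement probabilities are the squared magnitudes of the amplitudes.
    probs = np.abs(state)**2
    for index, p in enumerate(probs):
        print(format(index, "03b"), round(p, 3))  # each of 000 to 111 comes up with probability 1/8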

So where’s the catch? It is that quantum phenomena such as superpositions are generally very delicate. They get easily disrupted or destroyed by disturbances from the surrounding environment, particularly the randomizing effects of heat. So to make such states usually requires very low temperatures. This fragility of quantum effects means that, while the question of what you could do with a quantum computer has been explored extensively already by physicists and mathematicians, actually building a device that can do any of it is taxing electrical engineers and applied physicists to the limit.

Now there are signs of real progress. The community was set buzzing two years ago when a Canadian company called D-Wave (“the world’s first commercial quantum computing company”) announced that it had created the first practical quantum computer: a black box, if you will, that could actually solve stuff. But several researchers questioned whether D-Wave’s device was really a true quantum computer at all, or just a fancy box of tricks that made token nods towards quantum effects. It employs an approach called ‘quantum annealing’, which is different from most theories of quantum computing and for which any real advantages over classical computing have yet to be shown.

At Raytheon BBN Technologies, based in Cambridge, Massachusetts, researchers are convinced that they are closing in on the real thing. Conveniently close to Harvard and the Massachusetts Institute of Technology, BBN was founded in 1948 and was intimately involved in the development of the earliest military networks that became the Internet. In 2009 the company became a subsidiary of the US defence contractor Raytheon. It has been seeking to develop so-called quantum information technologies since 2001, when the company’s researchers devised an optical telecommunications network that exchanged light signals, encoded in superpositions of photons, between their headquarters and nearby Harvard and Boston Universities. Such networks, which could be immune to eavesdropping, have now been developed in many places in the world.

But the quantum computer, which actually does number-crunching, is a bigger challenge. To make qubits, Raytheon BBN uses the same fundamental circuit components as D-Wave does. Called superconducting Josephson junctions, these are metal contacts cooled so deeply that they have become superconductors (that is, they have no electrical resistance), electrically connected to each other via a thin barrier of insulating material. Superconductivity is itself a quantum-mechanical effect, which is why it requires low temperatures, and the superconducting current can flow in distinct quantum states. A Josephson junction helps to filter out all but two states, which correspond to the binary 1’s and 0’s. It is possible to manipulate these states, for example creating specific superpositions, using pulses of microwave radiation. That’s the physical basis of BBN’s qubit circuits, which have to be cooled to within a daunting 50 thousandths of a degree of absolute zero.

Even then, the superpositions don’t last long. Yet to do practical quantum computing, they need only survive for as long as it takes to juggle with them in quantum logic gates. In recent years, says Zachary Dutton, lead scientist of Raytheon BBN’s Quantum Information Processing group, these so-called coherence times have increased dramatically, and are now at a level – tens to hundreds of microseconds – where the devices can actually perform logic processing.

Another critical issue for these quantum gates is the so-called error rate: how accurately they can be switched between states by the microwave signal. If you get this a little wrong – say, by making too much of one state in the superposition – the errors accumulate until, even if one stores the same information several times for cross-checking, too many mistakes derail the whole computation. Getting the error rate small enough to avoid this remains one of the key tasks.

At present the Raytheon BBN team, which is collaborating with the computer giant IBM, doesn’t have anything even vaguely like a quantum computer. Rather, the researchers are focusing on getting very small systems – currently three qubits, but soon to be eight – to work well enough that they can be assembled into large-scale circuits. “If you looked at a circuit diagram of a quantum computer”, says Dutton, “this would be a little piece of it.” The extreme cooling “needn’t be a showstopper”, he adds, because refrigeration technologies have advanced so much in recent years, for example so that they don’t need constant refilling with a coolant such as liquid helium.

Exotic quantum states in ultracold superconducting wires might sound like a complicated basis for making qubits. But the same approach is being taken by several of the leading academic centres of quantum computing, including MIT, Yale and the University of California at Santa Barbara. It’s by no means the only option. Another popular approach, for example, is to encode information in the quantum-mechanical energy states of individual atoms or ions suspended in free space using electromagnetic fields to trap them there. The information can be programmed, manipulated and read out using lasers to probe and alter the states of the trapped ions. Christopher Monroe, who is using this approach at the University of Maryland, feels that “there will be some interesting results in the next several years in both Josephson junction and [ion-trap] atomic machines”. He concurs that, unlike the 512-qubit D-Wave devices, those under development at Raytheon BBN are “legitimately quantum”.

What would you use a quantum computer for? Monroe says that the first demonstrations of quantum computing will probably be solving “some esoteric physics problem”, not providing a general-purpose computer. There are, however, some important possible uses that anyone can appreciate. Fast factorizing of huge numbers is one such, since all current data encryption methods rely on the difficulty of doing this with classical computers. Quantum computers would change the whole game in data security.

For basic science, one of the most appealing applications would be to perform computer simulations of molecules and materials. These are governed by quantum rules, and classical computers are forced to solve the equations by laborious and merely approximate mathematical methods. Quantum computers, in contrast, could map such quantum behaviour directly and exactly into their algorithms, so that simulations that currently take days might be possible in seconds, helping to make better predictions of the properties of new drugs and materials.

Currently, the most taxing computational problems are tackled by massive, expensive supercomputers housed in a few specialized centres and leased to users. That’s what the initial market for quantum computers will look like too, says Dutton – not really a market at all, but a highly centralized oligopoly. But of course all computers used to be like this: huge mainframes dedicated to recondite problems. Mindful of IBM founder Thomas Watson’s (possibly apocryphal) prediction in 1943 that this is what computers would always be – Watson is said to have forecast a world market for perhaps five of them in total – it would be an unwise prophet who forecasts where quantum computers might be decades down the line.

Tuesday, December 03, 2013

What innovation really is


Here is my current Crucible column for Chemistry World. The plot above shows how chemistry’s ‘connectedness’ to other disciplines falls out in this analysis of citations – the size of the circles reflects the number of papers included in the analysis, the colours show the interdisciplinarity: the bluer, the more so.

_____________________________________________________________

How do you write a hit paper? The rise of bibliometrics and citation data-mining threatens to turn the answer into a reductive prescription: have many coauthors, make the paper longer, choose an assertive, catchy title. Yet the truth is that we have always known what generates the best chance of success: have a really interesting and productive idea, report it clearly and carefully, and publish it in a good journal.

That’s why a new paper analysing the ingredients of high-impact scientific papers (as defined by their citation counts) is best not viewed as another ‘how to’ formula. Rather, what Brian Uzzi and Ben Jones of Northwestern University in Illinois and their colleagues have supplied [B. Uzzi et al., Science 342, 468 (2013)] is a retrospective account of why some papers made their mark. It’s a bit like examining why the Beatles’ songs are so popular – it’s one thing to explain it, quite another to use that knowledge to write another “Eleanor Rigby”.

The real value of this work is in underlining the importance of innovative thinking – as well as clarifying what genuine novelty consists of. The idea is ingenious in itself (my guess is that if the researchers trained their lens on their own paper, it would predict considerable impact). While it is hard to quantify the intrinsic novelty of the ideas expressed in a paper, the reference list generally gives a fair indication of the intellectual heritage on which they draw.

So if the references are all taken from a narrow body of highly specialized and specific work, the chances are that the paper itself represents just another incremental advance in that area, and is going to have limited appeal outside a tiny circle. But a paper with a bizarrely diverse selection of references – here the Journal of Natural Products, there Kierkegaard’s Fear and Trembling – all too probably indicates a comparable incoherence in the authors’ minds.

What Uzzi and colleagues consider, then, is the balance between ‘typical’ and ‘atypical’ in the reference list. Using a database of 17.9 million papers in the Web of Science spanning all scientific fields (in fact they have ventured into the humanities too), the researchers looked at how often all possible pairs of papers (or journals) in a given year were cited together. A comparison against purely random pairings then reveals how ‘conventional’ such a pairing is, enabling an enumeration of the conventionality of any paper’s entire reference list.
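In outline, the calculation might look something like the toy Python sketch below (my simplification, with invented journal names and a crude reshuffling of reference lists standing in for the study’s more careful randomization of the citation network).

    import itertools, random
    from collections import Counter

    # Each paper is represented here simply by the journals in its reference list.
    papers = [["JACS", "Angew. Chem.", "Chem. Commun."],
              ["JACS", "Angew. Chem.", "J. Appl. Phys."],
              ["Phys. Rev. B", "J. Appl. Phys."]]  # toy corpus

    def pair_counts(corpus):
        """Count how often each pair of journals appears together in one reference list."""
        counts = Counter()
        for refs in corpus:
            for a, b in itertools.combinations(sorted(set(refs)), 2):
                counts[(a, b)] += 1
        return counts

    observed = pair_counts(papers)

    # Random baseline: shuffle the journals across papers (keeping each list's length)
    # and recount, many times over, to estimate how often each pairing occurs by chance.
    def shuffled_counts(corpus, trials=1000):
        expected = Counter()
        pool = [j for refs in corpus for j in refs]
        for _ in range(trials):
            random.shuffle(pool)
            it = iter(pool)
            fake = [[next(it) for _ in refs] for refs in corpus]
            for pair, c in pair_counts(fake).items():
                expected[pair] += c / trials
        return expected

    expected = shuffled_counts(papers)

    # 'Conventionality' of a pairing: how much more often it occurs than chance predicts.
    for pair in observed:
        print(pair, round(observed[pair] / (expected[pair] or 1e-9), 2))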

It will surprise no one to hear that scientific papers are on the whole highly conservative by this measure. But Uzzi and colleagues figured that the relatively unusual combinations of citations – those in the tail of the distributions – might be particularly revealing. They found that even these tended to be ‘typical’: the less-common pairings of journal A, say, tend to be with journals G and K rather than with a random scatter of other titles.

The real story emerges when these citation patterns are compared between high-impact and low-impact papers. The former are no less firmly embedded in convention – except, crucially, for the unusual reference combinations in the tail of their distributions, which show a strong degree of novelty. In other words, these papers anchor themselves to a substantial body of related, specialized work, but inject into it ideas and results from farther afield than lower-impact papers tend to reach. “Thus, novelty and conventionality are not opposing factors in the production of science”, Uzzi and colleagues say. As one might imagine, novelty in this sense seems to appear more often in papers written by collaborating teams, which can mine insights from different disciplines.

What does all this mean in chemistry? The paper itself gives no breakdown by discipline, but Brian Uzzi has kindly supplied me with a few indicators. On the basis of how often papers within the discipline cite ones from outside, chemistry scores highly as an interdisciplinary subject – second only to biology, comparable to medical research, and better than, say, physics or earth sciences. Moreover, its cross-disciplinary handshakes are very diverse, although the affinities with medicine and biology are evident. On this basis, the common claim that chemistry is the “central science” seems well justified.

No doubt individual case histories of high-impact chemical papers would tell instructive stories. Two papers from Chemical Communications in 1994 offer a representative snapshot. One, on polymer synthesis, reaches out only to other journals of polymer and organic chemistry but without even the benefit of conventional pairings therein. It had 12 citations. Another, on the synthesis of derivatized gold nanoparticles, combines popular pairings such as JACS-Angewandte Chemie with novel links to the literature on clusters; it had nearly 4,000 citations.

If you want a moral, it is surely to talk to people outside your group, and ideally outside your department, and if possible work with them. But at the same time don’t neglect the core of your own subject. Easily said, I know – but the best advice usually is.

Monday, December 02, 2013

Ome sweet ome?

Here’s my latest piece for the Prospect blog.

________________________________________________________________

Chances are that every biologist now has an ome to go to. This suffix, first introduced in the genome (the sum total of all an organism’s genes), can now be found attached to just about every aspect of life’s molecular basis. There is the proteome (the full complement of protein molecules in an organism), the glycome (all the sugars), the epigenome (all the non-genetically encoded regulation of gene activity), the lipidome (all the fatty-acid lipids of cell membranes). Omes embrace wider concepts too. The metabolome comprises all the molecules involved in metabolism; the interactome is the network of interactions between genes and other molecules; the phenome is the total of all distinct observable traits (phenotypes), and so on. The integrome is the ome of all the omes: an ome from ome, you might say.

The proliferation of these neologisms has understandably attracted criticisms and ridicule, and even the founding editor of a new journal called Omics told Nature that “most of them will not make sense.” Some researchers suggest that they are just a way of investing an established field – such as the study of metabolic biochemical processes – with the kudos that has become attached to genomics. They are also a marketing ploy: if you have an ome, you surely need your own distinct funding stream.

Geneticist Jonathan Eisen of the University of California at Davis talks about “badomics”, and sees the spread of omes as a pernicious meme that adds clutter and confusion, as well as implying a sometimes misleading analogy to the aims and concepts of genomics. He compares it with the indiscriminate appending of -gate to every political blunder post-Watergate. “Some of the omes I have the most trouble with are not even remotely comprehensive, but are simply collections of a small set of some facts about one minor entity”, says Eisen, citing for example the nascentosome (incompletely assembled protein molecules) and the predatosome (genes involved in bacterial predation).

This scepticism is valid, but it doesn’t necessarily get to the core of what is both bad and potentially constructive in the omics fad. An ome is basically a list of parts, whether those are physical entities such as molecules or more abstract such as connections or properties. There is great potential value in such a list, provided that it is comprehensive. If one can consult the proteome to look up the chemical structure of a protein associated with a disease-linked gene, say, then one might be able to design a drug molecule that intervenes in the protein’s behaviour. But a list of parts is not an explanation for their collective function, as any electrical engineer or car mechanic will tell you. Omes are in fact the modern equivalent of what Francis Bacon in the seventeenth century called ‘histories’ – exhaustive collections of all possible facts about a given phenomenon, such as cold or comets. Bacon was convinced that preparing histories was the essential first step in natural philosophy, and he set about devising a scheme for distilling these heaps of facts into real knowledge and insight. But that scheme was absurdly elaborate and never even completed, let alone put into practice. The early scientists found, in spite of their Baconian convictions, that this could never be the way to do science – they were compelled to draw up hypotheses and theories, even before all the ‘facts’ were in, for otherwise there is no way to prioritize or organise what you are looking for.

This is another way of saying that omics will not be science until it works within a framework that allows for hypothesis-testing. Merely searching vast databases for correlations is worse than futile, because it will inevitably produce false positives – spurious relationships between events or entities – while remaining silent about the root mechanisms. There’s a difference between knowing which parts work together and knowing how they do so.

It seems that the converse is also true: causative principles might not announce themselves at the level of the basic components. This has become embarrassingly clear in genomics: for many traits or diseases that are evidently inheritable, it has proved possible to identify only a small fraction of the genes responsible, even with the whole human genome at our fingertips. Causation might stem instead from higher levels of organization.

But that leads to one of the positive aspects of the omics craze. It was largely stimulated in the first place by the anticlimactic realization of how much was left unsaid by the human genome projects. We need to know not just what genes we have, but what protein molecules they encode (for these are ultimately the cell’s primary machinery), and how much the gene is actually used, or ‘transcribed’. Enter the proteome and transcriptome. Then we need to know how genes and proteins act together – the interactome, metabolome and so forth – and what other molecules are crucially involved – the glycome, lipidome and so on. What’s more, because some of these sets of molecules are closer to the physiological end of an organism’s functioning, it seems likely that we might find clearer, less ambiguous and more immediate markers of disease and pathology in these other omes than in the genome. Profiling of lipids, for example, might point to incipient diet-related disease.

In other words, the proliferation of omes marks a recognition – never doubted, but long sidelined by the glamour of genomics – that there is much more to life than genes, many of which are better regarded not as ruthless dictators of the cell but as referees that keep the game on track. Omics could thus represent the start – even if clumsy and too overtly list-obsessed – of a return to a more integrated view of what life is.