Friday, November 29, 2013

Open season on dark matter

Here’s my last story for BBC Future.

_______________________________________________________________

Who will find dark matter first? We’re looking everywhere for this elusive stuff: deep underground, out in space, in the tunnels of particle colliders. After the Higgs boson, this is the next Big Hunt for modern physics, and arguably there’s even more at stake, since we think there is more than four times as much dark matter as all the stuff we can actually see.

And you can join the hunt. It’s probably not worth turning out your cupboards to see if there’s any dark matter lurking at the back, but there is a different way that all comers – at least, those with mathematical skills – can contribute. A team of astronomers has reported that crowdsourcing has improved the computational methods they will use to map out the dark matter dispersed through distant galaxies – which is where it was discovered in the first place.

The hypothesis of dark matter is needed to explain why galaxies hold together. Without its gravitational effects, rotating galaxies would fly apart, something that has been known since the 1930s. Yet although this stuff is inferred from its gravity, there’s nothing visible to astronomers – it doesn’t seem to absorb or emit light of any sort. That seems to make it a kind of matter different from any of the fundamental particles currently known. There are several theories for what dark matter might be, but they all have to start from negative clues: what we don’t know or what it doesn’t do.

The current favourite invokes a new fundamental particle called a WIMP: a weakly interacting massive particle. “Weakly interacting” means that it barely feels ordinary matter at all, but can just pass straight through it. However, the idea is that those feeble interactions are just enough to make a WIMP occasionally collide with a particle of ordinary matter and generate an observable effect in the form of a little burst of light that has no other discernible cause. Such flashes would be a telltale signature of dark matter.

To see them, it’s necessary to mask out all other possible causes – in particular, to exclude collisions involving cosmic rays, which are ordinary particles such as electrons and protons streaming through space after being generated in violent astrophysical processes such as supernovae. Cosmic rays are eventually soaked up by rock as they penetrate the earth, and so several dark-matter detectors are situated far underground, at the bottom of deep mineshafts. They comprise sensitive light detectors that surround a reservoir of fluid and look for inordinately rare dark-matter flashes.

One such experiment, called LUX and located in a mine in South Dakota, has recently reported the results of its first several months of operation. LUX looks for collisions of WIMPs within a tank of liquid xenon. So far, it hasn’t seen any. That wouldn’t be such a big deal if it weren’t for the fact that some earlier experiments have reported a few unexplained events that could possibly have been caused by WIMPs. LUX is one of the most sensitive dark-matter experiments now running, and if those earlier signals were genuinely caused by dark matter, LUX would have been expected to see such things too. So the new results suggest that the earlier, enticing findings were a false alarm.

Another experiment, called the Alpha Magnetic Spectrometer (AMS) and carried on board the International Space Station, looks for signals from the mutual annihilation of colliding WIMPs. And there are hopes that the Large Hadron Collider at CERN in Geneva might, once it resumes operation in 2014, be able to conduct particle smashes at the energies where some theories suggest that WIMPs might actually be produced from scratch, and so put these theories to the test.

In the meantime, the more information we can collect about dark matter in the cosmos, the better placed we are to figure out where and how to look for it. That’s the motivation for making more detailed astronomical observations of galaxies where dark matter is thought to reside. The largest concentrations of the stuff are thought to be in gravitationally attracting groups of galaxies called galaxy clusters, where dark matter can apparently outweigh ordinary matter by a factor of up to a hundred. By mapping out where the dark matter sits in these clusters relative to their visible matter, it should be possible to deduce some of the basic properties that its mysterious particles have, such as whether they are ‘cold’ and easily slowed down by gravity, or ‘hot’ and thus less easily retarded.

One way of doing this mapping is to look for dark matter via its so-called gravitational lensing effect. As Einstein’s theory of general relativity predicted, gravitational fields can bend light. This means that dark matter (and ordinary matter too) can act like a lens: the light coming from distant objects can be distorted when it passes by a dense clump of matter. David Harvey of the University of Edinburgh, Thomas Kitching of University College London, and their coworkers are using this lensing effect to find out how dark matter is distributed in galaxy clusters.
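To get a sense of the effect, the textbook general-relativistic result for light passing a point mass M at a closest distance b – a simplified formula, not the cluster-scale modelling the researchers themselves need – gives a deflection angle

\[ \alpha = \frac{4GM}{c^{2}b}, \]

where G is the gravitational constant and c the speed of light. For real clusters the distortions are tiny, which is why the lensing signal usually has to be teased out statistically from the shapes of many background galaxies.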

To do that, they need an efficient computational method that can convert observations of gravitational lensing by a cluster into its inferred dark-matter distribution. Such methods exist, but the researchers suspected they could do better. Or rather, someone else could.

Crowd-sourcing as a way of gathering and analysing large bodies of data is already well established in astronomy, most notably in the Zooniverse scheme, in which participants volunteer their services to classify data into different categories: to sort galaxies or lunar craters into their fundamental shape classes, for example. Humans are still often better at making these judgements than automated methods, and Zooniverse provides a platform for distributing and collating their efforts.

What Harvey and colleagues needed was rather more sophisticated than sorting data into boxes. To create an algorithm for actually analysing such data, you need to have some expertise. So they turned to Kaggle, a web platform that (for a time-based fee) connects people with a large data set to data analysts who might be able to crunch it for them. Last year Kitching and his international collaborators used Kaggle to generate the basic gravitational-lensing data for dark-matter mapping. Now he and his colleagues have shown that even the analysis of the data can be effectively ‘outsourced’ this way.

The researchers presented the challenge in the form of a competition called “Observing Dark Worlds”, in which the authors of the three best algorithms would receive cash prizes totalling $20,000 donated by the financial company Winton Capital Management. They found that the three winning entries could improve significantly on the performance of a standard, public algorithm for this problem, pinpointing the dark matter clumps with an accuracy around 30% better. Winton Capital benefitted too: Kitching says that “they managed to find some new recruits from the winners, at a fraction of the ordinary recruiting costs.”
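For a flavour of what such an algorithm has to do, here is a minimal Python sketch – emphatically not the winners’ code (the winning entries used far more sophisticated statistical methods), nor the standard benchmark itself, just a toy of the same general kind. It assumes a catalogue of galaxy positions and ellipticity components (e1, e2) of the sort the competition supplied, and scans a grid for the point about which the galaxies look most stretched tangentially – the signature of an intervening clump of mass. The field size and grid spacing here are arbitrary illustrative choices.

```python
import numpy as np

def mean_tangential_ellipticity(gal_xy, e1, e2, point):
    """Average tangential ellipticity of the galaxies about a trial point.
    A dark-matter clump shears background galaxies tangentially around it,
    so this signal should peak near the true halo position."""
    dx = gal_xy[:, 0] - point[0]
    dy = gal_xy[:, 1] - point[1]
    phi = np.arctan2(dy, dx)            # angle from the trial point to each galaxy
    return np.mean(-(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi)))

def locate_halo(gal_xy, e1, e2, field=4200.0, step=100.0):
    """Brute-force grid search for the single most likely halo position."""
    best, best_signal = None, -np.inf
    for hx in np.arange(0.0, field, step):
        for hy in np.arange(0.0, field, step):
            s = mean_tangential_ellipticity(gal_xy, e1, e2, (hx, hy))
            if s > best_signal:
                best, best_signal = (hx, hy), s
    return best, best_signal

# Toy usage with made-up data: 500 galaxies with random positions and shapes
rng = np.random.default_rng(0)
gal_xy = rng.uniform(0, 4200, size=(500, 2))
e1, e2 = rng.normal(0, 0.2, 500), rng.normal(0, 0.2, 500)
print(locate_halo(gal_xy, e1, e2))
```

A competitive entry has to cope with several haloes per sky, noisy galaxy shapes and much cleverer statistics than this – which is where the competitors earned their 30% improvement.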

It’s not clear that the ordinary citizen can quite compete at this level – the overall winner of Dark Worlds was Tim Salimans, who this year gained a PhD in analysis of “big data” at the Erasmus University Rotterdam. The other two winners were professionals too. But that is part of the point of the exercise: crowd-sourcing is not just about soliciting routine, low-level effort from an untrained army of volunteers, but also about connecting skilled individuals to problems that would benefit from their expertise. And the search for dark matter needs all the help it can get.

Happy birthday MRS

Of all the regular meetings that I used to attend as a Nature editor, the one I enjoyed most was the annual Fall meeting of the US Materials Research Society. Partly because it was in Boston, but also because it was always full of diverse and interesting stuff, as well as being of a just about manageable scale. So I have a fondness for the MRS and was glad to be asked to write a series of portraits of areas in materials science for the MRS Bulletin to mark the society’s 40th anniversary. The result is a piece too long to set down here, but the kind folks at MRS Bulletin seem to have made the article freely available online here.

Tuesday, November 26, 2013

Shape-shifting

Oh, here’s one from BBC Future that I almost missed – the latest in ‘illusion optics’. I have a little video discussion of this too.

__________________________________________________________

In the tradition whereby science mines myth and legend for metaphors to describe its innovations, you might call this shape-shifting. Admittedly, the device reported in the journal Physical Review Letters by researchers in China is not going to equal Actaeon’s transformation into a stag, Metis into a fly, or Proteus into whatever he pleased. But it offers an experimental proof-of-principle that, using ideas and techniques related to invisibility cloaking, one object can be given the appearance of another. Oh, and the device does invisibility too.

This versatility is what marks out the ‘cloak’ made by Tie Jun Cui of the Southeast University in Nanjing, China, and his coworkers at Lanzhou University as distinct from the now considerable body of work on invisibility cloaks and other types of “transformation optics”. Surprisingly, perhaps, this versatility comes from a design that is actually easier to fabricate than many of the ‘invisibility cloaks’ made previously. The catch is that these shape-changes are not something you can actually see, but are apparent only when the transformed object is being detected from the effect it has on the electrical conductivity of the medium in which it is embedded.

The most sophisticated ‘invisibility cloaks’ made so far use structures called metamaterials to bend light around the hidden object, rather like water flowing around an obstacle in a stream. If the light rays from behind the object are brought back together again at the front, then to an observer they seem not to have deviated at all, but simply to have passed through empty space.

Researchers have also shown that, by rerouting light in other ways, a metamaterial cloak can enable so-called ‘illusion optics’ that gives one thing the appearance of another. However, with metamaterials this is a one-shot trick: the cloak would produce the same, single visual illusion regardless of what is hidden within it. What’s more, genuine invisibility and illusion optics are tremendously challenging to achieve with metamaterials, which no one really yet knows how to make in a way that will work with visible light for all the wavelengths we see. So at present, invisibility cloaks have been limited either to microwave frequencies or to simplified, partial cloaks in which an object may be hidden but the cloak itself is visible.

What’s more, each cloak only does one sort of transformation, for which it is designed at the outset. Cui and colleagues say that a multi-purpose shape-shifting cloak could be produced by making the components active rather than passive. That’s to say, rather than redirecting light along specified routes, they might be switchable so that the light can take different paths when the device is configured differently. You might compare it to a fixed rail track (passive), where there’s only one route, and a track with sets of points (active) for rerouting.

Active cloaks have not been much explored so far beyond the theory. Now Cui and his coworkers have made one. It hides or transforms objects that are sensed electrically, in a process that the researchers compare to the medical technology called electrical impedance tomography. Here, electrical currents or voltages measured on the surface of an object or region are used to infer the conductivity within it, and thereby to deduce the hidden structure. A similar technique is used in geophysics to look at buried rock structures using electrodes at the surface or down boreholes, and in industrial processes to look for buried pipes. It’s a little like using radar to reconstruct the shape of an object from the way it reflects and reshapes the echo.
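To see why measurements at the edge carry information about what lies inside – the principle that Cui’s device then manipulates – here is a small Python sketch of a resistor grid, a crude stand-in for the conducting disk (nothing here is taken from the paper; the grid size and conductance values are arbitrary). Current is driven between two corners and the edge voltages are compared with and without a highly conducting ‘object’ buried in the middle; the difference is what an impedance-tomography measurement picks up, and what a cloak has to cancel.

```python
import numpy as np

def grid_laplacian(n, sigma):
    """Weighted graph Laplacian of an n x n resistor grid. sigma[i, j] is a
    nominal conductivity at node (i, j); each resistor linking two neighbours
    gets the average of the two node values as its conductance."""
    N = n * n
    idx = lambda i, j: i * n + j
    L = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):
                ii, jj = i + di, j + dj
                if ii < n and jj < n:
                    g = 0.5 * (sigma[i, j] + sigma[ii, jj])
                    a, b = idx(i, j), idx(ii, jj)
                    L[a, a] += g; L[b, b] += g
                    L[a, b] -= g; L[b, a] -= g
    return L

def edge_voltages(n, sigma, src, sink):
    """Inject unit current at node src, extract it at node sink (grounded),
    and return the voltages at the nodes around the edge of the grid."""
    N = n * n
    L = grid_laplacian(n, sigma)
    I = np.zeros(N)
    I[src], I[sink] = 1.0, -1.0
    keep = [k for k in range(N) if k != sink]
    v = np.zeros(N)
    v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], I[keep])
    edge = [i * n + j for i in range(n) for j in range(n)
            if i in (0, n - 1) or j in (0, n - 1)]
    return v[edge]

n = 9
uniform = np.ones((n, n))
with_object = uniform.copy()
with_object[3:6, 3:6] = 10.0     # a conducting 'object' hidden in the middle

src, sink = 0, n * n - 1         # drive current between opposite corners
delta = edge_voltages(n, with_object, src, sink) - edge_voltages(n, uniform, src, sink)
print("largest change in an edge voltage:", np.abs(delta).max())
```

In the passive version of the cloak, a surrounding ring of fixed resistors is chosen so that this difference vanishes for one particular interior; making those resistor values adjustable is what lets the same hardware mimic different interiors.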

Here, hiding an object would mean constructing a cloak to manipulate the electrical conductivity around it so that it seems as though the object isn’t there. And transforming its appearance involves rejigging the electric field so that the measurements made at a distance would infer an embedded object of a different shape. Cui and colleagues have built a two-dimensional version of such an illusionistic cloak, consisting of a network of resistors joined in a concentric ‘spider’s web’ pattern on an electrically conducting disk, with the cloaked region in a space at their centre.

To detect the object, an electrode at one position on the plate sets up an electric field, and this is measured around the periphery of the plate. Last year Cui and his colleagues made a passive version of an invisibility cloak, in which the resistor network guided electric currents around the central hole so as to give the impression, when the field was measured at the edges of the disk, that the cloak and its core were just part of the uniform background medium. Now they have wired up such a resistor network so that the voltage across each component, and thus the current passing through it, can be altered in a way that changes the apparent shape of the cloaked region, as inferred from measurements made at the disk’s edge.

In this way, the researchers could alter the ‘appearance’ of the central region to look invisible, or like a perfectly conducting material, or like a hole with zero conductivity. And all that’s needed is some nifty soldering to create the network from standard resistors, without any of the complications of metamaterials. That means it should be relatively easy to make the cloaks bigger, or indeed smaller.

In theory this device could sustain the illusion even if the probe signal changes in some way (such as its position), by using a rapid feedback mechanism to recalculate how the voltages across the resistors need to be altered to keep the same appearance. The researchers say that it might even work for oscillating electrical fields, as long as their frequency is not too high – in other words, perhaps to mask or transform objects being sensed by radio waves. Here the resistor network would be constantly tuned to cancel out distortions in the probe signal. And because resistors warm up, the device could also be used to manipulate appearances as sensed by changes in the heat flow through the cloaked region.

Reference: Q. Ma et al., Physical Review Letters 111, 173901 (2013).

Thursday, November 21, 2013

The LHC comes to London

Here’s my latest piece for the Prospect blog. I also have a piece in the latest issue of the magazine on quantum computing, but I’ll post that shortly.

______________________________________________________________________

It may come as a surprise that not all physicists are thrilled by the excitement about the Higgs boson, now boosted further by the award of the physics Nobel prize to Peter Higgs and François Englert, who first postulated its existence. Some of them feel twinges of resentment at the way the European centre for particle physics CERN in Switzerland, where the discovery was made with the Large Hadron Collider (LHC), has managed to engineer public perception to imply that the LHC itself, and particle physics generally, is at the centre of gravity of modern physics. In fact most physicists don’t work on the questions that predominate at CERN, and the key concepts of the discipline are merely exemplified by, and not defined by, those issues.

I have shared some of this frustration at the skewed view that wants to make all physicists into particle-smashers. But after taking a preview tour of the new exhibition Collider just opening at London’s Science Museum, I am persuaded that griping is not the proper response. It is true that CERN has enviable public-relations resources, but the transformation of an important scientific result (the Higgs discovery) into an extraordinary cultural event isn’t a triumph of style over substance. It marks a shift in science communication that other disciplines can usefully learn from. Collider reflects this.

The exhibition has ambitions beyond the usual pedagogical display of facts and figures, evident from the way that the creative team behind it brought in theatrical expertise: video designer Finn Ross, who worked on the stage play of Mark Haddon’s The Curious Incident of the Dog in the Night Time, and playwright Michael Wynne. They have helped to recreate a sense of what it is like to actually work at CERN. The exhibits, many of them lumps of hardware from the LHC, are displayed in a mock-up of the centre’s offices (with somewhat over-generous proportions) and corridors, complete with poster ads for recondite conferences and the “CERN choir”. Faux whiteboards and blackboards – some with explanatory notes, others just covered with decorative maths – abound. Actors in a video presentation aim to convince us of the ordinariness of the men and women who work here, as well as of their passionate engagement and excitement with the questions they are exploring.

The result is that the findings of the LHC’s experiments so far – which are difficult to explain at the best of times, although most interested folks have probably gathered by now that the Higgs boson is a particle responsible for giving some other fundamental particles their mass – are not, as in the traditional science-museum model, spruced up and served up to the public as it were on a plate, in the form of carefully honed metaphors. The makeshift feel of the environment, a work-in-progress with spanners and bits of kit still lying around, is itself an excellent metaphor for the science itself: still under construction, making use of what is to hand, its final shape as yet undetermined. The experience is as much about what it means to do science as it is about what the science tells us.

This is a good thing, and the fact that CERN itself has become a kind of living exhibition – with more than 100,000 visitors a year and an active outreach programme with strong involvement of schools – is worth celebrating. The short presentations at the preview event also made it clear why scientists need help in thinking about public engagement. It has never been a secret that Peter Higgs himself has little interest in the hoopla and celebrity that his Nobel award has sent stratospheric. In a rare appearance here, he admitted to being concerned that all the emphasis on the particle now named after him might eclipse the other exciting questions the LHC will explore. Those are what will take us truly into uncharted territory; the Higgs boson is the last, expected part in the puzzle we have already assembled (the so-called Standard Model), whereas questions about whether all known particles have “supersymmetric” partners, and what dark matter is, demand hitherto untested physics.

Higgs is the classic scientist’s scientist, interested only in the work. When asked how he visualized the Higgs boson himself, he didn’t launch into the stock image of Margaret Thatcher moving through a cocktail party and “accreting mass” in the form of hangers-on, but just said that he didn’t visualize it at all, since he considers it impossible to visualize fundamental particles. He said he had little idea of why what seemed to be a previous lack of public interest in science has now become a hunger for it.

All this is not uncommon in scientists, who are not interested in developing pretty pictures and fancy words to communicate their thoughts. That no doubt helps them get on with the job, but it is why they need leaders such as CERN’s current director general Rolf-Dieter Heuer, who can step back and think about the message and the role in society. Hearteningly, Heuer asserted that “the interest in society was always there – we scientists just made the mistake of not satisfying it.”

As Heuer pointed out, the bigger picture is mind-boggling. “It took us fifty years to complete the Standard Model”, he said. “But ninety-five percent of the universe is still unknown. It’s time to enter the dark universe.”

Wednesday, November 13, 2013

Sceptical chemists

Here’s my latest Crucible column for the November issue of Chemistry World. It’s something that’s always puzzled me. I suppose I could lazily claim that the Comments section below the piece proves my point, but obviously the voices there are self-selecting. (All the same, enlisting Boyle to the cause of climate scepticism is laughable. And Boyle was, among other things, determined to keep politics out of his science.)

_______________________________________________________________________

“While global warming is recognised, I am not sure that all the reasons have been fully explored. Carbon dioxide is a contributor, but what about cyclic changes caused by the Earth’s relationship in distance to the Sun?”

“While climate change is occurring, the drivers of change are less clear.”

It’s those pesky climate sceptics again, right? Well yes – but ones who read Chemistry and Industry, and who are therefore likely to be chemists of some description. When the magazine ran a survey in 2007 on its readers’ attitudes to climate change, it felt compelled to admit that “there are still some readers who remain deeply sceptical of the role of carbon dioxide in global warming, or of the need to take action.”

“Our survey revealed there remain those who question whether the problem exists or if reducing carbon dioxide emissions will have any effect at all,” wrote C&I’s Cath O’Driscoll. The respondents who felt that “the industry should be doing more to help tackle climate change” were in a clear majority of 72% – but that left 28% who didn’t. This is even more than the one in five members of the general population who, as the IPCC releases its Fifth Assessment Report, now seem to doubt that global warming is real.

This squares with my subjective impression, on seeing the Letters pages of Chemistry World (and its predecessor) over the years, that the proportion of this magazine’s readers who are climate sceptics is rather higher than the 3% of the world’s climate scientists apparently still undecided about the causes (or reality) of global warming. A letter from 2007 complaining about “the enormous resources being put into the campaign to bring down carbon emissions on the debatable belief that atmospheric carbon dioxide level is the main driver of climate change rather than the result of it” seemed fairly representative of this subset.

Could it be that chemists are somehow more prone to climate scepticism than other scientists? I believe there is reason to think so, although I’m of course aware that this means some of you might already be sharpening your quills.

One of the most prominent sceptics has been Jack Barrett, formerly a well-respected chemical spectroscopist at Imperial College whose tutorial texts were published by the RSC. Barrett now runs the campaigning group Barrett Bellamy Climate with another famous sceptic, naturalist David Bellamy. Several other high-profile merchants of doubt, such as Nicholas Drapela (fired by Oregon State University last year) and Andrew Montford, trained as chemists. It’s not clear if there is strong chemical expertise in the Australian climate-sceptic Lavoisier Group, but they choose to identify themselves with Lavoisier’s challenge to the mistaken “orthodoxy” of phlogiston.

If, as I suspect, a chemical training seems to confer no real insulation against the misapprehensions evident in the non-scientific public, why should that be? One possible reason is that anyone who has spent a lifetime in the chemical industry (especially in petrochemicals), assailed by the antipathy of some eco-campaigners to anything that smacks of chemistry, will be likely to develop an instinctive aversion to, and distrust of, scare stories about environmental issues. That would be understandable, even if it were motivated more by heart than mind.

But I wonder if there’s another factor too. (Given that I’ve already dug a hole with some readers, I might as well jump in it.) If I were asked to make gross generalizations about the character of different fields of science, I would suggest that physicists are idealistic, biologists are conservative, and chemists are best described by that useful rustic Americanism, “ornery”. None of these are negative judgements – they all have pros as well as cons. But there does seem to be a contrarian streak that runs through the chemically trained, from William Crookes and Henry Armstrong to James Lovelock, Kary Mullis, Martin Fleischmann and of course the king of them all, Linus Pauling (who I’d have put money on being some kind of climate sceptic). This is part of what makes chemistry fun, but it is not without its complications.

In any event, it could be important for chemists to consider whether (and if so, why) there is an unusually high proportion of climate-change doubters in their ranks. Of course, it’s equally true that chemists have made major contributions to the understanding of climate, beginning with Svante Arrhenius’s intuition of the greenhouse effect in 1896 and continuing through to the work of atmospheric chemists such as Paul Crutzen. Spectroscopists, indeed, have played a vital role in understanding the issues in the planet’s radiative balance, and chemists have been foremost in identifying and tackling other environmental problems such as ozone depletion and acid rain. Chemistry has a huge part to play in finding solutions to the daunting problems that the IPCC report documents. A vocal contingent of contrarians won’t alter that.

Saturday, November 09, 2013

Reviewing the Reich

Time to catch up a little with what has been happening with my new book Serving The Reich. It has had some nice reviews in the Observer, the Guardian, and Nature. I have also talked about the issues on the Nature podcast, of which there is now an extended version. I’ve also discussed it for the Guardian science podcast, although that’s apparently not yet online. It seems I’ll be talking about the book next year at the Brighton Science Festival, the Oxford Literary Festival (probably in tandem with Graham Farmelo, who has written a nicely complementary book on Churchill and the bomb) and the Hay Festival – I hope to have dates and details soon.

Friday, November 01, 2013

WIMPs are the new Higgs

Here’s a blog posting for Prospect. You can see a little video podcast about it too.

________________________________________________________________

So with the Higgs particle sighted and the gongs distributed, physics seems finally ready to move on. Unless the Higgs had remained elusive, or had turned out to have much more mass than theories predicted, it was always going to be the end of a story: the final piece of a puzzle assembled over the past several decades. But now the hope is that the Large Hadron Collider, and several other big machines and experiments worldwide, will be able to open a new book, containing physics that we don’t yet understand at all. And the first chapter seems likely to be all about dark matter.

Depending on how you look at it, this is one of the most exciting or the most frightening problems facing physicists today. We have ‘known’ about dark matter for around 80 years, and yet we still don’t have a clue what it is. And this is a pretty big problem, because there seems to be more than five times as much dark matter as there is ordinary matter in the universe.

It’s necessary to invoke dark matter to explain why rotating galaxies don’t fly apart: there’s not enough visible matter to hold them together by gravity, and so some additional, unseen mass appears to be essential to fulfil that role. But it must be deeply strange stuff – since it apparently doesn’t emit or absorb light or any other electromagnetic radiation (whence ‘dark’), it can’t be composed of any of the fundamental subatomic particles known so far. There are several other astronomical observations that support the existence of dark matter, but so far theories about what it might consist of are pretty much ad hoc guesses.

Take the current favourite: particles called WIMPs, which stands for weakly interacting massive particles. Pull that technical moniker apart and you’re left with little more than a tautology, a bland restatement of the fact that we know dark matter must have mass but barely interacts, if at all, in any other way with light or regular matter.

It’s that “barely” on which hopes are pinned for detecting the stuff. Perhaps, just once in a blue moon, a WIMP careening through space does bump into common-or-garden atoms, and so discloses clues about its identity. The idea here is that, as well as gravity, WIMPs might also respond to another of the four fundamental forces of nature, called the weak nuclear force – the most exotic and hardest to explain of the forces. An atom knocked by a WIMP should emit light, which could be detected by sensitive cameras. To hope to see such a rare event in an experiment on earth, it’s necessary to exclude all other kinds of colliding cosmic particles, such as cosmic rays, which is why detectors hoping to spot a WIMP are typically housed deep underground.

One such, called LUX, sits at the foot of a 1500m mineshaft in the Black Hills of South Dakota, and has just announced the results of its first three months of WIMP-hunting. LUX stands for Large Underground Xenon experiment, because it seeks WIMP collisions within a cylinder filled with liquid xenon, and it is the most sensitive of the dark-matter detectors currently operating.

The result? Nothing. Not a single glimmer of a dark-matter atom-crash. But this tells us something worth knowing, which is that previous claims by other experiments, such as the Cryogenic Dark Matter Search in a Minnesota mine, to have seen potential dark-matter events probably now have to be rejected. What’s more, every time a dark-matter experiment fails to see anything, we discover more about where not to look: the possibilities are narrowed.

The LUX results are the highest-profile in a flurry of recent reports from dark-matter experiments. An experiment called DAMIC has just described early test runs at the underground SNOLAB laboratory in a mine near Sudbury in Canada, which hosts a variety of detectors for exotic particles, although the full experiment won’t be operating until next year. And a detector called the Alpha Magnetic Spectrometer (AMS) carried on board the International Space Station can spot the antimatter particles called positrons that should be produced when two WIMPs collide and annihilate. In April AMS reported a mysterious signal that might have – possibly, just about – been “consistent” (as they say) with positrons from dark-matter annihilation, but could also have more mundane explanations. LUX now makes the latter interpretation by far the most likely, although an international group of researchers has just clarified the constraints the AMS data place on what dark-matter can and can’t be like.

What now? LUX has plenty of searching still to do over the next two years. It’s even possible that dark-matter particles might be produced in the high-energy collisions of the LHC. But it is also possible that we’ve been barking up the wrong tree after all – for example, that what we think is dark matter is in fact a symptom of some other, unguessed physical principle. We’re still literally groping around in the dark.

Uncertainty about uncertainty

Here’s a news story I have written for Physics World. It makes me realize I still don’t understand the uncertainty principle, or at least not in the way I thought I did – so it doesn’t, then, apply to successive measurements on an individual quantum particle?!

But while on the topic of Heisenberg, I discuss my new book Serving the Reich on the latest Nature podcast, following a very nice review in the magazine from Robert Crease. I’m told there will be an extended version of the interview put up on the Nature site soon. I’ve also discussed the book and its context for the Guardian science podcast, which I guess will also appear soon.

____________________________________________________________

How well did Werner Heisenberg understand the uncertainty principle for which he is best known? When he proposed this central notion of quantum theory in 1927 [1], he offered a physical picture to help it make intuitive sense, based on the idea that it’s hard to measure a quantum particle without disturbing it. Over the past ten years an argument has been unfolding about whether Heisenberg’s original analogy was right or wrong. Some researchers have argued that Heisenberg’s ‘thought experiment’ isn’t in fact restricted by the uncertainty relation – and several groups recently claimed to have proved that experimentally.

But now another team of theorists has defended Heisenberg’s original intuition. And the argument shows no sign of abating, with each side sticking to their guns. The discrepancy might boil down to the irresolvable issue of what Heisenberg actually meant.

Heisenberg’s principle states that we can’t measure certain pairs of variables for a quantum object – position and momentum, say – both with arbitrary accuracy. The better we know one, the fuzzier the other becomes. The uncertainty principle says that the product of the uncertainties in position and momentum can be no smaller than a simple fraction of Planck’s constant h.
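In symbols, the modern textbook statement (due to Kennard) is usually written with standard deviations:

\[ \sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2} = \frac{h}{4\pi}. \]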

Heisenberg explained this by imagining a microscope that tries to image a particle like an electron [1]. If photons bounce off it, we can “see” and locate it, but at the expense of imparting energy and changing its momentum. The more gently it is probed, the less the momentum is perturbed but then the less clearly it can be “seen.” He presented this idea in terms of a tradeoff between the ‘error’ of a position measurement (Δx), owing to instrumental limitations, and the resulting ‘disturbance’ in the momentum (Δp).

Subsequent work by others showed that the uncertainty principle does not rely on this disturbance argument – it applies to a whole ensemble of identically prepared particles, even if every particle is measured only once to obtain either its position or its momentum. As a result, Heisenberg abandoned the argument based on his thought experiment. But this didn’t mean it was wrong.

In 1988, however, Masanao Ozawa, now at Nagoya University in Japan, argued that Heisenberg’s original relationship between error and disturbance doesn’t represent a fundamental limit of uncertainty [2]. In 2003 he proposed an alternative relationship in which, although the two quantities remain related, their product can be arbitrarily small [3].
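In the notation usually used in this debate – ε(x) for the error of the position measurement, η(p) for the disturbance it causes to the momentum, and σ for the intrinsic spreads of the quantum state being measured – Heisenberg’s heuristic argument would demand ε(x)η(p) ≥ ħ/2, whereas Ozawa’s 2003 relation reads

\[ \varepsilon(x)\,\eta(p) + \varepsilon(x)\,\sigma(p) + \sigma(x)\,\eta(p) \;\ge\; \frac{\hbar}{2}, \]

which the error–disturbance product on its own can undercut.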

Last year Ozawa teamed up with Yuji Hasegawa at the Vienna University of Technology and coworkers to see if his revised formulation of the uncertainty principle held up experimentally. Looking at measurements of two incompatible spin components of polarized neutrons, they found that, as Ozawa predicted, error and disturbance still involve a tradeoff but with a product that can be smaller than Heisenberg’s limit [4].

At much the same time, Aephraim Steinberg and coworkers at the University of Toronto conducted an optical test of Ozawa’s relationship, which also seemed to bear out his prediction [5]. Ozawa has since collaborated with researchers at Tohoku University in another optical study, with the same result [6].

Despite all this, Paul Busch at the University of York in England and coworkers now defend Heisenberg’s position, saying that Ozawa’s argument does not apply to the situation Heisenberg described [7]. “Ozawa's inequality allows arbitrarily small error products for a joint approximate measurement of position and momentum, while ours doesn’t”, says Busch. “Ours says if the error is kept small, the disturbance must be large.”

“The two approaches differ in their definition of Δx and Δp, and there is the freedom to make these different choices”, explains quantum theorist Johannes Kofler of the Max Planck Institute of Quantum Optics in Garching, Germany. “Busch et al. claim to have the proper definition, and they prove that their uncertainty relation always holds, with no chance for experimental violation.”

The disagreement, then, is all about which definition is best. Ozawa’s is based on the variance in two measurements made sequentially on a particular quantum state, whereas that of Busch and colleagues considers the fundamental performance limits of a particular measuring device, and thus is independent of the initial quantum state. “We think that must have been Heisenberg's intention”, says Busch.

But Ozawa feels Busch and colleagues are focusing on instrumental limitations that have little relevance to the way devices are actually used. “My theory suggest if you use your measuring apparatus as suggested by the maker, you can make better measurement than Heisenberg's relation”, he says. “They now prove that if you use it very badly – if, say, you use a microscope instead of telescope to see the moon – you cannot violate Heisenberg's relation. Thus, their formulation is not interesting.”

Steinberg and colleagues have already responded to Busch et al. in a preprint that tries to clarify the differences between their definition and Ozawa’s. What Busch and colleagues quantify, they say, “is not how much the state that one measures is disturbed, but rather how much ‘disturbing power’ the measuring apparatus has.”

“Heisenberg's original formula holds if you ask about ‘disturbing power’, but the less restrictive inequalities of Ozawa hold if you ask about the disturbance to particular states”, says Steinberg. “I personally think these are two different but both interesting questions.” But he feels Ozawa’s formulation is closer to the spirit of Heisenberg’s.

In any case, all sides agree that the uncertainty principle is not, as some popular accounts imply, about the mechanical effects of measurement – the ‘kick’ to the system. “It is not the mechanical kick but the quantum nature of the interaction and of the measuring probes, such as a photon, that are responsible for the uncontrollable quantum disturbance”, says Busch.

In part the argument comes down to what Heisenberg had in mind. “I cannot exactly say how much Heisenberg understood about the uncertainty principle”, Ozawa says. “But”, he adds, “I can say we know much more than Heisenberg.”

References

1. W. Heisenberg, Z. Phys. 43, 172 (1927).
2. M. Ozawa, Phys. Rev. Lett. 60, 385 (1988).
3. M. Ozawa, Phys. Rev. A 67, 042105 (2003).
4. J. Erhart et al., Nat. Phys. 8, 185 (2012).
5. L. A. Rozema et al., Phys. Rev. Lett. 109, 100404 (2012).
6. S.-Y. Baek, F. Kaneda, M. Ozawa & K. Edamatsu, Sci. Rep. 3, 2221 (2013).
7. P. Busch, P. Lahti & R. F. Werner, Phys. Rev. Lett. 111, 160405 (2013).
8. L. A. Rozema, D. H. Mahler, A. Hayat & A. M. Steinberg, preprint at http://arxiv.org/abs/1307.3604 (2013).

On the edge


In working on my next book (details soon), I have recently been in touch with a well-known science-fiction author, who very understandably felt he should take the precaution of saying that our correspondence was private and not intended for blurb-mining. He said he’d had a bad experience of providing a blurb years back and had vowed to have a blanket ban on that henceforth.

That’s fair enough, but I’m glad I’m able to remain open to the idea. I often have to decline (not that my opinion is likely to shift many copies), but if I never did it at all then I’d miss out on seeing some interesting material. I certainly had no hesitation in offering quotes for a book just published by OUP, Aid on the Edge of Chaos by Ben Ramalingam. Having seen the rather stunning list of endorsements on Amazon, I’m inclined to say I’m not worthy anyway, but there’s no doubt that Ben’s book deserves it (along with the glowing reader reviews so far). Quite aside from the whole perspective on aid, the book provides one of the best concise summaries I have seen of complexity science and its relation to human affairs generally – it is worth reading for that alone.

The book’s primary thesis is that these ideas should inform a rethinking of the entire basis of international aid. In particular, aid needs to be adaptive, interconnected and bottom-up, rather than being governed by lumbering international bodies with fixed objectives and templates. But Ben is able to present this idea in a way that does not offer some glib, vague panacea, but is closely tied in with the practical realities of the matter. It is a view wholly in accord with the kind of thinking that was (and hopefully still is) being fostered by the FuturICT project, although aid was one of the social systems that I don’t think they had considered in any real detail – I certainly had not.

I very much hope this book gets seen and, more importantly, acted on. There are plans afoot for its ideas to be debated at the Wellcome Trust's centre in London in January, which is sure to be an interesting event.

Thursday, October 24, 2013

Death of the artist?

Anxiety about the e-future – and in particular who it is going to make redundant – seems suddenly to be bursting out of every corner of the media. There was David Byrne in the Guardian Saturday Review recently worrying that Spotify and Pandora are going to eliminate the income stream for new musicians. Will Self, reviewing film critic Mark Kermode’s new book in the same supplement, talked about the ‘Gutenberg minds’ like Kermode (and himself) who are in denial about how “our art forms and our criticisms of those art forms will soon belong only to the academy and the museum” – digital media are not only undermining the role of the professional critic but changing the whole nature of what criticism is. Then we have Dave Eggers’ new novel The Circle, a satire on the Google/Facebook/Amazon/Apple takeover of everything and the tyranny of social media. Meanwhile, Jonathan Franzen rails against the media dumbing-down of serious discourse about anything, anywhere, as attention spans shrink to nano-tweet dimensions.

Well, me, I haven’t a clue. I’m tempted to say that this is all a bit drummed up and Greenfield-esque, and that I don’t see those traits in my kids, but then, they are my kids and cruelly deprived of iPads and iPhones, and in any case are only 3 and 8. To say any such thing is surely to invite my words to come back in ten years time and sneer at my naivety. I’ve not the slightest doubt that I’m in any case wedded to all kinds of moribund forms, from the album (of the long-playing variety) to the fully punctuated text to the over-stuffed bookshelf.

But not unrelated to this issue is Philip Hensher’s spat with an academic over his refusal to write an unpaid introduction to an academic text. Hensher’s claim that it is becoming harder for authors to make a living and to have any expectation of getting paid (or paid in any significant way) for much of what they have to do is at least partly a concern from the same stable as Byrne’s – that we now have a culture that is used to getting words, music and images for next to nothing, and there is no money left for the artists.

They’re not wrong. The question of literary festivals is one that many authors are becoming increasingly fed up about, as the Guardian article on Hensher acknowledges. Personally I almost always enjoy literary festivals, and will gladly do them if it’s feasible for my schedule. The Hay Festival, which Guy Walters grumbles about, is one of the best – always fun (if usually muddy), something the family can come to, and a week’s worth of complimentary tickets seems considerable compensation for the lack of a fee. (And yes, six bottles of wine – but at least they’re from Berry Bros, and many literary festivals don’t even offer that.) But I’m also conscious that for middling-to-lower-list writers like me, it is extremely hard to say no to these things even if we wanted to. There’s the fact that publishers would be ‘disappointed’ and probably in the end disgruntled. But more than anything, there’s the sad egotistic fear that failing to appear, or even to be invited, means that you’re slipping closer to the edge of the ‘literary community’. I suspect that this fear, more than anything, is what has allowed literary festivals to proliferate so astonishingly. Well, and the fact that I’m probably not alone in being very easily satisfied (which might be essentially the same as saying that if you’re not a big name, you’re not hard to flatter). Being put up in that lovely country house hotel in Cumbria and given an evening meal has always seemed to me perfectly adequate remuneration for talking at the Words by the Water Festival (ah, so kind of you to ask again, yes I’d love to…).

But the Cambridge professor calling Hensher “priggish and ungracious” for refusing to write for free is another matter. Hensher was in fact far more gracious in response than he had any reason to be. When I am regularly asked to give up a day’s work to travel to give a talk at some academic institution (“we will of course pay your travelling costs”), I generally consider it to be a reflection of the fact that (i) academic departments simply don’t have a budget for paying speakers, and (ii) academics can very easily forget that, whereas they draw their salary while attending conferences and delivering seminars, writers don’t have a salary except for (sometimes) when they write. And so I often go and do it anyway, if I like the folks who have invited me, and/or think it will be interesting. Let alone anything else, it is good to get out and meet people. Same with unpaid writing, of which I could do a fair bit if I agreed to: I’ll contribute an article to a special issue or edited volume if I feel it would be interesting to do so, but it is rare indeed that there will be any acknowledgement that, unlike an academic, I’d then be working for free. But for a writer to be called ‘ungracious’ for refusing an ‘invitation’ to do such unpaid work is pretty despicable.

Tuesday, October 22, 2013

Before small worlds

Here is my latest piece for BBC Future. I have also posted a little comment on the work on a Youtube channel that I am in the process of creating: see here. It’s an experiment, about which I will say more later.

____________________________________________________________

“Everyone on this planet is separated by only six other people”, claims a character in John Guare’s 1990 play Six Degrees of Separation, which provided us with the defining image of our social networks. “It’s a small world”, we say when we meet someone at a party who turns out to share a mutual friend. And it really is: the average number of links connecting you to any other random person might not be exactly six – it depends on how you define links, for one thing – but it is a small number of about that size.

But has it always been this way? It’s tempting to think so. Jazz musicians in the early 20th century were united by barely three degrees of separation. Much further back, scientists in the seventeenth century maintained a dense social network via letters, as did humanist scholars of the Renaissance. But those were specialized groups. Intellectual and aristocratic elites in history might have all known one another, but was it a small world for ordinary folk too, when mail deliveries and road travel were hard and dangerous and many people were illiterate anyway? That’s what networks expert Mark Newman of the University of Michigan at Ann Arbor and his coworkers have set out to establish.

The modern understanding of small-world social networks has come largely from direct experiments. Guare took his idea from experiments conducted in the late 1960s by social scientist Stanley Milgram of Harvard University and his coworkers. In one study they attempted to get letters to a Boston stockbroker by sending them to random people in Omaha, Nebraska, bearing only the addressee’s name and profession and the fact that he worked in Boston. Those who received the letter were asked to forward it to anyone they knew who might be better placed to help it on its way.

Most of the letters didn’t arrive at all. But of those that did, only around six forwarding steps, on average, were needed to get them there. A much larger-scale re-run of the experiment in 2003 using email forwarding found an almost identical result: the average ‘chain length’ for messages delivered to the target was between 5 and 7 [P. S. Dodds, R. Muhamad & D. J. Watts, Science 301, 827 (2003)].

Needless to say, it’s not possible to conduct such epistolary experiments for former ages. But there are other ways to figure out what human social networks in history looked like. These networks don’t only spread news, information and rumour, but also things that are decidedly less welcome, such as disease. Many diseases are passed between individuals by direct, sometimes intimate contact, and so the spread of an epidemic can reflect the web of human contacts on which it happens.

This is in fact one of the prime motivations for mapping out human contact networks. Epidemiologists now understand that the structure of the network – whether it is a small world or not, say – can have a profound effect on the way a disease spreads. For some types of small world, infectious diseases can pervade the entire population no matter how small the chance of infection is, and can be very hard to root out entirely once this happens. Some computer viruses are like this, lurking indefinitely on a few computers somewhere in the world.

Newman and colleagues admit that networks of physical contact, which spread disease, are not the same as networks of social contact: you can infect people you don’t know. But in earlier times most human interactions were conducted face to face, and in relatively small communities people rarely saw someone who they didn’t recognize.

The fact that diseases spread relatively slowly in the pre-industrial world already suggests that it was not a small world. For example, it took at least three years for the Black Death to spread through Europe, Scandinavia and Russia in the 14th century, beginning in the Levant and the Mediterranean ports.

However, network researchers have discovered that it takes only a very small number of ‘long-distance’ links to turn a ‘large world’ network, such as a grid in which each individual is connected only to their nearby neighbours, into a small world.
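This is easy to see numerically. The following minimal Python sketch uses the standard Watts–Strogatz model from the networkx library – not the authors’ Black Death calculation – to show how the average separation between people collapses once a small fraction of neighbourly links are rewired to random distant nodes:

```python
import networkx as nx

n, k = 1000, 6   # 1000 people, each initially linked to their 6 nearest neighbours
for p in (0.0, 0.01, 0.1):
    # p is the fraction of links rewired to a randomly chosen distant node
    G = nx.connected_watts_strogatz_graph(n, k, p, tries=200, seed=1)
    sep = nx.average_shortest_path_length(G)
    print(f"rewiring probability {p:>4}: average separation {sep:.1f} steps")
```

With no rewiring the average separation here is of order a hundred steps; rewiring even one link in a hundred brings it down to a couple of dozen or fewer.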

Newman and colleagues have used this well-documented spread of the Black Death to figure out what the underlying network of physical contacts looked like. The disease was spread both by direct person-to-person transmission of the pathogenic bacterium and by being carried by rats and fleas. But neither rats nor fleas travel far unless carried by humans, for example on the ships that arrived at the European ports. So transmission reflects the nature of human mobility and contact.

The researchers argue that the crucial point is not how quickly or slowly the disease spread, but what the pattern was like. It moved through the Western world rather like an ink blot spreading across a map of Europe: a steady advance of the ‘disease front’. The researchers’ computer simulations and calculations show that this is possible only if the typical path length linking two people in the network is long: if it’s not a small world. If there were enough long-range links to produce a small world, then the pattern would look quite different: not an expanding ‘stain’ but a blotchy spread in which new outbreaks get seeded far from the origin of the infection.

[Figures: A – spreading of an infectious disease in a "large world"; B – spreading in a "small world"]
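A toy simulation makes the same point – again just a sketch, not the authors’ model. On a pure ring lattice an infection creeps outward as a front, while adding a few long-range links lets it seed outbreaks far from the source:

```python
import random
import networkx as nx

def si_spread(G, seed_node=0, beta=0.3, steps=20, seed=0):
    """Bare-bones SI model: at each step, every infected node infects each
    susceptible neighbour with probability beta. Returns node -> infection time."""
    rng = random.Random(seed)
    infected = {seed_node: 0}
    for t in range(1, steps + 1):
        newly = set()
        for u in list(infected):
            for v in G.neighbors(u):
                if v not in infected and rng.random() < beta:
                    newly.add(v)
        for v in newly:
            infected[v] = t
    return infected

n, k = 400, 4
large_world = nx.watts_strogatz_graph(n, k, 0.00, seed=1)   # ring lattice only
small_world = nx.watts_strogatz_graph(n, k, 0.05, seed=1)   # plus a few shortcuts

for name, G in (("large world", large_world), ("small world", small_world)):
    times = si_spread(G)
    reach = [min(v, n - v) for v in times]   # ring distance from the seed node
    print(f"{name}: {len(times)} nodes infected; "
          f"furthest outbreak {max(reach)} steps round the ring")
```

In the ‘large world’ the furthest infection after twenty steps stays within a few dozen steps of where it started; with shortcuts it can turn up almost anywhere on the ring.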

So if the world was still ‘large’ in the 14th century, when did it become ‘small’? Newman and colleagues hope that other epidemiological data might reveal that, but they guess that it happened with the advent of long-distance transportation in the 19th century, which seems also to have been the time that rapidly spreading epidemics appeared. There’s always a price for progress.

Reference: S. A. Marvel, T. Martin, C. R. Doering, D. Lusseau & M. E. J. Newman, preprint http://www.arxiv.org/abs/1310.2636.

Thursday, October 10, 2013

Colour in the Making



I have just received delivery of Colour in the Making: From Old Wisdom to New Brilliance, a book published by Black Dog, in which I have an essay on colour technology in the nineteenth century. And I can say without bias that the book is stunning. This is the first time I have seen what else it contains, and it is a gorgeous compendium of information about pigments, colour theory, and colour technology and use in visual art from medieval painting to printing and photography. There are also essays on medieval paints by Mark Clarke and on digital colour mixing by Carinna Parraman. This book is perhaps rather too weighty to be a genuine coffee-table volume, but is a feast for the eyes, and anyone with even a passing interest in colour should get it. I will put my essay up on my website soon.

Friday, October 04, 2013

The name game

My new book Serving the Reich is published on 10 October. Here is one of the little offshoots, a piece for Research Fortnight (which the kind folks there have made available for free) on the perils of naming in science. (Jim, I told you I’d steal that quote.)

___________________________________________________________________

Where would quantum physics be without Planck’s constant, the Schrödinger equation, the Bohr atom or Heisenberg’s uncertainty principle – or, more recently, Feynman diagrams, Bell’s inequality and Hawking radiation? You might not know what all these things are, but you know who discovered them.

Surely it’s right and proper that scientists should get the credit for what they do, after all. Or is it? This is what Einstein had to say on the matter:

“When a man after long years of searching chances on a thought which discloses something of the beauty of this mysterious universe, he should not therefore be personally celebrated. He is already sufficiently paid by his experience of seeking and finding. In science, moreover, the work of the individual is so bound up with that of his scientific predecessors and contemporaries that it appears almost as an impersonal product of his generation.”

Whether by design or fate, Einstein seems to have avoided having his name explicitly attached to his greatest works, the theories of special and general relativity. (The “Einstein coefficient” is an obscure quantity almost no one uses.)

But Einstein was working in the period when this fad for naming equations, units and the other paraphernalia of science after their discoverers had barely begun. The quantum pioneers were in fact among those who started it. The Dutch physicist Peter Debye insisted, against the wishes of Hitler’s government, that the new Kaiser Wilhelm Institute of Physics in Berlin, which he headed from 1935 to 1939, be called the Max Planck Institute. He had Planck’s name carved in stone over the entrance, and after the war the entire Kaiser Wilhelm Gesellschaft – the network of semi-private German research institutes – was renamed the Max Planck Society, the title that it bears today.

But Debye himself now exemplifies the perils of this practice. In 2006 he was accused in a book by a Dutch journalist of having collaborated with the Nazi government during his time in Germany, and of endorsing their anti-Semitic measures. In response, the University of Utrecht was panicked into removing Debye’s name from its Institute for Nanomaterials Science, saying that “recent evidence is not compatible with the example of using Debye’s name”. Likewise, the University of Maastricht in Debye’s home city asked for permission to rename the Debye Prize, a science award sponsored by the philanthropic Hustinx Foundation in Maastricht.

It’s now generally agreed that these accusations were unfair – Debye was no worse than the vast majority of physicists working in Nazi Germany, and certainly bears no more discredit than Max Planck himself, the grand old man of German physics, whose prevarication and obedience to the state prevented him from voicing opposition to measures that he clearly abhorred. (Recognizing this, the Universities of Utrecht and Maastricht have now relented.) Far more culpable was Werner Heisenberg, who allegedly told the occupied Dutch scientists in 1943 that “history legitimizes Germany to rule Europe and later the world”. He gave propaganda lectures on behalf of the government during the war, and led the German quest to harness nuclear power. Yet no one has questioned the legitimacy of the German Research Foundation’s Heisenberg Professorships.

Here, then, is one of the pitfalls of science’s obsession with naming: what happens when the person you’re celebrating turns out to have a questionable past? Debye, Planck and Heisenberg are all debatable cases: scarcely anyone in positions of influence in Germany under Hitler emerged without some blemish. But it leaves a bitter taste in the mouth to have to call the influence of electric fields on atomic quantum energy states the Stark effect, after its discoverer the Nobel laureate Johannes Stark – an ardent Nazi and anti-Semite, and one of the most unpleasant scientists who ever lived.

Some might say: get over it. No one should expect that people who do great things are themselves great people, and besides, being a nasty piece of work shouldn’t deprive you of credit for what you discover. Both of these things are true. But nevertheless science seems to impose names on everything it can, from awards to units, to a degree that is unparalleled in other fields: we speak of atonality, cubism, deconstructionism, not Schoenbergism, Picassoism and Derridism. This is so much the opposite of scientists’ insistence, à la Einstein, that it doesn’t matter who made the discovery that it seems worth at least pondering on the matter.

Why does science want to immortalize its greats this way? It is not as though there aren’t alternatives: we can have heliocentrism instead of Copernicanism, the law of constant proportions for Proust’s law, and so on. What’s more, naming a law or feature of nature for what it says or does, and not who saw or said it first, avoids arguments about the latter. We know, for example, that the Copernican system didn’t originate with Copernicus, that George Gabriel Stokes didn’t discover Stokes’ law, that Peter Higgs was not alone in proposing the Higgs particle. Naming laws and ideas for people is probably in part a sublimation of scientists’ obsession with priority. It certainly feeds it.

The stakes are higher, however, when it comes to naming institutions, as Utrecht’s Debye Institute discovered. There’s no natural justice which supports the name you choose to put on your lintel – it’s a more or less arbitrary decision, and if your scientific patron saint suddenly seems less saintly, it doesn’t do your reputation any good. Leen Dorsman, a historian of science and philosophy at Utrecht, was scathing about what he called this “American habit” during the “Debye affair”:

“The motive is not to honour great men, it is a sales argument. The name on the façade of the institute shouts: Look at us, look how important we are, we are affiliated with a genuine Nobel laureate.”

While acknowledging that Debye himself contributed to the tendency in Germany, Dorsman says that it was rare in the egalitarian society of the Netherlands until recently. At Utrecht University itself, he attributes it to a governance crisis that led to the appointment of leaders “who had undergone the influence of new public management ideas.” It is this board, he says, that began naming buildings and institutions in the 1990s as a way to restore the university’s self-confidence.

“My opinion is that you should avoid this”, Dorsman says. “There is always something in someone’s past that you wouldn’t like to be confronted with later on, as with Debye.” He adds that even if there isn’t, naming an institution after a “great scientist” risks allying it with a particular school of thought or direction of research, which could cause ill feeling among employees who don’t share that affiliation.

If nevertheless you feel the need to immortalize your alumni this way, the moral seems to be that you’d better ask first how well you really know them. The imposing Francis Crick Institute for biomedical research under construction in London looks fairly secure in this respect – Crick had his quirks, but he seems to have been a well-liked, upfront and decent fellow. Is anyone, however, now going to take their chances with a James Watson Research Centre? And if not, shouldn’t we think a bit more carefully about why not?

David and Goliath - who do you cheer for?

I have just reviewed Malcolm Gladwell’s new book for Nature. I had my reservations, but on seeing Steven Poole’s acerbic job in today’s New Statesman I do wonder whether in the end I gave this a slightly easy ride. Steven rarely passes up a chance to stick the boot in, but I can’t argue with his rather damning assessment of Gladwell’s argument. Anyway, here’s mine.
___________________________________________________

David and Goliath: Underdogs, Misfits and the Art of Battling Giants
Malcolm Gladwell
Penguin Books

We think of David as the weedy foe of mighty Goliath, but he had the upper hand all along. The Israelite shepherd boy was nimble and could use his deadly weapon without getting close to his opponent. Given the skill of ancient slingers, this was more like fighting pistol against sword. David won because he changed the rules; Goliath, like everyone else, was anticipating hand-to-hand combat.

That biblical story about power and how it is used, misused and misinterpreted is the frame for Malcolm Gladwell’s David and Goliath. “The powerful are not as powerful as they seem”, he argues, “nor the weak as weak.” Weaker sports teams can win by playing unconventionally. The children of rich families are handicapped by complacency. Smaller school classes don’t necessarily produce better results.

Gladwell describes a police chief who cut crime by buying Thanksgiving turkeys for problem families, and a doctor who cured children with a drug cocktail everyone thought to be lethal. The apparent indicators of strength, such as wealth or military superiority, can prove to be weaknesses; what look like impediments, such as broken homes or dyslexia, can work to one’s advantage. Provincial high-flyers may under-achieve at Harvard because they’re unaccustomed to being surrounded by even more brilliant peers, whereas at a mediocre university they’d have excelled. Even if some of these conclusions seem obvious in retrospect, Gladwell is a consummate story-teller, and you feel you would never have articulated the point until he spelt it out.

But don’t we all know of counter-examples? Who is demoralized and who thrives from the intellectual stimulus depends on particular personal attributes and all kinds of other intangibles. More often than not, dyslexia and broken homes are disadvantages. The achievement of a school or university class may depend more on what is taught, and how, and why, than on size. The case of physician Jay Freireich, who developed an unconventional but ultimately successful treatment for childhood leukaemia, is particularly unsettling. If Freireich had good medical reasons for administering untested mixtures of aggressive anti-cancer drugs, they aren’t explained here. Instead, there is simply a description of his bullish determination to try them out come what may, apparently engendered by his grim upbringing. Yet determination alone can – as with Robert Koch’s misguided conviction that the tuberculosis extract tuberculin would cure the disease – equally prove disastrous.

Even the biblical meta-narrative is confusing. So David wasn’t after all the plucky hero overcoming the odds, but more like Indiana Jones defeating the sword-twirling opponent by pulling out his pistol and shooting him? Was that cheating, or just thinking outside the box? There are endless examples of the stronger side winning, whether in sport, business or war, no matter how ingenious their opponents. Mostly, money does buy privilege and success. So why does David win sometimes and sometimes Goliath? Is it even clear which is which (poor Goliath might even have suffered from a vision impairment)?

These complications are becoming clear, for example in criminology. Gladwell is very interested in why some crime-prevention strategies work and others don’t. But while his “winning hearts and minds” case studies are surely a part of the solution, recent results from behavioural economics and game theory suggest that there are no easy answers beyond the fact that some sort of punishment (ideally centralized, not vigilante) is needed for social stability. Some studies suggest that excessive punishment can be counter-productive. Others show that people do not punish simply to guard their own interests, but will impose penalties on others even to their own detriment. Responses to punishment are culturally variable. In other words, punishment is a complex matter, and resists simple prescriptions.

Besides, winning is itself a slippery notion. Gladwell’s sympathies are for the underdog, the oppressed, the marginalized. But occasionally his stories celebrate a very narrow view of what constitutes success: becoming a Hollywood mogul or the president of an investment banking firm – David turned Goliath, with little regard for what makes people genuinely inspiring, happy or worthy.

None of this is a problem of Gladwell’s exposition, which is always intelligent and perceptive. It’s a problem of form. His books, like those of legions of inferior imitators, present a Big Idea. But it’s an idea that only works selectively, and it’s hard for him or anyone else to say why. These human stories are too context-dependent to deliver a take-home message, at least beyond the advice not always to expect the obvious outcome.

Perhaps Gladwell’s approach does not lend itself to book-length exposition. In The Tipping Point he pulled it off, but his follow-ups Blink, about the reliability of the gut response, and Outliers, a previous take on what makes people succeed, had theses that, like this one, unravelled the more you thought about them. What remains in this case are ten examples of Gladwell’s true forte: the long-form essay, engaging, surprising and smooth as a New York latte.

Who reads the letters?

I often wonder how the letters pages of newspapers and magazines work. For the main articles, most publications use some form of fact-checking. But what can you do about letters in which anyone can make any claim? Does anyone check up on them before publishing? I was struck by a recent letter in New Statesman, for example, which purportedly came from David Cameron’s former schoolteacher. Who could say if it was genuine? (And, while loath to offer the slightest succour to Cameron, is it quite proper for a former teacher to be revealing stuff about his onetime pupils?)

The problem is particularly acute for science. Many a time this or that sound scientific article has been challenged by a letter from an obvious crank. Of course, sometimes factual errors are indeed pointed out this way, but who can tell which is which? I’ve seen letters printed that a newspaper’s science editor would surely have trashed very easily.

This is the case with a letter in the Observer last Sunday from a chap keen to perpetuate the myth that the world’s climate scientists are hiding behind a veil of secrecy. Philip Symmons says that he hasn’t been able to work out for himself if the models currently used for climate projections are actually capable of accurate hindcasts of past climate, since those dastardly folks at the Hadley Centre refuse to let him have the information, even after he has invoked the Freedom of Information Act. What are they afraid of, eh? What are they hiding?

If the Letters editor had asked Robin McKie, I’m sure he would have lost no time in pointing out that this is utter nonsense. The hindcast simulations Symmons is looking for are freely available to all in the last IPCC report (2007 – Figure 9.5). I found that figure after all of five minutes’ checking on the web. And incidentally, the results are extremely striking – without anthropogenic forcings, the hindcasts go badly astray after about 1950, but with them they stay right on track.

It’s clear, then, that Symmons in fact has no interest in actually getting an answer to his question – he just wants to cast aspersions. I can’t figure out why the Observer would let him do that, given how easy it should be to discover that his letter is nonsense. Surely they aren’t still feeling that one needs to present “both sides”?

Friday, September 27, 2013

Space is (quite) cold

Here’s my latest piece for BBC Future.

___________________________________________________________

How cold is it in space? That question is sure to prompt the geeks among us to pipe up with “2.7 kelvin”, the temperature of the uniform background radiation or ‘afterglow’ left over from the Big Bang. (Kelvins (K) here are degrees above absolute zero, and a step of one kelvin is the same size as one degree on the centigrade scale, so 2.7 K is about minus 270 °C.)

But hang on. Evidently you don’t hit 2.7 K the moment you step outside the Earth’s atmosphere. Heat is streaming from the Sun to warm the Earth, and it will also warm other objects exposed to its rays. Take the Moon, which has virtually no atmosphere to complicate things. On the sunlit side the Moon is hotter than the Sahara – it can top 120 °C. But on the dark side it can drop to around minus 170 °C.

So just how cold can it go in our own cosmic neighbourhood? This isn’t an idle question if you’re thinking of sending spacecraft up there (let alone people). It’s particularly pertinent if you’re doing that precisely because space is cold, in order to do experiments in low-temperature physics.

There’s no need for that just to keep the apparatus cold – you only need liquid-helium coolant to get below 4 K in the lab, and some experiments have come to within just a few billionths of a kelvin of absolute zero. But some low-temperature experiments are being planned that also demand zero gravity. You can get that on Earth for a short time in freefall air flights, but for longer than a few seconds you need to go into space.

One such experiment, called MAQRO, hopes to test fundamental features of quantum theory and perhaps to search for subtle effects in a quantum picture of gravity – something that physicists can so far see only in the haziest terms. The scientists behind MAQRO have now worked out whether it will in fact be possible to get cold enough, on a spacecraft carrying the equipment, for the tests to work.

MAQRO was proposed last year by Rainer Kaltenbaek and Markus Aspelmeyer of the University of Vienna and their collaborators [R. Kaltenbaek et al., Experimental Astronomy 34, 123 (2012)]. The experiment would study one of the most profound puzzles in quantum theory: how or why do the rules of quantum physics, which govern fundamental particles like electrons and atoms, give way to the ‘classical’ physics of the everyday world? Why do quantum particles sometimes behave like waves whereas footballs don’t?

No one fully understands this so-called quantum-to-classical transition. But one of the favourite explanations invokes an idea called decoherence, which means that in effect the quantum behaviour of a system gets jumbled and ultimately erased because of the disruptive effects of the environment. These effects become stronger the more particles the system contains, because then there are more options for the environment to interfere. For objects large enough to see, containing countless trillions of atoms, decoherence happens in an instant, washing out quantum effects in favour of classical behaviour.

In this picture, it should be possible to preserve ‘quantum-ness’ in any system, no matter how big, if you could isolate it perfectly from its environment. In principle, even footballs would then show wave-particle duality and could exist in two states, or two places, at once. But some theories, as yet still speculative and untested, insist that something else will prevent this weird behaviour in large, massive objects, perhaps because of effects that would disclose something about a still elusive quantum theory of gravity.

So the stakes for MAQRO could be big. The experimental apparatus itself wouldn’t be too exotic. Kaltenbaek and colleagues propose to use laser beams to place a ‘big’ particle (about a tenth of a micrometre across) in two quantum states at once, called a superposition, and then to probe with the lasers how decoherence destroys this superposition (or not). The apparatus would have to be very cold because, as with most quantum effects, heat would disrupt a delicate superposition. And performing the experiment in zero gravity on a spacecraft could show whether gravity does indeed play a role in the quantum-to-classical transition. Putting it all on a spacecraft would be about as close to perfect isolation from the environment as one can imagine.

But now Kaltenbaek and colleagues, in collaboration with researchers at the leading European space-technology company Astrium Satellites in Friedrichshafen, Germany, have worked out just how cold the apparatus could really get. They imagine sticking a ‘bench’ with all the experimental components on the back of a disk-shaped spacecraft, with the disk, and several further layers of thermal insulation, shielding it from the Sun. So while the main body of the spacecraft would be kept at about 300 K (27 °C), which its operating equipment would require, the bench could be much colder.

But how much? The researchers calculate that, with three concentric thermal shields between the main disk of the spacecraft and the bench, black on their front surface to optimize radiation of heat and gold-plated on the reverse to minimize heating from the shield below, it should be possible to get the temperature of the bench itself down to 27 K. Much of the warming would come through the struts holding the bench and shields to the main disk.
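To get a feel for why a shielded bench settles at a few tens of kelvin rather than at the 2.7 K of the sky it faces, here is a back-of-envelope sketch of the passive cooling balance – the numbers are illustrative guesses of my own, not values from the team’s detailed thermal model. The bench warms only through whatever heat leaks past the shields and along the struts, and cools by radiating to deep space, so its equilibrium temperature follows from the Stefan-Boltzmann law.

```python
# Back-of-envelope estimate of a passively cooled bench radiating to deep space.
# All numbers below are illustrative assumptions, not values from the MAQRO study.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(heat_leak_W, radiator_area_m2, emissivity):
    """Temperature at which radiated power balances the parasitic heat leak."""
    return (heat_leak_W / (emissivity * SIGMA * radiator_area_m2)) ** 0.25

# Suppose a few milliwatts leak in through the struts and residual radiation
# from the 300 K spacecraft, and the bench radiates from ~0.5 m^2 of black surface:
for leak_mW in (1, 5, 20):
    T = equilibrium_temperature(leak_mW * 1e-3, 0.5, 0.9)
    print(f"heat leak {leak_mW:>2} mW  ->  bench settles near {T:.0f} K")

# Even a few tens of milliwatts of leakage keep the bench in the tens of kelvin --
# cold, but nowhere near the 2.7 K of the background radiation itself.
```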

That’s not really cold enough for the MAQRO experiment to work well. But the test particle itself would be held in free space above the bench, and this would be colder. On its own it could reach 8 K, but with all the other experimental components around it, all radiating heat, it reaches 16 K. This, they calculate, would be enough to test the decoherence rates predicted for all the major theories which currently propose that intrinsic mass (perhaps via gravity) will enforce decoherence in a large object. In other words, MAQRO should be cold enough to spot if these models are wrong.

Could it discriminate between any theories that aren’t ruled out? That’s another matter, which remains to be seen. But simply knowing that size matters in quantum mechanics would be a major finding. The bigger question, of course, is whether anyone will consider MAQRO – a cheap experiment as space science goes – worth a shot.

Reference: G. Hechenblaikner et al., preprint at http://www.arxiv.org/abs/1309.3234

Thursday, September 19, 2013

Fearful symmetry


So the plan is that I’ll be writing a regular (ideally weekly) blog piece for Prospect from now on. Here is the current one, stemming from a gig last night that was a lot of fun.

_________________________________________________________

Roger Penrose makes his own rules. He is one of the most distinguished mathematical physicists in the world, but also (this doesn’t necessarily follow) one of the most inventive thinkers. It was his work on the theory of general relativity in the 1960s, especially on how the gravity of collapsing stars can produce black-hole ‘singularities’ in spacetime, that set Stephen Hawking on a course to rewrite black-hole physics. That research made Penrose’s name in science, but his mind ranges much further. In The Emperor’s New Mind (1989) he proposed that the human mind can handle problems that are formally ‘non-computable’, meaning that any computer trying to solve them by executing a set of logical rules (as all computers do) would chunter away forever without coming to a conclusion. This property of the mind, Penrose said, might stem from the brain’s use of some sort of quantum-mechanical principle, perhaps involving quantum gravity. In collaboration with anaesthetist Stuart Hameroff, he suggested in Shadows of the Mind (1994) what that principle might be, involving quantum behaviour in protein filaments called microtubules in neurons. Neuroscientists scoffed, glazed over, or muttered “Oh, physicists…”

So when I introduced a talk by Penrose this week at the Royal Institution, I commented that he is known for ideas that most others wouldn’t even imagine, let alone dare voice. I didn’t, however, expect to encounter some new ones that evening.

Penrose was speaking about the discovery for which he is perhaps best known among the public: the so-called Penrose tiling, a pair of rhombus-shaped tiles that can be used to tile a flat surface forever without the pattern ever repeating. It turns out that this pattern is peppered with objects that have five- or ten-fold symmetry: like a pentagon, they can be superimposed on themselves when rotated a fifth of a full turn. That is very strange, because fivefold symmetry is rigorously forbidden for any periodic two-dimensional tiling. (Try it with ordinary pentagons and you quickly find that you get lots of gaps.) The Penrose tiling doesn’t have this ‘forbidden symmetry’ in a perfect form, but it almost does.
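The standard way to see why fivefold symmetry is impossible for a periodic pattern is the so-called crystallographic restriction – a textbook result, not something from Penrose’s talk: a rotation that maps a repeating lattice onto itself must have an integer trace, 2cos(2π/n), and only a handful of rotation orders pass that test. A quick check, sketched in Python:

```python
import math

# Crystallographic restriction: a rotation by 2*pi/n can be a symmetry of a
# periodic 2D lattice only if its matrix has an integer trace, i.e. 2*cos(2*pi/n)
# is an integer (in a lattice basis the rotation matrix has integer entries).
for n in range(1, 13):
    trace = 2 * math.cos(2 * math.pi / n)
    allowed = abs(trace - round(trace)) < 1e-9
    print(f"{n}-fold: trace = {trace:+.3f}  {'allowed' if allowed else 'forbidden'}")

# Only n = 1, 2, 3, 4 and 6 pass; fivefold (trace ~ 0.618) and everything above
# sixfold is forbidden for a periodic tiling -- which is why the 'almost fivefold'
# order of a Penrose tiling, and of quasicrystals, was such a surprise.
```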


These tilings – there are other shapes that have an equivalent result – are strikingly beautiful, with a mixture of regularity and disorder that is somehow pleasing. This is doubtless why, as Penrose explained, many architects worldwide have made use of them. But they also have a deeper significance. After Penrose described the tiling in the 1970s, the crystallographer Alan Mackay – one of the unsung polymathic savants of British science – showed in 1981 that if you imagine putting atoms at the corners of the tiles and bouncing X-rays off them (the standard technique of X-ray crystallography for deducing the atomic structures of crystals) you can get a pattern of reflections that looks for all the world like that of a perfect crystal with the forbidden five- and tenfold symmetries. Four years later, such a material (a metal alloy) was found in the real world by the Israeli materials scientist Daniel Shechtman and his coworkers. This was dubbed a quasicrystal, and the discovery won Shechtman the Nobel prize in Chemistry in 2011. Penrose tilings can explain how quasicrystals attain their ‘impossible’ structure.

In his talk Penrose explained the richness of these tilings, manipulating transparencies (remember them?) like a prestidigitator in ways that elicited several gasps of delight as new patterns suddenly came into view. But it was in the Q&A session that we got a glimpse of Penrose’s wildly lateral thinking. Assembling a tiling (and thus a quasicrystal) is a very delicate business, because if you add a tile in the wrong place or orientation, somewhere further down the line the pattern fouls up. But how could atoms in a quasicrystal know that they have to come together in a certain way here to avoid a problem right over there? Maybe, Penrose said, they make use of the bizarre quantum-mechanical property called entanglement, which foxed Einstein, in which two particles can affect one another instantaneously over any distance. Crikey.

In Penrose’s mind it all links up: quasicrystals, non-computable problems, the universe… You can use these tiles, he said, to represent the rules of how things interact in a hypothetical universe in which everything is then non-computable: the rules are well defined, but you can never use them to predict what is going to happen until it actually happens.

But my favourite anecdote had Penrose inspecting a new Penrose tiling being laid out on the concourse of some university. Looking it over, he felt uneasy. Eventually he saw why: the builders, seeing an empty space at the edge of the tiling, had stuck another tile there that didn’t respect the proper rules for their assembly. No one else would have noticed, but Penrose saw that what it meant was that “the tiling would go wrong somewhere in the middle of the lawn”. Not that it was ever going to reach that far – but it was a flaw in that hypothetical continuation, that imaginary universe, and for a mathematician that wouldn’t do. The tile had to go.

Tuesday, September 17, 2013

Quantum theory reloaded

I have finally published a long-gestated piece in Nature (501, p154; 12 September) on quantum reconstructions. It has been one of the most interesting features I can remember working on, but was necessarily reduced drastically from the unwieldy first draft. Here (long post alert) is an intermediate version that contains a fair bit more than the final article could accommodate.

__________________________________________________________

Quantum theory works. It allows us to calculate the shapes of molecules, the behaviour of semiconductor devices, the trajectories of light, with stunning accuracy. But nagging inconsistencies, paradoxes and counter-intuitive effects play around the margins: entanglement, collapse of the wave function, the effect of the observer. Can Schrödinger’s cat really be alive and dead at once? Does reality correspond to a superposition of all possible quantum states, as the “many worlds” interpretation insists?

Most users don’t worry too much about these nagging puzzles. In the words of the physicist David Mermin of Cornell University, they “shut up and calculate”. That is, after all, one way of interpreting the famous Copenhagen interpretation of quantum theory developed in the 1920s by Niels Bohr, Werner Heisenberg and their collaborators, which states that the theory tells us all we can meaningfully know about the world and that the apparent weirdness, such as wave-particle duality, is just how things are.

But there have always been some researchers who aren’t content with this. They want to know what quantum theory means – what it really tells us about the world it describes with such precision. Ever since Bohr argued with Einstein, who could not accept his “get over it” attitude to quantum theory’s seeming refusal to assign objective properties, there has been continual and sometimes furious debate over the interpretations or “foundations” of quantum theory. The basic question, says physicist Maximilian Schlosshauer of the University of Portland in Oregon, is this: “What is it about this world that forces us to navigate it with the help of such an abstract entity as quantum theory?”

A small community of physicists and philosophers has now come to suspect that these arguments are doomed to remain unresolved so long as we cling to quantum theory as it currently stands, with its exotic paraphernalia of wavefunctions, superpositions, entangled states and the uncertainty principle. They suspect that we’re stuck with seemingly irreconcilable disputes about interpretation because we don’t really have the right form of the theory in the first place. We’re looking at it from the wrong angle, making its shadow odd, spiky, hard to decode. If we could only find the right perspective, all would be clear.

But to find it, they say, we will have to rebuild quantum theory from scratch: to tear up the work of Bohr, Heisenberg and Schrödinger and start again. This is the project known as quantum reconstruction. “The program of reconstructions starts with some fundamental physical principles – hopefully only a small number of them, and with principles that are physically meaningful and reasonable and that we all can agree on – and then shows the structure of quantum theory emerges as a consequence of these principles”, says Schlosshauer. He adds that this approach, which began in earnest over a decade ago, “has gained a lot of momentum in the past years and has already helped us understand why we have a theory as strange as quantum theory to begin with.”

One hundred years ago the Bohr atom placed the quantum hypothesis advanced by Max Planck and Einstein at the heart of the structure of the physical universe. Attempts to derive the structure of the quantum atom from first principles produced Erwin Schrödinger’s quantum mechanics and the Copenhagen interpretation. Now the time seems ripe for asking if all this was just an ad hoc heuristic tool that is due for replacement with something better. Quantum reconstructionists are a diverse bunch, each with a different view of what the project should entail. But one thing they share is that, in seeking to resolve the outstanding foundational ‘problems’ of quantum theory, they respond much as the proverbial Irishman when asked for directions to Dublin: “I wouldn’t start from here.”

That’s at the core of the discontent evinced by one of the key reconstructionists, Christopher Fuchs of the Perimeter Institute for Theoretical Physics in Waterloo, Canada [now moved to Raytheon], at most physicists’ efforts to grapple with quantum foundations. He points out that the fundamental axioms of special relativity can be expressed in a form anyone can understand: in any moving frame, the speed of light stays constant and the laws of physics stay the same. In contrast, efforts to write down the axioms of quantum theory rapidly degenerate into a welter of arcane symbols. Fuchs suspects that, if we find the right axioms, they will be as transparent as those of relativity [1].

“The very best quantum-foundational effort”, he says, “will be the one that can write a story – literally a story, all in plain words – so compelling and so masterful in its imagery that the mathematics of quantum mechanics in all its exact technical detail will fall out as a matter of course.” Fuchs takes inspiration from quantum pioneer John Wheeler, who once claimed that if we really understood the central point of quantum theory, we ought to be able to state it in one simple sentence.

“Despite all the posturing and grimacing over the paradoxes and mysteries, none of them ask in any serious way, ‘Why do we have this theory in the first place?’” says Fuchs. “They see the task as one of patching a leaking boat, not one of seeking the principle that has kept the boat floating this long. My guess is that if we can understand what has kept the theory afloat, we’ll understand that it was never leaky to begin with.”

We can rebuild it

One of the earliest attempts at reconstruction came in 2001, when Lucien Hardy, then at Oxford University, proposed that quantum theory might be derived from a small set of “very reasonable” axioms [2]. These axioms describe how states are characterized by the probabilities of measurement outcomes, and how states may be combined and interconverted. Hardy assumes that any state may be specified by the number K of probabilities needed to describe it uniquely, and that there are N ‘pure’ states that can be reliably distinguished from one another in a single measurement. For example, for either a coin toss or a quantum bit (qubit), N = 2. A key (if seemingly innocuous) axiom is that for a composite system we get K and N by multiplying those parameters for each of the components: K_ab = K_a K_b, say. It follows that K and N must be related according to K = N^r, where r = 1, 2, 3… For a classical system each state has a single probability (50 percent for heads, say), so that K = N. But that possibility is ruled out by a so-called ‘continuity axiom’, which describes how states are transformed into one another. For a classical system this happens discontinuously – a head is flipped to a tail – whereas for quantum systems the transformation can be continuous: the two pure states of a qubit can be mixed together in any degree. (That is not, Hardy stresses, the same as assuming a quantum superposition – so ‘quantumness’ isn’t being inserted by fiat.) The simplest relationship consistent with the continuity axiom is therefore K = N^2, which corresponds to a quantum picture.
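To make the counting concrete, here is a small illustrative sketch of my own (not Hardy’s derivation): in standard quantum theory the state of an N-level system is a density matrix – an N × N Hermitian matrix – which takes N² real numbers to pin down, so K = N², and that count multiplies up for composite systems just as the axiom requires.

```python
def num_real_parameters(N):
    """Real numbers needed to specify an N x N Hermitian matrix:
    N real diagonal entries plus N(N-1)/2 complex off-diagonals = N**2."""
    return N + 2 * (N * (N - 1) // 2)

# K = N^2 for a single quantum system
for N in (2, 3, 4):
    assert num_real_parameters(N) == N**2

# Multiplicativity for composites: N_ab = N_a * N_b implies K_ab = K_a * K_b
Na, Nb = 2, 3                        # e.g. a qubit and a three-level system
K_a, K_b = Na**2, Nb**2
K_ab = num_real_parameters(Na * Nb)
assert K_ab == K_a * K_b             # 36 == 4 * 9

# A classical mixture over N outcomes needs only K = N probabilities, which is
# also multiplicative -- it is Hardy's continuity axiom that rules it out.
print("qubit:", num_real_parameters(2), "qutrit:", num_real_parameters(3),
      "composite:", K_ab)
```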

But as physicist Rafael Sorkin of Syracuse University in New York had previously pointed out [3], there seems to be no fundamental reason why higher-order theories (requiring N^3, N^4 measurements and so forth) should not also exist and have real effects. For example, Hardy says, the famous double-slit experiment for quantum particles adds a new behaviour (interference) where classical theory would just predict the outcome to be the sum of two single-slit experiments. But whereas quantum theory predicts nothing new on adding a third slit, a higher-order theory would introduce a new effect in that case – an experimental prediction, albeit one that might be very hard to test.
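The ‘nothing new at three slits’ prediction follows directly from the Born rule, as this quick sketch (my own illustration, with arbitrary made-up amplitudes) shows: two-slit interference terms are generically nonzero, but Sorkin’s triple-slit combination cancels identically in quantum theory, so measuring it to be nonzero would point to a higher-order theory.

```python
import numpy as np

rng = np.random.default_rng(0)
a1, a2, a3 = rng.normal(size=3) + 1j * rng.normal(size=3)   # amplitudes at slits 1, 2, 3

P = lambda *amps: abs(sum(amps))**2    # Born rule: probability = |sum of amplitudes|^2

# Second-order (two-slit) interference: nonzero in general
I12 = P(a1, a2) - P(a1) - P(a2)

# Sorkin's third-order term: zero for any quantum amplitudes whatsoever
I123 = (P(a1, a2, a3)
        - P(a1, a2) - P(a1, a3) - P(a2, a3)
        + P(a1) + P(a2) + P(a3))

print(f"two-slit interference term: {I12:.4f}")
print(f"three-slit (Sorkin) term:   {I123:.2e}")   # ~0, up to rounding error
```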

In this way Hardy claims to have begun to set up quantum theory as a general theory of probability, which he thinks could have been derived in principle by nineteenth-century mathematicians without any knowledge of the empirical motivations that led Planck and Einstein to initiate quantum mechanics at the start of the twentieth century.

Indeed, perhaps the most startling aspect of quantum reconstruction is that what seemed to the pioneers of quantum theory such as Planck, Einstein and Bohr to be revolutionary about it – the quantization rather than continuum of energy – may in fact be something of a sideshow. Quantization is not an axiomatic concept in quantum reconstructions, but emerges from them. “The historical development of quantum mechanics may have led us a little astray in our view of what it is all about”, says Schlosshauer. “The whole talk of waves versus particles, quantization and so on has made many people gravitate toward interpretations where wavefunctions represent some kind of actual physical wave property, creating a lot of confusion. Quantum mechanics is not a descriptive theory of nature, and to read it as such is to misunderstand its role.”

The new QBism

Fuchs says that Hardy’s paper “convinced me to pursue the idea that a quantum state is not just like a set of probability distributions, but very literally is a probability distribution itself – a quantification of partial belief, and nothing more.” He says “it hit me over the head like a hammer and has shaped my thinking ever since” – although he admits that Hardy does not draw the same lesson from the work himself.

Fuchs was particularly troubled by the concept of entanglement. According to Schrödinger, who coined the term in the first place, this “is the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought” [4]. In most common expositions of the theory, entanglement is depicted as seeming to permit the kind of instantaneous ‘action at a distance’ that Einstein’s theory of relativity forbids. Entangled particles have interdependent states, such that a measurement on one of them is instantaneously ‘felt’ by the other. For example, two photons can be entangled such that they have opposed orientations of polarization (vertical or horizontal). Before a measurement is made on the photons, their polarization is indeterminate: all we know is that the two are correlated. But if we measure one photon, collapsing the probabilities into a well-defined outcome, then we automatically and instantaneously determine the other’s polarization too, no matter how far apart the two photons are. In 1935 Einstein and coworkers presented this as a paradox intended to undermine the probabilistic Copenhagen interpretation; but experiments on photons in the 1980s showed that it really happens [5]. Entanglement, far from being a contrived quirk, is the key to quantum information theory and its associated technologies, such as quantum computers and cryptography.

But although quantum theory can predict the outcomes of entanglement experiments perfectly adequately, it still seems an odd way for the world to behave. We can write down the equations, but we can’t feel the physics behind them. That’s what prompted Fuchs to call for a fresh approach to quantum foundations [1]. His approach [6, 7] argues that quantum states themselves – the entangled state of two photons, say, or even just the spin state of a single photon – don’t exist as objective realities. Rather, “quantum states represent observers’ personal information, expectations and degrees of belief”, he says.

Fuchs calls this approach quantum Bayesianism or QBism (pronounced “cubism”), because he believes that, as standard Bayesian probability theory assumes, probabilities – including quantum probabilities – “are not real things out in the world; their only existence is in quantifying personal degrees of belief of what might happen.” This view, he says, “allows one to see all quantum measurement events as little ‘moments of creation’, rather than as revealing anything pre-existent.”

This idea that quantum theory is really about what we can and do know has always been somewhat in the picture. Schrödinger’s wavefunctions encode a probability distribution over measurement outcomes: what measurements on a quantum system might reveal. In the Copenhagen view, it is meaningless to talk about what we actually will measure until we do it. Likewise, Heisenberg’s uncertainty principle insists that we can’t know all of a system’s observable properties at once with arbitrarily exact accuracy. In other words, quantum theory seemed to impose limits on our precise knowledge of the state of the world – or perhaps better put, to expose a fundamental indeterminacy in our expectations of what measurement will show us. But Fuchs wants us to accept that this isn’t a question of generalized imprecision of knowledge, but a statement about what a specific individual can see and measure. We’re not just part of the painting: in a sense we are partially responsible for painting it.

Information is the key

The rise of quantum information theory over the past few decades has put a new spin on this consideration. One might say that it has replaced an impression of analog fuzziness (“I can’t see this clearly”) with digital error (“the answer might be this or that, but there’s such-and-such a chance of your prediction being wrong”). It is this focus on information – or rather, knowledge – that characterizes several of the current attempts to rebuild quantum theory from scratch. As physicists Caslav Brukner and Anton Zeilinger of the University of Vienna put it, “quantum physics is an elementary theory of information” [8].

Jeffrey Bub of the University of Maryland agrees: quantum mechanics, he says, is “fundamentally a theory about the representation and manipulation of information, not a theory about the mechanics of nonclassical waves or particles” – as clear a statement as you could wish for of why early quantum theory got distracted by the wrong things. His approach to reconstruction builds on the formal properties of how different sorts of information can be ordered and permuted, which lie at the heart of the uncertainty principle [9].

In the quantum picture, certain pairs of quantities do not commute, which means that it matters in which order they are considered: momentum times position is not the same as position times momentum, rather as kneading and baking dough do not commute when making bread. Bub believes that noncommutativity is what distinguishes quantum from classical mechanics, and that entanglement is one of the consequences. This property, he says, is a feature of the way information is fundamentally structured, and it might emerge from a principle called ‘information causality’ [10], introduced by Marcin Pawlowski of the University of Gdansk and colleagues. This postulate describes how much information one observer (call him Bob) can gain about a data set held by another (Alice). Classically the amount is limited by what Alice communicates to Bob. Quantum correlations such as entanglement can increase this limit, but only within bounds set by the information causality postulate. Pawlowski and colleagues suspect that this postulate might single out precisely what quantum correlations permit about information transfer. If so, they argue, “information causality might be one of the foundational properties of nature” – in other words, an axiom of quantum theory.
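Roughly stated – this is my paraphrase of the postulate, not the exact formulation in Pawlowski et al.’s paper – information causality says that if Alice holds a string of bits and sends Bob an m-bit classical message, then no amount of pre-shared correlation, entangled or otherwise, lets Bob learn more than m bits’ worth about her string:

```latex
% Information causality (paraphrased): Alice holds bits a_1, ..., a_n and sends
% Bob an m-bit message; b_k denotes Bob's best guess at a_k when asked for it.
\[
  I \;\equiv\; \sum_{k=1}^{n} I(a_k : b_k) \;\le\; m .
\]
% Classical and quantum correlations both respect this bound; hypothetical
% stronger-than-quantum correlations would violate it, which is why Pawlowski
% and colleagues propose it as a candidate axiom.
```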

Ontic or epistemic?

At the root of the matter is the issue of whether quantum theory pronounces on the nature of reality (a so-called ontic theory) or merely on our allowed knowledge of it (an epistemic theory). Ontic theories, such as the Many Worlds interpretation, take the view that wavefunctions are real entities. The Copenhagen interpretation, on the other hand, is epistemic, insisting that it’s not physically meaningful to look for any layer of reality beneath what we can measure. In this view, says Fuchs, God plays dice and so “the future is not completely determined by the past.” QBism takes this even further: what we see depends on what we look for. “In both Copenhagen and QBism, the wave function is not something ‘out there’”, says Fuchs. “QBism should be seen as a modern variant and refinement of Copenhagen.”

His faith in epistemic approaches to reconstruction is boosted by the work of Robert Spekkens, his colleague at the Perimeter Institute. Spekkens has devised a ‘toy theory’ that restricts the amount of information an observer can have about discrete ontic states of the system: specifically, one’s knowledge about these states can never exceed the amount of knowledge one lacks about them. Spekkens calls this the ‘knowledge balance principle’. It might seem an arbitrary imposition, but he finds that it alone is sufficient to reproduce many (but not all) of the characteristics of quantum theory, such as superposition, entanglement and teleportation [11]. Related ideas involving other kinds of restriction on what can be known about a suite of states also find quantum-like behaviours emerging [12,13].
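The simplest case gives a flavour of how much quantum-like structure the knowledge balance principle buys. In the sketch below (my own illustration of the elementary system, not Spekkens’ full construction) a ‘toy bit’ has four underlying ontic states, and a state of maximal knowledge pins it down only to a pair of them – leaving six such states, which behave like the six cardinal states of a qubit, with disjoint pairs playing the role of orthogonal states.

```python
from itertools import combinations

ontic = {1, 2, 3, 4}                 # the four ontic states of a single 'toy bit'

# Knowledge balance: a state of maximal knowledge pins the system down to
# exactly 2 of the 4 ontic states (you know as much as you don't know).
epistemic_states = [set(pair) for pair in combinations(sorted(ontic), 2)]
print(len(epistemic_states), "states of maximal knowledge:", epistemic_states)
# 6 such states -- the toy analogues of the qubit states |0>, |1>, |+>, |->, |+i>, |-i>

# Disjoint pairs play the role of orthogonal quantum states: they can be told
# apart reliably, because no ontic state is compatible with both.
for s, t in combinations(epistemic_states, 2):
    if s.isdisjoint(t):
        print("reliably distinguishable pair:", sorted(s), sorted(t))
# Exactly 3 such pairs, mirroring the three orthogonal bases of a qubit.
```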

Fuchs sees these insights as a necessary corrective to the way quantum information theory has tended to propagate the notion that information is something objective and real – which is to say, ontic. “It is amazing how many people talk about information as if it is simply some new kind of objective quantity in physics, like energy, but measured in bits instead of ergs”, he says. “You’ll often hear information spoken of as if it’s a new fluid that physics has only recently taken note of.” In contrast, he argues, what else can information possibly be except an expression of what we think we know?

“What quantum information gave us was a vast range of phenomena that nominally looked quite novel when they were first found”, Fuchs explains. For example, it seemed that quantum states, unlike classical states, can’t be ‘cloned’ to make identical copies. “But what Rob’s toy model showed was that so much of this vast range wasn’t really novel at all, so long as one understood these to be phenomena of epistemic states, not ontic ones”. Classical epistemic states can’t be cloned any more than quantum states can be, for much the same reason as you can’t be me.

What’s the use?

What’s striking about several of these attempts at quantum reconstruction is that they suggest that our universe is just one of many mathematical possibilities. “It turns out that many principles lead to a whole class of probabilistic theories, and not specifically quantum theory”, says Schlosshauer. “The problem has been to find principles that actually single out quantum theory”. But this is in itself a valuable insight: “a lot of the features we think of as uniquely quantum, like superpositions, interference and entanglement, are actually generic to many probabilistic theories. This allows us to focus on the question of what makes quantum theory unique.”

Hardy says that, after a hiatus following Fuchs’ call to arms and his own five-axiom proposal in the early 2000s, progress in reconstructions really began in 2009. “We’re now poised for some really significant breakthroughs, in a way that we weren’t ten years ago”, he says. While there’s still no consensus on what the basic axioms should look like, he is confident that “we’ll know them when we see them.” He suspects that ultimately the right description will prove to be ontic rather than epistemic: it will remove the human observer from the scene once more and return us to an objective view of reality. But he acknowledges that some, like Fuchs, disagree profoundly.

For Fuchs, the aim of reconstruction is not to rebuild the existing formalism of quantum theory from scratch, but to rewrite it totally. He says that approaches such as QBism are already motivating new experimental proposals, which might for example reveal a new, deep symmetry within quantum mechanics [14]. The existence of this symmetry, Fuchs says, would allow the quantum probability law to be re-expressed as a minor variation of the standard ‘law of total probability’ in probability theory, which relates the probability of an event to the conditional probabilities of all the ways it might come about. “That new view, if it proves valid, could change our understanding of how to build quantum computers and other quantum information kits,” he says.

Quantum reconstruction is gaining support. A recent poll of attitudes among quantum theorists showed that 60% think reconstructions give useful insights, and more than a quarter think they will lead to a new theory deeper than quantum mechanics [15]. That is a rare degree of consensus for matters connected to quantum foundations.

But how can we judge the success of these efforts? “Since the object is simply to reconstruct quantum theory as it stands, we could not prove that a particular reconstruction was correct since the experimental results are the same regardless”, Hardy admits. “However, we could attempt to do experiments that test that the given axioms are true.” For example, one might seek the ‘higher-order’ interference that his approach predicts.

“However, I would say that the real criterion for success are more theoretical”, he adds. “Do we have a better understanding of quantum theory, and do the axioms give us new ideas as to how to go beyond current day physics?” He is hopeful that some of these principles might assist the development of a theory of quantum gravity – but says that in this regard it’s too early to say whether the approach has been successful.

Fuchs agrees that “the question is not one of testing the reconstructions in any kind of experimental way, but rather through any insight the different variations might give for furthering physical theory along. A good reconstruction is one that has some ‘leading power’ for the way a theorist might think.”

Some remain skeptical. “Reconstructing quantum theory from a set of basic principles seems like an idea with the odds greatly against it”, admits Daniel Greenberger of the City College of New York. “But it’s a worthy enterprise” [16]. Yet Schlosshauer argues that “even if no single reconstruction program can actually find a universally accepted set of principles that works, it’s not a wasted effort, because we will have learned so much along the way.”

He is cautiously optimistic. “I believe that once we have a set of simple and physically intuitive principles, and a convincing story to go with them, quantum mechanics will look a lot less mysterious”, he says. “And I think a lot of the outstanding questions will then go away. I’m probably not the only one who would love to be around to witness the discovery of these principles.” Fuchs feels that could be revolutionary. “My guess is, when the answer is in hand, physics will be ready to explore worlds the faulty preconception of quantum states couldn’t dream of.”

References
1. Fuchs, C., http://arxiv.org/abs/quant-ph/0106166 (2001).
2. Hardy, L. E. http://arxiv.org/abs/quant-ph/0101012 (2003).
3. Sorkin, R., http://arxiv.org/pdf/gr-qc/9401003 (1994).
4. Schrödinger, E. Proc. Cambridge Phil. Soc. 31, 555–563 (1935).
5. A. Aspect et al., Phys. Rev. Lett. 49, 91 (1982).
6. Fuchs, C. http://arxiv.org/pdf/1003.5209
7. Fuchs, C. http://arxiv.org/abs/1207.2141 (2012).
8. Brukner, C. & Zeilinger, A. http://arxiv.org/pdf/quant-ph/0212084 (2008).
9. Bub, J. http://arxiv.org/pdf/quant-ph/0408020 (2008).
10. Pawlowski, M. et al., Nature 461, 1101-1104 (2009).
11. Spekkens, R. W. http://arxiv.org/abs/quant-ph/0401052 (2004).
12. Kirkpatrick, K. A. Found. Phys. Lett. 16, 199 (2003).
13. Smolin, J. A. Quantum Inform. Compu. 5, 161 (2005).
14. Renes, J. M., Blume-Kohout, R., Scott, A. J. & Caves, C. M. J. Math. Phys. 45, 2717 (2004).
15. Schlosshauer, M., Kofler, J. & Zeilinger, A. Stud. Hist. Phil. Mod. Phys. 44, 222–230 (2013).
16. In Schlosshauer, M. (ed.), Elegance and Enigma: The Quantum Interviews (Springer, 2011).