Here’s a more expansive and referenced version (in English!) of my article in the latest La Recherche on the quantum view of “reality”. It was something of a revelation writing it, because I realised to my embarrassment that I had not been properly understanding quantum nonlocality for all these years. It’s no excuse to say this, but a big part of the reason for that is that the concept is so poorly explained not only in most other popular articles but in some scientific papers too. It is usually explained in terms that suggest that, because of entanglement, “Einstein’s spooky action at a distance is real!” No it’s not. Quantum nonlocality, as explored by John Bell, is precisely not action at a distance, but the alternative to it: we only have to see the correlations of entanglement as “action at a distance” if we are still insisting on an Einsteinian hidden variables picture. This is how Johannes Kofler puts it:
“(Quantum) nonlocality” is usually used as an abbreviation for “violating Bell’s inequality”. But only if you stick to hidden variables is there “real” non-locality (= negation of Einstein’s locality). If you keep a Copenhagen-like interpretation (giving up determinism), i.e. not use any hidden variables in the first place, you do not need any non-locality (= negation of Einstein locality) to explain quantum nonlocality. Then there is (quantum) nonlocality without the need for (Einstein) non-locality.”
Duh, you knew that, didn’t you? Now I do too. Similarly, quantum contextuality doesn’t mean that quantum measurements depend on context, but that they would depend on context in a hidden-variable picture. Aha!
___________________________________________________________________
No matter where we look in quantum theory, we seem to play an active part in constructing the reality we observe.
Philosophers and mystics from Plato to the Buddha have long maintained that the reality we perceive is not really there. But quantum theory seems to insist on a far stranger situation than that. In this picture, it is meaningless to ask about what is “there” until we look. Pascual Jordan, one of the physicists who, along with Niels Bohr, helped to define the new quantum world view in the 1920s, claimed that “observations not only disturb what has to be measured, they produce it… We compel [a quantum particle] to assume a definite position.” In other words, Jordan said, “we ourselves produce the results of measurements” [1].
In this comment lurk all the notorious puzzles and peculiarities of quantum theory. It seems to be an incredibly grandiose, self-obsessed image of reality: nothing exists (or at least, we can’t say what does) until we bring it into being. Isn’t this the antithesis of science, which assumes an objective reality that we can examine and probe with experiments?
No wonder Albert Einstein was uncomfortable with this kind of quantum reality. He expressed his worries very concretely to the young physicist Abraham Pais. “I recall”, Pais later wrote, “that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it” [2]. Isn’t the moon, after all, just made up of quantum particles?
So what’s the answer? Is the moon there when nobody looks? How, without looking, could we know anyway?
Increasingly, scientists today are finding ways of “looking” – of conducting experiments that test whether Bohr and his colleagues or Einstein was right. So far, the evidence clearly favours Bohr: reality is what we make it. But what exactly does that mean? No one is sure. In fact, no one even really knows how serious a problem this is. Richard Feynman, who knew more about quantum theory than almost anyone else ever, famously summed it up: “I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem” [3].
Fretting about these questions led Einstein to arguably his greatest contribution to quantum theory. But it’s not one for which he tends to receive much credit, because he was attempting to do the opposite: to banish a quantum property that turns out to be vital.
In the mid-1920s, Bohr, working in Copenhagen with Jordan, Werner Heisenberg and Wolfgang Pauli, came up with his radical interpretation of what quantum mechanics tells us about reality. In this “Copenhagen Interpretation”, the theory doesn’t say anything about “how things are”. All it can do, and all science can ever do (in Bohr’s view), is tell us about “how things seem”: what we can measure. To ask what is really underlying those measurements, said Bohr, is to ask a question that lies beyond science.
In 1935, in collaboration with Boris Podolsky and Nathan Rosen, Einstein described a thought experiment that sought to show how absurd the Copenhagen Interpretation was. He imagined an experiment in which two particles interact to make their quantum states inter-related. Imagine two photons of light, for example, interacting so that one of them gets polarized horizontally (that is, the oscillating electromagnetic fields are oriented in this manner) and the other vertically. According to Bohr’s view of quantum mechanics, the actual photon polarizations aren’t determined until we make the measurement – all we know is that they are correlated.
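In modern notation (a sketch added here for concreteness – the original EPR paper actually framed the argument in terms of position and momentum, and the polarization version follows later simplifications), such a pair can be written as a superposition in which neither photon has a definite polarization of its own, only a perfect correlation with its partner:

$$|\psi\rangle = \frac{1}{\sqrt{2}}\left(|H\rangle_A |V\rangle_B + |V\rangle_A |H\rangle_B\right)$$

Finding photon A horizontally polarized immediately implies that photon B is vertical, and vice versa – yet the state assigns no definite polarization to either photon individually.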
But what if, once they have been “entangled” in this way, the photons are allowed to separate over vast, even cosmic, distances? Quantum theory would still seem to insist that, if we make a measurement on one of them, it instantly decides the polarization of them both. Yet this sort of instant “action at a distance” had apparently been ruled out by Einstein’s theory of special relativity, which insisted that no influence could be transmitted faster than light – a condition called locality. The only alternative to this “spooky action at a distance”, as Einstein called it, was that the polarizations of the photons had been decided all along – even though quantum mechanics couldn’t say what they were. “I am therefore inclined to believe”, Einstein wrote to his friend Max Born in 1948, “that the description of quantum mechanics… has to be regarded as an incomplete and indirect description of reality” [4]. He suspected that there were “hidden variables” that, while we couldn’t measure them, endowed the two particles with definite states.
The Austrian physicist Erwin Schrödinger saw at once that this property of “entanglement” – a word he coined – was central to quantum theory. In it, he said, was the essence of what made quantum theory distinct from the kind of reality we are used to from everyday experience. To Einstein, that was precisely the problem with quantum theory – entanglement was supposed to show not how strange quantum theory was, but why it wasn’t a complete description of reality.
It wasn’t until 1964 that anyone came up with a way to test those assertions. The Northern Irish physicist John Bell imagined another thought experiment involving making measurements on entangled pairs of quantum particles. If the measurements turned out one way, he said, then quantum systems could not both be explained by a “realist” hidden-variables theory and remain “local” in Einstein’s sense: either the world lacks such a realist description, or it must permit real “action at a distance”. Such a situation is called quantum nonlocality. If, on the other hand, the results of Bell’s hypothetical experiment came out the other way, then Einstein would be right: reality is local and realist, meaning that all properties are inherent in a system whether we observe them or not. In this way, Bell showed how in principle we might conduct experiments to settle this fundamental question about the nature of reality: does it obey “quantum nonlocality” or Einstein’s “local realism”?
It took almost another 20 years before Bell’s theorem was put to the test. In the early 1980s Alain Aspect, at the Institut d’Optique in Orsay near Paris, figured out a way to do that using laser beams, and he discovered that the observable effects of quantum entanglement can’t be explained by local hidden variables [5]. Bell’s test is statistical: it relies on making many measurements and discovering whether collectively they stay within the bounds prescribed by local realism or whether they exceed them [see Box 1].
___________________________________________________________________________
Box 1: Testing quantum nonlocality
The concept is simple: a source (C) sends the two members of an entangled pair of particles in opposite directions to two well-separated detectors (A and B).
In the Aspect experiment the source is a calcium atom, which emits pairs of polarized photons; the two photons of each pair fly off in opposite directions and travel 6 metres to the detectors. Each photon can have one of two types of polarization (horizontal or vertical), and so there are in principle four different possibilities for what the two detectors might jointly record. So it should be easy to work out the statistical probabilities of the various experimental outcomes. But here’s the key point: if Bohr was right to say that quantum quantities are undefined before they are measured, then these seemingly straightforward statistics change: you can’t assume that the photons have polarizations that are horizontal or vertical until you measure them – even though you know that these are the only possibilities! The correlations between the entangled photons then produce a statistical outcome of measurements that lies outside the bounds of what “common-sense” arithmetic seems to imply. John Bell quantified those bounds, and Aspect’s experiments confirmed that they are indeed violated.
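To make those bounds concrete, here is a minimal numerical sketch (in Python – the angles and the CHSH combination are standard textbook choices, not taken from Aspect’s papers). Assuming a maximally entangled photon pair such as $(|HH\rangle + |VV\rangle)/\sqrt{2}$, quantum mechanics predicts a correlation $E(a,b) = \cos 2(a-b)$ between analyser angles $a$ and $b$. Any local hidden-variable account keeps the CHSH combination $S$ within $|S| \le 2$, while the quantum prediction reaches $2\sqrt{2} \approx 2.83$:

```python
import numpy as np

def E(a, b):
    """Quantum correlation of +/-1 polarization outcomes at analyser
    angles a and b (radians) for the state (|HH> + |VV>)/sqrt(2)."""
    return np.cos(2 * (a - b))

# Textbook settings that maximize the quantum violation:
# Alice measures at 0 or 45 degrees, Bob at 22.5 or 67.5 degrees.
a1, a2 = 0.0, np.pi / 4
b1, b2 = np.pi / 8, 3 * np.pi / 8

# CHSH combination: any local hidden-variable model gives |S| <= 2
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(f"S = {S:.4f}")                     # 2.8284 = 2*sqrt(2)
print("Local-realist bound violated:", abs(S) > 2)
```

Aspect’s measured values landed close to this quantum prediction, well outside the local-realist bound.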
______________________________________________________________
It’s similar to the way you detect the wave properties of quantum particles by looking at how they interfere with each other. Look at one particle, and it just pops up at a certain position in the detector. But look at many, and you see that more of them appear in some regions (where the interference enhances the chances of finding them) than in others. This indeterminacy of any single experiment, Aspect showed, was not due to our inability to access hidden variables, but was fundamental to quantum theory.
But wait – didn’t we say that special relativity forbids this kind of faster-than-light interaction? Well, Einstein thought so, but it’s not quite true. What special relativity actually forbids is an event at one place having a causal influence on an event at another in less time than light takes to pass between them. Although it is possible to figure out that a particle in one place has displayed the “action at a distance” of entanglement on a particle at another, you can only ever deduce this by exchanging information between the two places – and that exchange is indeed restricted to light speed. In other words, while it is possible to demonstrate this action, it’s impossible to use it to communicate faster than light. And that restriction is enough to preserve the integrity of special relativity.
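This impossibility of signalling can be seen directly in the idealized photon statistics (a sketch under the same textbook assumptions as the CHSH example in Box 1, not a description of any particular experiment): whatever measurement Bob chooses, Alice’s own outcomes stay exactly 50:50, so no message can ride on the correlations.

```python
import numpy as np

def joint_prob(sA, sB, a, b):
    """Joint probability of outcomes sA, sB = +/-1 for polarization
    measurements at angles a, b on the state (|HH> + |VV>)/sqrt(2)."""
    return (1 + sA * sB * np.cos(2 * (a - b))) / 4

# Alice's marginal probability of outcome +1, for several of Bob's settings
a = np.pi / 8
for b in (0.0, np.pi / 6, np.pi / 3, 1.234):
    pA = joint_prob(+1, +1, a, b) + joint_prob(+1, -1, a, b)
    print(f"Bob's angle = {b:.3f} rad  ->  P(Alice reads +1) = {pA:.3f}")

# Prints 0.500 every time: Bob's choice leaves Alice's local statistics
# untouched, which is why entanglement cannot transmit information.
```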
According to Johannes Kofler of the Max Planck Institute of Quantum Optics in Garching, Germany, Bell’s theorem is relevant to many aspects of the emerging discipline of quantum information technology, such as quantum computing and quantum cryptography. But he cautions that there are still loopholes – ways one can argue that perhaps something else in Bell-type experiments is masking hidden variables. Experiments have eliminated each of these loopholes individually, he says, but “what is still lacking is a so-called definitive Bell test where all loopholes are closed simultaneously” [see Box 2]. This isn’t just an academic nicety, he adds, because completely secure telecommunications using quantum cryptography will rely on ensuring that all such loopholes are firmly shut.
___________________________________________________________
Box 2: Closing the loopholes
There have now been many experimental demonstrations of violations of the bounds on nonlocal correlations set by Bell’s theorem. But none has been entirely free of loopholes that ingenious advocates of “local realism” – the view that all objects have local properties fully specifying their state – could exploit. There are three such loopholes, all of them hinging on whether the sampling of the particles’ properties at the detectors is truly random. Loophole number 1 is the “locality” loophole, which says that the measurements at the two detectors could still be influenced by some hidden, fast but still slower-than-light communication between them, so that the randomization of, say, the detector polarization filters is imperfect. Ruling that out demands simply increasing the distance between detectors, which researchers at the University of Innsbruck in Austria achieved in 1998 by placing them 400 m apart, with the photons sent along optical fibres [6]. Loophole number 2 is the “freedom of choice” loophole, in which some “local realist” property of the particles themselves influences the choices made in their measurement. This was ruled out in 2010 in an experiment that also closed the locality loophole, by making sure that the detectors were distant not only from one another but also from the photon source: the source and one of the detectors were located on separate islands in the Canaries [7]. This made it possible to control the timing of the switching of the polarization settings at the detectors very precisely, so that it couldn’t possibly be influenced by anything happening at the source.
Finally there is the “fair-sampling” loophole, in which the subset of all the photons that is actually measured is biased in some way by a “local realistic” property. Ruling out this possibility demands a very high detection efficiency, which was achieved for photons only in 2013 [8]. So all the loopholes have been closed – but not yet all of them simultaneously, arguably still leaving local realism a precarious handhold.
________________________________________________________
Three years after Bell proposed his theorem, Simon Kochen and Ernst Specker identified a similarly counterintuitive feature of quantum theory: measurements can depend on their context. The macroscopic world isn’t like this. If you want to count the number of black and white balls in a jar, it doesn’t matter whether you count the black ones or the white ones first, or whether you tot them up in rows of five or pour them all onto a set of scales and weigh them. But in quantum mechanics, the answer you get may depend on the context of the measurement. Kochen and Specker showed that if quantum systems really display contextuality, this is logically incompatible with the idea that they might be more fully described by hidden variables.
Pawel Kurzynski of the National University of Singapore says that studies of contextuality have lagged behind those of quantum nonlocality by two to three decades – the first experiments that clearly confirmed it were performed only in 2011. But “the attention now paid to contextuality is similar to the attention paid to nonlocality after the Aspect experiment 30 years ago”, he says, and contextuality seems likely to be just as important. For one thing, it might explain why some quantum computers seem able to work faster than classical ones [9]. “Quite a number of people now hope that contextuality is an important ingredient for the speedup”, says Kofler.
Recently, a team led by Kurzynski has suggested that nonlocality and contextuality might ultimately be expressions of the same thing: different facets of a more fundamental “quantum essence” [10]. The researchers took the two simplest experimental tests of quantum nonlocality and contextuality, and figured out how, in theory, to merge them into one. Then, says Kurzynski, “the joint system either exhibits local contextuality or nonlocality, never both at the same time.” For that reason, they call this behaviour quantum monogamy. “Our result shows that these two issues are related via some more general feature that can take the form of either nonlocality or contextuality”, says Kurzynski [see Box 3].
__________________________________________________________
Box 3: Searching for the quantum essence
The simplest experimental tests of quantum nonlocality and contextuality have complicated names: respectively, the Clauser-Horne-Shimony-Holt (CHSH) and Klyachko-Can-Binicioglu-Shumovsky (KCBS) tests. In the CHSH experiment, two observers (say Alice and Bob) each make measurements on one of a pair of entangled photons, each choosing between two measurement settings, and the statistics of the combined measurements are compared with the predictions of a local realist theory. The KCBS scenario, by contrast, involves only a single observer making measurements on a single particle, without entanglement. The statistics of certain combinations of successive measurements are again bounded in a realistic theory that doesn’t depend on the context of measurement, so that contextuality shows up as a violation of this bound.
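For the record, the two bounds take simple algebraic forms (quoted here from the standard literature rather than from the article; the angle brackets denote averages over many runs, and every measurement outcome is $\pm 1$):

$$S = \langle A_1 B_1\rangle - \langle A_1 B_2\rangle + \langle A_2 B_1\rangle + \langle A_2 B_2\rangle \le 2 \qquad \text{(CHSH, local realism)}$$

$$K = \sum_{i=1}^{5} \langle A_i A_{i+1}\rangle \ge -3, \quad A_6 \equiv A_1 \qquad \text{(KCBS, noncontextual realism)}$$

Quantum mechanics can push $S$ up to $2\sqrt{2} \approx 2.83$ (Tsirelson’s bound) and $K$ down to $5 - 4\sqrt{5} \approx -3.94$.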
“What we did was to merge both scenarios”, Kurzynski explains. “We have two observers, Alice and Bob, but this time, in addition to the CHSH scenario Alice also performs additional measurements that allow her to test the KCBS scenario on her subsystem.” One might expect that, since it is possible to violate the realist bounds in both cases when the two scenarios are considered separately, it will also be possible to violate both of them when they are considered jointly. However, what the researchers found when they worked through the calculations was that only one bound can be violated in any one experiment, never both at once.
___________________________________________________________
So Schrödinger may have been premature – as Kurzynski explains, “the fundamental quantum property is not entanglement, but non-classical correlations in a more general form.” That idea is supported by a recent finding by Maximilian Schlosshauer of the University of Portland in Oregon and Arthur Fine of the University of Washington in Seattle. The theorems of Bell and Kochen-Specker are generally called no-go theorems, because they specify situations that aren’t physically possible – in this case, that certain measurement outcomes can’t be reconciled with a hidden-variables picture. Schlosshauer and Fine have devised another no-go theorem which shows that, even if two quantum states are not entangled, they can’t be considered independently [11].
“If we put two quantum systems together, and if we want to think of each system as having some ‘real physical state’ that fully determines what we can measure”, says Schlosshauer, “then the real physical state of the two systems together is not simply the combination of the states of each system.” When you make measurements on the two systems together, each looks different from how it would look if you measured it alone. This new form of entanglement-free interdependence of quantum systems has yet to be demonstrated experimentally.
Again, Schlosshauer says, we see that “trying to uphold classical intuitions gets you into trouble with quantum mechanics.” This, he says, underscores what the theorems of Bell and Kochen-Specker have told us: “quantum measurements do not just ascertain what's already there, but create something new.”
Why can’t we just accept that reality isn’t what we thought it was – that the world is nonlocal and contextual and entangled and correlated? The answer is that it just doesn’t seem that way. If I move a coffee cup on my desk, it isn’t going to move one on yours (unless I’ve rigged up some device that transmits the action). In the “classical” world we experience, these weird effects don’t seem to apply. How the physics of the macroscale arises from the quantum physics of fundamental particles is a hot area of research, and many scientists are devising experiments on the “mesoscale” at which one becomes the other: objects consisting of perhaps thousands to billions of atoms. They have, for example, already shown that organic molecules big enough to see in the electron microscope can display quantum behaviour such as wave-like interference [12].
Although there are still questions about exactly how this quantum-to-classical transition happens, you might think that we do at least know where we stand once we get to everyday objects like apples – they, surely, aren’t going to show any quantum weirdness. But can we be sure? In 1985, physicists Anthony Leggett and Anupam Garg proposed some ground rules for what they called macrorealism: the idea that macroscopic objects will behave in the “realistic” way we have come to expect [13]. Perhaps, they said, there’s some fundamental size limit above which quantum theory as we currently know it breaks down and objects are no longer influenced by measurement. Leggett and Garg worked out what observations would be compatible with the macrorealist picture – something like a macroscopic Bell test. If we carried out the corresponding experiments and found that they violate the Leggett-Garg constraint, it would mean that even macroscopic objects could in principle show quantum behaviour. But the challenge is to find a way of looking at the object without disturbing it – in effect, to figure out how to sense that Einstein’s moon is “there” without directly looking. Such experiments are said to be “non-invasive”.
Over the past four years, several experiments of this kind have been devised and carried out [see Box 4], and they suggest that Leggett and Garg’s macrorealism might indeed be violated by large objects. But so far these experiments have only managed to study systems that don’t necessarily qualify as big enough to be truly macroscopic. The problem is that the experiments get ever harder as the objects get bigger. “How to define macroscopic is unfortunately subjective and almost a small research field on its own”, says Kofler. “We’re eagerly awaiting better experiments.”
_____________________________________________________________________
Box 4: Is the world macrorealistic?
In testing the Leggett-Garg condition for macrorealism, one is essentially asking whether a macroscopic system initially prepared in some particular state will evolve in the same way regardless of whether or not one observes it. So an experimental test means measuring the state of a single system at various points in time and comparing the outcomes for different measurement sequences. The trick, however, is that the measurements must be “non-invasive”: you have to observe the system without disturbing it. One way to do that is with a “negative” observation: if, say, an object can be on either one side of a chamber or the other, and you don’t see it on one of those sides at some particular time, you can infer – without observing the object directly – that it is on the other side. Sometimes you will see the object when you look, but then you just discard that run and start again, keeping track of the statistics.
But there’s another problem too: you need to be sure that the various states of your system are “macroscopically distinct” – that you can clearly see they are different. That, famously, is the case for Schrödinger’s hypothetical cat: it is either alive or dead. But what are the distinct states that you might hope to observe for, say, a cannonball sitting in a box? Its quantum particles might have many different quantum states, but what are the chances that you could initially prepare them all in one state and then see them all jump to another?
That’s why experimental tests of the Leggett-Garg condition have so far tended to be restricted to “mesoscale” systems: they contain many quantum particles, but not so many that distinct collective states can’t be identified. And they have had to find ways of making the observations non-invasively. One test in 2010, for example, monitored oscillations between two different states in a superconducting circuit, which could be regarded as having many thousands of atoms in distinct states [14]. Another test two years later used the “negative-observation” approach to monitor the quantum states of around 10¹⁰ phosphorus impurities in a piece of doped silicon [15]. Both violated the Leggett-Garg condition for macrorealism.
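As a concrete illustration (a textbook sketch of the simplest Leggett-Garg inequality, not the specific analysis used in refs [14] and [15]): for a system measured with outcomes $Q = \pm 1$ at three equally spaced times, macrorealism plus non-invasive measurability requires $K = C_{12} + C_{23} - C_{13} \le 1$, where $C_{ij}$ is the correlation between the i-th and j-th measurements. A coherently oscillating quantum two-level system instead reaches $K = 1.5$:

```python
import numpy as np

# Quantum prediction for the simplest Leggett-Garg combination
# K = C12 + C23 - C13 for a two-level system oscillating at Rabi
# frequency omega, measured at three times separated by tau:
# C12 = C23 = cos(omega * tau) and C13 = cos(2 * omega * tau).
theta = np.linspace(0, np.pi, 100001)       # theta = omega * tau
K = 2 * np.cos(theta) - np.cos(2 * theta)

i = K.argmax()
print(f"max K = {K[i]:.3f} at omega*tau = {theta[i]:.4f} rad "
      f"(pi/3 = {np.pi/3:.4f}); macrorealist bound is K <= 1")
```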
______________________________________________________________
“The various debates about the interpretation of quantum mechanics can be seen as debates about what quantum states refer to”, says Kofler’s colleague Caslav Brukner of the University of Vienna. “There are two major points of view: the states refer to reality, or they refer to our knowledge of the basis from which ‘reality’ is constructed. My current view is that the quantum state is a representation of knowledge necessary for a fictitious observer, within his experimental limitations, to compute probabilities of outcomes of all possible future experiments.”
That brings us back to Einstein’s moon. It now seems that something is there when we don’t look, but exactly what is there is determined only when we look. But there’s no reason to apologize for this intrusion of the observer, Brukner argues. “Fictitious observers are not restricted to quantum theory, but are also introduced in thermodynamics or in the theory of special relativity.” It seems we had better get used to the fact that we’re an essential part of the picture.
References
1. Quoted by M. Jammer, The Philosophy of Quantum Mechanics (Wiley, New York, 1974) p.151.
2. A. Pais, Rev. Mod. Phys. 51, 863 (1979).
3. R. P. Feynman, Int. J. Theor. Phys. 21, 471 (1982).
4. The Born-Einstein Letters, with comments by M. Born (Walker, New York, 1971).
5. A. Aspect et al., Phys. Rev. Lett. 49, 1804 (1982).
6. G. Weihs et al., Phys. Rev. Lett. 81, 5039 (1998).
7. T. Scheidl et al., Proc. Natl Acad. Sci. USA 107, 19708 (2010).
8. M. Giustina et al., Nature 497, 227 (2013).
9. M. Howard, J. Wallman, V. Veitch & J. Emerson, Nature 510, 351 (2014).
10. P. Kurzynski, A. Cabello & D. Kaszlikowski, Phys. Rev. Lett. 112, 100401 (2014).
11. M. Schlosshauer & A. Fine, Phys. Rev. Lett. 112, 070407 (2014).
12. S. Gerlich et al., Nature Commun. 2, 263 (2011).
13. A. J. Leggett & A. Garg, Phys. Rev. Lett. 54, 857 (1985).
14. A. Palacios-Laloy et al., Nature Phys. 6, 442 (2010).
15. G. C. Knee et al., Nature Commun. 3, 606 (2012).
______________________________________________________________
Comments
The universal rule of writing about entanglement on the internet is that, whatever you say, someone will come along and claim that you are wrong. From this:
"I realised to my embarrassment that I had not been properly understanding quantum nonlocality for all these years. It’s no excuse to say this, but a big part of the reason for that is that the concept is so poorly explained not only in most other popular articles but in some scientific papers too. It is usually explained in terms that suggest that, because of entanglement, “Einstein’s spooky action at a distance is real!” No it’s not. Quantum nonlocality, as explored by John Bell, is precisely not action at a distance, but the alternative to it: we only have to see the correlations of entanglement as “action at a distance” if we are still insisting on an Einsteinian hidden variables picture."
it sounds to me that you have changed from a more correct opinion to something more problematic. Of course, there are numerous subtleties in understanding the meaning of Bell’s theorem, but I think the idea that it requires “hidden variables”, or that it can be circumvented by giving up on determinism, is precisely the wrong way of thinking about it.
The issues at stake are explained very clearly in this paper by Howard Wiseman (http://arxiv.org/abs/1402.0351), who is quite even-handed about it, but for the record, I think the view he ascribes to “operationalists”, which is obviously shared by the physicists you spoke to, is fairly crazy.