Here's a pre-edited version of my piece for the Observer today, with a little bit more stuff still in it and some links. This was a great topic to research, and a bit disconcerting at times too.
_____________________________________________________
Be careful what you wish for. That’s what Joel, played by Jim Carrey, discovers in Charlie Kaufman’s 2004 film Eternal Sunshine of the Spotless Mind, when he asks the memory-erasure company Lacuna Inc. to excise the recollections of a painful breakup from his mind. While the procedure is happening, Joel realizes that he doesn’t want every happy memory of the relationship to vanish, and seeks desperately to hold on to a few fragments.
The movie offers a metaphor for how we are defined by our memories, how poignant both their recall and their loss are, and how unreliable they can be. So what if Lacuna’s process is implausible? Just enjoy the allegory.
Except that selective memory erasure isn’t implausible at all. It’s already happening.
Researchers and clinicians are now using drugs to suppress the emotional impact of traumatic memories. They have been able to implant false memories in flies and mice, so that innocuous environments or smells seem to be “remembered” as threatening. They are showing that memory is not like an old celluloid film, fixed but fading; it is constantly being changed and updated, and can be edited and falsified with alarming ease.
“I see a world where we can reactivate any kind of memory we like, or erase unwanted memories”, says neuroscientist Steve Ramirez of the Massachusetts Institute of Technology. “I even see a world where editing memories is something of a reality. We’re living in a time where it’s possible to pluck questions from the tree of science fiction and ground them in experimental reality.” So be careful what you wish for.
But while it’s easy to weave capabilities like this into dystopian narratives, most of which the movies have already supplied – the authoritarian memory-manipulation of Total Recall, the mind-reading police state of Minority Report, the dream espionage of Inception – research on the manipulation of memory could offer tremendous benefits. Already, people suffering from post-traumatic stress disorder (PTSD), such as soldiers or victims of violent crime, have found relief from the pain of their dark memories through drugs that suppress the emotional associations. And the more we understand about how memories are stored and recalled, the closer we get to treatments for neurodegenerative conditions such as Alzheimer’s and other forms of dementia.
So there are good motivations for exploring the plasticity of memory – how it can be altered or erased. And while there are valid concerns about potential abuses, they aren’t so very different from those that attend any biomedical advance. What seems more fundamentally unsettling, but also astonishing, about this work is what it tells us about us: how we construct our identity from our experience, and how our recollections of that experience can deceive us. The research, says Ramirez, has taught him “how unstable our identity can be.”
Best forgotten
Your whole being depends on memory in ways you probably take for granted. You see a tree, and recognize it as a tree, and know it is called “tree” and that it is a plant that grows. You know your language, your name, your loved ones. Few things are more devastating, to the individual and those close to them, than the loss of these everyday facts. As the memories fade, the person seems to fade with them. Christopher Nolan’s film Memento echoes the case of Henry Molaison, who, after a brain operation for epilepsy in the 1950s, lost the ability to retain new memories for more than a short while. Each day his carers had to introduce themselves to him anew.
Molaison’s surgery removed a part of his brain called the hippocampus, giving a clue that this region is involved in laying down new memories. Yet he remembered events and facts learnt long ago, and could still pick up new skills, indicating that long-term memory is stored somewhere else. Using computer analogies for the brain is risky, but it’s reasonable here to compare our short-term memory with a computer’s ephemeral working memory or RAM, and long-term memory with the hard drive that holds information more durably. While the hippocampus is needed to write new memories into that long-term store, long-term memory itself is more distributed throughout the cortex. Some information is stored long-term, such as facts and events we experience repeatedly or that have an emotional association; other items vanish within hours. If you look up the phone number of a plumber, you’ll probably have forgotten it by tomorrow, but you may remember the phone number of your family home from childhood.
What exactly do we remember? Recall isn’t total – you might retain the key aspects of a significant event but not what day of the week it was, or what you were wearing, or exactly what was said. Your memories are a mixed bag: facts, feelings, sights, smells. Ramirez points out that, while Eternal Sunshine implies that all these features of a memory are bundled up and stored in specific neurons in a single location in the brain, in fact it’s now clear that different aspects are stored in different locations. The “facts”, sometimes called episodic memory, are filed in one place, the feelings in another (generally in a brain region called the amygdala). All the same, those components of the memory do each have specific addresses in the vast network of our billions of neurons. What’s more, these fragments remain linked and can be recalled together, so that the event we reconstruct in our heads is seamless, if incomplete. “Memory feels very cohesive, but in reality it’s a reconstructive process”, says Ramirez.
Given all this filtering and parceling out, it’s not surprising that memory is imperfect. “The fidelity of memory is very poor”, says psychologist Alain Brunet of McGill University in Montreal. “We think we remember exactly what happens, but research demonstrates that this is a fallacy.” It’s our need for a coherent narrative that misleads us: the brain elaborates and fills in gaps, and we can’t easily distinguish the “truth” from the invention. You don’t need fancy technologies to mess with memory – just telling someone they experienced something they didn’t, or showing them digitally manipulated photos, can be enough to seed a false conviction. That, much more than intentional falsehood, is why eye-witness accounts may be so unreliable and contradictory.
It gets worse. One of the most extraordinary findings of modern neuroscience, reported in 2000 by neurobiologist Joseph LeDoux and his colleagues at New York University, is that each time you remember something, you have to rebuild the memory afresh. LeDoux’s team reported that when rats were conditioned to associate a particular sound with mild electric shocks, so that they showed a “freezing” fear response when they heard the sound subsequently, this association could be broken by infusing the animals’ amygdala with a drug called anisomycin. The sound then no longer provoked fear – but only if the drug was administered within an hour or so of the memory being evoked. Anisomycin disrupts biochemical processes that create proteins, and the researchers figured that this protein manufacture was essential for writing the memory back into storage after it has been recalled. This re-storage is called reconsolidation: it starts a few minutes after recall, and takes a few hours to complete.
So those security questions asking you for the name of your first pet are even more bothersome than you thought, because each time you have to call up the answer (sorry if I just made you do it again), your brain then has to write the memory back into long-term storage. A computer analogy is again helpful. When we work on a file, the computer makes a copy of the stored version and we work on that – if the power is cut, we still have the original. But as Brunet explains, “When we remember something, we bring up the original file.” If we don’t write it back into the memory, it’s gone.
This rewriting process can, like repeated photocopying, degrade the memory a little. But LeDoux’s work showed that it also offers a window for manipulating the memory. When we call it up, we have the opportunity to change it. LeDoux found that a drug called propranolol can weaken the emotional impact of a memory without affecting the episodic content. This means that the grip of the painful recollections that cause PTSD can be loosened. Propranolol is already known to be safe in humans: it is a beta blocker used to treat hypertension, and (tellingly) also to combat anxiety, because it blocks the action of stress hormones such as noradrenaline in the amygdala. A team at Harvard Medical School has recently discovered that xenon, the inert gas used as an anaesthetic, can also weaken the reconsolidation of fear memories in rats. An advantage of xenon over propranolol is that it gets in and out of the brain very quickly, taking about three minutes each way. If it works well for humans, says Edward Meloni of the Harvard team, “we envisage that patients could self-administer xenon immediately after experiencing a spontaneous intrusive traumatic memory, such as awakening from a nightmare.” The timing of the drug relative to reactivation of the trauma memory may, he says, be critical for blocking the reconsolidation process.
These techniques are now finding clinical use. Brunet uses propranolol to treat people with PTSD, including soldiers returned from active combat, rape victims and people who have suffered car crashes. “It’s amazingly simple,” he says. They give the patients a pill containing propranolol, and then about an hour later “we evoke the memory by having patients write it down and then read it out.” That’s often not easy for them, he says – but they manage it. The patients are then asked to continue reading the script regularly over the next several weeks. Gradually they find that its emotional impact fades, even though the facts are recalled clearly.
“After three or four weeks”, says Brunet, “our patients say things like ‘I feel like I’m smiling inside, because I feel like I’m reading someone else’s script – I’m no longer personally gripped by it.’” They might feel empathy with the descriptions of the terrible things that happened to this person – but that person no longer feels like them. No “talking cure” could do that so quickly and effectively, while conventional drug therapies only suppress the symptoms. “Psychiatry hasn’t cured a single patient in sixty years”, Brunet says.
These cases are extreme, but aren’t even difficult memories (perhaps especially those) part of what makes us who we are? Should we really want to get rid of them? Brunet is confident about giving these treatments to patients who are struggling with memories so awful that life becomes a torment. “We haven’t had a single person say ‘I miss those memories’”, he says. After all, there’s nothing unnatural about forgetting. “We are in part the sum of our memories, and it’s important to keep them”, Brunet says. “But forgetting is part of the human makeup too. We’re built to forget.”
Yet it’s not exactly forgetting. While propranolol and xenon can modify a memory by dampening its emotional impact, the memory remains: PTSD patients still recall “what happened”, and even the emotions are only reduced, not eliminated. We don’t yet really understand what it means to truly forget something. Is it ever really gone or just impossible to recall? And what happens when we learn to overcome fearful memories – say, letting go of a childhood fear of dogs as we figure that they’re mostly quite friendly? “Forgetting is fairly ill-defined”, says neuroscientist Scott Waddell at the University of Oxford. “Is there some interfering process that out-competes the original memory, or does the original memory disappear altogether?” Some research on flies suggests that forgetting isn’t just a matter of decay but an active process in which the old memory is taken apart. Animal experiments have also revealed the spontaneous re-emergence of memories after they were apparently eliminated by re-training, suggesting that memories don’t vanish but are just pushed aside. “It’s really not clear what is going on”, Waddell admits.
Looking into a fly’s head
That’s not so surprising, though, because it’s not fully understood how memory works in the first place. Waddell is trying to figure that out – by training fruit flies and literally looking into their brains. What makes flies so useful is that it’s easy to breed genetically modified strains, so that the role of specific genes in brain activity can be studied by manipulating or silencing them. And the fruit fly is big and complex enough to show sophisticated behavior, such as learning to associate a particular odour with a reward like sugar, while being simple enough to comprehend – it has around 100,000 neurons, compared to our many billions.
What’s more, a fruit fly’s brain is transparent enough to look right through it under the microscope, so that one can watch neural processing while the fly is alive. By attaching fluorescent molecules to particular neurons, Waddell can identify the neural circuitry linked to a particular memory. In his lab in Oxford he showed me an image of a real fly’s brain: a haze of bluish-coloured neurons, with bright green spots and filaments that are, in effect, a snapshot of a memory. The memory might be along the lines of “Ah, that smell – the last time I followed it, it led to something tasty.”
How do you find the relevant neurons among thousands of others? The key is that when neurons get active to form a memory, they advertise their state of busyness. They produce specific proteins, which can be tagged with other light-emitting proteins by genetic engineering of the respective genes. One approach is to inject benign viruses that stitch the light-emission genes right next to the gene for the protein you want to tag; another is to engineer particular cells to produce a foreign protein to which the fluorescent tags will bind. When these neurons get to work forming a memory, they light up. Ramirez compares it to the way lights in the windows of an office block at night betray the location of workers inside.
This ability to identify and target individual memories has enabled researchers like Waddell and Ramirez to manipulate them experimentally in, well, mind-boggling ways. Rather than just watching memories form by fluorescent tagging, they can equip the relevant neurons with molecular switches that are flipped by laser light directed down an optical fibre into the brain, so that those cells can be turned on or off at will. This technique, called optogenetics, is driving a revolution in neuroscience, Ramirez says, because it gives researchers highly selective control over neural activity – enabling them in effect to stimulate or suppress particular thoughts and memories.
Waddell’s lab is not a good place to bring a banana for lunch. The fly store is packed with shelves of glass bottles, each full of flies feasting on a lump of sugar at the bottom. Every bottle is carefully labeled to identify the genetic strain of the insects it contains: which genes have been modified. But surely they get out from time to time, I wonder – and as if on cue, a fly buzzes past. Is that a problem? “They don’t survive for long on the outside,” Waddell reassures me.
Having spent the summer cursing the plague of flies gathering around the compost bin in the kitchen, I’m given fresh respect for these creatures when I inspect one under the microscope and see the bejeweled splendor of its red eyes. It’s only sleeping: you can anaesthetize fruit flies with a puff of carbon dioxide. That’s important for mapping neurons to memories in the microscope, because there’s not much going on in the mind of a dead fly.
These brain maps are now pretty comprehensive. We know, for example, which subset of neurons (about 2,000 in all) is involved in learning to recognize odours, and which neurons can give those smells good or bad associations. And thanks to optogenetics, researchers have been able to switch on some of these “aversive” neurons while flies smell a particular odour, so that they avoid it even though they have actually experienced nothing bad (such as an electric shock) in its presence – in other words, you might say, to implant a false memory. For a fly, it’s not obvious that we can call this “fear”, Waddell says, but “it’s certainly something they don’t like”. In the same way, by using molecular switches that are flipped with heat rather than light, Waddell and his colleagues were able to give flies good vibes about a particular smell. Flies display these preferences by choosing to go in particular directions when they are placed in little plastic mazes, some of them masterfully engineered with little gear-operated gates courtesy of the lab’s 3D printer.
Ramirez, working in a team at MIT led by Susumu Tonegawa, has practiced similar deceptions on mice. In an experiment in 2012 they created a fear memory in a mouse by putting it in a chamber where it experienced mild electric shocks to the feet. While this memory was being laid down, the researchers used optogenetic methods to make the corresponding neurons, located in the hippocampus, switchable with light. Then they put the mouse in a different chamber, where it seemed perfectly at ease. But when they reactivated the fear memory with light, the mouse froze: suddenly it had bad feelings about this place.
That’s not exactly implanting a false memory, however, but just reactivating a true one. To genuinely falsify a recollection, the researchers devised a more elaborate experiment. First, they placed a mouse in a chamber and labeled the neurons that recorded the memory of that place with optogenetic switches. Then the mouse was put in a different chamber and given mild shocks – but while these were delivered, the memory of the first chamber was triggered using light. When the mouse was then put back in the first chamber it froze. Its memory insisted, now without any artificial prompting, that the first chamber was a nasty place, even though nothing untoward had ever happened there. It is not too much to say that a false reality had been directly written into the mouse’s brain.
You must remember this
The problem with memory is often not so much that we totally forget something or recall it wrongly, but that we simply can’t find it even though we know it’s in there somewhere. What triggers memory recall? Why does a fly only seem to recall a food-related odour when it is hungry? Why do we feel fear only if we’re in actual danger, and not all the time? Indeed, it is the breakdown of these normal cues that produces PTSD, where the fear response gets triggered in inappropriate situations.
A good memory is largely about mastering this triggering process. Participants in memory competitions that involve memorizing long sequences of arbitrary numbers are advised to “hook” the information onto easily recalled images. The Russian mnemonist Solomon Shereshevsky, studied from the 1920s by the neuropsychologist Alexander Luria, exploited his synaesthesia – the crosstalk between different sensory experiences such as sound and colour – to tag information with colours, images, sounds or tastes, so that he seemed able to remember everything he heard or read. Cases like this show that there is nothing implausible about Jorge Luis Borges’ fictional character Funes the Memorious, who forgets not the slightest detail of his life. We don’t forget because we run out of brain space, even if it sometimes feels like that.
Rather than constructing a complex system of mnemonics, perhaps it is possible simply to boost the strength of the memory as it is imprinted. “We know that emotionally arousing situations are more likely to be remembered than mundane ones”, LeDoux has explained. “A big part of the reason is that in significant situations chemicals called neuromodulators are released, and they enhance the memory storage process.” So memory sticks when the brain is aroused: emotional associations will do it, but so might exercise, or certain drugs. And because of reconsolidation, it seems possible to enhance memory after it has already been laid down. LeDoux has found that a chemical called isoproterenol has the opposite effect from propranolol on reconsolidation of memory in rats, making fear memories even stronger as they are rewritten into long-term storage in the amygdala. If it works for humans too, he speculates that the drug might help people who have “sluggish” memories.
Couldn’t we all do with a bit of that, though? Ramirez regards chemical memory enhancement as perfectly feasible in principle, and in fact there is already some evidence that caffeine can enhance long-term memory. But then what is considered fair play? No one quibbles about students going into an exam buoyed up by an espresso, but where do we draw the line?
Mind control
It’s hard to come up with extrapolations of these discoveries that are too far-fetched to be ruled out. You can tick off the movies one by one. The memory erasure of Eternal Sunshine is happening right now to some degree. And although so far we know only how to implant a false memory if it has actually been experienced in another context, as our understanding of the molecular and cellular encoding of memory improves Ramirez thinks it might be feasible to construct memories “from the ground up”, as in Total Recall or the implanted childhood recollections of the replicant Rachael in Blade Runner. As Rachael so poignantly found out, that’s the way to fake a whole identity.
If we know which neurons are associated with a particular memory, we can look into a brain and know what a person is thinking about, just by seeing which neurons are active: we can mind-read, as in Minority Report. “With sufficiently good technology you could do that”, Ramirez affirms. “It’s just a problem of technical limitations.” By the same token, we might reconstruct or intervene in dreams, as in Inception (Ramirez and colleagues called their false-memory experiment Project Inception). Decoding the thought processes of dreams is “a very trendy area, and one people are quite excited about”, says Waddell.
How about chips implanted in the brain to control neural activity, Matrix-style? Theodore Berger of the University of Southern California has implanted microchips in rats’ brains that can duplicate the role of the hippocampus in forming long-term memories, recording the neural signals involved and then playing them back. His most recent research shows that the same technique of mimicking neural signals seems to work in rhesus monkeys. The US Defense Advanced Research Projects Agency (DARPA) has two such memory-prosthesis projects afoot. One, called SUBNETS, aims to develop wireless implant devices that could treat PTSD and other combat-related disorders. The other, called RAM (Restoring Active Memories), seeks to restore memories lost through brain injury that are needed for specialized motor skills, such as how to drive a car or operate machinery. The details are under wraps, however, and it’s not clear how feasible it will be to record and replay specific memories. LeDoux professes that he can’t imagine how it could work, given that long-term memories aren’t stored in a single location. To stimulate all the right sites, says Waddell, “you’d have to make sure that your implantation was extremely specific – and I can’t see that happening.”
Ramirez says that it’s precisely because the future possibilities are so remarkable, and perhaps so unsettling, that “we’re starting this conversation today so that down the line we have the appropriate infrastructure.” Are we wise enough to know what we want to forget, to remember, or to think we remember? Do we risk blanking out formative, instructive and precious experiences, or finding ourselves one day being told, as Deckard tells Rachael in Blade Runner, “those aren’t your memories – they’re someone else’s”?
“The problems are not with the current research, but with the question of what we might be able to do in 10-15 years,” says Brunet. It’s one thing to bring in legislation to restrict abuses, just as we do for other biomedical technologies. But the hardest arguments might be about not what we prohibit but what we allow. Should individuals be allowed to edit their own memories or have false ones implanted? Ramirez is upbeat, but insists that the ethical choices are not for scientists alone to thrash out. “We all have some really big decisions ahead of us,” he says.
Sunday, October 12, 2014
Thursday, October 09, 2014
Do we tell the right stories about evolution?
There’s a super discussion on evolutionary theory in Nature this week. It’s prompted by the views of Kevin Laland at St Andrews, who has been arguing for some time that the traditional “evolutionary synthesis” needs to be extended beyond its narrow focus on genetics. In response, Gregory Wray at Duke University and others accuse Laland et al. of presenting a caricature of evolutionary biology and of ignoring all the work that is already being done on the issues Laland highlights.
It all sounds remarkably like the response I got to my article in Nature a couple of years back, which suggested not only that there is much we still don’t understand about how evolution happens at the molecular and genetic level, but also that the question of how genetic inheritance works seems, if anything, less rather than more clear in the post-genomic era. That too led some biologists to respond in much the same way: No, all is well. (The well-known fact that rules of academic courtesy don’t apply towards “journalists” meant that one or two didn’t quite phrase it that way. You get used to it.)
I guess you might expect, in the light of this, that I’d side with Laland et al. But in fact it looks to me as though Wray et al. have a perfectly valid case. After all, my article was formulated after speaking to several evolutionary biologists – and ones who sit well within what could be considered the mainstream. In particular, I think they are right to imply that the diverse mechanisms of evolutionary change known today are ones that, even if Darwin didn’t already suspect them, he would have welcomed avidly.
The real source of the argument, it seems to me, is expressed right at the outset by Laland et al.: “mainstream evolutionary theory has come to focus almost exclusively on genetic inheritance and processes that change gene frequencies”. I’m not sure that this is true, although for good reason this is certainly a major focus – perhaps the major one – of the field. Wray et al. regard this as a caricature, but I think that what Laland et al. are complaining about here is what I wanted to highlight too: not so much the way most evolutionary biologists think, but how evolutionary biology is perceived from the outside. Part of the reason for that predominant “popular” focus on genes lies (ironically, given what it is actually revealing) in the genomics revolution itself, not least because we were promised that this was going to answer every question about who we are and where we came from. But of course, the popular notion that evolution is simply a process of natural selection among genes was well in place before the industrial-scale sequencing of genomes – and one doesn’t have to look too hard to find the origins of that view. As Wray et al. rightly say, the basic processes that produce evolutionary change are several: natural selection, drift, mutation, recombination and gene flow. Things like phenotypic plasticity add fascinating perspectives to this, and my own suspicion is that an awful lot will become clearer once we have tools for grappling with the complexities of gene regulatory networks. There doesn’t seem to be a huge amount of argument about this. But attempts to communicate much beyond a simple equation of evolution with natural selection at the genetic level have been few and far between.
And some of the responses to my article made it clear that this is sometimes a conscious decision. Take the view of Paul Griffiths, philosopher of science at the University of Sydney. According to ABC News,
“While simplistic communication about genetics can be used to hype the importance of research, and it can encourage the impression that genes determine everything, Professor Griffiths said he does not believe the answer is to communicate more complexity.”
Then there’s “science communication academic” Joan Leach from The University of Queensland, who apparently “agrees the average member of the public is not going to be that interested in the complexity of genetics, unless it’s relevant to an issue that they care about.” The ABC story goes on:
"Is there a problem that we need to know about here?" Dr Leach said in response to Dr Ball's article. "There are dangers in telling the simple story, but he hasn't spelt out the advantages of embracing complexity in public communication."
Sorry plebs, you’re too dumb to be told the truth – you’ll have to make do with the simplistic stories we told many decades ago.
A tale of many electrons
In what I hope is a timely moment, with Nobel fever in the air, here is my leader for the latest issue of Nature Materials. The past prize decision it recalls was a nice one for physics, condensed matter and materials – although, curiously, it was a chemistry prize.
______________________________________________________________________
Density functional theory, invented half a century ago, now supplies one of the most convenient and popular shortcuts for dealing with systems of many electrons. It was born in a fertile period when theoretical physics stretched from abstruse quantum field theory to practical electrical engineering.
It’s often pointed out that quantum theory is not just a source of counter-intuitive mystery but also an extraordinarily effective intellectual foundation for engineering. It supplies the theoretical basis for the transistor and superconductor, for understanding molecular interactions relevant from mineralogy to biology, and for describing the basic properties of all matter, from superhard alloys to high-energy plasmas. But popular accounts of quantum physics rarely pay more than lip service to this utilitarian virtue – there is little discussion of what it took to turn the ideas of Bohr, Heisenberg and Schrödinger into a theory that works at an everyday level.
One of the milestones in that endeavour occurred 50 years ago, when Pierre Hohenberg and Walter Kohn published a paper [1] that laid the foundations of density functional theory (DFT). This provided a tool for transforming the fiendishly complicated Schrödinger equation of a many-body system such as the atomic lattice of a solid into a mathematically tractable problem that enables the prediction of properties such as structure and electrical conductivity. The milieu in which this advance was formulated was rich and fertile, and from the distance of five decades it is hard not to idealize it as a golden age in which scientists could still see through the walls that now threaten to isolate disciplines. Kohn, exiled from his native Austria as a young Jewish boy during the Nazi era and educated in Canada, was located at the heart of this nexus. Schooled in quantum physics by Julian Schwinger at Harvard amidst peers including Philip Anderson, Rolf Landauer and Joaquin Luttinger, he was also familiar with the challenges of tangible materials systems such as semiconductors and alloys. In the mid-1950s Kohn worked as a consultant at Bell Labs, where the work of John Bardeen, Walter Brattain and William Shockley on transistors a few years earlier had generated a focus on the solid-state theory of semiconductors. And his ground-breaking paper with Hohenberg came from research on alloys at the Ecole Normale Supérieure in Paris, hosted by Philippe Nozières.
Now that DFT is so familiar a technique, used not only to understand the electronic structures of molecules and materials but also as a semi-classical approach for studying the atomic structures of fluids, it is easy to forget what a bold hypothesis its inception required. In principle one may obtain the electron density n(r) of an N-electron system by integrating the squared N-electron wavefunction over the coordinates of all but one electron, and then use this to calculate the total energy of the system as a functional of n(r) and of the potential v(r) through which each electron interacts with all the fixed nuclei. (A functional here is a “function of a function” – the energy depends on the whole function n(r), say.) Then one could do the calculation by invoking some approximation for the N-electron wavefunction. But Kohn inverted the idea: what if you didn’t start from the complicated N-body wavefunction, but just from the spatially varying electron density n(r)? That’s to say, maybe the external potential v(r), and thus the total energy of the ground state of the system, depend only on the ground-state n(r)? Then that density function is all you need to know. As Andrew Zangwill puts it in a recent commentary on Kohn’s career [2], “This was a deep question. Walter realized he wasn’t doing alloy theory any more.”
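To see that inversion in symbols, here is a compact sketch in standard textbook notation – my own gloss, not the notation of the Hohenberg–Kohn paper itself. The density follows from the N-electron wavefunction, and the theorem says that the ground-state energy can be treated as a functional of that density alone:

```latex
% A standard-notation sketch of the Hohenberg-Kohn idea (a gloss, not the paper's own notation)
\begin{align*}
  n(\mathbf{r}) &= N \int \lvert \Psi(\mathbf{r}, \mathbf{r}_2, \ldots, \mathbf{r}_N) \rvert^{2}
      \,\mathrm{d}\mathbf{r}_2 \cdots \mathrm{d}\mathbf{r}_N
      && \text{density from the $N$-electron wavefunction} \\
  E_v[n] &= F[n] + \int v(\mathbf{r})\, n(\mathbf{r})\, \mathrm{d}\mathbf{r}
      && \text{ground-state energy as a functional of the density}
\end{align*}
% Here v(r) is the external (nuclear) potential and F[n] is a universal functional, the same
% for every system; minimizing E_v[n] over admissible densities yields the ground-state
% energy and density.
```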
Kohn figured out a proof of this remarkable conjecture, but it seemed so simple that he couldn’t believe it hadn’t been noticed before. So he asked Hohenberg, a post-doc in Nozières’ lab, to help. Together the pair formulated a rigorous proof of the conjecture for the case of an inhomogeneous electron gas; since their 1964 paper, several other proofs have been found. That paper was formal and understated to the point of desiccation, and one needed to pay it close attention to see how remarkable the result was. The initial response was muted, and Hohenberg moved subsequently into other areas, such as hydrodynamics, phase transitions and pattern formation.
Kohn, however, went on to develop the idea into a practical method for calculating the electronic ground states of molecules and solids, working in particular with Hong Kong-born postdoc Lu-Jeu Sham. Their crucial paper [3] was much more explicit about the potential of this approach as an approximation for calculating real materials properties of solids, such as cohesive energies and elastic constants, from quantum principles. It is now one of the most highly cited papers in all of physics, but it was an example of a “sleeper”: the community took some time to wake up to what was on offer. Not until the work of John Pople in the early 1990s did chemists begin to appreciate that DFT could offer a simple and convenient way to calculate electronic structures. It was that work which led to the 1998 Nobel prize in chemistry for Pople and Kohn – incongruous for someone so immersed in physics.
Zangwill argues that DFT defies the common belief that important theories reflect the Zeitgeist: it was an idea that was not in the air at all in the 1960s, and, says Zangwill, “might be unknown today if Kohn had not created it in the mid-1960s.” Clearly that’s impossible to prove. But there’s no mistaking the debt that materials and molecular sciences owe to Kohn’s insight, and so if Zangwill is right, all the more reason to ask if we still create the right sort of environments for such fertile ideas to germinate.
1. Hohenberg, P. & Kohn, W. Phys. Rev. 136, B864-871 (1964).
2. Zangwill, A., http://www.arxiv.org/abs/1403.5164 (2014).
3. Kohn, W. & Sham, L. J. Phys. Rev. 140, A1133-1138 (1965).
______________________________________________________________________
Wednesday, October 08, 2014
The moment of uncertainty
As part of a feature section in the October issue of La Recherche on uncertainty, I interviewed Robert Crease, historian and philosopher of science at Stony Brook University, New York, on the cultural impact of Heisenberg’s principle. It turned out that Robert had just written a book looking at this very issue – in fact, at the cultural reception of quantum theory in general. It’s called The Quantum Moment, is coauthored by Alfred Scharff Goldhaber, and is a great read – I have written a mini-review for the next (November) issue of Prospect. Here’s the interview, which otherwise appears only in French in La Recherche. Since Robert has such a great way with words, it was one of the easiest I’ve ever done.
________________________________________________________
What led Heisenberg to formulate the uncertainty principle? Was it something that fell out of the formalism in mathematical terms?
That’s a rather dramatic story. The uncertainty principle emerged in an exchange of letters between Heisenberg and Pauli, and fell out of the work that Heisenberg had done on quantum theory the previous year, called matrix mechanics. In autumn 1926, he and Pauli were corresponding about how to understand its implications. Heisenberg insisted that the only way to understand it involved junking classical concepts such as position and momentum in the quantum world. In February 1927 he visited Niels Bohr in Copenhagen. Bohr usually helped Heisenberg to think, but this time the visit didn’t have the usual effect. They grew frustrated, and Bohr abandoned Heisenberg to go skiing. One night, walking by himself in the park behind Bohr’s institute, Heisenberg had an insight. He wrote to Pauli: “One will always find that all thought experiments have this property: when a quantity p is pinned down to within an accuracy characterized by the average error p1, then... q can only be given at the same time to within an accuracy characterized by the average error q1 ≈ h/p1.” That’s the uncertainty principle. But like many equations, including E = mc² and Maxwell’s equations, its first appearance is not in its now-famous form. Anyway, Heisenberg sent off a paper on his idea that was published in May.
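[In symbols, for those who want them: Heisenberg’s estimate in that letter amounts, in modern notation, to roughly the relation below; the sharpened inequality with ħ/2 is the later form proved by Kennard, not Heisenberg’s original wording.]

```latex
% Heisenberg's 1927 order-of-magnitude estimate, and the now-standard inequality
% (the latter is the later, sharpened form due to Kennard, not Heisenberg's own statement)
\Delta q \,\Delta p \;\approx\; h
\qquad \longrightarrow \qquad
\Delta q \,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad \hbar = \frac{h}{2\pi}
```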
How did Heisenberg interpret it in physical terms?
He didn’t, really; at the time he kept claiming that the uncertainty principle couldn’t be interpreted in physical terms, and simply reflected the fact that the subatomic world could not be visualized. Newtonian mechanics is visualizable: each thing in it occupies a particular place at a particular time. Heisenberg thought the attempt to construct a visualizable solution for quantum mechanics might lead to trouble, and so he advised paying attention only to the mathematics. Michael Frayn captures this side of Heisenberg well in his play Copenhagen. When the Bohr character charges that Heisenberg doesn't pay attention to the sense of what he’s doing so long as the mathematics works out, the Heisenberg character indignantly responds, "Mathematics is sense. That's what sense is".
Was Heisenberg disturbed by the implications of what he was doing?
No. Both he and Bohr were excited about what they had discovered. From the very beginning they realized that it had profound philosophical implications, and were thrilled to be able to explore them. Almost immediately both began thinking and writing about the epistemological implications of the uncertainty principle.
Was anyone besides Heisenberg and Bohr troubled?
The reaction was mixed. Arthur Eddington, an astronomer and science communicator, was thrilled, saying that the epistemological implications of the uncertainty principle heralded a new unification of science, religion, and the arts. The Harvard physicist Percy Bridgman was deeply disturbed, writing that “the bottom has dropped clean out” of the world. He was terrified about its impact on the public. Once the implications sink in, he wrote, it would “let loose a veritable intellectual spree of licentious and debauched thinking.”
Did physicists all share the same view of the epistemological implications of quantum mechanics?
No, they came up with several different ways to interpret it. As the science historian Don Howard has shown, the notion that the physics community of the day shared a common view, one they called the “Copenhagen interpretation,” is a myth promoted in the 1950s by Heisenberg for his own selfish reasons.
How much did the public pay attention to quantum theory before the uncertainty principle?
Not much. Newspapers and magazines treated it as something of interest because it excited physicists, but as far too complicated to explain to the public. Even philosophers didn’t see quantum physics as posing particularly interesting or significant philosophical problems. The uncertainty principle’s appearance in 1927 changed that. Suddenly, quantum mechanics was not just another scientific theory – it showed that the quantum world works very differently from the everyday world.
How did the uncertainty principle get communicated to a broader public?
It took about a year. In August 1927, Heisenberg, who was not yet a celebrity, gave a talk at a meeting of the British Association for the Advancement of Science, but it sailed way over the heads of journalists. The New York Times’s science reporter said trying to explain it to the public was like “trying to tell an Eskimo what the French language is like without talking French.” Then came a piece of luck. Eddington devoted a section to the uncertainty principle in his book The Nature of the Physical World, published in 1928. He was a terrific explainer, and his imagery and language were very influential.
How did the public react?
Immediately and enthusiastically. A few days after October 29, 1929, the New York Times, tongue-in-cheek, invoked the uncertainty principle as the explanation for the stock market crash.
And today?
Heisenberg and his principle still feature in popular culture. In fact, thanks to the uncertainty principle, I think I’d argue that Heisenberg has made an even greater impact on popular culture than Einstein. In the American television drama series Breaking Bad, 'Heisenberg' is the pseudonym of the protagonist, a high school chemistry teacher who manufactures and sells the illegal drug crystal methamphetamine. The religious poet Christian Wiman, in his recent book about facing cancer, writes that "to feel enduring love like a stroke of pure luck" amid "the havoc of chance" makes God "the ultimate Uncertainty Principle." In The Ascent of Man, the Polish-British scientist Jacob Bronowski calls the uncertainty principle the Principle of Tolerance. There’s even an entire genre of uncertainty principle jokes. A police officer pulls Heisenberg over and says, "Did you know that you were going 90 miles an hour?" Heisenberg says, "Thanks. Now I'm lost."
Has the uncertainty principle been used for serious philosophical purposes?
Yes. Already in 1929, John Dewey wrote about it to promote his ideas about pragmatism, and in particular his thoughts about the untenability of what he called the “spectator theory of knowledge.” The literary critic George Steiner has used the uncertainty principle to describe the process of literary criticism – how it involves transforming the “object” being interpreted, that is, the text, and delivering it differently to the generation that follows. More recently, the Slovene philosopher Slavoj Žižek has devoted attention to the philosophical implications of the uncertainty principle.
Some popular culture uses of the uncertainty principle are off the wall. How do you tell meaningful uses from the bogus ones?
It’s not easy. Popular culture often uses scientific terms in ways that are pretentious, erroneous, wacky, or unverifiable. It’s nonsense to apply the uncertainty principle to medicines or self-help issues, for instance. But how is that different from Steiner using it to describe the process of literary criticism?
Outside of physics, has our knowledge that uncertainty is a feature of the subatomic world, and the uses to which it has been put by writers and philosophers, helped to change our worldview in any way?
I think so. The contemporary world does not always feel smooth, continuous, and law-governed, like the Newtonian World. Our world instead often feels jittery, discontinuous, and irrational. That has sometimes prompted writers to appeal to quantum imagery and language to describe it. John Updike’s characters, for instance, sometimes appeal to the uncertainty principle, while Updike himself did so in speaking of the contemporary world as full of “gaps, inconsistencies, warps, and bubbles in the surface of circumstance.” Updike and other writers and poets have found this imagery metaphorically apt.
The historians Betty Dobbs and Margaret Jacob have remarked that the Newtonian Moment provided “the material and mental universe – industrial and scientific – in which most Westerners and some non-Westerners now live, one aptly described as modernity.” But that universe is changing. Quantum theory showed that at a more fundamental level the world is not Newtonian at all, but governed by notions such as chance, probability, and uncertainty.
Robert Crease’s book (with Alfred S. Goldhaber) The Quantum Moment: How Planck, Bohr, Einstein, and Heisenberg Taught Us to Love Uncertainty will be published by Norton in October 2014.
________________________________________________________
What led Heisenberg to formulate the uncertainty principle? Was it something that fell out of the formalism in mathematical terms?
That’s a rather dramatic story. The uncertainty principle emerged in exchange of letters between Heisenberg and Pauli, and fell out of the work that Heisenberg had done on quantum theory the previous year, called matrix mechanics. In autumn 1926, he and Pauli were corresponding about how to understand its implications. Heisenberg insisted that the only way to understand it involved junking classical concepts such as position and momentum in the quantum world. In February 1927 he visited Niels Bohr in Copenhagen. Bohr usually helped Heisenberg to think, but this time the visit didn’t have the usual effect. They grew frustrated, and Bohr abandoned Heisenberg to go skiing. One night, walking by himself in the park behind Bohr’s institute, Heisenberg had an insight. He wrote to Pauli: “One will always find that all thought experiments have this property: when a quantity p is pinned down to within an accuracy characterized by the average error p, then... q can only be given at the same time to within an accuracy characterized by the average error q1 ≈ h/p1.” That’s the uncertainty principle. But like many equations, including E = mc2 and Maxwell’s equations, its first appearance is not in its now-famous form. Anyway, Heisenberg sent off a paper on his idea that was published in May.
How did Heisenberg interpret it in physical terms?
He didn’t, really; at the time he kept claiming that the uncertainty principle couldn’t be interpreted in physical terms, and simply reflected the fact that the subatomic world could not be visualized. Newtonian mechanics is visualizable: each thing in it occupies a particular place at a particular time. Heisenberg thought the attempt to construct a visualizable solution for quantum mechanics might lead to trouble, and so he advised paying attention only to the mathematics. Michael Frayn captures this side of Heisenberg well in his play Copenhagen. When the Bohr character charges that Heisenberg doesn't pay attention to the sense of what he’s doing so long as the mathematics works out, the Heisenberg character indignantly responds, "Mathematics is sense. That's what sense is".
Was Heisenberg disturbed by the implications of what he was doing?
No. Both he and Bohr were excited about what they had discovered. From the very beginning they realized that it had profound philosophical implications, and were thrilled to be able to explore them. Almost immediately both began thinking and writing about the epistemological implications of the uncertainty principle.
Was anyone besides Heisenberg and Bohr troubled?
The reaction was mixed. Arthur Eddington, an astronomer and science communicator, was thrilled, saying that the epistemological implications of the uncertainty principle heralded a new unification of science, religion, and the arts. The Harvard physicist Percy Bridgman was deeply disturbed, writing that “the bottom has dropped clean out” of the world. He was terrified about its impact on the public. Once the implications sank in, he wrote, it would “let loose a veritable intellectual spree of licentious and debauched thinking.”
Did physicists all share the same view of the epistemological implications of quantum mechanics?
No, they came up with several different ways to interpret it. As the science historian Don Howard has shown, the notion that the physics community of the day shared a common view, one they called the “Copenhagen interpretation,” is a myth promoted in the 1950s by Heisenberg for his own selfish reasons.
How much did the public pay attention to quantum theory before the uncertainty principle?
Not much. Newspapers and magazines treated it as something of interest because it excited physicists, but as far too complicated to explain to the public. Even philosophers didn’t see quantum physics as posing particularly interesting or significant philosophical problems. The uncertainty principle’s appearance in 1927 changed that. Suddenly, quantum mechanics was not just another scientific theory – it showed that the quantum world works very differently from the everyday world.
How did the uncertainty principle get communicated to a broader public?
It took about a year. In August 1927, Heisenberg, who was not yet a celebrity, gave a talk at a meeting of the British Association for the Advancement of Science, but it sailed way over the heads of journalists. The New York Times’s science reporter said trying to explain it to the public was like “trying to tell an Eskimo what the French language is like without talking French.” Then came a piece of luck. Eddington devoted a section to the uncertainty principle in his book The Nature of the Physical World, published in 1928. He was a terrific explainer, and his imagery and language were very influential.
How did the public react?
Immediately and enthusiastically. A few days after October 29, 1929, the New York Times, tongue-in-cheek, invoked the uncertainty principle as the explanation for the stock market crash.
And today?
Heisenberg and his principle still feature in popular culture. In fact, thanks to the uncertainty principle, I think I’d argue that Heisenberg has made an even greater impact on popular culture than Einstein. In the American television drama series Breaking Bad, 'Heisenberg' is the pseudonym of the protagonist, a high school chemistry teacher who manufactures and sells the illegal drug crystal methamphetamine. The religious poet Christian Wiman, in his recent book about facing cancer, writes that "to feel enduring love like a stroke of pure luck" amid "the havoc of chance" makes God "the ultimate Uncertainty Principle." In The Ascent of Man, the Polish-British scientist Jacob Bronowski calls the uncertainty principle the Principle of Tolerance. There’s even an entire genre of uncertainty principle jokes. A police officer pulls Heisenberg over and says, "Did you know that you were going 90 miles an hour?" Heisenberg says, "Thanks. Now I'm lost."
Has the uncertainty principle been used for serious philosophical purposes?
Yes. Already in 1929, John Dewey wrote about it to promote his ideas about pragmatism, and in particular his thoughts about the untenability of what he called the “spectator theory of knowledge.” The literary critic George Steiner has used the uncertainty principle to describe the process of literary criticism – how it transforms the “object” being interpreted, that is, the text, and delivers it differently to the generation that follows. More recently, the Slovene philosopher Slavoj Žižek has devoted attention to the philosophical implications of the uncertainty principle.
Some popular culture uses of the uncertainty principle are off the wall. How do you tell meaningful uses from the bogus ones?
It’s not easy. Popular culture often uses scientific terms in ways that are pretentious, erroneous, wacky, or unverifiable. It’s nonsense to apply the uncertainty principle to medicines or self-help issues, for instance. But how is that different from Steiner using it to describe the process of literary criticism?
Outside of physics, has our knowledge that uncertainty is a feature of the subatomic world, and the uses to which it has been put by writers and philosophers, helped to change our worldview in any way?
I think so. The contemporary world does not always feel smooth, continuous, and law-governed, like the Newtonian World. Our world instead often feels jittery, discontinuous, and irrational. That has sometimes prompted writers to appeal to quantum imagery and language to describe it. John Updike’s characters, for instance, sometimes appeal to the uncertainty principle, while Updike himself did so in speaking of the contemporary world as full of “gaps, inconsistencies, warps, and bubbles in the surface of circumstance.” Updike and other writers and poets have found this imagery metaphorically apt.
The historians Betty Dobbs and Margaret Jacob have remarked that the Newtonian Moment provided “the material and mental universe – industrial and scientific – in which most Westerners and some non-Westerners now live, one aptly described as modernity.” But that universe is changing. Quantum theory showed that at a more fundamental level the world is not Newtonian at all, but governed by notions such as chance, probability, and uncertainty.
Robert Crease’s book (with Alfred S. Goldhaber) The Quantum Moment: How Planck, Bohr, Einstein, and Heisenberg Taught Us to Love Uncertainty will be published by Norton in October 2014.
Uncertain about uncertainty
This is the English version of the cover article (in French) of the latest issue of La Recherche (October). It’s accompanied by an interview that I conducted with Robert Crease about the cultural impact of the uncertainty principle, which I’ll post next.
______________________________________________________________________
If there’s one thing most people know about quantum physics, it’s that it is uncertain. There’s a fuzziness about the quantum world that prevents us from knowing everything about it with absolute detail and clarity. Almost 90 years ago, the German physicist Werner Heisenberg pointed this out in his famous Uncertainty Principle. Yet over the past few years there has been heated debate among physicists about just what Heisenberg meant, and whether he was correct. The latest experiments seem to indicate that one version of the Uncertainty Principle presented by Heisenberg might be quite wrong, and that we can get a sharper picture of quantum reality than he thought.
In 1927 Heisenberg argued that we can’t measure all the attributes of a quantum particle at the same time and as accurately as we like [1]. In particular, the more we try to pin down a particle’s exact location, the less accurately we can measure its speed, and vice versa. There’s a precise limit to this trade-off, Heisenberg said. If the uncertainty in position is denoted Δx, and the uncertainty in momentum (mass times velocity) is Δp, then their product ΔxΔp can be no smaller than ħ/2, where ħ (“h-bar”) is Planck’s constant divided by 2π – the fundamental constant that sets the scale of the ‘granularity’ of the quantum world, the size of the ‘chunks’ into which energy is divided.
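To get a feel for the numbers – a back-of-the-envelope sketch of my own, not part of the argument above – take an electron confined to a region about the size of an atom and ask how small its speed uncertainty can possibly be:

# Rough illustration of ΔxΔp ≥ ħ/2 for an electron confined to ~0.1 nm.
# The specific numbers are illustrative, not taken from the article.
hbar = 1.054571817e-34    # reduced Planck constant, J·s
m_e = 9.1093837015e-31    # electron mass, kg

dx = 1e-10                # position uncertainty: roughly the width of an atom, in metres
dp_min = hbar / (2 * dx)  # smallest momentum uncertainty the relation allows
dv_min = dp_min / m_e     # corresponding minimum speed uncertainty

print(f"minimum Δp ≈ {dp_min:.1e} kg m/s")  # ≈ 5.3e-25 kg m/s
print(f"minimum Δv ≈ {dv_min:.1e} m/s")     # ≈ 5.8e5 m/s

In other words, pinning an electron down to atomic dimensions already forces an irreducible spread of hundreds of kilometres per second in its speed.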
Where does this uncertainty come from? Heisenberg’s reasoning was mathematical, but he felt he needed to give some intuitive explanation too. For something as small and delicate as a quantum particle, he suggested, it is virtually impossible to make a measurement without disturbing and altering what we’re trying to measure. If we “look” at an electron by bouncing a photon of light off it in a microscope, that collision will change the path of the electron. The more we try to reduce the intrinsic inaccuracy or “error” of the measurement, say by using light of shorter wavelength to sharpen the image, the more we disturb the electron. According to Heisenberg, error (Δe) and disturbance (Δd) are also related by an uncertainty principle, in which ΔeΔd can’t be smaller than ħ/2.
The American physicist Earle Hesse Kennard showed very soon after Heisenberg’s original publication that in fact his thought experiment is superfluous to the issue of uncertainty in quantum theory. The restriction on precise knowledge of both speed and position is an intrinsic property of quantum particles, not a consequence of the limitations of experiments. All the same, might Heisenberg’s “experimental” version of the Uncertainty Principle – his relationship between error and disturbance – still be true?
“When we explain the Uncertainty Principle, especially to non-physicists,” says physicist Aephraim Steinberg of the University of Toronto in Canada, “we tend to describe the Heisenberg microscope thought experiment.” But he says that, while everyone agrees that measurements disturb systems, many physicists no longer think that Heisenberg’s equation relating Δe and Δd describes that process adequately.
Japanese physicist Masanao Ozawa of Nagoya University was one of the first to question Heisenberg. In 2003 he argued that it should be possible to defeat the apparent limit on error and disturbance [2]. Ozawa was motivated by a debate that began in the 1980s about the accuracy of measurements of gravitational waves, the ripples in spacetime predicted by Einstein’s theory of general relativity and expected to be produced by violent astrophysical events such as those involving black holes. No one has yet detected a gravitational wave, but the techniques proposed to do so entail measuring the very small distortions in space that will occur when such a wave passes by. These distortions are so tiny – fractions of the size of atoms – that at first glance the Uncertainty Principle would seem to dictate whether measuring them is feasible at all. In other words, the accuracy demanded by experiments like this gives the question of how measurement disturbs a system real, practical ramifications.
In 1983 Horace Yuen of Northwestern University in Illinois suggested that, if gravitational-wave measurements were done in a way that barely disturbed the detection system at all, the apparently fundamental limit on accuracy dictated by Heisenberg’s error-disturbance relation could be beaten. Others disputed that idea, but Ozawa defended it. This led him to reconsider the general question of how experimental error is related to the degree of disturbance it involves, and in his 2003 paper he proposed a new relationship between the two quantities in which two other terms are added to the equation: ΔeΔd + A + B ≥ ħ/2, so that ΔeΔd itself can be smaller than ħ/2 without violating the limit.
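For readers who want to see what the two extra terms are (the text above doesn’t spell them out; the form below follows the way Ozawa’s relation [2] is usually written, with σx and σp denoting the intrinsic quantum spreads – the standard deviations – of position and momentum in the state being measured):

ΔeΔd + Δe·σp + σx·Δd ≥ ħ/2

so that A = Δe·σp and B = σx·Δd. Because either of these extra terms can take up the slack, the product ΔeΔd on its own is allowed to fall below ħ/2.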
Last year, Cyril Branciard of the University of Queensland in Australia (now at the CNRS Institut Néel at Grenoble) tightened up Ozawa’s new uncertainty equation [3]. “I asked whether all values of Δe and Δd that satisfy his relation are allowed, or whether there could be some values that are nevertheless still forbidden by quantum theory”, Branciard explains. “I showed that there are actually more values that are forbidden. In other words, Ozawa's relation is ‘too weak’.”
But Ozawa’s relationship had by then already been shown to give an adequate account of uncertainty for most purposes, since in 2012 it was put to the test experimentally by two teams [4,5]. Steinberg and his coworkers in Toronto figured out how to measure the quantities in Ozawa’s equation for photons of infrared laser light travelling along optical fibres and being sensed by detectors. They used a way of detecting the photons that perturbed their state as little as possible, and found that they could indeed violate the bound on error and disturbance proposed by Heisenberg, but not that of Ozawa. Meanwhile, Ozawa himself joined forces with a group at the Vienna University of Technology led by Yuji Hasegawa, which made measurements on the quantum properties of a beam of neutrons passing through a series of detectors. They too found that the measurements could violate the Heisenberg limit but not Ozawa’s.
Very recent experiments have confirmed that conclusion with still greater accuracy, verifying Branciard’s relationships too [6,7]. Branciard himself was a collaborator on one of those studies, and he says that “experimentally we could get very close indeed to the bounds imposed by my relations.”
Doesn’t this prove that Heisenberg was wrong about how error is connected to disturbance in experimental measurements? Not necessarily. Last year, a team of European researchers claimed to have a theoretical proof that in fact this version of Heisenberg’s Uncertainty Principle is correct after all [8]. They argued that Ozawa’s theory, and the experiments testing it, were using the wrong definitions of error. So they might be correct in their own terms, but weren’t really saying anything about Heisenberg’s error-disturbance principle. As team member Paul Busch of the University of York in England puts it, “Ozawa effectively proposed a wrong relationship between his own definitions of error and disturbance, wrongly ascribed it to Heisenberg, then showed how to fix it.”
So Heisenberg was correct after all in the limits he set on the tradeoff, argues Busch: “if the error is kept small, the disturbance must be large.”
Who is right? It seems to depend on exactly how you pose the question. What, after all, does measurement error mean? If you make a single measurement, there will be some random error that reflects the limits on the accuracy of your technique. That’s why experimentalists typically make many measurements on the same system, so that some of the randomness averages out. Yet surely, some argue, the whole spirit of Heisenberg’s original argument was about making measurements of different properties on a particular, single quantum object, not averages over a whole bunch of such objects?
It now seems that Heisenberg’s limit on how small the combined uncertainty can be for error and disturbance holds true if you think about averages of many measurements, but that Ozawa’s smaller limit applies if you think about particular quantum states. In the first case you’re effectively measuring something like the “disturbing power” of a specific instrument; in the second case you’re quantifying how much we can know about an individual state. So whether Heisenberg was right or not depends on what you think he meant (and perhaps on whether you think he even recognized the difference).
As Steinberg explains, Busch and colleagues “are really asking how much a particular measuring apparatus is capable of disturbing a system, and they show that they get an equation that looks like the familiar Heisenberg form. We think it is also interesting to ask, as Ozawa did, how much the measuring apparatus disturbs one particular system. Then the less restrictive Ozawa-Branciard relations apply.”
Branciard agrees with Steinberg that this isn’t a question of who’s right and who’s wrong, but just a matter of how you make your definitions. “The two approaches simply address different questions. They each argue that the problem they address was probably the one Heisenberg had in mind. But Heisenberg was simply not clear enough on what he had in mind, and it is always dangerous to put words in someone else's mouth. I believe both questions are interesting and worth studying.”
There’s a broader moral to be drawn, for the debate has highlighted how quantum theory is no longer perceived to reveal an intrinsic fuzziness in the microscopic world. Rather, what the theory can tell you depends on what exactly you want to know and how you intend to find out about it. It suggests that “quantum uncertainty” isn’t some kind of resolution limit, like the point at which objects in a microscope look blurry, but is to some degree chosen by the experimenter. This fits well with the emerging view of quantum theory as, at root, a theory about information and how to access it. In fact, recent theoretical work by Ozawa and his collaborators turns the error-disturbance relationship into a question about the cost that gaining information about one property of a quantum system imposes on what can be known about its other properties [9]. It’s a little like saying that you begin with a box that you know is red and think weighs one kilogram – but if you want to check that weight exactly, you weaken the link to redness, so that you can’t any longer be sure that the box you’re weighing is a red one. The weight and the colour start to become independent pieces of information about the box.
If this seems hard to intuit, that’s just a reflection of how interpretations of quantum theory are starting to change. It appears to be telling us that what we can know about the world depends on how we ask. To that extent, then, we choose what kind of a world we observe.
The issue isn’t just academic, since an approach to quantum theory in which quantum states are considered to encode information is now starting to produce useful technologies, such as quantum cryptography and the first prototype quantum computers. “Deriving uncertainty relations for error-disturbance or for joint measurement scenarios using information-theoretical definitions of errors and disturbance has a great potential to be useful for proving the security of cryptographic protocols, or other information-processing applications”, says Branciard. “This is a very interesting and timely line of research.”
References
1. W. Heisenberg, Z. Phys. 43, 172 (1927).
2. M. Ozawa, Phys. Rev. A 67, 042105 (2003).
3. C. Branciard, Proc. Natl. Acad. Sci. U.S.A. 110, 6742 (2013).
4. J. Erhart, S. Sponar, G. Sulyok, G. Badurek, M. Ozawa & Y. Hasegawa, Nat. Phys. 8, 185 (2012).
5. L. A. Rozema, A. Darabi, D. H. Mahler, A. Hayat, Y. Soudagar & A. M. Steinberg, Phys. Rev. Lett. 109, 100404 (2012).
6. F. Kaneda, S.-Y. Baek, M. Ozawa & K. Edamatsu, Phys. Rev. Lett. 112, 020402 (2014).
7. M. Ringbauer, D. N. Biggerstaff, M. A. Broome, A. Fedrizzi, C. Branciard & A. G. White, Phys. Rev. Lett. 112, 020401 (2014).
8. P. Busch, P. Lahti & R. F. Werner, Phys. Rev. Lett. 111, 160405 (2013).
9. F. Buscemi, M. J. W. Hall, M. Ozawa & M. W. Wilde, Phys. Rev. Lett. 112, 050401 (2014).
Tuesday, October 07, 2014
Waiting for the green (and blue) light
This was intended as a "first response" to the Nobel announcement this morning, destined for the Prospect blog. But as it can take a little while for things to appear there, here it is anyway while the news is still ringing in the air. I'm delighted by the choice.
____________________________________________________
Did you notice when traffic lights began to change colour? The green “go” light was once a yellowish pea green, but today it has a turquoise hue. And whereas the lights used to switch with a brief moment of fading up and down, now they blink on and off in an instant.
I will be consigning myself to the farthest reaches of geekdom by admitting this, but I used to feel a surge of excitement whenever, a decade or so ago, I noticed these new-style traffic lights. That’s because I knew I was witnessing the birth of a new age of light technology. Even if traffic lights didn’t press your buttons, the chances are that you felt the impact of the same innovations in other ways, most notably when the picture quality of your disc player got a boost from the introduction of Blu-ray technology about a decade ago. What made the difference was the development of a material that could be electrically stimulated into emitting bright blue light: the key component of blue light-emitting diodes (LEDs), used in traffic lights and other full-colour signage displays, and of the lasers that read the information on Blu-ray discs.
It’s for such reasons that this year’s Nobel laureates in physics have genuinely changed the world. The Japanese-born scientists Isamu Akasaki, Hiroshi Amano and Shuji Nakamura perfected the art of making blue-light-emitting semiconductor devices only in the 1990s, and as someone who watched that happen I still feel astonished at how quickly this research progressed from basic lab work to a huge commercial technology. By adding blue (and greenish-blue) to the spectrum of available colours, these researchers transformed LED displays from little glowing dots that simply told you whether the power was on or off into full-colour screens, in which the old red-green-blue system of colour televisions, previously produced by firing electron beams at phosphor materials on the screen, can now be achieved instead with compact, low-power and ultra-bright electronics.
It’s because LEDs need much less power than conventional incandescent light bulbs that the invention of blue LEDs is ultimately so important. Sure, they also switch faster, last longer and break less easily than old-style bulbs – you’ll see fewer out-of-service traffic lights these days – but the low power requirements (partly because far less energy is wasted as heat) mean that LED light sources are also good for the environment. Now that they can produce blue light too, it’s possible to make white-light sources from a red-green-blue combination that can act as regular lighting sources for domestic and office use. What’s more, that spectral mixture can be tuned to simulate all kinds of lighting conditions, mimicking daylight, moonlight, candle-light or an ideal spectrum for plant growth in greenhouses. The recent Making Colour exhibition at the National Gallery in London featured a state-of-the-art LED lighting system to show how different the hues of a painting can seem under different lighting conditions.
As with so many technological innovations, the key was finding the right material. Light-emitting diodes are made from semiconductors that convert electrical current into light. Silicon is no good at doing this, which is why it has been necessary to search out other semiconductors that are relatively inexpensive and compatible with the silicon circuitry on which all microelectronics is based. For red and yellow-green light that didn’t prove so hard: semiconductors such as gallium arsenide and gallium aluminium arsenide have been used since the 1960s for making LEDs and semiconductor lasers for optical telecommunications. But getting blue light from a semiconductor proved much more elusive. Of the candidates available around the early 1990s, both Akasaki and Amano at Nagoya University and Nakamura at the chemicals company Nichia put their faith in a material called gallium nitride. It seemed clear that this stuff could be made to emit light at blue wavelengths, but the challenge was to grow crystals of sufficient quality to do that efficiently – if there were impurities or flaws in the crystal, it wouldn’t work well enough. Challenges of this kind are typically an incremental business rather than a question of some sudden breakthrough: you have to keep plugging away and refining your techniques, improving the performance of your system little by little.
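As a rough sketch of why the choice of semiconductor matters (the band-gap figures below are typical textbook values supplied here for illustration, not numbers from this piece): the colour an LED emits is set by the material’s band gap, since each photon carries away roughly that much energy.

# Emission wavelength from band-gap energy, via E = h*c/wavelength.
# Band-gap values are approximate, illustrative figures.
h = 6.62607015e-34    # Planck's constant, J·s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron-volt

def emission_wavelength_nm(band_gap_eV):
    """Approximate wavelength (in nanometres) of light from a given band gap."""
    return h * c / (band_gap_eV * eV) * 1e9

print(emission_wavelength_nm(1.4))   # ~885 nm: gallium arsenide – infrared
print(emission_wavelength_nm(3.4))   # ~365 nm: pure gallium nitride – near-ultraviolet
print(emission_wavelength_nm(2.7))   # ~460 nm: indium-alloyed gallium nitride – blue

Alloying the gallium nitride with indium brings the gap down into the blue part of the spectrum, which is why growing good-quality nitride crystals was the decisive step.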
Nakamura’s case is particularly appealing because Nichia was a small, family-run company on the island of Shikoku, generally considered a rural backwater – not the kind of place you would expect to beat the giants of Silicon Valley in a race for such a lucrative goal. It was his conviction that gallium nitride really was the best material for the job that kept him going.
The Nobel committee has come up trumps here – it’s a choice that rewards genuinely innovative and important work, which no one will grumble about, and which in retrospect seems obvious. And it’s a reminder that physics is everywhere, not just in CERN and deep space.