Thursday, September 13, 2018

The "dark woman of DNA" goes missing again

A curious incident took place at the excellent "Schrödinger at 75: The Future of Life" meeting in Dublin last week, and I’ve been pondering it ever since.

One of the eminent attendees was James Watson, who was, naturally, present at the conference dinner. And one of the movers behind the meeting gave an impromptu (so it seemed) speech that acknowledged Watson’s work with Crick and its connection to Schrödinger’s “aperiodic crystal.” Fair enough.

Then he added that he wanted to recognize also the contribution of the “third man” of DNA, Maurice Wilkins – and who could cavil at that, given Wilkins’ Dublin roots? Wilkins, after all, was another physicist-turned-biologist who credited Schrödinger’s book What Is Life? as an important influence.

I imagined at this stage we might get a nod to the “fourth person” of DNA, Rosalind Franklin, whose role was also central but was of course for some years under-recognized. But no. Instead the speaker described how it was when Wilkins showed Watson his X-ray photo of DNA that Watson became convinced crystallography could crack the structure.

You could hear a ripple go around the dining hall. Wilkins’ photo?! Wasn’t it Franklin’s photo – Photo 51 – that provided Watson and Crick with the crucial part of the puzzle?

Well, yes and no. It isn’t entirely clear who actually took Photo 51; it seems more likely to have been Franklin’s student Ray Gosling. Neither is it completely clear that this photo was quite so pivotal to Watson and Crick’s success. Nor, indeed, is it really the case that Wilkins did something terribly unethical in showing Watson the photo (which was in any event from the Franklin–Gosling effort), given that it had already been publicly displayed. Matthew Cobb examines this part of the story carefully and thoroughly in his book Life’s Greatest Secret (see also here and here).

But nevertheless. Watson’s appalling treatment of Franklin, the controversy about Photo 51, and the sad fact that Franklin died before a Nobel became a possibility, are all so well known that it seemed bizarre, even confrontational, to make no mention of Franklin at all in this context – and right in front of Watson himself, to boot.

I figured that the attribution of “the photo” to Wilkins was so peculiar that it must have some explanation other than error or denial. I don’t know the details of the story well enough, but I told myself that the speaker must be referring to some other, earlier occasion when Wilkins had shown Watson more preliminary crystallographic work of his own that persuaded Watson this was an avenue worth pursuing.

And perhaps that is true – I simply don’t know. But if so, to refer to it in this way, when everyone is going to think of the notorious Photo 51 incident, is at best perverse and at worst a deliberate provocation. Even Adam Rutherford, sitting next to me, who knows much more about the story of DNA than I or most other people do, was baffled by what the speaker could possibly have meant.

Well, with Franklin’s name still conspicuous by its absence, Watson stood up to take a bow, which prompts me to make a request of scientific meeting and dinner organizers. Please do your attendees the favour of not forcing them to have to decide whether to reluctantly applaud Watson or join the embarrassed cohort of those who feel they can no longer do so in good conscience.

Friday, September 07, 2018

What Is Life? Schrödinger at 75

The conference “Schrödinger at 75: The Future of Life” in Dublin, from which I’m now returning, was a fabulous event, packed with good talks from eminent folks (including several Nobel laureates) and young rising stars alike. Ostensibly an exploration of the legacy of Erwin Schrödinger’s influential 1944 book What Is Life?, based on the lectures he gave 75 years ago as director of theoretical physics at the Dublin Institute for Advanced Studies (on which, more here), it was in fact largely a wonderful excuse to get a bunch of very smart people in the same hall to talk about many areas of the life (and chemical) sciences today and to speculate about what the future holds for them. I think I took away something interesting from every talk.

There was of course much dutiful nodding towards Schrödinger’s book, and also to some of his writing elsewhere, especially his essays in Mind and Matter (1958), where he offered some speculations about mind and consciousness (about half of the speakers worked on aspects of brain, mind and cognition). This didn’t seem merely tokenistic to me – I felt that all the speakers who mentioned Schrödinger had a genuine respect for his ideas. This is all the more interesting given that, as I say in my Nature piece, there wasn’t in some ways a great deal that was truly new and productive of further research in the book. Of course, what gets mentioned most is Schrödinger’s reference to a “code-script” that governs life and which is inherited, and his suggestion that this is encoded in the chromosomes as an “aperiodic crystal”. That image certainly resonated with Francis Crick, who wrote to Schrödinger in 1953 to tell him so.



But the idea of a “code”, as well as the notion that it could be replicated in a manner reminiscent of the ‘templating’ of structure in a crystal, were not really new. It seems rather to be something about the way Schrödinger expressed this idea that mattered, and indeed I can see why: his book is beautifully written, achieving persuasive force without seeming like the imposition of an arrogant physicist.

All of this I enjoyed. But what I missed was a historical presentation that could have put these tributes to What Is Life? in context. There was, for instance, a sense of unease about Schrödinger’s references to “order” and “organization”. What exactly was he getting at here? One suggestion was that “order” here was standing in for that crucial missing word: “information”. But this isn’t really true. Schrödinger’s “code-script” was presented as the means by which an organism’s “organization” is maintained, although quite how it does so he found wholly mysterious, even if the inter-generational transmission of the script by the “aperiodic crystal” was far less so.

What we need to know here is that “organization” had become a biological power-word, a symbol of what it was about living systems that distinguishes them from non-living. In the early nineteenth century this unique property of life was conferred by élan vital in the formulation of vitalism. As vitalism waned, it had to become something more tangible and physical. Some believed, like Thomas Henry Huxley, that the key was a special chemical composition, which made up the stuff of “protoplasm”, the primal living substance from which all life was descended. But as the chemical complexity and heterogeneity of living matter became apparent from the work of late nineteenth-century physiologists, and as the cell came to be seen as the fundamental unit of life, the idea arose that life was distinguished by some peculiar state of “organization” below the level that microscopes could resolve. There were a few tantalizing glimpses of this subcellular organization, for example in the stained chromosome fibres and organelles like the nucleus and mitochondria. These were, however, nothing but blurry blobs, offering no real clue about how their (presumably) molecular nature gave them the apparent agency that distinguished life.

And so, as Andrew Reynolds has shown, “order” and “organization” served a role that was barely more than metaphorical, patching over an ignorance about “what is life”. There’s nothing deplorable about that; it’s the kind of thing science must do all the time, giving a name to an absence of understanding so that it can be contained and built into contingent theories. But for Schrödinger to still be using it in the 1940s shows how his biological reading was rather archaic, for by that stage it had already become apparent that cell physiology relies on enzyme action, and crystallographers like J. Desmond Bernal and Bill Astbury were beginning to apply X-ray crystallography to these proteins to understand their structure. Sure, the origins and nature of the “organization” that cells seemed to exhibit were still pretty obscure, but it was getting less necessary to invoke that nebulous concept.

There were also suggestions at the Dublin meeting that Schrödinger’s “order” was what he meant with his talk of “negative entropy”. There’s some justification to think that, but Schrödinger wasn’t just thinking about how cells prevent their “organization” from falling into entropic disarray. He was puzzled by how this organization could exist in the first place. I don’t think one can really understand his discussion of order and entropy in What Is Life? unless one recognizes that many physical scientists in the early twentieth century considered the molecular world to be fundamentally random. It seems remarkable to me that no accounts of What Is Life? that I have seen refer to Schrödinger’s 1944 essay in Nature on “The Statistical Law in Nature”, where it is almost as if Schrödinger is telling us: ‘this is what I’m thinking about in my book’. The article is a paean to Ludwig Boltzmann, whose influence Schrödinger felt strongly in his early years in Vienna. Schrödinger seems to assert here that there are no laws in nature that do not rely on the statistical averaging over the behaviours of countless microscopic particles. It would have seemed all but meaningless then to suppose that one could speak about law-like, deterministic behaviour at the level of individual molecules, and quantum mechanics had seemed only to confirm this. That is what puzzled Schrödinger so much about the apparent persistence of phenotypic traits that seemed necessarily to arise from the specific details of genes at the molecular scale.

As a consequence, What Is Life? reads a little weirdly to chemists today – and would have done even by the 1950s – for whom the notion that a complex molecule can adopt and sustain a particular structure even in the face of thermal fluctuations seemed unproblematic. Schrödinger’s invocation of quantum mechanics to explain this phenomenon looks rather laboured now, and is quite possibly a part of what irritated Linus Pauling and Max Perutz about the book. It’s also why Schrödinger seems so keen to cement the structure of the gene in place as a “solid”, rather than simply regarding it as a large molecule carrying a linear code.

And what about that code itself? This wasn’t interrogated at the meeting, which was a shame. Indeed, speakers sometimes still attributed to it the almighty agency that Schrödinger himself gave it. It rather astonishes me to see how the claim that the genome contains “all the information you need to make the organism” raises no eyebrows. What surprises me is that scientists are typically a rather sceptical crowd, and demand evidence to support the claims they make. But there is, to my knowledge, no evidence whatsoever that one can make even the simplest organism, let alone a human, from the information contained in the genome. Oh, but surely you can? You can (in principle, and now in some cases in practice) just make the genome from scratch, put it in a cell, and off it goes… Wait. Put it in a cell? So you need a cell to actually enact the “code-script”? Well sure, but the cell goes without saying, right?

Metaphors in biology are always imperfect and often treacherous, but I think this one (a simile, really) has some mileage: saying that the genome is the complete blueprint for an organism is a bit like saying that the Oxford English Dictionary is a blueprint for King Lear. It’s all in there, right? Ok, there’s a lot in there that you don’t need for Lear, but then there’s a lot of junk in the genome too (perhaps!). Sure, to get Lear out of the OED you need to feed the words into William Shakespeare, but Shakespeare goes without saying, right?

For a human, it’s still more complicated. Human cells can of course replicate in a culture medium, but none has ever replicated into an embryo, let alone a person. What they can do – what some induced stem cells can do – is proliferate into an embryoid, an organoid with embryo-like structures. But that won’t make a human. For that, you need not only a cell but a uterus. It’s rather like saying that the text of King Lear has “all the information” – and then handing it to, say, a Chinese factory worker in Lanzhou. Well OK, to actually enact Lear in a meaningful way it has to be read by someone who reads English – or be translated… But come on, the English goes without saying…

Once we start talking in terms of the information needed to make an organism, though, quite what’s in the genome becomes far less clear. Indeed, we know for sure that maternal factors supply some vital information for the early development of a fertilized egg. And the self-organizing abilities of cells can only create an organism in the right context: every cell needs the right signals from its environment for the whole to assemble properly. Genes somehow encode neurons, but neurons don’t develop properly if they don’t get stimuli from their environment during a critical period.

Are these environmental signals and context then a part of the information needed to make an organism “as nature [meaning evolution, I guess] intends”? Is an understanding of English a part of the information needed for King Lear to be anything more than marks on paper?

Evidently this is an issue of how “information” acquires meaning, which of course was notoriously what Shannon left out of his information theory. And that is why information in Shannon’s sense is greatest when the Shannon entropy is greatest. Periodic solids have rather low entropy. What is needed in biology, then, is a theory for where meaningful information comes from and how it gives rise to causal flows. There’s no doubt that lots of meaningful information is encoded in the genome that contributes to how organisms are built and how they function. But when we say that “the genome contains all the information needed to build an organism”, we are dealing with ill-defined terms. What I sorely missed at this meeting was a presentation about how a theory of biological information can be developed, and how to define and measure “meaning” within that theory. Daniel Dennett acknowledged this lacuna in his keynote address, saying that understanding “semantic information” as opposed to Shannon information is still “work in progress”.
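As a rough illustration of that Shannon point, here is a minimal sketch (my own toy example, nothing presented at the meeting) of symbol-frequency entropy: a periodic, crystal-like string carries far less Shannon information per symbol than an aperiodic one drawn from a richer alphabet.

```python
from collections import Counter
from math import log2

def shannon_entropy(seq) -> float:
    """Entropy of the symbol frequencies of seq, in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Caveat: frequency entropy ignores the *order* of symbols, so it only
# crudely separates "periodic" from "aperiodic"; a truly periodic string is
# more compressible still once correlations between symbols are counted.
print(shannon_entropy("ABABABABABAB"))   # → 1.0 bit/symbol (two equally frequent symbols)
print(shannon_entropy("ACGTTGCAGGAT"))   # ≈ 1.96 bits/symbol, near the 2-bit maximum
```

The strings are arbitrary; the caveat in the comment matters, since this measure sees only frequencies, not structure.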

A close reading of Schrödinger starts us in that direction too, and is a part of his legacy.

Monday, August 27, 2018

Don't just count qubits

The rapid advances in quantum computing as a technology with real applications are reflected in the increases in the number of qubits these devices have available for computation. In 1998, laboratory prototypes could boast just two: enough for a proof of principle but little more. Today that figure has risen to 72 in the latest device reported by Google. Given that the number of states available in principle to systems of N qubits is 2^N, this is an enormous difference. The ability to hold this number of qubits in entangled states involves a herculean feat of quantum engineering.
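To get a feel for what that 2^N means, here is a back-of-envelope sketch (my own illustration, taking the conventional 16 bytes per double-precision complex amplitude) of the classical memory needed merely to store the state vector of N qubits:

```python
def state_vector_gigabytes(n_qubits: int) -> float:
    """Memory to hold 2**n_qubits complex amplitudes at 16 bytes each, in GB."""
    return 2 ** n_qubits * 16 / 1e9

# From a 1998-style two-qubit prototype to Google's 72-qubit device:
for n in (2, 30, 50, 72):
    print(f"{n:2d} qubits: 2**{n} amplitudes, ~{state_vector_gigabytes(n):.2e} GB")
```

Two qubits need a few dozen bytes; 50 qubits already need some 18 petabytes, which is roughly why 50 became shorthand for the classical-simulation threshold.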

It’s not surprising, then, that media reports tend to focus on the number of qubits a quantum computer has at its disposal as the figure of merit. The qubit count is also commonly regarded as the determinant of the machine’s capabilities, most famously with the widely repeated claim that 50 qubits marks the threshold of “quantum supremacy”, when a quantum computer becomes capable of things to all intents and purposes impossible for classical devices.

The problem is that this is all misleading. What a quantum computer can and can’t accomplish depends on many things, of which the qubit count is just one. For one thing, the quality of the qubits is critical: how noisy they are, and how likely to incur errors. There is also the question of their heterogeneity. Qubits manufactured from superconducting circuits will generally differ in their precise characteristics and performance, whereas quantum computers that use trapped-ion qubits benefit from having them all identical. And because qubits can only be kept coherent for short times before quantum decoherence scrambles them, how fast they can be switched can determine how many logic operations you can perform in the time available. The power of the device then depends also on the number of gate operations your algorithm needs: its so-called depth.

There is also the question of connectivity: does every qubit couple with every other, or are they for instance coupled only to two neighbours in a linear array?

The performance of a quantum computer therefore needs a better figure of merit than a crude count of qubits. Researchers at IBM have suggested one, which they call the “quantum volume” – an attempt to fold all of these features into a single number. Even this, though, isn’t a way of deciding which of two devices “performs better” overall; rather, it quantifies the power of a device for a particular class of computation. Device performance will depend on what you’re asking it to do: particular architectures and hardware will work better for some tasks than for others (see here).
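IBM’s actual definition is more involved, but a toy sketch conveys the idea. Assume (my simplification, not IBM’s formula) that a random circuit of width n survives to a depth of roughly 1/(n·ε) before errors dominate, where ε is an effective per-step error rate; the usable “square” circuit size min(n, depth) is then the thing to maximize:

```python
def usable_square(n_max: int, eps: float) -> int:
    """Largest achievable 'square' random circuit: max over width n of
    min(n, depth), with depth crudely modelled as 1/(n * eps)."""
    best = 0
    for n in range(1, n_max + 1):
        depth = int(1.0 / (n * eps))  # depth reachable before errors dominate
        best = max(best, min(n, depth))
    return best

# More qubits stop helping once errors, not width, are the bottleneck:
print(usable_square(72, 0.01))   # → 10: 72 noisy qubits
print(usable_square(20, 0.001))  # → 20: fewer but better qubits win
```

The point of the toy model: past a certain size, a machine’s limiting factor is qubit quality, so adding more noisy qubits leaves the figure of merit unchanged.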

As a result, a media tendency to present quantum computation as a competition between rivals – IBM vs Google, superconducting qubits vs trapped ions – does the field no favours. Of course one can’t deny that competitiveness exists, as well as a degree of commercial secrecy – this is a business with huge stakes, after all. But no one expects any overall “winner” to be anointed. It’s unfortunate, then, that this is how things look if we judge from the “qubit counter” created by MIT Tech Review. As a rough-and-ready timeline of how the applied tech of the field is evolving, this might be just about defensible. But some fear that this sort of presentation does more harm than good, and we should certainly not see it as a guide to who is currently “in the lead”.

Friday, June 08, 2018

Myths of Copenhagen

Discussing the Copenhagen interpretation of quantum mechanics with Adam Becker and Jim Baggott makes me think it would be worthwhile setting down how I see it. I don’t claim that this is necessarily the “right” way to look at Copenhagen (there probably isn’t a right way), and I’m conscious that what Bohr wrote and said is often hard to fathom – not, I think, because his thinking was vague, but because he struggled to express it through the limited medium of language. Many people have pored over Bohr’s words more closely than I have, and they might find different interpretations. So if anyone takes issue with what I say here, please do tell me.

Part of the problem too, as Adam said (and reiterates in his excellent new book What Is Real?), is that there isn’t really a “Copenhagen interpretation”. I think James Cushing makes a good case that it was largely a retrospective invention of Heisenberg’s, quite possibly as an attempt to rehabilitate himself into the physics community after the war. As I say in Beyond Weird, my feeling is that when we talk about “Copenhagen”, we ought really to stick as close as we can to Bohr – not just for consistency but also because he was the most careful of the Copenhagenist thinkers.

It’s perhaps for this reason too that I think there are misconceptions about the Copenhagen interpretation. The first is that it denies any reality beyond what we can measure: that it is anti-realist. I see no reason to think this. People might read that into Bohr’s famous words: “There is no quantum world. There is only an abstract quantum physical description.” But it seems to me that the meaning here is quite clear: quantum mechanics does not describe a physical reality. We cannot mine it to discover “bits of the world”, nor “histories of the world”. Quantum mechanics is the formal apparatus that allows us to make predictions about the world. There is nothing in that formulation, however, that denies the existence of some underlying stratum in which phenomena take place that produce the outcomes quantum mechanics enables us to predict.

Indeed, what Bohr goes on to say makes this perfectly clear: “It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.” (Here you can see the influence of Kant on Bohr, who read him.) Here Bohr explicitly acknowledges the existence of “nature” – an underlying reality – but doesn’t think we can get at it, beyond what we can observe.

This is what I like about Copenhagen. I don’t think that Bohr is necessarily right to abandon a quest to probe beneath the theory’s capacity to predict, but I think he is right to caution that nothing in quantum mechanics obviously permits us to make assumptions about that. Once we accept the Born rule, which makes the wavefunction a probability density distribution, we are forced to recognize that.

Here’s the next fallacy about the Copenhagen interpretation: that it insists classical physics, such as governs measuring apparatus, works according to fundamentally different rules from quantum physics, and we just have to accept that sharp division.

Again, I understand why it looks as though Bohr might be saying that. But what he’s really saying is that measurements exist only in the classical realm. Only there can we claim definitive knowledge of some quantum state of affairs – what the position of an electron “is”, say. This split, then, is epistemic: knowledge is classical (because we are).

Bohr didn’t see any prospect of that ever being otherwise. What’s often forgotten is how absolute the distinction seemed in Bohr’s day between the atomic/microscopic and the macroscopic. Schrödinger, who was of course no Copenhagenist, made that clear in What Is Life?, which expresses not the slightest notion that we could ever see individual molecules and follow their behaviour. To him, as to Bohr, we must describe the microscopic world in necessarily statistical terms, and it would have seemed absurd to imagine we would ever point to this or that molecule.

Bohr’s comments about the quantum/classical divide reflect this mindset. It’s a great shame he hasn’t been around to see it dissolve – to see us probe the mesoscale and even manipulate single atoms and photons. It would have been great to know what he would have made of it.

But I don’t believe there is any reason to suppose that, as is sometimes said, he felt that quantum mechanics just had to “stop working” at some particular scale, and classical physics take over. And of course today we have absolutely no reason to suppose that happens. On the contrary, the theory of decoherence (pioneered by the late Dieter Zeh) can go an awfully long way to deconstructing and demystifying measurement. It’s enabled us to chip away at Bohr’s overly pessimistic epistemological quantum-classical divide, both theoretically and experimentally, and understand a great deal about how classical rules emerge from quantum. Some think it has in fact pretty much solved the “measurement problem”, but I think that’s too optimistic, for the reasons below.

But I don’t see anything in those developments that conflicts with Copenhagen. After all, one of the pioneers of such developments, Anton Zeilinger, would describe himself (I’m reliably told) as basically a Copenhagenist. Some will object to this that Bohr was so vague that his ideas can be made to fit anything. But I believe that, in this much at least, apparent conflicts with work on decoherence come from not attending carefully enough to what Bohr said. (I think Henrik Zinkernagel’s discussions of “what Bohr said” are useful here and here.)

I think that in fact these recent developments have helped to refine Bohr’s picture until we can see more clearly what it really boils down to. Bohr saw measurement as an irreversible process, in the sense that once you had classical knowledge about an outcome, that outcome could not be undone. From the perspective of decoherence, this is now viewed in terms that sound a little like the Second Law: measurement entails the entanglement of quantum object and environment, which, as it proceeds and spreads, becomes for all practical purposes irreversible because you can’t hope to untangle it again. (We know that in some special cases where you can keep track, recoherence is possible, much as it is possible in principle to “undo” the Second Law if you keep track of all the interactions and collisions.)

This decoherence remains a “fully quantum” process, even while we can see how it gives rise to classical-like behaviour (via Zurek’s quantum Darwinism, for example). But what the theory can’t then do, as Roland Omnès has pointed out, is explain uniqueness of outcomes: why only one particular outcome is (classically) observed. In my view, that is the right way to put into more specific and updated language what Bohr was driving at with his insistence on the classicality of measurement. Omnès is content to posit uniqueness of outcomes as an axiom: he thinks we have a complete theory of measurement that amounts to “decoherence + uniqueness”. The Everett interpretation, of course, ditches uniqueness, on the grounds of “why add an extra, arbitrary axiom?” To my mind, and for the reasons explained in my book, I think this leads to a “cognitive instability”, to purloin Sean Carroll’s useful phrase, in our ability to explain the world. So the incoherence that Adam sees in Copenhagen, I see in the Everett view (albeit for different reasons).

But this then is the value I see in Copenhagen: if we stick with it through the theory of decoherence, it takes us to the crux of the matter: the part it just can’t explain, which is uniqueness of outcomes. And by that I mean (irreversible) uniqueness of our knowledge – better known as facts. What the Copenhagenists called collapse or reduction of the wavefunction boils down to the emergence of facts about the world. And because I think they – at least, Bohr – always saw wavefunction collapse in epistemic terms, there is a consistency to this. So Copenhagen doesn’t solve the problem, but it leads us to the right question (indeed, the question that confronts the Everettian view too).

One might say that the Bohmian interpretation solves that issue, because it is a realist model: the facts are there all along, albeit hidden from us. I can see the attraction of that. My problem with it is that the solution comes by fiat – one puts in the hidden facts from the outset, and then explains all the potential problems with that by fiat too: by devising a form of nonlocality that does everything you need it to, without any real physical basis, and insisting that this type of nonlocality just – well, just is. It is ingenious, and sometimes useful, but it doesn’t seem to me that you satisfactorily solve a problem by building the solution into the axioms. I don’t understand the Bohmian model well enough to know how it deals with issues of contextuality and the apparent “non-universality of facts” (as this paper by Caslav Brukner points out), but on the face of it those seem to pose problems for a realist viewpoint too.

It seems to me that a currently very fruitful way to approach quantum mechanics is to think about the issue of why the answers the world gives us seem to depend on the questions we ask (à la John Wheeler’s “20 Questions” analogy). And I feel that Bohr helps point us in that direction, and without any need to suppose some mystical “effect of consciousness on physical reality”. He didn’t have all the answers – but we do him no favours by misrepresenting his questions. A tyrannical imposition of the Copenhagen position is bad for quantum mechanics, but Copenhagen itself is not the problem.

Monday, May 21, 2018

What is a superposition really like?

Here’s a longer version of the news story I just published in Scientific American, which includes more context and background. The interpretation of the outcomes of this thought experiment within the two-state vector formalism of quantum mechanics is by no means the only one possible. But what the experiment does show is that quantum mechanics suggests that superpositions are not always simply a case of a particle seeming to be in two places or states at once. A superposition, like anything else in quantum mechanics, tells you about the possible outcomes of a measurement. All the rest is contingent interpretation. I’m reminded yet again today that it is going to take an awful lot to get media folks to accept this. I'm starting to see now that it was a mistake for me to assume that they didn't know any better; rather, I think there is an active, positive desire for the "two places at once" to be true.

I should say also that I consciously decided to turn a blind eye to the use of the word “spooky” in the title of this piece, because it does perfectly acceptable work as it is. It does not imply that “spooky action at a distance” is a thing. It is not a thing, unless it is a disproved thing. Quantum nonlocality is the alternative to that Einsteinian picture.

______________________________________________________________________

It’s the central question in quantum mechanics, and no one knows the answer: what goes on for a particle in a superposition? All of the head-scratching oddness that seems to pervade quantum theory comes from these peculiar circumstances in which particles seem to be in two places or states at once. What that really means has provoked endless debate and argument. Now a team of researchers in Israel and Japan has proposed an experiment (https://www.nature.com/articles/s41598-018-26018-y) that should let us say something for sure about the nature of that nebulous state [A. C. Elitzur, E. Cohen, R. Okamoto & S. Takeuchi, Sci. Rep. 8, 7730 (2018)].

Their experiment, which they say could be carried out within a few months using existing technologies, should let us sneak a glance at where a quantum object – in this case a particle of light, called a photon – actually is when it is placed in a superposition of positions. And what the researchers predict is even more shocking and strange than the usual picture of this counterintuitive quantum phenomenon.

The classic illustration of a superposition – indeed, the central experiment of quantum mechanics, according to legendary physicist Richard Feynman – involves firing particles like photons through two closely spaced slits in a wall. Because quantum particles can behave like waves, those passing through one slit can ‘interfere’ with those going through the other, their wavy ripples either boosting or cancelling one another. For photons the result is a pattern of light and dark interference bands when the particles are detected on a screen on the far side, corresponding to a high or low number of photons reaching the screen.

Once you accept the waviness of quantum particles, there’s nothing so odd about this interference pattern. You can see it for ordinary water waves passing through double slits too. What is odd, though, is that the interference remains even if the rate of firing particles at the slits is so low that only one passes through at a time. The only way to rationalize that is to say each particle somehow passes through both slits at once, and interferes with itself. That’s a superposition.
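The arithmetic behind those fringes is easy to sketch. In this toy model (my own, with arbitrary units and geometry), the superposition adds the complex amplitudes of the two routes before squaring, giving fringes; the which-path case adds the probabilities instead, and the fringes disappear:

```python
import numpy as np

wavelength = 1.0            # arbitrary units throughout
slit_separation = 5.0
screen_distance = 100.0

x = np.linspace(-30, 30, 2001)                           # positions along the screen
r1 = np.hypot(screen_distance, x - slit_separation / 2)  # path length via slit 1
r2 = np.hypot(screen_distance, x + slit_separation / 2)  # path length via slit 2

amp1 = np.exp(2j * np.pi * r1 / wavelength)  # unit-amplitude wave from each slit
amp2 = np.exp(2j * np.pi * r2 / wavelength)

interference = np.abs(amp1 + amp2) ** 2                  # superposition: fringes from 0 up to 4
no_interference = np.abs(amp1) ** 2 + np.abs(amp2) ** 2  # which-path: flat value of 2 everywhere
```

(This neglects the single-slit envelope and overall normalization, which don’t affect the point: squaring the sum is not the same as summing the squares.)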

To put it another way: when we ask the seemingly reasonable question “Where is the particle in a superposition?”, we’re using a notion of “where” inherited from our classical world, to which the answer can simply be “there”. But quantum mechanics is known now to be ‘nonlocal’, which means we have to relinquish the whole notion of locality – of “whereness”, you might say.

But that’s a hard habit to give up, which is why the ‘two places at once’ picture is commonly invoked to talk about quantum superpositions. Yet quantum mechanics doesn’t say anything about what particles are like until we make measurements on them. For the Danish physicist Niels Bohr, asking where the particle was in the double-slit experiment before it was measured has no meaning within quantum theory itself.

Why don’t we just look? Well, we can. We could put a detector in or just behind one slit that could register the passing of a particle without absorbing it. And in that case, the detector will show that sometimes the particle goes through one slit, and sometimes it goes through the other. But here’s the catch: there’s then no longer an interference pattern, but just the result we’d expect for particles taking one route or the other. Observing which route the particle takes destroys its ‘quantumness’.

This isn’t about measurements disturbing the particle, since interference is absent even in instances where a detector at one slit doesn’t see the particle, so that it ‘must’ have gone through the other slit. Rather, the ‘collapse’ of a superposition seems to be caused by our mere knowledge of the path.

We can try to be smarter. What if we wait until the particle has definitely passed through the slits before we measure the path? How could that delayed measurement affect what happened earlier at the slits themselves? But it does. In the 1970s the physicist John Wheeler proposed a way of doing this using an apparatus called a Mach-Zehnder interferometer, a modification of the double-slit experiment in which a partial mirror creates a superposition of photons that seems to send them along two different paths before they are brought back together to interfere (or not).

When such delayed-choice experiments were eventually carried out, the result was just as Bohr had predicted: it makes no difference if we delay the detection. Superposition and interference still vanish if we detect the path, even though the detection happens only after the photon has passed the point where the paths divide. It is as if the particle ‘knows’ our intention to measure it later.
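Why which-path detection kills the interference can be sketched with a two-component amplitude vector and the standard textbook 50/50 beamsplitter matrix – an idealized lossless model, not a description of any specific experiment:

```python
import numpy as np

# Idealized lossless 50/50 beamsplitter acting on (upper, lower) path amplitudes
BS = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)

photon_in = np.array([1, 0], dtype=complex)   # photon enters the upper port

# No which-path detection: the superposed amplitudes recombine at the
# second beamsplitter and interfere
coherent_out = np.abs(BS @ BS @ photon_in) ** 2
print(coherent_out)        # [0., 1.]: the photon always exits the same port

# With which-path detection between the beamsplitters, the superposition
# collapses to one path or the other; we then add probabilities, not amplitudes
path_probs = np.abs(BS @ photon_in) ** 2      # 50/50 over the two paths
incoherent_out = sum(p * np.abs(BS @ path) ** 2
                     for p, path in zip(path_probs, np.eye(2, dtype=complex)))
print(incoherent_out)      # [0.5, 0.5]: the interference is gone
```

With the amplitudes left intact, the photon emerges from one output port with certainty; once the path is known, the two exits become a coin toss. Nothing in the calculation cares about *when* the which-path information is extracted, which is the point of the delayed-choice version.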

Bohr’s argument that quantum mechanics is silent about ‘reality’ beyond what we can measure has long seemed deeply unsatisfactory to many researchers. “We know something fishy is going on in a superposition”, says physicist Avshalom Elitzur of the Israeli Institute for Advanced Research in Zichron Ya’akov. “But you’re not allowed to measure it”, he says – because then the superposition collapses. “This is what makes quantum mechanics so diabolical.”

There have been many attempts to develop alternative points of view to Bohr’s that restore an underlying reality in quantum mechanics – some description of the world before we look. But none seems able to restore the kind of picture we have in classical physics of objects that always have definite positions and paths.

One particular approach that aims to deduce something about quantum particles before their measurement is called the two-state-vector formalism (TSVF) of quantum mechanics, developed by Elitzur’s former mentor, the Israeli physicist Yakir Aharonov, and his collaborators. This postulates that quantum events are in some sense determined by quantum states not just in the past but also in the future: it makes the assumption that quantum mechanics works the same way both forwards and backwards in time. In this view, causes can seem to propagate backwards in time: there is retrocausality.

You don’t have to take that strange notion literally. Rather, in the TSVF you can gain retrospective knowledge of what happened in a quantum system by selecting the outcome: not, say, simply measuring where a particle ends up, but instead choosing a particular location in which to look for it. This is called post-selection, and it supplies more information than any unconditional peek at outcomes ever could, because it means that the particle’s situation at any instant is being evaluated retrospectively in the light of its entire history, up to and including measurement.

“Normal quantum mechanics is about statistics”, says Eliahu Cohen, a physicist who collaborates with Aharonov and Elitzur: what you see are average values, or what is generally called an expectation value of some variable you are measuring. But by looking at when a system produces some particular, chosen value, you can take a slice through the probabilistic theory and start to talk with certainty about what went on to cause that outcome. The odd thing is that it then looks as if your very choice of outcome was part of the cause.
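A worked example of how post-selection licenses such certainty is the TSVF ‘three-box’ scenario from the Aharonov school: a particle prepared in an equal superposition of boxes A, B and C, and later post-selected in a different superposition. The Aharonov–Bergmann–Lebowitz (ABL) rule gives the probability of an intermediate measurement outcome conditioned on both the pre- and post-selected states. A minimal numerical sketch, using the standard textbook state choices:

```python
import numpy as np

# Basis states for the three boxes
A, B, C = np.eye(3)

psi = (A + B + C) / np.sqrt(3)   # pre-selected (prepared) state
phi = (A + B - C) / np.sqrt(3)   # post-selected (finally found) state

def abl_prob(box, pre, post):
    """ABL probability that an intermediate look in `box` finds the
    particle, conditioned on the pre- and post-selected states."""
    P = np.outer(box, box)                               # projector onto the box
    found = abs(post.conj() @ P @ pre) ** 2
    missed = abs(post.conj() @ (np.eye(3) - P) @ pre) ** 2
    return found / (found + missed)

print(abl_prob(A, psi, phi))   # 1.0: look in A and you certainly find it there
print(abl_prob(B, psi, phi))   # 1.0: yet look in B instead, and it's certainly there
```

Given this pre- and post-selection, an intermediate look in box A finds the particle with certainty – and so, paradoxically, does a look in box B. This is exactly the kind of retrospective “definitely here” statement that the optical experiments described below try to extract with probe photons.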

“It’s generally accepted that the TSVF is mathematically equivalent to standard quantum mechanics,” says David Wallace of the University of Southern California, a philosopher who specializes in interpretations of quantum mechanics. “But it does lead to seeing certain things one wouldn’t otherwise have seen.”

Take, for instance, the version of the double-slit experiment devised using the TSVF by Aharonov and coworker Lev Vaidman in 2003. The pair described (but did not build) an optical system in which a single photon can act as a ‘shutter’ that closes a slit by perfectly reflecting another ‘probe’ photon that is doing the standard trick of interfering with itself as it passes through the slits. Aharonov and Vaidman showed that, by applying post-selection to the measurements of the probe photon, we should be able to see that a shutter photon in a superposition can close both (or indeed many) slits at once. So you could say with confidence that the shutter photon really was both ‘here’ and ‘there’ at once [Y. Aharonov & L. Vaidman, Phys. Rev. A 67, 1–3 (2003)] – a situation that seems paradoxical from our everyday experience but is one aspect of the so-called nonlocal properties of quantum particles, where the whole notion of a well-defined location in space dissolves.

In 2016, Ryo Okamoto and Shigeki Takeuchi of Kyoto University implemented Aharonov and Vaidman’s proposal experimentally using apparatus based on a Mach-Zehnder interferometer [R. Okamoto & S. Takeuchi, Sci. Rep. 6, 35161 (2016)]. The ability of a photon to act as a shutter was enabled by a photonic device called a quantum router, in which one photon can control the route taken by another. The crucial point is that this interaction is cleverly arranged to be completely one-sided: it affects only the probe photon. That way, the probe photon carries away no direct information about the shutter photon, and so doesn’t disturb its superposition – but nonetheless one can retrospectively deduce that the shutter photon was definitely in the position needed to reflect the probe.

The Japanese researchers found that the statistics of how the superposed shutter photon reflects the probe photon matched those that Aharonov and Vaidman predicted, and which could only be explained by some non-classical “two places at once” behaviour. “This was a pioneering experiment that allowed one to infer the simultaneous position of a particle in two places”, says Cohen.

Now Elitzur and Cohen have teamed up with Okamoto and Takeuchi to concoct an even more ingenious experiment, which allows one to say with certainty something about the position of a particle in a superposition at a series of different points in time before any measurement has been made. And it seems that this position is even more odd than the traditional “both here and there”.

Again the experiment involves a kind of Mach-Zehnder set-up in which a shutter photon interacts with a probe photon via quantum routers. This time, though, the probe photon’s route is split into three by partial mirrors. Along each of those paths it may interact with a shutter photon in a superposition. These interactions can be considered to take place within boxes labeled A, B and C along the probe photon’s route, and they provide an unambiguous indication that the shutter particle was definitely in a given box at a specific time.

Because nothing is inspected until the probe photon has completed the whole circuit and reached a detector, there should be no collapse of either its superposition or that of the shutter photon – so there’s still interference. But the experiment is carefully set up so that the probe photon can only show this interference pattern if it interacted with the shutter photon in a particular sequence of places and times: namely, if the shutter photon was in both boxes A and C at some time t1, then at a later time t2 only in C, and at a still later time t3 in both B and C. If you see interference in the probe photon, you can say for sure (retrospectively) that the shutter photon displayed this bizarre appearance and disappearance among the boxes at different times – an idea Elitzur, Cohen and Aharonov proposed as a possibility last year for a single particle superposed into three ‘boxes’ [Y. Aharonov, E. Cohen, A. Landau & A. C. Elitzur, Sci. Rep. 7, 531 (2017)].

Why those particular places and times, though? You could certainly look at other points on the route, says Elitzur, but those times and locations are ones where, in this configuration, the probability of finding the particle becomes 1 – in other words, a certainty.

So this thought experiment seems to lift part of the veil off a quantum superposition, and to let us say something definite beyond Bohr’s “Don’t ask” proscription. The TSVF opens up the story by considering both the initial and final states, which allows one to reconstruct what was not measured, namely what happens in between. “I like the way this paper frames questions about what is happening in terms of entire histories, rather than instantaneous states”, says physicist Ken Wharton of San Jose State University in California. “Talking about ‘states’ is an old pervasive bias, whereas full histories are generally far more rich and interesting.”

And the researchers’ interpretation of that intermediate history before measurement is extraordinary. The apparent vanishing of particles in one place at one time, and their reappearance in other times and places, suggests a new vision of what the underlying processes are that create quantum randomness and nonlocality. Within the TSVF, this flickering, ever-changing existence can be understood as a series of events in which a particle is somehow ‘cancelled’ by its own “counterparticle”, with negative energy and negative mass.

Elitzur compares this to the notion introduced by British physicist Paul Dirac in the 1920s that particles have antiparticles that can annihilate one another – a picture that seemed at first just a manner of speaking, but which soon led to the discovery that such antiparticles are real. The disappearance of quantum particles is not annihilation in this same sense, but it is somewhat analogous.

So while the traditional “two places at once” view of superpositions might seem odd enough, “it’s possible that a superposition is a collection of states that are even crazier”, says Elitzur. “Quantum mechanics just tells you about their average.” Post-selection then allows one to isolate and inspect just some of those states at greater resolution, he suggests. With just a hint of nervousness, he ventures to suggest that as a result, measurements on a quantum particle might be contingent on when you look even if the quantum state itself is unchanging in time. You might not find it here when you look – but had you looked a moment later, it might indeed have been there. Such an interpretation of quantum behaviour would be, Elitzur says, “revolutionary” – because it would entail a hitherto unguessed menagerie of real states underlying counter-intuitive quantum phenomena.

The researchers say that to do the actual experiment will require some refining of what quantum routers are capable of, but that they hope to have it ready to roll in three to five months. “The experiment is bound to work”, says Wharton – but he adds that it is also “bound to not convince anyone of anything, since the results are predicted by standard quantum mechanics.”

Elitzur agrees that this picture of a particle’s apparent appearance and disappearance at various points along the trajectory could have been noticed in quantum mechanics decades ago. But it never was. “Isn’t that a good indication of the soundness of the TSVF?” he asks. And if someone thinks they can formulate a different picture of “what is really going on” in this experiment using standard quantum mechanics, he says, “well, let them go ahead!”

Tuesday, April 24, 2018

More on the politics of genes and education

There was never any prospect that my article in New Statesman on genes, intelligence and education would wrap up everything so nicely that there was nothing left to be said. For one thing, aspects of the science are still controversial – I would have liked, among other things, to delve more deeply into the difficulties (impossibility, actually) of cleanly separating genetic from environmental influences on intelligence.

I was, I admit, somewhat hard on Toby Young, while wanting to absolve him from some of the kneejerk accusations that have come his way. He is not some swivel-eyed hard-right eugenicist, and indeed if I have given the impression that he is a crude social Darwinist, as Toby thinks I have, then I have given a wrong impression: his position is more nuanced than that. Toby has been rather gracious in his response in The Spectator.

OK, not entirely – but so it goes. I recognize the temptation to construct artificial narratives, and I fear Toby has done so in his discussion of my article in Prospect. I take his remark on my “bravery” in tackling this subject after writing that piece as a backhanded compliment that implies I was brave to return to a subject after I’d screwed up earlier. In fact, my Prospect piece was not primarily about genes and intelligence anyway. Yes, Stuart Ritchie had some criticisms about that particular aspect of it, but these centred on technical arguments about other studies in the field – in other words, on issues that the specialists themselves are arguing about. Other geneticists, including some who work on intelligence, saw and approved my article. To say that I had to “publish some ‘clarifications’” after “a lot of criticism” is misleading to a rather naughty degree. The reader is meant to infer that these are euphemistic ‘clarifications’, i.e. corrections made in response to errors pointed out. Actually I “published” nothing of the sort – what Toby is referring to are merely some comments I posted on my blog in response to the discussion.

As for the link to the criticisms made by Dominic Cummings: well, I recommend you read them. Not because they add anything of substance to the discussion, but because they are a reminder of what this man, who once wielded considerable behind-the-scenes political power and who has had an inordinate influence on the current predicament of the country, is really like. I still find it chilling.

What’s most striking about Toby’s piece, however, is how political it is. I don’t consider that a criticism, but rather, a vindication of one of the central points of my article in New Statesman: that while the science is fairly (if not entirely) clear, what one concludes from it is highly dependent on political leaning.

This includes a tendency to attribute ideas and views to your political opposites simply because of their persuasion. I must acknowledge the possibility that I did so with Toby. He returns the favour here:
“I suspect the popularity of the ‘personalised learning’ recommendation among the experts in this field – as well as Philip Ball – is partly because they don’t want to antagonise their left-wing colleagues.”

Actually I am sceptical about ‘personalized learning’ based on genetic intelligence measures, and said so in the article, since I see no evidence that they could be effective (although I’m open to the possibility that that might change). The aim of my article, Toby decided, was to reassure my fellow liberals that yes, genes do influence intelligence, but really it’ll be OK.

I find this bizarre – but not as bizarre as the view Toby attributes to Charles Murray, who seems to think that the “left” is either going to have a breakdown over genetic influences on traits or, worse, will decide to embrace genetic social engineering, using CRISPR no less, to eradicate innate differences in some sort of Brave New World scenario. If Murray really thinks that, his grasp of the science is as poor as some experts have said it is. And if in his alternative universe he finds a hard-left government trying to do such things anyway, he’ll find me alongside him opposing it.

You see, what we leftists are told we believe is that everyone is a blank slate, equal in all respects, until society kicks in with its prejudices and inequalities. And we denounce anything to the contrary as crypto-fascism. Steven Pinker, who has pushed the ‘blank slate’ as a myth of the left, weighed in on my article by commenting that even left-leaning magazines like New Statesman are now having to face up to the truth, as though my intention were to confess to past leftie sins of omission.

Now, I fully acknowledge that there have been hysterical reactions to ‘sociobiology’ and to suggestions that human traits may be partly genetically hardwired. And these have often come from the left – indeed, sometimes from the Marxian post-modern intellectuals who Pinker regards as the root of so many modern evils. But such denial is plain silly, and I’m not sure that many left-leaning moderates would disagree, or would be somehow too frightened to say so.

The caricatures Toby creates are grotesque. “It’s now just flat out wrong to think that varying levels of ability and success are solely determined by economic and historical forces”, he says. We agree – but does anyone seriously want to argue otherwise?

“That means it’s a dangerous fantasy”, he continues, “to think that, once you’ve eradicated socio-economic inequality, human nature will flatten out accordingly – that you can return to ‘year zero’, as the Khmer Rouge put it. On the contrary, biological differences between human beings will stubbornly refuse to wither away, which means that an egalitarian society can only be maintained by a brutally coercive state that is constantly intervening to ‘correct’ the inequities of nature.”

But most of us who would like to see an “egalitarian society” don’t mean by that a society in which absolute equality is imposed by the jackboot. We just want to see, for example, fewer people struggle against the inequalities they are born into, while others rise to power and influence on the back of their privileged background. We want to see less tolerance of, and even encouragement of, naked greed that exploits the powerless. We want to see more equality of opportunity. I think we accept that there can never be equality of outcome, at least without unjustified coercion. But we would also like to see reward more closely tied to contribution to society, not simply to what you can get away with. And in fact, while we will differ in degree and probably in methodology, I suspect that in these aspirations we liberal lefties are not so different from Toby Young.

In fact, evidently we do agree on this much:
“The findings of evolutionary psychologists, sociobiologists, cognitive neuroscientists, biosocial criminologists, and so on, [don’t] inevitably lead to Alan Ryan’s ‘apocalyptic conservatism’. On the contrary, I think they’re compatible with a wide range of political arrangements, including – at a pinch – Scandinavian social democracy.”

Which is why I find it baffling that Toby thinks we “progressive liberals” should be so disconcerted by the findings of genetics. Disconcerted by the discovery that traits, like height, are partly innate? Disconcerted that a society that tries to impose complete equality of ability on everyone will be a Stalinist dystopia? The implication here seems to be that science has disproved our leftwing delusions, and we’d better face up to that. But all it has ‘disproved’ is some wild, extreme fantasies and some straw men.

Such comments only reinforce my view that all this politicization of the debate gets in the way of actually moving it on. In my experience, the reason many educators and educationalists are not terribly enchanted with studies of the genetic basis of intelligence is not because they think it is some foul plot but because they don’t see it as terribly relevant. It doesn’t help them do their job any better. Now, if that leads them to actually deny the role of genes in intelligence, then they’re barking up the wrong tree. But I think many see it merely as a distraction from the business of trying to improve education. After all, so far genetics has offered next to no suggestions about how to do that – as I said in my article, pretty much all the sensible recommendations that Robert Plomin and Kathryn Asbury make in their book could have been made without the benefit of genetic studies.

Now, one way to read the implications of those studies is that there actually isn’t much that educationalists can do. Take the recent paper by Plomin and colleagues claiming that schools make virtually no additional contribution to outcomes beyond the innate cognitive abilities of their student intake. This is a very interesting finding, but there needs to be careful discussion about what it means. So we shouldn’t worry at all about Ofsted reports of “failing” schools? I doubt if anyone would conclude that, but then how is a school influencing outcomes? When a new head arrives and turns a school around, what has happened? Has the new head somehow just managed to alter the IQ distribution of the intake? I don’t know the answers to these things.

The authors of that paper are not so unwise as to conclude that (presumably beyond some minimal level of competence) “teaching makes no difference to outcomes”. But you can imagine others drawing that conclusion, and you can then understand why some teachers and educators express frustration with this sort of thing. For one thing, the differences teaching and teachers make are not always going to be registered in exam results. As things stood, I was always going to get A’s in my chemistry A levels – but it was the enthusiasm and advocacy of Dr McCarthy and Mr Heasman that inspired me to study the subject at university. I was probably always going to get an A in my English O level, but it was Ms Priske who encouraged me to read Mervyn Peake.

All too often, however, the position of right-leaning commentators on the matter can read like laissez-faire: tinker all you like but it’s not going to make much difference, because you well-meaning liberals are just going to have to accept that some pupils are smarter than others. (So why are Conservative education ministers so keen to keep buggering about with the curriculum?) And if you do manage to level the playing field, you’ll see that even more clearly. And then where will you be, eh, with all your Maoist visions?

I don’t think they really do think like this; at least I don’t think Toby does. I certainly hope not. But that’s why both sides have to stop any posturing about the facts, and get on with figuring out what to make of them. We already know not all kids will do equally well in exams, come what may. But how do we find those who could do better, given the right circumstances? How do we find ways of engaging those pupils with ability but not inclination? How do we find ways of helping those of lower academic ability feel fulfilled rather than discarded in the bottom set? How do we decide, for God’s sake, what is important in an education anyway? These are the kinds of hard questions that teachers and educators have to face every day, and it would be good to see if the knowledge we’re gaining about inherent cognitive abilities could be useful to them, rather than turning it into a political football.

Friday, April 13, 2018

The thousand-year song

In February I had the pleasure of meeting Jem Finer, the founder of the Longplayer project, to discuss the “music of the future” at this event in London. It seemed a perfect subject for my latest column for Sapere magazine on music cognition, where it will appear in Italian. Here it is in English.
______________________________________________________________

Most people will have experienced music that seemed to go on forever, and usually that’s not a good thing. But Longplayer, a composition by British musician Jem Finer, a founder member of the band The Pogues, really does. It’s a piece conceived on a geological timescale, lasting for a thousand years. So far, only 18 of those years have been performed – but the performance is ongoing even as you read this. It began at the turn of the new millennium and will end on 31 December 2999. Longplayer can be heard online and at various listening posts around the world, the most evocative being a Victorian lighthouse in London’s docklands.

Longplayer is scored for a set of Tibetan singing bowls, each of which sounds in a repeating pattern determined by a mathematical algorithm that will not repeat any combination exactly until one thousand years have passed. The parts interweave in complex, constantly shifting ways, not unlike compositions such as Steve Reich’s Piano Phase in which repeating patterns move in and out of step. Right now Longplayer sounds rather serene and meditative, but Finer says that there are going to be pretty chaotic, discordant passages ahead, lasting for decades at a time – albeit not in his or my lifetime.


The visual score of Longplayer. (Image: Jem Finer/Longplayer Foundation)


An installation of Tibetan prayer bowls used for Longplayer at Trinity Buoy Wharf, London Docks. (Photo: James Whitaker)

One way to regard Longplayer is as a kind of conceptual artwork, taking with a pinch of salt the idea that it will be playing in a century’s time, let alone a millennium. Finer, though, has careful plans for how to sustain the piece into the indefinite future in the face of technological and social change. There’s no doubt that performance is a strong feature of the project: live events playing part of the piece have been rather beautiful, the instruments arrayed in concentric circles that reflect both the score itself and the sense of planetary orbits unfurling in slow, dignified synchrony.

But if this all seems ritualistic, so is a great deal of music. I do think Longplayer is a serious musical adventure, not least in how it both emphasizes and challenges the central cognitive process involved in listening: our perception of pattern and regularity. Those are the building blocks of this piece, and yet they take place mostly beyond the scope of an individual’s perception, forcing us – as perhaps the pointillistic dissonance of Pierre Boulez’s total serialism does – to find new ways of listening.

More than this, though, Longplayer connects to the persistence of music through the “deep time” of humanity, offering a message of determination and hope. Tectonic plates may shift, the climate may change, we might even reinvent ourselves – but we will do our best to ensure that this expression of ourselves will endure.


A live performance of part of Longplayer at the Yerba Buena Center, San Francisco, in 2010. (Photo: Stephen Hill)