Everyone knows how science writing works. Academic scientists labour with great diligence to tease nuanced truths from theory and experiment, only for journalists and popularizers to reduce them to simplistic sound bites for the sake of a good story.
I’ve been moved to ponder that narrative by the widespread appearance on Christmas science/non-fiction books lists of two books by leading science academics: Steven Pinker’s Enlightenment Now and Robert Plomin’s Blueprint. I reviewed both books at length in Prospect, and my feelings about both of them were surprisingly similar: they have some important and valuable things to say, but are both infuriating too in terms of what they fudge, leave out or misrepresent.
I won’t recapitulate those views here. Plomin has taken some flak for the genetic determinism that his book seems to encourage – most recently from Angela Saini in the latest Prospect, whose conclusion I fully endorse: “Scientists… should concentrate on engaging with historians and social scientists to better understand humans not as simple biological machines but as complex, social beings.” Pinker has been excoriated in one or two places (most vigorously, and some would say predictably, by John Gray) for using the “Enlightenment” ahistorically as a concept to be moulded at will to fit his agenda (not to mention his simplistic and obsolete characterization of Nietzsche).
What both books do is precisely what the caricature of science journalism above is said to do, albeit with more style and more graphs: to eschew nuance and caveats in order to tell a story that is only partly true.
And here’s the moral: it works! By delivering a controversial message in this manner, both books have received massive media attention. If they had been more careful, less confrontational, more ready to tell a complex story, I very much doubt that they would have been awarded anything like as much coverage.
Now, my impression here – having spoken to both Pinker and Plomin – is that they both genuinely believe what they wrote. Yes, Pinker did acknowledge that he was using a simplified picture of the Enlightenment for rhetorical ends, and in conversation Plomin and I were broadly in agreement most of the time about what genetic analyses do and don’t show about human behaviour. But I don’t think either of them was setting out cynically to present a distorted message in order to boost book sales. What seems to be happening here is more in the line of a tacit collusion between academics keen to push a particular point of view (nothing wrong with that in itself) and publishers keen to see an eye-catching and controversial message. And we have, of course, been here before (The God Delusion, anyone?).
Stephen Hawking’s book Brief Answers to the Big Questions was also a popular book choice for 2018 that, in a different way, often veered towards the reductively simplistic, though it seemed to fall only to me (so far as I was able) and my esteemed colleague Michael Brooks to point that out in our reviews.
It seems, then, increasingly to be the job of science writers and critics, like Angela and Michael, to hold the “specialists” to account – and not vice versa.
I could nobly declare that I decline to adopt such a tactic to sell my own books. But the truth is that I couldn’t do it even if I wanted to. My instincts are too set against it. For one thing, it would cause me too much discomfort, even pain, to knowingly ignore or cherry-pick historical or scientific facts (which isn’t to say that I won’t sometimes get them wrong), or to decline to enter areas of enquiry that might dilute a catchy thesis. But perhaps even more importantly, I would find simplistic narratives and theses to be just a bit too boring to sustain me through a book project. What interests me is not winning some constructed argument but exploring ideas – including the fascinating ideas in Enlightenment Now and Blueprint.
Wednesday, December 12, 2018
Wednesday, October 31, 2018
Musical illusions
Here's the English version of my column on music cognition for the current issue of the Italian science magazine Sapere.
_____________________________________________________________
“In studying [optical] illusions”, writes Kathryn Schulz in her book Being Wrong, “scientists aren’t learning how our visual system fails. They are learning how it works.” What Schulz means is that normal visual processing is typically a matter of integrating confusing information into a plausible story that lets us navigate the world. Colour constancy is a good example: the brain “corrects” for variations in brightness so that objects don’t appear to change hue as the lighting conditions alter. The famous “checkerboard shadow” illusion devised by vision scientist Edward Adelson fools this automatic recalibration of perception.
Adelson’s checkerboard illusion. The squares A and B are the same shade of grey.
In this regard as in many others, auditory perception mirrors the visual. The brain often rearranges what we hear to create something that “makes more sense” – with the same potential for creating illusions. Psychologist of music Diana Deutsch has delved deeply into the subject of musical illusions, some of which are presented on a CD released by Philomel in 1995. Several of these have to be heard through stereo headphones: they deliver different pitches to the left and right ears, which the brain reassigns to create coherence. For example, in the “scale illusion” the notes of two simultaneous scales – one ascending, one descending – are sent alternately to each ear. But what one hears is a much simpler pattern: an ascending scale in one ear, descending in the other. Here the brain is choosing to perceive the more likely pattern, even though it’s wrong.
Another example reveals the limitations of pitch perception. A familiar tune (Yankee Doodle) is played with each note assigned a random octave. It sounds incomprehensible. The test shows that, in deciphering melody, we attend not so much to absolute pitch class (a C or D, say) as to relative pitch: how big pitch jumps are between successive notes. Arnold Schoenberg’s twelve-tone serialism ignored this, which is why the persistence of his “tone rows” is often inaudible.
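The effect is easy to mimic in a few lines of code, a sketch of my own with only an approximate rendering of the tune: randomizing each note’s octave preserves every pitch class but wrecks the intervals, the relative pitch, that carry the melody.

```python
import random

# An approximate opening of "Yankee Doodle" as MIDI note numbers (middle C = 60).
melody = [60, 60, 62, 64, 60, 64, 62]

def pitch_classes(notes):
    # Pitch class ignores the octave: C is 0, C sharp is 1, ... B is 11.
    return [n % 12 for n in notes]

def intervals(notes):
    # Relative pitch: the size in semitones of each jump between successive notes.
    return [b - a for a, b in zip(notes, notes[1:])]

# Deutsch-style scrambling: keep each note's pitch class but assign a random octave.
scrambled = [n % 12 + 12 * random.randint(3, 6) for n in melody]

print(pitch_classes(melody) == pitch_classes(scrambled))  # True: pitch classes intact
print(intervals(melody))     # small, singable steps
print(intervals(scrambled))  # wild leaps: the melodic contour is gone
```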
Perhaps the strangest thing about optical illusions is that we enjoy them, even if – indeed, because – we find them perplexing. Instead of being upset by the brain’s inability to “get it right”, we are apt to laugh – not a common response to wrongness, although it’s actually how a lot of comedy works. You might, then, expect to find musical illusions put to pleasurable use in music, especially by jokers like Mozart. But they are rather rare, maybe because we simply won’t notice them unless we see the score. Something like the scale illusion is used, however, in the second movement of Rachmaninov’s Second Suite for Two Pianos, where two sets of seesawing notes on each piano are heard as two sets of single repeated notes. It seems likely that Rachmaninov (not noted for jocularity) wasn’t just having fun – it’s merely easier to play these rapid quavers using pitch jumps rather than on the same note.
_____________________________________________________________
Monday, October 29, 2018
Why brief answers are sometimes not enough
I reviewed Stephen Hawking's last book Brief Answers to the Big Questions for New Scientist, but it needed shortening and, in the print version, didn't come out as I'd intended. Here's the original.
_____________________________________________________________________
Most people as famous as Stephen Hawking have their character interrogated with forensic intimacy. But Hawking’s personality was in its way as insulated as the Queen’s, impermeably fortified by the role allotted to him. There’s a hint in Brief Answers that he knew this: “I fit the stereotype of a disabled genius”, he writes. Unworldly intelligence, a wry sense of humour, and tremendous resilience against adversity: that seemed to suffice for the celebrity in the wheelchair with the computerized voice (itself another part of the armour, of course).
It made me uneasy though. The public Hawking was that stereotype, and while it was delightful to see how he demolished the does-he-take-sugar laziness that links physical with mental disability, he did so only by taking matters to the other extreme ("such a mind in such a body!"). It perhaps suited Hawking that the media were content with the cliché – he didn’t give much impression of caring for the touchy-feely. (Eddie Redmayne, who played Hawking in the 2014 biopic The Theory of Everything, reminds us in his foreword that the physicist would have preferred the film to have “more physics and fewer feelings”.) But his story suggests we still have some way to go in integrating people with disabilities into able-bodied society.
I approached this book, a collection of Hawking’s later essays on “big questions”, with some trepidation. You know you won’t go wrong with the cosmology, relativity and quantum mechanics, but in other areas, even within science, it’s touch and go. The scientific essays supply a series of now-familiar Greatest Hits: his work with Roger Penrose on gravitational singularities and their relation to the Big Bang; his realization that black holes will emit energy (Hawking radiation) from their event horizons; his speculations about the origin of the universe in a chance quantum fluctuation; the debate – still unresolved – about whether black holes destroy information. Hawking, as Kip Thorne reminds us in his introduction, helped to integrate several of the central concepts of physics: general relativity, quantum mechanics, thermodynamics and information theory. It’s a phenomenal body of work.
Sometimes there’s a plainness to his prose that can be touching even while it sounds like an anodyne self-help manual: “Be brave, be curious, be determined, overcome the odds. It can be done.” Who would argue with Hawking’s right to that sentiment? His plea for the importance of inspirational teaching, his concerns about climate change and environmental degradation, his contempt for Trump and the regressive aspects of Brexit, and (albeit not here) his championing of the NHS, sometimes made you glad to have Hawking on your side. People listened.
A common danger with collections of this kind is repetition, which the editors have been curiously unconcerned to avoid. But the recurring and familiar passages are in themselves quite telling, for they show Hawking curating his image: the boy who was always taking things apart but not always managing to put them back together again, the man who told us to “look up at the stars and not down at your feet.”
There’s no doubt that Hawking cared passionately about the future of humankind and the potential of science to improve it. His advocacy resembles the old-fashioned boosterism into which H. G. Wells often strayed in later life, tempered like Wells by an awareness of the destructive potential of technologies in malicious or plain foolish hands. But what are Hawking’s resources for developing that agenda? One of the most striking features of this book is the lack of extra-curricular references – to art, music, philosophy, literature, say. This would not matter so much (though it’s a bit odd) if it were not that the scope of some of the pieces exposes these gaps painfully.
Beginning an essay called “Is there a God?” by saying that “people will always cling to religion, because it gives comfort, and they do not trust or understand science” tells you pretty much what to expect from it, and you’d not be wrong. God, as no theologian said ever, is all about explaining the origin of the universe. And most people, Hawking tells us, define God as “a human-like being, with whom one can have a personal relationship.” I suspect “most people’s” views of what a molecule or light is would bear similarly scant resemblance to what well-informed folks say on the matter, but I doubt Hawking would give those views precedence.
As for history, try this: “People might well have argued that it was a waste of money to send Columbus on a wild goose chase. Yet the discovery of the New World made a profound difference to the Old. Just think, we wouldn’t have had the Big Mac or KFC.” The lame joke might have been just about tolerable if one didn’t sense it is there because Hawking could think of nothing to put in its place. This remark, as you might guess, is part of a defence of human space exploration, during which Hawking demonstrates no more inclination to probe the real reasons for the space race in the 1960s than he does to examine what Columbus was all about. He feels that the human race has no future if we don’t colonize space, although it isn’t clear why his generally dim view of our self-destructive idiocies becomes so rosy once we are on other worlds. Maybe the answer lies with the fact that here, as elsewhere, his main point of reference is Star Trek. But I suspect he knew he was preaching to the converted, so that mere assertion (“We have no other option”) was all he needed in lieu of argument.
There’s a glib insouciance to some of the other scientific speculations too. “If there is intelligent life elsewhere”, he writes, “it must be a very long way away otherwise it would have visited earth by now. And I think we would’ve known if we had been visited; it would be like the film Independence Day.” Assertion again replaces explanation in Hawking’s assumption apropos artificial intelligence that the human brain is just like a computer, as if this were not hotly disputed among neuroscientists. Here too, his vision seems mainly informed by the science fiction within easiest reach: his fears for the dangers of AI conjure up the Terminator series’ Skynet and tropes of supercomputers declaring themselves God and fusing the plug. Science fiction has plenty to tell us about our fears of the present, but probably rather less about the realities of the future.
It is best, too, not to rely on Hawking’s history of science, which for example parrots the myth of Max Planck postulating the quantum to avoid the ‘ultraviolet catastrophe’ of blackbody radiation. (Planck did not mention it.) Don’t expect more than the usual clichés: here comes Feynman, playing the bongos in a strip joint (what a guy!), there goes Einstein riding on a light wave.
This is all, in a sense, so very unfair. Hawking was a great scientist who had a remarkable life, but in another universe without motor neurone disease (well, he did like the Many Worlds interpretation of quantum mechanics) we’d have no reason to confer such authority on his thoughts about all and sundry, or to notice or care that he entered the peculiar time-warp that is Stringfellows “gentlemen’s club”. We would not deny him the right to his ordinariness, and we would see his occasional brash arrogance and egotism for no more or less than it is.
There’s every reason to believe that Hawking enjoyed his fame, and that’s a cheering thought. The Hawking phenomenon is our problem, not his. He liked to remind us that he was born on the same date that Galileo died, but it’s Brecht’s Galileo that comes to mind here: to paraphrase, unhappy is the land that needs a guru.
_____________________________________________________________________
Thursday, September 13, 2018
The "dark woman of DNA" goes missing again
A curious incident took place at the excellent "Schrödinger at 75: The Future of Life" meeting in Dublin last week, and I’ve been pondering it ever since.
One of the eminent attendees was James Watson, who was, naturally, present at the conference dinner. And one of the movers behind the meeting gave an impromptu (so it seemed) speech that acknowledged Watson’s work with Crick and its connection to Schrödinger’s “aperiodic crystal.” Fair enough.
Then he added that he wanted to recognize also the contribution of the “third man” of DNA, Maurice Wilkins – and who could cavil at that, given Wilkins’ Dublin roots? Wilkins, after all, was another physicist-turned-biologist who credited Schrödinger’s book What Is Life? as an important influence.
I imagined at this stage we might get a nod to the “fourth person” of DNA, Rosalind Franklin, whose role was also central but was of course for some years under-recognized. But no. Instead the speaker described how it was when Wilkins showed Watson his X-ray photo of DNA that Watson became convinced crystallography could crack the structure.
You could hear a ripple go around the dining hall. Wilkins’ photo?! Wasn’t it Franklin’s photo – Photo 51 – that provided Watson and Crick with the crucial part of the puzzle?
Well, yes and no. It isn’t entirely clear who actually took Photo 51; it seems more likely to have been taken by Franklin’s student Ray Gosling. Neither is it completely clear that this photo was quite so pivotal to Watson and Crick’s success. Neither, indeed, is it really the case that Wilkins did something terribly unethical in showing Watson the photo (which was in any event from the Franklin-Gosling effort), given that it had already been displayed publicly. Matthew Cobb examines this part of the story carefully and thoroughly in his book Life’s Greatest Secret (see also here and here).
But nevertheless. Watson’s appalling treatment of Franklin, the controversy about Photo 51, and the sad fact that Franklin died before a Nobel became a possibility, are all so well known that it seemed bizarre, to the point of being confrontational, to make no mention of Franklin at all in this context, and right in front of Watson himself to boot.
I figured that the attribution of “the photo” to Wilkins was so peculiar that it must have some explanation other than error or denial. I don’t know the details of the story well enough, but I told myself that the speaker must be referring to some other, earlier occasion when Wilkins had shown Watson more preliminary crystallographic work of his own that persuaded Watson this was an avenue worth pursuing.
And perhaps that is true – I simply don’t know. But if so, to refer to it in this way, when everyone is going to think of the notorious Photo 51 incident, is at best perverse and at worst a deliberate provocation. Even Adam Rutherford, sitting next to me, who knows much more about the story of DNA than I or most other people do, was confused by what he could possibly have meant.
Well, with Franklin’s name still conspicuous by its absence, Watson stood up to take a bow, which prompts me to make a request of scientific meeting and dinner organizers. Please do your attendees the favour of not forcing them to have to decide whether to reluctantly applaud Watson or join the embarrassed cohort of those who feel they can no longer do so in good conscience.
Friday, September 07, 2018
What Is Life? Schrödinger at 75
The conference “Schrödinger at 75: The Future of Life” in Dublin, from which I’m now returning, was a fabulous event, packed with good talks equally from eminent folks (including several Nobel laureates) and young rising stars. Ostensibly an exploration of the legacy of Erwin Schrödinger’s influential 1944 book What Is Life?, based on the lectures he gave 75 years ago as director of physical sciences at the Dublin Institute for Advanced Study (on which, more here), it was in fact largely a wonderful excuse to get a bunch of very smart people in the same hall to talk about many areas of the life (and chemical) sciences today and to speculate about what the future holds for them. I think I took away something interesting from every talk.
There was of course much dutiful nodding towards Schrödinger’s book, and also to some of his writing elsewhere, especially his essays in Mind and Matter (1958), where he offered some speculations about mind and consciousness (about half of the speakers worked on aspects of brain, mind and cognition). This didn’t seem merely tokenistic to me – I felt that all the speakers who mentioned Schrödinger had a genuine respect for his ideas. This is all the more interesting given that, as I say in my Nature piece, there wasn’t in some ways a great deal that was truly new and productive of further research in the book. Of course, what gets mentioned most is Schrödinger’s reference to a “code-script” that governs life and which is inherited, and his suggestion that this is encoded in the chromosomes as an “aperiodic crystal”. That image certainly resonated with Francis Crick, who wrote to Schrödinger in 1953 to tell him so.
But the idea of a “code”, as well as the notion that it could be replicated in a manner reminiscent of the ‘templating’ of structure in a crystal, were not really new. It seems rather to be something about the way Schrödinger expressed this idea that mattered, and indeed I can see why: his book is beautifully written, achieving persuasive force without seeming like the imposition of an arrogant physicist.
All of this I enjoyed. But what I missed was a historical presentation that could have put these tributes to What Is Life? in context. There was, for instance, a sense of unease about Schrödinger’s references to “order” and “organization”. What exactly was he getting at here? One suggestion was that “order” here was standing in for that crucial missing word: “information”. But this isn’t really true. Schrödinger’s “code-script” was presented as the means by which an organism’s “organization” is maintained, although quite how it does so he found wholly mysterious, even if the inter-generational transmission of the script by the “aperiodic crystal” was far less so.
What we need to know here is that “organization” had become a biological power-word, a symbol of what it was about living systems that distinguishes them from non-living. In the early nineteenth century this unique property of life was conferred by élan vital in the formulation of vitalism. As vitalism waned, it had to become something more tangible and physical. Some believed, like Thomas Henry Huxley, that the key was a special chemical composition, which made up the stuff of “protoplasm”, the primal living substance from which all life was descended. But as the chemical complexity and heterogeneity of living matter became apparent from the work of late nineteenth-century physiologists, and as the cell came to be seen as the fundamental unit of life, the idea arose that life was distinguished by some peculiar state of “organization” below the level that microscopes could resolve. There were a few tantalizing glimpses of this subcellular organization, for example in the stained chromosome fibres and organelles like the nucleus and mitochondria. These were, however, nothing but blurry blobs, offering no real clue about how their (presumably) molecular nature gave them the apparent agency that distinguished life.
And so, as Andrew Reynolds has shown, “order” and “organization” served a role that was barely more than metaphorical, patching over an ignorance about “what is life”. There’s nothing deplorable about that; it’s the kind of thing science must do all the time, giving a name to an absence of understanding so that it can be contained and built into contingent theories. But for Schrödinger to still be using it in the 1940s shows how his biological reading was rather archaic, for by that stage it had already become apparent that cell physiology relies on enzyme action, and crystallographers like J. Desmond Bernal and Bill Astbury were beginning to apply X-ray crystallography to these proteins to understand their structure. Sure, the origins and nature of the “organization” that cells seemed to exhibit were still pretty obscure, but it was getting less necessary to invoke that nebulous concept.
There were also suggestions at the Dublin meeting that Schrödinger’s “order” was what he meant by his talk of “negative entropy”. There’s some justification to think that, but Schrödinger wasn’t just thinking about how cells prevent their “organization” from falling into entropic disarray. He was puzzled by how this organization could exist in the first place. I don’t think one can really understand his discussion of order and entropy in What Is Life? unless one recognizes that many physical scientists in the early twentieth century considered the molecular world to be fundamentally random. It seems remarkable to me that no accounts of What Is Life? that I have seen refer to Schrödinger’s 1944 essay in Nature on “The Statistical Law in Nature”, where it is almost as if Schrödinger is telling us: ‘this is what I’m thinking about in my book’. The article is a paean to Ludwig Boltzmann, whose influence Schrödinger felt strongly in his early years in Vienna. Schrödinger seems to assert here that there are no laws in nature that do not rely on the statistical averaging over the behaviours of countless microscopic particles. It would have seemed all but meaningless then to suppose that one could speak about law-like, deterministic behaviour at the level of individual molecules, and quantum mechanics had seemed only to confirm this. That is what puzzled Schrödinger so much about the apparent persistence of phenotypic traits that seemed necessarily to arise from the specific details of genes at the molecular scale.
As a consequence, What Is Life? reads a little weirdly to chemists today, to whom the notion that a complex molecule can adopt and sustain a particular structure even in the face of thermal fluctuations seems unproblematic; indeed it would have seemed so even by the 1950s. Schrödinger’s invocation of quantum mechanics to explain this phenomenon looks rather laboured now, and is quite possibly a part of what irritated Linus Pauling and Max Perutz about the book. It’s also why Schrödinger seems so keen to cement the structure of the gene in place as a “solid”, rather than simply regarding it as a large molecule carrying a linear code.
And what about that code itself? This wasn’t interrogated at the meeting, which was a shame. Indeed, speakers sometimes still attributed to it the almighty agency that Schrödinger himself gave it. It rather astonishes me to see how the claim that the genome contains “all the information you need to make the organism” raises no eyebrows. What surprises me is that scientists are typically a rather sceptical crowd, and demand evidence to support the claims they make. But there is, to my knowledge, no evidence whatsoever that one can make even the simplest organism, let alone a human, from the information contained in the genome. Oh, but surely you can? You can (in principle, and now in some cases in practice) just make the genome from scratch, put it in a cell, and off it goes… Wait. Put it in a cell? So you need a cell to actually enact the “code-script”? Well sure, but the cell goes without saying, right?
Metaphors in biology are always imperfect and often treacherous, but I think this one (a simile, really) has some mileage: saying that the genome is the complete blueprint for an organism is a bit like saying that the Oxford English Dictionary is a blueprint for King Lear. It’s all in there, right? Ok, there’s a lot in there that you don’t need for Lear, but then there’s a lot of junk in the genome too (perhaps!). Sure, to get Lear out of the OED you need to feed the words into William Shakespeare, but Shakespeare goes without saying, right?
For a human, it’s still more complicated. Human cells can of course replicate in a culture medium, but none has ever replicated into an embryo, let alone a person. What they can do – what some induced stem cells can do – is proliferate into an embryoid, an organoid with embryo-like structures. But that won’t make a human. For that, you need not only a cell but a uterus. It’s rather like saying, so the text of King Lear has “all the information” – and then giving it to, say, a Chinese factory worker in Lanzhou. Well OK, so to actually enact Lear in a meaningful way it has to be read by someone who reads English – or translated… But come on, the English goes without saying…
Once we start talking in terms of the information needed to make an organism, though, quite what’s in the genome becomes far less clear. Indeed, we know for sure that maternal factors supply some vital information for the early development of a fertilized egg. And the self-organizing abilities of cells can only create an organism in the right context: every cell needs the right signals from its environment for the whole to assemble properly. Genes somehow encode neurons, but neurons don’t develop properly if they don’t get stimuli from their environment during a critical period.
Are these environmental signals and context then a part of the information needed to make an organism “as nature [meaning evolution, I guess] intends”? Is an understanding of English a part of the information needed for King Lear to be anything more than marks on paper?
Evidently this is an issue of how “information” acquires meaning, which of course was notoriously what Shannon left out of his information theory. And that is why information in Shannon’s sense is greatest when the Shannon entropy is greatest. Periodic solids have rather low entropy. What is needed in biology, then, is a theory for where meaningful information comes from and how it gives rise to causal flows. There’s no doubt that lots of meaningful information is encoded in the genome that contributes to how organisms are built and how they function. But when we say that “the genome contains all the information needed to build an organism”, we are dealing with ill-defined terms. What I sorely missed at this meeting was a presentation about how a theory of biological information can be developed, and how to define and measure “meaning” within that theory. Daniel Dennett acknowledged this lacuna in his keynote address, saying that understanding “semantic information” as opposed to Shannon information is still “work in progress”.
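The distinction is easy to see in a toy calculation, a sketch of my own rather than anything presented at the meeting: Shannon’s measure rewards sheer unpredictability, and is silent on whether a sequence means anything to whatever reads it.

```python
from collections import Counter
from math import log2

def shannon_entropy(seq):
    # Shannon entropy in bits per symbol: H = -sum(p * log2(p)) over symbol frequencies.
    counts = Counter(seq)
    total = len(seq)
    return -sum((c / total) * log2(c / total) for c in counts.values())

periodic = "ATATATATATATATATATAT"    # a "periodic crystal": highly predictable
aperiodic = "ATGCGTACCTAGGCATTAGC"   # an arbitrary aperiodic string of the same length

print(shannon_entropy(periodic))    # 1.0 bit per symbol
print(shannon_entropy(aperiodic))   # 2.0 bits per symbol

# The aperiodic string carries more Shannon information, but the measure is
# blind to whether either string "means" anything to the machinery reading it.
```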
A close reading of Schrödinger starts us in that direction too, and is a part of his legacy.
Monday, August 27, 2018
Don't just count qubits
The rapid advances in quantum computing as a technology with real applications are reflected in the increases in the number of qubits these devices have available for computation. In 1998, laboratory prototypes could boast just two: enough for a proof of principle but little more. Today that figure has risen to 72 in the latest device reported by Google. Given that the number of states available in principle to systems of N qubits is 2^N, this is an enormous difference. The ability to hold this number of qubits in entangled states involves a herculean feat of quantum engineering.
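For a sense of scale (a rough sketch of my own, not a claim about any particular machine): simulating N qubits exactly on a classical computer means storing 2^N complex amplitudes, and the memory required doubles with every qubit added.

```python
# Back-of-envelope arithmetic for the size of an N-qubit statevector,
# assuming 16 bytes per complex amplitude (complex128).
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (2, 30, 50, 72):
    print(f"{n:2d} qubits -> {statevector_bytes(n):,} bytes of amplitudes")

# Two qubits need a few dozen bytes; around 50 qubits the statevector already
# outstrips the memory of the largest supercomputers (roughly where "quantum
# supremacy" claims are pegged); 72 qubits is far beyond brute-force simulation.
```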
It’s not surprising, then, that media reports tend to focus on the number of qubits a quantum computer has at its disposal as the figure of merit. The qubit count is also commonly regarded as the determinant of the machine’s capabilities, most famously with the widely repeated claim that 50 qubits marks the threshold of “quantum supremacy”, when a quantum computer becomes capable of things to all intents and purposes impossible for classical devices.
The problem is that this is all misleading. What a quantum computer can and can’t accomplish depends on many things, of which the qubit count is just one. For one thing, the quality of the qubits is critical: how noisy they are, and how likely to incur errors. There is also the question of their heterogeneity. Qubits manufactured from superconducting circuits will generally differ in their precise characteristics and performance, whereas quantum computers that use trapped-ion qubits benefit from having them all identical. And because qubits can only be kept coherent for short times before quantum decoherence scrambles them, how fast they can be switched can determine how many logic operations you can perform in the time available. The power of the device then depends also on the number of gate operations your algorithm needs: its so-called depth.
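The arithmetic behind that last point is simple enough to sketch. The numbers below are hypothetical, merely of the right order for current superconducting devices; the point is only that coherence time divided by gate time sets a rough budget for circuit depth.

```python
# Hypothetical, order-of-magnitude figures, not measurements from any real device.
coherence_time_us = 100.0   # assumed coherence (T2) time, in microseconds
gate_time_ns = 200.0        # assumed two-qubit gate duration, in nanoseconds

depth_budget = (coherence_time_us * 1000) / gate_time_ns
print(f"Roughly {depth_budget:.0f} sequential gate operations before decoherence sets in")

# However many qubits a device has, an algorithm whose circuit depth exceeds
# this budget cannot run reliably without error correction.
```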
There is also the question of connectivity: does every qubit couple with every other, or are they for instance coupled only to two neighbours in a linear array?
The performance of a quantum computer therefore needs a better figure of merit than a crude counting of qubits. Researchers at IBM have suggested one, which they call the “quantum volume” – an attempt to fold all of these features into a single number. And even this isn’t, then, a way of deciding once and for all which of two devices “performs better”; rather, it quantifies the power available for a particular computation. Device performance will depend on what you’re asking it to do. Particular architectures and hardware will work better for some tasks than for others (see here).
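IBM’s metric is defined carefully via randomized model circuits, but the gist can be caricatured in a few lines. The heuristic below is a simplification of my own, in which a single effective error rate stands in for all the details of noise, connectivity and gate speed; it is not IBM’s actual definition.

```python
def effective_depth(n_qubits, error_rate):
    # A crude estimate of how many gate layers across n qubits can run
    # before an error becomes likely.
    return 1.0 / (n_qubits * error_rate)

def quantum_volume_heuristic(total_qubits, error_rate):
    # The useful "square" circuit is limited by whichever is smaller:
    # the number of qubits available, or the depth achievable with them.
    best = 0.0
    for n in range(1, total_qubits + 1):
        best = max(best, min(n, effective_depth(n, error_rate)) ** 2)
    return best

# Many noisy qubits can be worth less than fewer, cleaner ones:
print(quantum_volume_heuristic(72, error_rate=0.01))    # larger but noisier device
print(quantum_volume_heuristic(20, error_rate=0.001))   # smaller but cleaner device
```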
As a result, a media tendency to present quantum computation as a competition between rivals – IBM vs Google, superconducting qubits vs trapped ions – does the field no favours. Of course one can’t deny that competitiveness exists, as well as a degree of commercial secrecy – this is a business with huge stakes, after all. But no one expects any overall “winner” to be anointed. It’s unfortunate, then, that this is how things look if we judge from the “qubit counter” created by MIT Tech Review. As a rough-and-ready timeline of how the applied tech of the field is evolving, this might be just about defensible. But some fear that this sort of presentation does more harm than good, and we should certainly not see it as a guide to who is currently “in the lead”.
Friday, June 08, 2018
Myths of Copenhagen
Discussing the Copenhagen interpretation of quantum mechanics with Adam Becker and Jim Baggott makes me think it would be worthwhile setting down how I see it. I don’t claim that this is necessarily the “right” way to look at Copenhagen (there probably isn’t a right way), and I’m conscious that what Bohr wrote and said is often hard to fathom – not, I think, because his thinking was vague, but because he struggled to express it through the limited medium of language. Many people have pored over Bohr’s words more closely than I have, and they might find different interpretations. So if anyone takes issue with what I say here, please do tell me.
Part of the problem too, as Adam said (and reiterates in his excellent new book What Is Real?), is that there isn’t really a “Copenhagen interpretation”. I think James Cushing makes a good case that it was largely a retrospective invention of Heisenberg’s, quite possibly as an attempt to rehabilitate himself into the physics community after the war. As I say in Beyond Weird, my feeling is that when we talk about “Copenhagen”, we ought really to stick as close as we can to Bohr – not just for consistency but also because he was the most careful of the Copenhagenist thinkers.
It’s perhaps for this reason too that I think there are misconceptions about the Copenhagen interpretation. The first is that it denies any reality beyond what we can measure: that it is anti-realist. I see no reason to think this. People might read that into Bohr’s famous words: “There is no quantum world. There is only an abstract quantum physical description.” But it seems to me that the meaning here is quite clear: quantum mechanics does not describe a physical reality. We cannot mine it to discover “bits of the world”, nor “histories of the world”. Quantum mechanics is the formal apparatus that allows us to make predictions about the world. There is nothing in that formulation, however, that denies the existence of some underlying stratum in which phenomena take place that produce the outcomes quantum mechanics enables us to predict.
Indeed, what Bohr goes on to say makes this perfectly clear: “It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.” (Here you can see the influence of Kant on Bohr, who read him.) Here Bohr explicitly acknowledges the existence of “nature” – an underlying reality – but doesn’t think we can get at it, beyond what we can observe.
This is what I like about Copenhagen. I don’t think that Bohr is necessarily right to abandon a quest to probe beneath the theory’s capacity to predict, but I think he is right to caution that nothing in quantum mechanics obviously permits us to make assumptions about that. Once we accept the Born rule, according to which the wavefunction (via its squared amplitude) yields only a probability density for measurement outcomes, we are forced to recognize that.
Here’s the next fallacy about the Copenhagen interpretation: that it insists classical physics, such as governs measuring apparatus, works according to fundamentally different rules from quantum physics, and we just have to accept that sharp division.
Again, I understand why it looks as though Bohr might be saying that. But what he’s really saying is that measurements exist only in the classical realm. Only there can we claim definitive knowledge of some quantum state of affairs – what the position of an electron “is”, say. This split, then, is epistemic: knowledge is classical (because we are).
Bohr didn’t see any prospect of that ever being otherwise. What’s often forgotten is how absolute the distinction seemed in Bohr’s day between the atomic/microscopic and the macroscopic. Schrödinger, who was of course no Copenhagenist, made that clear in What Is Life?, which expresses not the slightest notion that we could ever see individual molecules and follow their behaviour. To him, as to Bohr, we must describe the microscopic world in necessarily statistical terms, and it would have seemed absurd to imagine we would ever point to this or that molecule.
Bohr’s comments about the quantum/classical divide reflect this mindset. It’s a great shame he hasn’t been around to see it dissolve – to see us probe the mesoscale and even manipulate single atoms and photons. It would have been great to know what he would have made of it.
But I don’t believe there is any reason to suppose that, as is sometimes said, he felt that quantum mechanics just had to “stop working” at some particular scale, and classical physics take over. And of course today we have absolutely no reason to suppose that happens. On the contrary, the theory of decoherence (pioneered by the late Dieter Zeh) can go an awfully long way to deconstructing and demystifying measurement. It’s enabled us to chip away at Bohr’s overly pessimistic epistemological quantum-classical divide, both theoretically and experimentally, and understand a great deal about how classical rules emerge from quantum. Some think it has in fact pretty much solved the “measurement problem”, but I think that’s too optimistic, for the reasons below.
But I don’t see anything in those developments that conflicts with Copenhagen. After all, one of the pioneers of such developments, Anton Zeilinger, would describe himself (I’m reliably told) as basically a Copenhagenist. Some will object to this that Bohr was so vague that his ideas can be made to fit anything. But I believe that, in this much at least, apparent conflicts with work on decoherence come from not attending carefully enough to what Bohr said. (I think Henrik Zinkernagel’s discussions of “what Bohr said” are useful here and here.)
I think that in fact these recent developments have helped to refine Bohr’s picture until we can see more clearly what it really boils down to. Bohr saw measurement as an irreversible process, in the sense that once you had classical knowledge about an outcome, that outcome could not be undone. From the perspective of decoherence, this is now viewed in terms that sound a little like the Second Law: measurement entails the entanglement of quantum object and environment, which, as it proceeds and spreads, becomes for all practical purposes irreversible because you can’t hope to untangle it again. (We know that in some special cases where you can keep track, recoherence is possible, much as it is possible in principle to “undo” the Second Law if you keep track of all the interactions and collisions.)
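A minimal numerical sketch of that picture (a toy construction of my own, not anything Bohr or Zeh wrote down): a qubit in superposition becomes entangled with a growing environment, and the off-diagonal “coherence” of its reduced state shrinks towards zero, becoming for all practical purposes irreversible once the record has spread far enough.

```python
# Toy decoherence sketch: a system qubit in (|0> + |1>)/sqrt(2) entangles with
# n_env environment qubits; tracing out the environment multiplies the system's
# off-diagonal terms by <E0|E1>, which shrinks as the record spreads.
import numpy as np

def reduced_system_state(n_env: int, theta: float = 0.5) -> np.ndarray:
    e0 = np.array([1.0, 0.0])                      # environment response to system |0>
    e1 = np.array([np.cos(theta), np.sin(theta)])  # imperfect response to system |1>
    overlap = (e0 @ e1) ** n_env                   # <E0|E1> for n_env environment qubits
    return 0.5 * np.array([[1.0, overlap],
                           [overlap, 1.0]])

for n in (0, 1, 5, 20, 50):
    coherence = abs(reduced_system_state(n)[0, 1])
    print(f"{n:>2} environment qubits: off-diagonal coherence = {coherence:.4f}")
```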
This decoherence remains a “fully quantum” process, even while we can see how it gives rise to classical-like behaviour (via Zurek’s quantum Darwinism, for example). But what the theory can’t then do, as Roland Omnès has pointed out, is explain uniqueness of outcomes: why only one particular outcome is (classically) observed. In my view, that is the right way to put into more specific and updated language what Bohr was driving at with his insistence on the classicality of measurement. Omnès is content to posit uniqueness of outcomes as an axiom: he thinks we have a complete theory of measurement that amounts to “decoherence + uniqueness”. The Everett interpretation, of course, ditches uniqueness, on the grounds of “why add an extra, arbitrary axiom?” To my mind, and for the reasons explained in my book, I think this leads to a “cognitive instability”, to purloin Sean Carroll’s useful phrase, in our ability to explain the world. So the incoherence that Adam sees in Copenhagen, I see in the Everett view (albeit for different reasons).
But this then is the value I see in Copenhagen: if we stick with it through the theory of decoherence, it takes us to the crux of the matter: the part it just can’t explain, which is uniqueness of outcomes. And by that I mean (irreversible) uniqueness of our knowledge – better known as facts. What the Copenhagenists called collapse or reduction of the wavefunction boils down to the emergence of facts about the world. And because I think they – at least, Bohr – always saw wavefunction collapse in epistemic terms, there is a consistency to this. So Copenhagen doesn’t solve the problem, but it leads us to the right question (indeed, the question that confronts the Everettian view too).
One might say that the Bohmian interpretation solves that issue, because it is a realist model: the facts are there all along, albeit hidden from us. I can see the attraction of that. My problem with it is that the solution comes by fiat – one puts in the hidden facts from the outset, and then explains all the potential problems with that by fiat too: by devising a form of nonlocality that does everything you need it to, without any real physical basis, and insisting that this type of nonlocality just – well, just is. It is ingenious, and sometimes useful, but it doesn’t seem to me that you satisfactorily solve a problem by building the solution into the axioms. I don’t understand the Bohmian model well enough to know how it deals with issues of contextuality and the apparent “non-universality of facts” (as this paper by Caslav Brukner points out), but on the face of it those seem to pose problems for a realist viewpoint too.
It seems to me that a currently very fruitful way to approach quantum mechanics is to think about the issue of why the answers the world gives us seem to depend on the questions we ask (à la John Wheeler’s “20 Questions” analogy). And I feel that Bohr helps point us in that direction, and without any need to suppose some mystical “effect of consciousness on physical reality”. He didn’t have all the answers – but we do him no favours by misrepresenting his questions. A tyrannical imposition of the Copenhagen position is bad for quantum mechanics, but Copenhagen itself is not the problem.
Monday, May 21, 2018
What is a superposition really like?
Here’s a longer version of the news story I just published in Scientific American, which includes more context and background. The interpretation of the outcomes of this thought experiment within the two-state vector formalism of quantum mechanics is by no means the only one possible. But what the experiment does show is that quantum mechanics suggests that superpositions are not always simply a case of a particle seeming to be in two places or states at once. A superposition, like anything else in quantum mechanics, tells you about the possible outcomes of a measurement. All the rest is contingent interpretation. I’m reminded yet again today that it is going to take an awful lot to get media folks to accept this. I'm starting to see now that it was a mistake for me to assume that they didn't know any better; rather, I think there is an active, positive desire for the "two places at once" picture to be true.
I should say also that I consciously decided to turn a blind eye to the use of the word “spooky” in the title of this piece, because it does perfectly acceptable work as it is. It does not imply that “spooky action at a distance” is a thing. It is not a thing, unless it is a disproved thing. Quantum nonlocality is the alternative to that Einsteinian picture.
______________________________________________________________________
It’s the central question in quantum mechanics, and no one knows the answer: what goes on for a particle in a superposition? All of the head-scratching oddness that seems to pervade quantum theory comes from these peculiar circumstances in which particles seem to be in two places or states at once. What that really means has provoked endless debate and argument. Now a team of researchers in Israel and Japan has proposed an experiment (https://www.nature.com/articles/s41598-018-26018-y) that should let us say something for sure about the nature of that nebulous state [A. C. Elitzur, E. Cohen, R. Okamoto & S. Takeuchi, Sci. Rep. 8, 7730 (2018)].
Their experiment, which they say could be carried out within a few months using existing technologies, should let us sneak a glance at where a quantum object – in this case a particle of light, called a photon – actually is when it is placed in a superposition of positions. And what the researchers predict is even more shocking and strange than the usual picture of this counterintuitive quantum phenomenon.
The classic illustration of a superposition – indeed, the central experiment of quantum mechanics, according to legendary physicist Richard Feynman – involves firing particles like photons through two closely spaced slits in a wall. Because quantum particles can behave like waves, those passing through one slit can ‘interfere’ with those going through the other, their wavy ripples either boosting or cancelling one another. For photons the result is a pattern of light and dark interference bands when the particles are detected on a screen on the far side, corresponding to a high or low number of photons reaching the screen.
Once you accept the waviness of quantum particles, there’s nothing so odd about this interference pattern. You can see it for ordinary water waves passing through double slits too. What is odd, though, is that the interference remains even if the rate of firing particles at the slits is so low that only one passes through at a time. The only way to rationalize that is to say each particle somehow passes through both slits at once, and interferes with itself. That’s a superposition.
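A minimal sketch of that arithmetic (with purely illustrative numbers): add the complex amplitudes for the two slits and square the result, and you get fringes; add the probabilities instead, as you must once the path is known, and the fringes are gone.

```python
# Toy double-slit arithmetic: amplitudes add (fringes), probabilities don't (no fringes).
# Wavelength, slit spacing and screen distance are just illustrative numbers.
import numpy as np

wavelength, slit_separation, screen_distance = 500e-9, 50e-6, 1.0
x = np.linspace(-0.02, 0.02, 9)                          # positions on the screen (m)
phase = 2 * np.pi * slit_separation * x / (screen_distance * wavelength)

fringes = np.abs(1.0 + np.exp(1j * phase)) ** 2          # oscillates between 0 and 4
no_fringes = np.abs(1.0) ** 2 + np.abs(np.exp(1j * phase)) ** 2   # flat 2.0 everywhere

for xi, p in zip(x, fringes):
    print(f"x = {xi:+.3f} m   intensity ~ {p:.2f}")
```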
To put it another way: when we ask the seemingly reasonable question “Where is the particle in a superposition?”, we’re using a notion of “where” inherited from our classical world, to which the answer can simply be “there”. But quantum mechanics is known now to be ‘nonlocal’, which means we have to relinquish the whole notion of locality – of “whereness”, you might say.
But that’s a hard habit to give up, which is why the ‘two places at once’ picture is commonly invoked to talk about quantum superpositions. Yet quantum mechanics doesn’t say anything about what particles are like until we make measurements on them. For the Danish physicist Niels Bohr, asking where the particle was in the double-slit experiment before it was measured has no meaning within quantum theory itself.
Why don’t we just look? Well, we can. We could put a detector in or just behind one slit that could register the passing of a particle without absorbing it. And in that case, the detector will show that sometimes the particle goes through one slit, and sometimes it goes through the other. But here’s the catch: there’s then no longer an interference pattern, but just the result we’d expect for particles taking one route or the other. Observing which route the particle takes destroys its ‘quantumness’.
This isn’t about measurements disturbing the particle, since interference is absent even in instances where a detector at one slit doesn’t see the particle, so that it ‘must’ have gone through the other slit. Rather, the ‘collapse’ of a superposition seems to be caused by our mere knowledge of the path.
We can try to be smarter. What if we wait until the particle has definitely passed through the slits before we measure the path? How could that delayed measurement affect what happened earlier at the slits themselves? But it does. In the late 1970s the physicist John Wheeler proposed a way of doing this using an apparatus called a Mach-Zehnder interferometer, a modification of the double-slit experiment in which a partial mirror creates a superposition of photons that seems to send them along two different paths before they are brought back together to interfere (or not).
The result was that, just as Bohr had predicted, it makes no difference if we delay the detection. Still superposition and interference vanish if we detect the path before we measure the photons. It is as if the particle ‘knows’ our intention to measure it later.
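A minimal sketch of the Mach-Zehnder logic (a toy 2×2-matrix construction of mine, not the actual apparatus of Wheeler’s proposal or of the experiments below): recombine the two paths and every photon exits at one port; extract which-path information in between and the two detectors go back to 50:50.

```python
# Toy Mach-Zehnder interferometer: amplitudes over the two paths, mixed by a
# 50:50 beamsplitter. With the superposition intact, interference sends every
# photon to one detector; "reading" the path in between destroys that.
import numpy as np

BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)        # 50:50 beamsplitter
photon_in = np.array([1.0, 0.0])             # photon enters along path 0

out = BS @ BS @ photon_in                    # two beamsplitters, nothing in between
print(np.abs(out) ** 2)                      # [0. 1.] -> all photons at detector 1

after_first_bs = BS @ photon_in              # now "measure" the path in between
detector_probs = np.zeros(2)
for path in (0, 1):
    prob_path = np.abs(after_first_bs[path]) ** 2
    collapsed = np.zeros(2, dtype=complex)
    collapsed[path] = 1.0
    detector_probs += prob_path * np.abs(BS @ collapsed) ** 2
print(detector_probs)                        # [0.5 0.5] -> interference gone
```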
Bohr’s argument that quantum mechanics is silent about ‘reality’ beyond what we can measure has long seemed deeply unsatisfactory to many researchers. “We know something fishy is going on in a superposition”, says physicist Avshalom Elitzur of the Israeli Institute for Advanced Research in Zichron Ya’akov. “But you’re not allowed to measure it”, he says – because then the superposition collapses. “This is what makes quantum mechanics so diabolical.”
There have been many attempts to develop alternative points of view to Bohr’s that restore an underlying reality in quantum mechanics – some description of the world before we look. But none seems able to restore the kind of picture we have in classical physics of objects that always have definite positions and paths.
One particular approach that aims to deduce something about quantum particles before their measurement is called the two-state-vector formalism (TSVF) of quantum mechanics, developed by Elitzur’s former mentor the Israeli physicist Yakir Aharonov and his collaborators. This postulates that quantum events are in some sense determined by quantum states not just in the past but also in the future: it makes the assumption that quantum mechanics works the same way both forwards and backwards in time. In this view, causes can seem to propagate backwards in time: there is retrocausality.
You don’t have to take that strange notion literally. Rather, in the TSVF you can gain retrospective knowledge of what happened in a quantum system by selecting the outcome: not, say, simply measuring where a particle ends up, but instead choosing a particular location in which to look for it. This is called post-selection, and it supplies more information than any unconditional peek at outcomes ever could, because it means that the particle’s situation at any instant is being evaluated retrospectively in the light of its entire history, up to and including measurement.
“Normal quantum mechanics is about statistics”, says Elitzur’s collaborator Eliahu Cohen: what you see are average values, or what is generally called an expectation value of some variable you are measuring. But by looking at when a system produces some particular, chosen value, you can take a slice through the probabilistic theory and start to talk with certainty about what went on to cause that outcome. The odd thing is that it then looks as if your very choice of outcome was part of the cause.
“It’s generally accepted that the TSVF is mathematically equivalent to standard quantum mechanics,” says David Wallace of the University of Southern California, a philosopher who specializes in interpretations of quantum mechanics. “But it does lead to seeing certain things one wouldn’t otherwise have seen.”
Take, for instance, the version of the double-slit experiment devised using the TSVF by Aharonov and coworker Lev Vaidman in 2003. The pair described (but did not build) an optical system in which a single photon can act as a ‘shutter’ that closes a slit by perfectly reflecting another ‘probe’ photon that is doing the standard trick of interfering with itself as it passes through the slits. Aharonov and Vaidman showed that, by applying post-selection to the measurements of the probe photon, we should be able to see that a shutter photon in a superposition can close both (or indeed many) slits at once. So you could say with confidence that the shutter photon really was both ‘here’ and ‘there’ at once [Y. Aharonov & L. Vaidman, Phys. Rev. A 67, 1–3 (2003)] – a situation that seems paradoxical from our everyday experience but is one aspect of the so-called nonlocal properties of quantum particles, where the whole notion of a well-defined location in space dissolves.
In 2016, Ryo Okamoto and Shigeki Takeuchi of Kyoto University implemented Aharonov and Vaidman’s proposal experimentally using apparatus based on a Mach-Zehnder interferometer [R. Okamoto & S. Takeuchi, Sci. Rep. 6, 35161 (2016)]. The ability of a photon to act as a shutter was enabled by a photonic device called a quantum router, in which one photon can control the route taken by another. The crucial point is that this interaction is cleverly arranged to be completely one-sided: it affects only the probe photon. That way, the probe photon carries away no direct information about the shutter photon, and so doesn’t disturb its superposition – but nonetheless one can retrospectively deduce that the shutter photon was definitely in the position needed to reflect the probe.
The Japanese researchers found that the statistics of how the superposed shutter photon reflects the probe photon matched those that Aharonov and Vaidman predicted, and which could only be explained by some non-classical “two places at once” behaviour. “This was a pioneering experiment that allowed one to infer the simultaneous position of a particle in two places”, says Cohen.
Now Elitzur and Cohen have teamed up with Okamoto and Takeuchi to concoct an even more ingenious experiment, which allows one to say with certainty something about the position of a particle in a superposition at a series of different points in time before any measurement has been made. And it seems that this position is even more odd than the traditional “both here and there”.
Again the experiment involves a kind of Mach-Zehnder set-up in which a shutter photon interacts with some probe photon via quantum routers. This time, though, the probe photon’s route is split into three by partial mirrors. Along each of those paths it may interact with a shutter photon in a superposition. These interactions can be considered to take place within boxes labeled A, B and C along the probe photon’s route, and they provide an unambiguous indication that the shutter particle was definitely in a given box at a specific time.
Because nothing is inspected until the probe photon has completed the whole circuit and reached a detector, there should be no collapse of either its superposition or that of the shutter photon – so there’s still interference. But the experiment is carefully set up so that the probe photon can only show this interference pattern if it interacted with the shutter photon in a particular sequence of places and times: namely, if the shutter photon was in both boxes A and C at some time t1, then at a later time t2 only in C, and at a still later time t3 in both B and C. If you see interference in the probe photon, you can say for sure (retrospectively) that the shutter photon displayed this bizarre appearance and disappearance among the boxes at different times – an idea Elitzur, Cohen and Aharonov proposed as a possibility last year for a single particle superposed into three ‘boxes’ [Y. Aharonov, E. Cohen, A. Landau & A. C. Elitzur, Sci. Rep. 7, 531 (2017)].
Why those particular places and times, though? You could certainly look at other points on the route, says Elitzur, but those times and locations are ones where, in this configuration, the probability of finding the particle becomes 1 – in other words, a certainty.
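Those probability-1 statements come from the rule that the TSVF uses for intermediate measurements conditioned on both preparation and post-selection, known as the Aharonov–Bergmann–Lebowitz (ABL) rule. Here is a sketch of it for the static “three-box” example from the earlier TSVF literature (not the specific time-dependent sequence of the experiment described above):

```python
# Sketch of the ABL rule in the static three-box example: pre-select
# (|A>+|B>+|C>)/sqrt(3), post-select (|A>+|B>-|C>)/sqrt(3), and ask where an
# intermediate look would have found the particle.
import numpy as np

A, B, C = np.eye(3)                        # "particle in box A/B/C" basis states
pre  = (A + B + C) / np.sqrt(3)            # pre-selected (prepared) state
post = (A + B - C) / np.sqrt(3)            # post-selected (finally detected) state

def abl_probability(box):
    """P(intermediate measurement finds the particle in `box`), given pre and post."""
    proj = np.outer(box, box)
    p_in  = abs(post @ proj @ pre) ** 2
    p_out = abs(post @ (np.eye(3) - proj) @ pre) ** 2
    return p_in / (p_in + p_out)

for name, box in zip("ABC", (A, B, C)):
    print(f"P(found in box {name}) = {abl_probability(box):.2f}")
# -> 1.00 for A, 1.00 for B, 0.20 for C
```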
So this thought experiment seems to lift part of the veil off a quantum superposition, and to let us say something definite beyond Bohr’s “Don’t ask” proscription. The TSVF opens up the story by considering both the initial and final states, which allows one to reconstruct what was not measured, namely what happens in between. “I like the way this paper frames questions about what is happening in terms of entire histories, rather than instantaneous states”, says physicist Ken Wharton of San Jose State University in California. “Talking about ‘states’ is an old pervasive bias, whereas full histories are generally far more rich and interesting.”
And the researchers’ interpretation of that intermediate history before measurement is extraordinary. The apparent vanishing of particles in one place at one time, and their reappearance in other times and places, suggests a new vision of what the underlying processes are that create quantum randomness and nonlocality. Within the TSVF, this flickering, ever-changing existence can be understood as a series of events in which a particle is somehow ‘cancelled’ by its own “counterparticle”, with negative energy and negative mass.
Elitzur compares this to the notion introduced by British physicist Paul Dirac in the 1920s that particles have antiparticles that can annihilate one another – a picture that seemed at first just a manner of speaking, but which soon led to the discovery that such antiparticles are real. The disappearance of quantum particles is not annihilation in this same sense, but it is somewhat analogous.
So while the traditional “two places at once” view of superpositions might seem odd enough, “it’s possible that a superposition is a collection of states that are even crazier”, says Elitzur. “Quantum mechanics just tells you about their average.” Post-selection then allows one to isolate and inspect just some of those states at greater resolution, he suggests. With just a hint of nervousness, he ventures to suggest that as a result, measurements on a quantum particle might be contingent on when you look even if the quantum state itself is unchanging in time. You might not find it here when you look – but had you looked a moment later, it might indeed have been there. Such an interpretation of quantum behaviour would be, Elitzur says, “revolutionary” – because it would entail a hitherto unguessed menagerie of real states underlying counter-intuitive quantum phenomena.
The researchers say that to do the actual experiment will require some refining of what quantum routers are capable of, but that they hope to have it ready to roll in three to five months. “The experiment is bound to work”, says Wharton – but he adds that it is also “bound to not convince anyone of anything, since the results are predicted by standard quantum mechanics.”
Elitzur agrees that this picture of a particle’s apparent appearance and disappearance at various points along the trajectory could have been noticed in quantum mechanics decades ago. But it never was. “Isn’t that a good indication of the soundness of the TSVF?” he asks. And if someone thinks they can formulate a different picture of “what is really going on” in this experiment using standard quantum mechanics, he says, “well, let them go ahead!”
Tuesday, April 24, 2018
More on the politics of genes and education
There was never any prospect that my article in New Statesman on genes, intelligence and education would wrap up everything so nicely that there was nothing left to be said. For one thing, aspects of the science are still controversial – I would have liked, among other things, to delve more deeply into the difficulties (impossibility, actually) of cleanly separating genetic from environmental influences on intelligence.
I was, I admit, somewhat hard on Toby Young, while wanting to absolve him from some of the kneejerk accusations that have come his way. He is not some swivel-eyed hard-right eugenicist, and indeed if I have given the impression that he is a crude social Darwinist, as Toby thinks I have, then I have given a wrong impression: his position is more nuanced than that. Toby has been rather gracious in his response in The Spectator.
OK, not entirely – but so it goes. I recognize the temptation to construct artificial narratives, and I fear Toby has done so in his discussion of my article in Prospect. I take his remark on my “bravery” in tackling this subject after writing that piece as a backhanded compliment that implies I was brave to return to a subject after I’d screwed up earlier. In fact, my Prospect piece was not primarily about genes and intelligence anyway. Yes, Stuart Ritchie had some criticisms about that particular aspect of it, but these centred on technical arguments about other studies in the field – in other words, on issues that the specialists themselves are arguing about. Other geneticists, including some who work on intelligence, saw and approved my article. To say that I had to “publish some ‘clarifications’” after “a lot of criticism” is misleading to a rather naughty degree. The reader is meant to infer that these are euphemistic ‘clarifications’, i.e. corrections made in response to errors pointed out. Actually I “published” nothing of the sort – what Toby is referring to are merely some comments I posted on my blog in response to the discussion.
As for the link to the criticisms made by Dominic Cummings: well, I recommend you read them. Not because they add anything of substance to the discussion, but because they are a reminder of what this man, who once wielded considerable behind-the-scenes political power and who has had an inordinate influence on the current predicament of the country, is really like. I still find it chilling.
What’s most striking about Toby’s piece, however, is how political it is. I don’t consider that a criticism, but rather, a vindication of one of the central points of my article in New Statesman: that while the science is fairly (if not entirely) clear, what one concludes from it is highly dependent on political leaning.
This includes a tendency to attribute ideas and views to your political opposites simply because of their persuasion. I must acknowledge the possibility that I did so with Toby. He returns the favour here:
“I suspect the popularity of the ‘personalised learning’ recommendation among the experts in this field – as well as Philip Ball – is partly because they don’t want to antagonise their left-wing colleagues.”
Actually I am sceptical about ‘personalized learning’ based on genetic intelligence measures, and said so in the article, since I see no evidence that they could be effective (although I’m open to the possibility that that might change). The aim of my article, Toby decided, was to reassure my fellow liberals that yes, genes do influence intelligence, but really it’ll be OK.
I find this bizarre – but not as bizarre as the view Toby attributes to Charles Murray, who seems to think that the “left” is either going to have a breakdown over genetic influences on traits or, worse, will decide to embrace genetic social engineering, using CRISPR no less, to eradicate innate differences in some sort of Brave New World scenario. If Murray really thinks that, his grasp of the science is as poor as some experts have said it is. And if in his alternative universe he finds a hard-left government trying to do such things anyway, he’ll find me alongside him opposing it.
You see, what we leftists are told we believe is that everyone is a blank slate, equal in all respects, until society kicks in with its prejudices and inequalities. And we denounce anything to the contrary as crypto-fascism. Steven Pinker, who has pushed the ‘blank slate’ as a myth of the left, weighed in on my article by commenting that even left-leaning magazines like New Statesman are now having to face up to the truth, as though my intention were to confess to past leftie sins of omission.
Now, I fully acknowledge that there have been hysterical reactions to ‘sociobiology’ and to suggestions that human traits may be partly genetically hardwired. And these have often come from the left – indeed, sometimes from the Marxian post-modern intellectuals who Pinker regards as the root of so many modern evils. But such denial is plain silly, and I’m not sure that many left-leaning moderates would disagree, or would be somehow too frightened to say so.
The caricatures Toby creates are grotesque. “It’s now just flat out wrong to think that varying levels of ability and success are solely determined by economic and historical forces”, he says. We agree – but does anyone seriously want to argue otherwise?
“That means it’s a dangerous fantasy”, he continues, “to think that, once you’ve eradicated socio-economic inequality, human nature will flatten out accordingly – that you can return to ‘year zero’, as the Khmer Rouge put it. On the contrary, biological differences between human beings will stubbornly refuse to wither away, which means that an egalitarian society can only be maintained by a brutally coercive state that is constantly intervening to ‘correct’ the inequities of nature.”
But most of us who would like to see an “egalitarian society” don’t mean by that a society in which absolute equality is imposed by the jackboot. We just want to see, for example, fewer people struggle against the inequalities they are born into, while others rise to power and influence on the back of their privileged background. We want to see less tolerance of, and even encouragement of, naked greed that exploits the powerless. We want to see more equality of opportunity. I think we accept that there can never be equality of outcome, at least without unjustified coercion. But we would also like to see reward more closely tied to contribution to society, not simply to what you can get away with. And in fact, while we will differ in degree and probably in methodology, I suspect that in these aspirations we liberal lefties are not so different from Toby Young.
In fact, evidently we do agree on this much:
“The findings of evolutionary psychologists, sociobiologists, cognitive neuroscientists, biosocial criminologists, and so on, [don’t] inevitably lead to Alan Ryan’s ‘apocalyptic conservatism’. On the contrary, I think they’re compatible with a wide range of political arrangements, including – at a pinch – Scandinavian social democracy.”
Which is why it’s baffling to me that Toby thinks we “progressive liberals” should be so disconcerted by the findings of genetics. Disconcerted by the discovery that traits, like height, are partly innate? Disconcerted that a society that tries to impose complete equality of ability on everyone will be a Stalinist dystopia? The implication here seems to be that science has disproved our leftwing delusions, and we’d better face up to that. But all it has ‘disproved’ is some wild, extreme fantasies and some straw men.
Such comments only reinforce my view that all this politicization of the debate gets in the way of actually moving it on. In my experience, the reason many educators and educationalists are not terribly enchanted with studies of the genetic basis of intelligence is not because they think it is some foul plot but because they don’t see it as terribly relevant. It doesn’t help them do their job any better. Now, if that leads them to actually deny the role of genes in intelligence, then they’re barking up the wrong tree. But I think many see it merely as a distraction from the business of trying to improve education. After all, so far genetics has offered next to no suggestions about how to do that – as I said in my article, pretty much all the sensible recommendations that Robert Plomin and Kathryn Asbury make in their book could have been made without the benefit of genetic studies.
Now, one way to read the implications of those studies is that there actually isn’t much that educationalists can do. Take the recent paper by Plomin and colleagues claiming that schools make virtually no additional contribution to outcomes beyond the innate cognitive abilities of their student intake. This is a very interesting finding, but there needs to be careful discussion about what it means. So we shouldn’t worry at all about Ofsted reports of “failing” schools? I doubt if anyone would conclude that, but then how is a school influencing outcomes? When a new head arrives and turns a school around, what has happened? Has the new head somehow just managed to alter the IQ distribution of the intake? I don’t know the answers to these things.
The authors of that paper are not so unwise as to conclude that (presumably beyond some minimal level of competence) “teaching makes no difference to outcomes”. But you can imagine others drawing that conclusion, and you can then understand why some teachers and educators express frustration with this sort of thing. For one thing, the differences teaching and teachers make are not always going to be registered in exam results. As things stood, I was always going to get A’s in my chemistry A levels – but it was the enthusiasm and advocacy of Dr McCarthy and Mr Heasman that inspired me to study the subject at university. I was probably always going to get an A in my English O level, but it was Ms Priske who encouraged me to read Mervyn Peake.
All too often, however, the position of right-leaning commentators on the matter can read like laissez-faire: tinker all you like but it’s not going to make much difference, because you well-meaning liberals are just going to have to accept that some pupils are smarter than others. (So why are Conservative education ministers so keen to keep buggering about with the curriculum?) And if you do manage to level the playing field, you’ll see that even more clearly. And then where will you be, eh, with all your Maoist visions?
I don’t think they really do think like this; at least I don’t think Toby does. I certainly hope not. But that’s why both sides have to stop any posturing about the facts, and get on with figuring out what to make of them. We already know not all kids will do equally well in exams, come what may. But how do we find those who could do better, given the right circumstances? How do we find ways of engaging those pupils with ability but not inclination? How do we find ways of helping those of lower academic ability feel fulfilled rather than discarded in the bottom set? How do we decide, for God’s sake, what is important in an education anyway? These are the kinds of hard questions that teachers and educators have to face every day, and it would be good to see if the knowledge we’re gaining about inherent cognitive abilities could be useful to them, rather than turning it into a political football.
Friday, April 13, 2018
The thousand-year song
In February I had the pleasure of meeting Jem Finer, the founder of the Longplayer project, to discuss the “music of the future” at this event in London. It seemed a perfect subject for my latest column for Sapere magazine on music cognition, where it will appear in Italian. Here it is in English.
______________________________________________________________
Most people will have experienced music that seemed to go on forever, and usually that’s not a good thing. But Longplayer, a composition by British musician Jem Finer, a founder member of the band The Pogues, really does. It’s a piece conceived on a geological timescale, lasting for a thousand years. So far, only 18 of them have been performed – but the performance is ongoing even as you read this. It began at the turn of the new millennium and will end on 31 December 2999. Longplayer can be heard online and at various listening posts around the world, the most evocative being a Victorian lighthouse in London’s docklands.
Longplayer is scored for a set of Tibetan singing bowls, each of which sounds in a repeating pattern determined by a mathematical algorithm that will not repeat any combination exactly until one thousand years have passed. The parts interweave in complex, constantly shifting ways, not unlike compositions such as Steve Reich’s Piano Phase in which repeating patterns move in and out of step. Right now Longplayer sounds rather serene and meditative, but Finer says that there are going to be pretty chaotic, discordant passages ahead, lasting for decades at a time – albeit not in his or my lifetime.
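To get a rough feel for the arithmetic behind that non-repetition, here is a toy sketch in Python – emphatically not Finer’s actual score logic, and with loop lengths I have simply invented for illustration. If each part cycles with a length that shares no common factor with the others, the combined texture only returns to its starting configuration after the least common multiple of all the cycle lengths, which even for a few short loops is a preposterously long time.

```python
from math import lcm

# Toy illustration only (not the real Longplayer algorithm): six hypothetical
# bowl parts, each looping with a cycle length that shares no common factor
# with the others. The joint pattern of all six only repeats after the least
# common multiple of the cycle lengths.

loop_lengths = [251, 367, 499, 631, 757, 883]   # invented lengths, in seconds

period_seconds = lcm(*loop_lengths)             # time until all loops realign
period_years = period_seconds / (60 * 60 * 24 * 365.25)

print(f"Joint pattern first repeats after ~{period_years:,.0f} years")
# With these made-up numbers the answer is hundreds of millions of years,
# vastly longer than the thousand-year span Longplayer actually needs.
```

The real algorithm is, I assume, considerably more subtle than bare coprime loops, but the underlying principle – incommensurate cycles that take an enormously long time to realign – is the same one the Reich comparison points to.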
The visual score of Longplayer. (Image: Jem Finer/Longplayer Foundation)
An installation of Tibetan prayer bowls used for Longplayer at Trinity Buoy Wharf, London Docks. (Photo: James Whitaker)
One way to regard Longplayer is as a kind of conceptual artwork, taking with a pinch of salt the idea that it will be playing in a century’s time, let alone a millennium. Finer, though, has careful plans for how to sustain the piece into the indefinite future in the face of technological and social change. There’s no doubt that performance is a strong feature of the project: live events playing part of the piece have been rather beautiful, the instruments arrayed in concentric circles that reflect both the score itself and the sense of planetary orbits unfurling in slow, dignified synchrony.
But if this all seems ritualistic, so is a great deal of music. I do think Longplayer is a serious musical adventure, not least in how it both emphasizes and challenges the central cognitive process involved in listening: our perception of pattern and regularity. Those are the building blocks of this piece, and yet they take place mostly beyond the scope of an individual’s perception, forcing us – as perhaps the pointillistic dissonance of Pierre Boulez’s total serialism does – to find new ways of listening.
More than this, though, Longplayer connects to the persistence of music through the “deep time” of humanity, offering a message of determination and hope. Tectonic plates may shift, the climate may change, we might even reinvent ourselves – but we will do our best to ensure that this expression of ourselves will endure.
A live performance of part of Longplayer at the Yerba Buena Center, San Francisco, in 2010. (Photo: Stephen Hill)
______________________________________________________________
Thursday, March 01, 2018
On the pros and cons of showing copy to sources - redux
Dana Smith has written a nice article for Undark about whether science journalists should or should not show drafts or quotes to their scientist sources before publication.
I’ve been thinking about this some more after writing the blog entry from which Dana quotes. One issue that I think comes out from Dana’s piece is that there is perhaps something of a generational divide here: I sense that younger writers are more likely to consider it ethically questionable ever to show drafts to sources, while old’uns like me, Gary Stix and John Rennie have less of a problem with it. And I wonder if this has something to do with the fact that the old’uns probably didn’t get much in the way of formal journalistic training (apologies to Gary and John if I’m wrong!), because science writers rarely did back then. I have the impression that “never show anything to sources” is a notion that has entered into science writing from other journalistic practice, and I do wonder if it has acquired something of the status of dogma in the process.
Erin Biba suggests that the onus is on the reporter to get the facts right. I fully agree that we have that responsibility. But frankly, we will often not get the facts right. Science is not uniquely hard, but it absolutely is hard. Even when we think we know a topic well and have done our best to tell it correctly, chances are that there are small, and sometimes big, ways in which we’ll miss what real experts will see. To suggest that asking the experts is “the easy way out” sounds massively hubristic to me.
(Incidentally, I’m not too fussed about the matter of checking out quotes. If I show drafts, it’s to check whether I have got any of the scientific details wrong. I often tend to leave in quotes just because there doesn’t seem much point in removing them – they are very rarely queried – but I might omit critical quotes from others to avoid arguments that might otherwise end up needing third-party peer review.)
Dana doesn’t go much into the arguments for why it is so terrible (in the view of some) to show your copy to sources. She mentions that some say it’s a matter of “journalistic integrity”, or just that it’s a “hard rule” – which makes the practice sound terribly transgressive. But why? The argument often seems to be, “Well, the scientists will get you to change your story to suit them.” To which I say, “Why on earth would I let them do that?” In the face of such attempts (which I’ve hardly ever encountered), why do I not just say, “Sorry, no”? Oh, but you’ll not be able to resist, will you? You have no will and judgement. You’re just a journalist.
Some folks, it’s true, say instead “Oh, I know you’ll feel confident and assertive enough to resist undue pressure to change the message, but some younger reporters will be more vulnerable, so it’s safer to have a blanket policy.” I can see that point, and am not unsympathetic to it (although I do wonder whether journalistic training might focus less on conveying the evils of showing copy to sources and more on developing skills and resources for resisting such pressures). But so long as I’m able to work as a freelancer on my own terms, I’ll continue to do it this way: to use what is useful and discard what is not. I don’t believe it is so hard to tell the difference, and I don’t think it is very helpful to teach science journalists that the only way you can insulate yourself from bad advice is to cut yourself off from good advice too.
Here’s an example of why we science writers would be unwise to trust we can assess the correctness of our writing ourselves, and why experts can be helpful if used judiciously. I have just written a book on quantum mechanics. I have immersed myself in the field, talked to many experts, read masses of books and papers, and generally informed myself about the topic in far, far greater detail than any reporter could be expected to do in the course of writing a news story on the subject. That’s why, when a Chinese team reported last year that they had achieved quantum teleportation between a ground base and a satellite, I felt able to write a piece for Nature explaining what this really means, and pointing out some common misconceptions in the reporting of it.
And I feel – and hope – I managed to do that. But I got something wrong.
It was not a major thing, and didn’t alter the main point of the article, but it was a statement that was wrong.
I discovered this only when, in correspondence with a quantum physicist, he happened to mention in passing that one of his colleagues had criticized my article for this error in a blog. So I contacted the chap in question and had a fruitful exchange. He asserted that there were some other dubious statements in my piece too, but on that matter I replied that he had either misunderstood what I was saying or was presenting an unbalanced view of the diversity of opinion. The point was, it was very much a give-and-take interaction. But it was clear that on this one point he was right and I was wrong – so I got the correction made.
Now, had I sent my draft to a physicist working on quantum teleportation, I strongly suspect that my error would have been spotted right away. (And I do think it would have had to be a specialist in that particular field, not just a random quantum physicist, for the mistake to have been noticed.) I didn’t do so partly because I had no real sources in this case to bounce off, but also partly because I had a false sense of my own “mastery” of the topic. And this will happen all the time – it will happen not because we writers don’t feel confident in our knowledge of the topic, but precisely because we do feel (falsely) confident in it. I cannot for the life of me see why some imported norm from elsewhere in journalism makes it “unethical” to seek expert advice in a case like this – not advice before we write, but advice on what we have actually written.
Erin is right to say that most mistakes, like mine here, really aren’t a big deal. They’re not going to damage a scientist’s career or seriously mislead the public. And of course we should admit to and correct them when they happen. But why let them happen more often than they need to?
As it happens, having said earlier that I very rarely get responses from scientists to whom I’ve shown drafts beyond some technical clarifications, I recently wrote two pieces that were less straightforward. Both were on topics that I knew to be controversial. And in both cases I received some comments that made me suspect their authors were wanting to somewhat dictate the message, taking issue with some of the things the “other side” said.
But this was not a problem. I thought carefully about what they said, took on board some clearly factual remarks, considered whether the language I’d used captured the right nuance in some other places, and simply decided I would respectfully decline to make any modifications to my text in others. Everything was on a case-by-case basis. These scientists were in return very respectful of my position. They seemed to feel that I’d heard and considered their position, and that I had priorities and obligations different from theirs. I felt that my pieces were better as a result, without my independence at all being compromised, and they were happy with the outcome. Everyone, including the readers, was better served as a result of the exchange. I’m quite baffled by how there could be deemed to be anything unethical in that.
And that’s one of the things that makes me particularly uneasy about how showing any copy to sources is sometimes presented not as an informed choice but as tantamount to breaking a professional code. I’ve got little time for the notion that it conflicts with the journalist’s mission to critique science and not merely act as its cheerleader. Getting your facts right and sticking to your guns are separate matters. Indeed, I have witnessed plenty of times the way in which a scientist who is being (or merely feels) criticized will happily seize on any small errors (or just misunderstandings of what you’ve written) as a way of undermining the validity of the whole piece. Why give them that opportunity after the fact? The more airtight a piece is factually, the more authoritative the critique will be seen to be.
I should add that I absolutely agree with Erin that the headlines our articles are sometimes given are bad, misleading and occasionally sensationalist. I’ve discussed this too with some of my colleagues recently, and I agree that we writers have to take some responsibility for this, challenging our editors when it happens. It’s not always a clear-cut issue: I’ve received occasional moans from scientists and others about a headline that didn’t quite get the right nuance, but which I thought weren’t so bad, and so I’m not inclined to start badgering folks about that. (I wouldn’t have used the headline that Nature gave my quantum teleportation piece, but hey.) But I think magazines and other outlets have to be open to this sort of feedback – I was disheartened to find that one that I challenged recently was not. (I should say that others are – Prospect has always been particularly good at making changes if I feel the headlines for my online pieces convey the wrong message.) As Chris Chambers has rightly tweeted, we’re all responsible for this stuff: writers, editors, scientists. So we need to work together – which also means standing up against one another when necessary, rather than simply not talking.
Sunday, February 04, 2018
Should you send the scientist your draft article?
The Twitter discussion sparked by this poll was very illuminating. There’s a clear sense that scientists largely think they should be entitled to review quotes they make to a journalist (and perhaps to see the whole piece), while journalists say absolutely not, that’s not the way journalism works.
Of course (well, I say that but I’m not sure it’s obvious to everyone), the choices are not: (1) Journalist speaks to scientist, writes the piece, publishes; or (2) Journalist speaks to scientist, sends the scientist the piece so that the scientist can change it to their whim, publishes.
What more generally happens is that, after the draft is submitted to the editor, articles get fact-checked by the publication before publication. Typically this involves a fact-checker calling up the scientist and saying “Did you basically say X?” (usually with a light paraphrase). The fact-checker also typically asks the writer to send transcripts of interviews, to forward email exchanges etc, as well as to provide links or references to back up factual statements in the piece. This is, of course, time-consuming, and the extent to which, and rigour with which, it is done depends on the resources of the publication. Some science publications, like Quanta, have a great fact-checking machinery. Some smaller or more specialized journals don’t really have much of it at all, and might rely on an alert subeditor to spot things that look questionable.
This means that a scientist has no way of knowing, when he or she gives an interview, how accurately they are going to be quoted – though in some cases the writer can reassure them that a fact-checker will get in touch to check quotes. But – and this is the point many of the comments on the poll don’t quite acknowledge – it is not all about quotes! Many scientists are equally concerned about whether their work will be described accurately. If they don’t get to see any of the draft and are just asked about quotes, there is no way to ensure this.
One might say that it’s the responsibility of the writer to get that right. Of course it is. And they’ll do their best, for sure. But I don’t think I’ll be underestimating the awesomeness of my colleagues to say that we will get it wrong. We will get it wrong often. Usually this will be in little ways. We slightly misunderstood the explanation of the technique, we didn’t appreciate nuances and so our paraphrasing wasn’t quite apt, or – this is not uncommon – what the scientist wrote, and which we confidently repeated in simpler words, was not exactly what they meant. Sometimes our oversights and errors will be bigger. And if the reporter who has read the papers and talked with the scientists still didn’t quite get it right, what chance is there that even the most diligent fact-checker (and boy are they diligent) will spot that?
OK, mistakes happen. But they don’t have to, or not so often, if the scientist gets to see the text.
Now, I completely understand the arguments for why it might not be a good idea to show a draft to the people whose work is being discussed. The scientists might interfere to try to bend the text in their favour. They might insist that their critics, quoted in the piece, are talking nonsense and must be omitted. They might want to take back something they said, having got cold feet. Clearly, a practice like that couldn’t work in political writing.
Here, though, is what I don’t understand. What is to stop the writer saying No, that stays as it is? Sure, the scientist will be pissed off. But the scientist would be no less pissed off if the piece appeared without them ever having seen it.
Folks at Nature have told me, Well sometimes it’s not just a matter of scientists trying to interfere. On some sensitive subjects, they might get legal. And I can see that there are some stories, for example looking at misconduct or dodgy dealings by a pharmaceutical company, where passing round a draft is asking for trouble. Nature says that if they have a blanket policy so that the writer can just say Sorry, we don’t do that, it makes things much more clear-cut for everyone. I get that, and I respect it.
But my own personal preference is for discretion, not blanket policies. If you’re writing about, say, topological phases and it is brain-busting stuff, trying to think up paraphrases that will accurately reflect what you have said (or what the writer has said) to the interviewee while fact-checking seems a bit crazy when you could just show the researcher the way you described a Dirac fermion and ask them if it’s right. (I should say that I think Nature would buy that too in this situation.)
What’s more, there’s no reason on earth why a writer could not show a researcher a draft minus the comments that others have made on their work, so as to focus just on getting the facts right.
The real reason I feel deeply uncomfortable about the way that showing interviewees a draft is increasingly frowned on, and even considered “highly unethical”, is however empirical. In decades of having done this whenever I can, and whenever I thought it advisable, I struggle to think of a single instance where a scientist came back with anything obstructive or unhelpful. Almost without exception they are incredibly generous and understanding, and any comments they made have improved the piece: by pointing out errors, offering better explanations or expanding on nuances. The accuracy of my writing has undoubtedly been enhanced as a result.
Indeed, writers of Focus articles for the American Physical Society, which report on papers generally from the Phys Rev journals, are requested to send articles to the papers’ authors before publication, and sometimes to get the authors to respond to criticisms raised by advisers. And this is done explicitly with the readers in mind: to ensure that the stories are as accurate as possible, and that they get some sense of the to-and-fro of questions raised. Now, it’s a very particular style of journalism at Focus, and wouldn’t work for everyone; but I believe it is a very defensible policy.
The New York Times explained its "no show" policy in 2012, and it made a lot of sense: it seems some political spokespeople and organizations were demanding quote approval and abusing it to exert control over what was reported. Press aides wanted to vet everything. This was clearly compromising to open and balanced reporting.
But I have never encountered anything like that in many years of science reporting. That's not surprising, because it is (at least when we are reporting on scientific papers for the scientific press) a completely different ball game. Occasionally I have had people working at private companies needing to get their answers to my questions checked by the PR department before passing them on to me. That's tedious, but if it means that what results is something extremely anodyne, I just won't use it. I've also found some institutions - the NIH is particularly bad at this - reluctant to let their scientists speak at all, so that questions get fielded by a PR person who responds with such pathetic blandness and generality that it's a waste of everyone's time. It's a dereliction of duty for state-funded scientific research, but that's another issue.
As it happens, just recently while writing on a controversial topic in physical chemistry, I encountered the extremely rare situation where, having shown my interviewees a draft, one scientist told me that it was wrong for those in the other camp to be claiming X, because the scientific facts of the matter had been clearly established and they were not X. So I said fine, I can quote you as saying “The facts of the matter are not X” – but I will keep the others insisting that X is in fact that case. And I will retain the authorial voice implying that the matter is still being debated and is certainly not settled. And this guy was totally understanding and reasonable, and respected my position. This was no more or less than I had anticipated, given the way most scientists are.
In short, while I appreciate that an insistence that we writers not show drafts to the scientists is often made in an attempt to save us from being put in an awkward situation, in fact it can feel as though we are being treated as credulous dupes who cannot stand up to obstruction and bullying (if it should arise, which in my experience it hasn’t in this context), or resist manipulation, or make up our own minds about the right way to tell the story.
There’s another reason why I prefer to ask the scientists to review my texts, though – which is that I also write books. In non-fiction writing there simply is not this notion that you show no one except your editor the text before publication. To do so would be utter bloody madness. Because You Will Get Things Wrong – but with expert eyes seeing the draft, you will get much less wrong. I have always tried to get experts to read drafts of my books, or relevant parts of them, before publication, and I always thank God that I did and am deeply grateful that many scientists are generous enough to take on that onerous task (believe me, not all other disciplines have a tradition of being so forthcoming with help and advice). Always when I do this, I have no doubt that I am the author, and that I get the final say about what is said and how. But I have never had a single expert reader who has been anything but helpful, sympathetic and understanding. (Referees of books for academic publishers, however – now that’s another matter entirely. Don’t get me started.)
I seem to be in a minority here. And I may be misunderstanding something. Certainly, I fully understand why some science writers, writing some kinds of stories, would find it necessary to refuse to show copy to interviewees before publication. What's more, I will always respect editors’ requests not to show drafts of articles to interviewees. But I will continue to do so, when I think it is advisable, unless requested to do otherwise.
Of course (well, I say that but I’m not sure it’s obvious to everyone), the choices are not: (1) Journalist speaks to scientist, writes the piece, publishes; or (2) Journalist speaks to scientist, sends the scientist the piece so that the scientist can change it to their whim, publishes.
What more generally happens is that, after the draft is submitted to the editor, articles get fact-checked by the publication before publication. Typically this involves a fact-checker calling up the scientist and saying “Did you basically say X?” (usually with a light paraphrase). The fact-checker also typically asks the writer to send transcripts of interviews, to forward email exchanges etc, as well as to provide links or references to back up factual statements in the piece. This is, of course, time-consuming, and the extent to which, and rigour with which, it is done depends on the resources of the publication. Some science publications, like Quanta, have a great fact-checking machinery. Some smaller or more specialized journals don’t really have much of it at all, and might rely on an alert subeditor to spot things that look questionable.
This means that a scientist has no way of knowing, when he or she gives an interview, how accurately they are going to be quoted – though in some cases the writer can reassure them that a fact-checker will get in touch to check quotes. But – and this is the point many of the comments on the poll don’t quite acknowledge – it is not all about quotes! Many scientists are equally concerned about whether their work will be described accurately. If they don’t get to see any of the draft and are just asked about quotes, there is no way to ensure this.
One might say that it’s the responsibility of the writer to get that right. Of course it is. And they’ll do their best, for sure. But I don’t think I’ll be underestimating the awesomeness of my colleagues to say that we will get it wrong. We will get it wrong often. Usually this will be in little ways. We slightly misunderstood the explanation of the technique, we didn’t appreciate nuances and so our paraphrasing wasn’t quite apt, or – this is not uncommon – what the scientist wrote, and which we confidently repeated in simpler words, was not exactly what they meant. Sometimes our oversights and errors will be bigger. And if the reporter who has read the papers and talked with the scientists still didn’t quite get it right, what chance is there that even the most diligent fact-checker (and boy are they diligent) will spot that?
OK, mistakes happen. But they don’t have to, or not so often, if the scientist gets to see the text.
Now, I completely understand the arguments for why it might not be a good idea to show a draft to the people whose work is being discussed. The scientists might interfere to try to bend the text in their favour. They might insist that their critics, quoted in the piece, are talking nonsense and must be omitted. They might want to take back something they said, having got cold feet. Clearly, a practice like that couldn’t work in political writing.
Here, though, is what I don’t understand. What is to stop the writer saying No, that stays as it is? Sure, the scientist will be pissed off. But the scientist would be no less pissed off if the piece appeared without them ever having seen it.
Folks at Nature have told me, Well sometimes it’s not just a matter of scientists trying to interfere. On some sensitive subjects, they might get legal. And I can see that there are some stories, for example looking at misconduct or dodgy dealings by a pharmaceutical company, where passing round a draft is asking for trouble. Nature says that if they have a blanket policy so that the writer can just say Sorry, we don’t do that, it makes things much more clear-cut for everyone. I get that, and I respect it.
But my own personal preference is for discretion, not blanket policies. If you’re writing about, say, topological phases and it is brain-busting stuff, trying to think up fact-checking paraphrases that accurately reflect what the interviewee has said (or what the writer has written about it) seems a bit crazy when you could just show the researcher the way you described a Dirac fermion and ask them if it’s right. (I should say that I think Nature would buy that too in this situation.)
What’s more, there’s no reason on earth why a writer could not show a researcher a draft minus the comments that others have made on their work, so as to focus just on getting the facts right.
The real reason I feel deeply uncomfortable about the way that showing interviewees a draft is increasingly frowned on, and even considered “highly unethical”, is, however, empirical. In decades of having done this whenever I could, and whenever I thought it advisable, I struggle to think of a single instance where a scientist came back with anything obstructive or unhelpful. Almost without exception they have been incredibly generous and understanding, and any comments they made have improved the piece: by pointing out errors, offering better explanations or expanding on nuances. The accuracy of my writing has undoubtedly been enhanced as a result.
Indeed, writers of Focus articles for the American Physical Society, which generally report on papers from the Phys Rev journals, are requested to send articles to the papers’ authors before publication, and sometimes to get the authors to respond to criticisms raised by advisers. And this is done explicitly with the readers in mind: to ensure that the stories are as accurate as possible, and that readers get some sense of the to-and-fro of questions raised. Now, it’s a very particular style of journalism at Focus, and wouldn’t work for everyone; but I believe it is a very defensible policy.
The New York Times explained its “no show” policy in 2012, and it made a lot of sense: it seems some political spokespeople and organizations were demanding quote approval and abusing it to exert control over what was reported. Press aides wanted to vet everything. This clearly compromised open and balanced reporting.
But I have never encountered anything like that in many years of science reporting. That’s not surprising, because it is (at least when we are reporting on scientific papers for the scientific press) a completely different ball game. Occasionally I have had people at private companies who needed to get their answers to my questions checked by the PR department before passing them on to me. That’s tedious, and if what results is something extremely anodyne, I just won’t use it. I’ve also found some institutions – the NIH is particularly bad at this – reluctant to let their scientists speak at all, so that questions are fielded by a PR person who responds with such pathetic blandness and generality that it’s a waste of everyone’s time. It’s a dereliction of duty for state-funded scientific research, but that’s another issue.
As it happens, just recently, while writing on a controversial topic in physical chemistry, I encountered the extremely rare situation where, having shown my interviewees a draft, one scientist told me that it was wrong for those in the other camp to be claiming X, because the scientific facts of the matter had been clearly established and they were not X. So I said fine, I can quote you as saying “The facts of the matter are not X” – but I will keep the others insisting that X is in fact the case. And I will retain the authorial voice implying that the matter is still being debated and is certainly not settled. And this guy was totally understanding and reasonable, and respected my position. This was no more or less than I had anticipated, given the way most scientists are.
In short: I appreciate that the insistence that we writers not show drafts to the scientists is often made in an attempt to save us from being put in an awkward situation. But in practice it can feel as though we are being treated as credulous dupes who cannot stand up to obstruction and bullying (should it arise – which, in this context, in my experience it hasn’t), resist manipulation, or make up our own minds about the right way to tell the story.