Some discussion stemming from Erwin Schrödinger’s birthday prompts me to set out briefly why his cat is widely misunderstood and is actually of rather limited value in truly getting to grips with the conundrums of quantum mechanics.
Schrödinger formulated the thought experiment during correspondence with Einstein in which they articulated what they found objectionable in the view of QM formulated by Niels Bohr and his circle (the “Copenhagen interpretation”, which should probably always be given scare quotes since it never corresponded to a unique, clearly adduced position). In that view, one couldn’t speak about the properties of quantum objects until they were measured. Einstein and Schrödinger considered this absurd, and in 1935 Schrödinger enlisted his cat to explain why. Famously, he imagined a situation in which the property of some quantum object, placed in a superposition of states, determines the fate of a cat in a closed box, hidden from the observer until it is opened. In his original exposition he spoke of how, according to Bohr’s view, the wavefunction of the system would, before being observed, “express this by having in it the living and the dead cat (pardon the expression) mixed or smeared out in equal parts.”
This was, even back then, more careful wording than the thought experiment is usually afforded today: it speaks specifically about the wavefunction, not about the cat. Even so, a key problem with Schrödinger’s cat, if taken literally as a thought experiment, is that it refers to no well-defined property. In principle, Schrödinger could have talked instead about a macroscopic instrument with a pointer that could indicate one of two states. But he wanted an example that was not simply hard to intuit – a pointer in a superposition of two states, say – but was semantically absurd. “Live” and “dead” are not simply two different states of being, but are mutually exclusive. Then the absurdity is all the more apparent.
But in doing so, Schrödinger undermined his scenario as an actual experiment. There is not even a single classical measurement, let alone a quantum state one can write down, that defines “live” or “dead”. Of course, it is not hard to find out if a cat is alive or dead – but it is very hard to identify a single variable whose measurement will allow you to fix a well-defined instant at which the cat goes from live to dead. Certainly, no one has the slightest idea how to write down a wavefunction for a live or dead cat, and it seems unlikely that we could even imagine what they might look like or what would distinguish them.
This is then not, at any rate, an experiment devised (as is often said) to probe the issue of the quantum-classical boundary. Schrödinger gives no indication that he was thinking about that, except for the fact that he wanted a macroscopic example in order to make the absurdity apparent. It’s now clear how hard it would be to think of a way of keeping a cat sufficiently isolated from the environment to avoid (near-instantaneous) decoherence – the process by which “quantumness” generally becomes “classical” – while being able to sustain it in principle in a living state.
Ignoring all this, popular accounts typically take the thought experiment as a literal one rather than as a metaphor. As a rule, they then go on to (1) misunderstand the nature of superpositions as being “in two states at once”, and (2) misrepresent the Copenhagen interpretation as making ontological statements about a quantum system before measurement, and thereby tell us merrily that, if Bohr and colleagues are right, “the cat is both alive and dead at the same time!”
My suspicion is that, precisely because it is so evocative, Schrödinger’s thought experiment does not merely suffer from these misunderstandings but invites them. And that is why I would be very happy to see it retired.
Of course, there is more discussion of all these things in my book Beyond Weird.
Tuesday, August 13, 2019
Thursday, April 25, 2019
A Place That Exists Only In Moonlight: a Q&A with Katie Paterson
I have a Q&A with Katie Paterson in the 25 April issue of Nature. There was a lot in Katie’s comments that I didn’t have room for there, so here is the extended interview. The exhibition is wonderful, though sadly it only runs for a couple more weeks. This is science-inspired art at its finest.
___________________________________________________________________________
Scottish artist Katie Paterson is one of the most scientifically engaged of contemporary artists. Her work has been described as “combining a Romantic sensibility with a research-based approach, conceptual rigour and coolly minimalist presentation.” It makes use of meteorites, astronomical observations, fossils and experiments in sound and light to foster a human engagement with scales in time and space that far exceed our everyday experience.
Many of her works have astronomical themes. All the Dead Stars depicts, on a sheet of black etched steel, the locations of around 27,000 stars that are no longer visible. For the Dying Star Letters (2011-) she wrote letters of condolence for every star newly recorded as having “died” – a task that got ever more challenging with advances in observing technologies. And History of Darkness (2010-) is an ongoing archive of slides of totally dark areas of the universe at different epochs and locations.
For Future Library (2014-2114), 100 writers including Margaret Atwood and David Mitchell will write stories (one has been commissioned each year since 2014) that will be kept in sealed storage until 2114, when they will be printed on paper made from 1,000 trees being planted in a forest in Norway. Paterson has said of the project that “it questions the present tendency to think in short bursts of time, making decisions only for us living now.”
Some of your works speak to concerns about degradation of the environment and the onset of the Anthropocene – Future Library, for example, and the Vatnajökull project (2007-8) that relays the live sound of meltwater flowing within an Icelandic glacier to listeners who dial in on mobile phones. Do you think that what can seem like an overwhelming problem of environmental change on scales that are hard to contemplate can be made tangible and intelligible through art?
Future Library has a circular ecology built into it: words become enmeshed in growing trees, which, fed by water and light, a century later will become books. It’s a gathering, and the trees spell out time. The artwork is made with simple materials, people, nature and words, and it’s connected to feelings and senses. The phone call I set up to the glacier was an intimate one-to-one experience; listening to a graveyard of ice. The crisis of global warming does not feel intimate when it’s screeching at us through screens and graphs – yet of course it is. Our planet is disappearing. Humans understand suffering, the cycle of birth and dying. We need a contemporary approach to what Stephen Hawking called ‘Cathedral thinking’: far-reaching vision that is humanly relatable.
David Mitchell sees an optimistic message in Future Library (as well as an exercise in trust): it is, he says, “a vote of confidence in the future. Its fruition is predicated upon the ongoing existence of Northern Europe, of libraries, of Norwegian spruces, of books and of readers.” How confident are you that the books will be made?
We have put many measures in place to ensure that the books will be made. Each tree is marked on a computerized system, and the foresters take great care. We are investigating the likely methods of making ink in 100 years’ time. The city of Oslo has taken this artwork to its heart, and even the king and queen of Norway are involved. We have a Trust whose mandate is to “compassionately sustain the artwork for its 100 year duration.” Yes, Future Library is an exercise in trust. This year’s author Han Kang described the project as having an undercurrent of love flowing through it. It concerns me, and certainly says something about our moment in time, that we even question whether it will be possible to make books in just 100 years. We have clearly reached a crisis.
You have said “Time runs through everything I make.” Your work deals with the scales of distance and time that astronomers and geologists have to consider routinely, but which far exceed human intuition. How can we cope with that?
I find professions that routinely deal with long timescales fascinating. For the foresters in Future Library, 100 years is normal. Geologists work across time periods where major extinctions become plots on a map. Astronomers work with spans of time that go beyond everything that has ever lived. However, this routineness may blur the immensity of the concepts at hand. All the same, we can unearth materials fallen from space and comprehend that they go back far beyond humanity’s time on earth. Our technologies are advanced enough to look to a time beyond the Earth’s existence, approaching the Big Bang. Humans have devised and created these images, yet they exceed our capacity to understand them.
For me the route to a different kind of understanding of time is through the imagination. That’s the space that provides the most freedom and openness. My art attempts to deal directly with concepts that I can’t get to otherwise. Perhaps mathematical languages enable something similar. My journey in astronomy has been a search for connection: understanding that we are not separate from the universe, but are intrinsically linked.
Your work Light Bulb to Simulate Moonlight (2008) does exactly what it says on the tin. The bulb was created in collaboration with engineers at OSRAM. Can you explain how it was made?
I approached Dieter Lang, innovation manager and lighting engineer at OSRAM, and asked him to adapt the methods they use to make ‘daylight bulbs’ to recreate moonlight. I wanted to create a whole lifetime of moonlight – a bulb that lasts the length of an average human life. Dieter took light measurements under a full moon in the countryside outside Munich. I’d always imagined the futility of trying to recreate something as ineffable as moonlight, yet I was happy with the result – the light bulbs burn very brightly, a yellowy-blue tinged light, which changes according to your distance from it, just like the moon.
Do you see projects like the “dead stars” works or History of Darkness as attempts to connect us to the vastness of deep space and time? Or might they in fact suggest the futility of trying to keep track of all that has happened in the observable cosmos?
It oscillates somewhere in between. History of Darkness has futility written into it, capturing infinite darkness from across space and time. Each slide could contain millions of worlds, and learning that these images refer to places beyond human life and even the Earth may expand our relationship to these phenomena, and enhance the sense of our fallibility. All the Dead Stars was made in 2009. I’d like to update it in years to come – it might become an expanse of white dots, as telescopes become even more powerful and abundant.
I’m always drawn to the idea of the universe as deep wilderness. No matter how extensive our research and advanced technologies become, we can never ever truly access the great beyond. I read that our ‘cosmic horizon’ is around 42 billion light years away. What lies beyond, whether finite or infinite, will forever remain outside our understanding. Creating artwork is as much my own way of grappling with the “divine incommensurability” of our position in the universe as it is an attempt to communicate it to others.
In Earth-Moon-Earth (Moonlight Sonata Reflected from the Surface of the Moon) (2007), you encoded Beethoven’s sonata in Morse code, broadcast it to the surface of the moon as radio waves, and reconstructed the partial score from the reflections. That evidently required some powerful technology. And in 2014 an ESA mission to the International Space Station enabled your project of returning a fragment of meteorite to earth orbit. How do these collaborations with scientific institutions come about?
Earth-Moon-Earth was created with “moon bouncer” radio enthusiasts: underground groups of people sending messages to each other via the moon. I simply wrote them letters. While studying at the Slade [art school in London] I wandered into the Rock & Ice Physics Laboratory next door [in University College London]. They allowed me to play my glacial ice records in their walk-in freezers. That was when I found out quite how easy it was to approach others in different fields. With the moonlight bulb I simply called round a number of lighting companies till I came across the right person. The map of the dead stars involved hundreds of researchers. Some scientists are far more involved than others, from sharing data (NASA gave me the recipe for the scent of Saturn’s moon) to developing the artworks very closely with me and my studio. [Astronomers] Richard Ellis and Steve Fossey have played an enormous role. I tend to approach people who are experts in niche fields, such as Type Ia supernovae, and I ask to draw on their specialization. It’s their passion, so they are generally receptive. This can be a chance to share their knowledge in a way that they haven’t been asked to before, that will become manifest in an artwork engaging with totally different audiences. Of course there can be bafflement, but so far it’s been overwhelmingly positive.
Recently, for the first time, researchers came to me. I received a message from a group of scientists working on a mission proposal to NASA, inviting me to join their team as a ‘space-artist/co-investigator’ inquiring into cosmic dust. I’m extremely happy about this, not only for the creative potential but because the scientists have shown genuine conviction that an artist might have something of value to contribute to their research. The group understands that art can be a way to share their knowledge through a different, more experiential, channel.
Your concepts clearly draw on – and indeed derive from – new scientific discoveries and techniques. For example, The Cosmic Spectrum (2019) is a large rotating colour wheel on which segments show the “average colour” of the Universe (as perceived by the human eye) from the Big Bang until the present, partly using data from the 2dF Galaxy Redshift Survey. How do you stay abreast of the latest scientific developments, and what do you tend to look for in them?
I discovered [astronomer] Ivan Baldry’s work on the cosmic spectrum several years ago. Many of my ideas sit on the back burner for years and manifest themselves at later stages. I don’t feel on top of scientific developments, but sometimes just one experience has enough potency to carry projects through years later.
I’m drawn to current investigations into the sunsets on Mars caught by NASA’s Mars Curiosity rover – but equally by botanical records from bygone eras, or the ray of light, built centuries ago into a Florentine cathedral, that marks the solstice. Sometimes just looking at titles on the shelves of science libraries can be enough to evoke compelling images. My inspirations have been wide and varied: from looking through telescopes to extremely distant galaxies, to tending a moss garden in a Zen monastery (a universe in itself). I’ve always drawn inspiration from artists, writers, musicians and thinkers whose work has a cosmic dimension: for example, raku ceramicists molding ‘the cosmos in a tea bowl’.
Some of your works exist only as the ongoing collection of ideas in the book A Place That Exists Only in Moonlight (2019). Occasionally they find a striking resonance with concepts that, for a cosmologist or physicist say, might almost seem like a thought experiment or research proposal: “A reset button for the universe pressed only once”, say, or “The speed of light slowed to absolute stillness”. Do you ever find that the scientists you collaborate with or encounter are inspired by your ideas into asking new questions or conducting new investigations themselves?
A Place that Exists Only in Moonlight arose out of a period of heavy production. I wanted to find a ‘lighter’ approach, which is the creative core of everything for me; just the ideas themselves. The book contains artworks to exist in the mind, many of which refer to suns, stars, moons, planets, earthly and cosmic matter. The cover is printed with cosmic dust: a mixture of moondust, dust from Mars, shooting stars, ancient meteorites and asteroids. I wanted the reader to be able to hold and touch the material the words describe, while taking them in. The ideas are like thought experiments, Zen koans, Gedankenexperimente. In a way that’s true of all my artworks. What time is it on Venus? What texts will be read by unborn people? Is it possible to plant a forest using saplings from the oldest tree on earth? Can we make ink to be read only under moonlight? I’m always curious. I will post copies of the book to everyone I have worked with, and I would be very happy indeed if they chose to conduct new investigations themselves.
A Place That Exists Only in Moonlight, an exhibition that pairs Paterson’s works with studies of light, sky and landscapes by J. M. W. Turner, is at the Turner Gallery in Margate, UK, until 6 May.

Monday, April 15, 2019
Out of the ashes of Notre Dame

There is no positive spin to put on the fire that has gutted Notre Dame Cathedral, and it would sound idiotic to suggest otherwise. This was one of the masterpieces of the Gothic era, a place where – as Napoleon allegedly said of Chartres – an atheist would feel uneasy (although this atheist instead felt moved and inspired). I don’t yet know the extent of the damage, but it is hard to imagine that the thirteenth-century northern rose window will have survived the inferno, or that the west front of the building, which has been called “one of the supreme architectural achievements of all time”, will emerge intact. Even if the building is eventually restored – and I am sure it will be – one might wonder what will be the point of a twenty-first-century facsimile, bereft of the spirit and philosophy that motivated the original construction.
And yet… The Gothic cathedrals already undermine notions of “authenticity”. In past ages, they weren’t seen as buildings that had to be maintained in some “pristine” state at all costs. Ever since they were erected, they were modified and redesigned, sometimes with very little care for their integrity. This happened at Notre Dame in the seventeenth century, when the flame of Gothic had long gone out. There was a fashion for plonking grotesque, kitsch marble sculptures in place of medieval statuary, which was indeed the fate of Notre Dame’s high altar. The vandalism went on through the eighteenth century – and that was even before the Revolutionaries did their worst, melting down metal bells, grilles and reliquaries and then using the cathedral as a kind of warehouse. The Gothic revival of Viollet-le-Duc in the nineteenth century had better intentions but not always better taste.
This was ever the way, even in the Middle Ages: bishops would decide that their cathedral had become old-fashioned, and would commission some new extension or renovation that as often as not ended up as a jarring clash of styles. The notion of conservation and a “respect for the old” simply didn’t exist.
And that’s even before we consider the ravages of unintentional damage. Many of the wonders of Gothic architecture only came about as a result of fire in the first place. That is how we got Chartres: thanks to a fire in 1194 that destroyed the building commissioned in the 1020s (after the cathedral before that was burnt down). The conflagration was devastating to the morale of the local people: according to a document written in 1210, they “considered as the totality of their misfortune the fact that they, unhappy wretches, in justice for their own sins, had lost the palace of the Blessed Virgin, the special glory of the city, the showpiece of the entire region, the incomparable house of prayer”. Yet look what they got in its place.
And they had no hesitation in putting a positive spin on it. Another early thirteenth-century account asserted that this was God’s will – or the Virgin’s – all along: “She therefore permitted the old and inadequate church to become the victim of the flames, thus making room for the present basilica, which has no equal throughout the entire world.”
And so it went on throughout the Middle Ages and beyond: the astonishing edifices of the Gothic masters fell or burnt down, got neglected or half-dismembered, were subjected to undignified “improvement”, were ransacked or, later, bombed. Chartres has had catastrophic fires too: no one seems now too bothered that the original roof and allegedly wonderful timberwork beneath it were consumed by flames in 1836, or that the replacement we see today was originally intended only to be temporary.
What happened today at Notre Dame is truly a tragedy. But we shouldn’t forget that these magnificent buildings have always been works in progress, always in flux. Perhaps, in mourning what was lost, we can see it as an opportunity to marvel again at the worldview that produced it: at the ambition, the imagination, the profound union of technical skill and philosophical and spiritual conviction. And we can consider it a worthy challenge to see if we can find some way of matching and honouring that vision.
Wednesday, December 12, 2018
How to write a science best-seller
Everyone knows how science writing works. Academic scientists labour with great diligence to tease nuanced truths from theory and experiment, only for journalists and popularizers to reduce them to simplistic sound bites for the sake of a good story.
I’ve been moved to ponder that narrative by the widespread appearance on Christmas science/non-fiction books lists of two books by leading science academics: Steven Pinker’s Enlightenment Now and Robert Plomin’s Blueprint. I reviewed both books at length in Prospect, and my feelings about both of them were surprisingly similar: they have some important and valuable things to say, but are both infuriating too in terms of what they fudge, leave out or misrepresent.
I won’t recapitulate those views here. Plomin has taken some flak for the genetic determinism that his book seems to encourage – most recently from Angela Saini in the latest Prospect, whose conclusion I fully endorse: “Scientists… should concentrate on engaging with historians and social scientists to better understand humans not as simple biological machines but as complex, social beings.” Pinker has been excoriated in one or two places (most vigorously, and some would say predictably, by John Gray) for using the “Enlightenment” ahistorically as a concept to be moulded at will to fit his agenda (not to mention his simplistic and obsolete characterization of Nietzsche).
What both books do is precisely what the caricature of science journalism above is said to do, albeit with more style and more graphs: to eschew nuance and caveats in order to tell a story that is only partly true.
And here’s the moral: it works! By delivering a controversial message in this manner, both books have received massive media attention. If they had been more careful, less confrontational, more ready to tell a complex story, I very much doubt that they would have been awarded anything like as much coverage.
Now, my impression here – having spoken to both Pinker and Plomin – is that they both genuinely believe what they wrote. Yes, Pinker did acknowledge that he was using a simplified picture of the Enlightenment for rhetorical ends, and in conversation Plomin and I were broadly in agreement most of the time about what genetic analyses do and don’t show about human behaviour. But I don’t think either of them was setting out cynically to present a distorted message in order to boost book sales. What seems to be happening here is more in the line of a tacit collusion between academics keen to push a particular point of view (nothing wrong with that in itself) and publishers keen to see an eye-catching and controversial message. And we have, of course, been here before (The God Delusion, anyone?).
Stephen Hawking’s book Brief Answers to the Big Questions was also a popular book choice for 2018 that, in a different way, often veered towards the reductively simplistic, though it seemed to fall only to me (so far as I was able) and my esteemed colleague Michael Brooks to point that out in our reviews.
It seems, then, increasingly to be the job of science writers and critics, like Angela and Michael, to hold the “specialists” to account – and not vice versa.
I could nobly declare that I decline to adopt such a tactic to sell my own books. But the truth is that I couldn’t do it even if I wanted to. My instincts are too set against it. For one thing, it would cause me too much discomfort, even pain, to knowingly ignore or cherry-pick historical or scientific facts (which isn’t to say that I will sometimes get them wrong), or to decline to enter areas of enquiry that might dilute a catchy thesis. But perhaps even more importantly, I would find simplistic narratives and theses to be just a bit too boring to sustain me through a book project. What interests me is not winning some constructed argument but exploring ideas – including the fascinating ideas in Enlightenment Now and Blueprint.
Wednesday, October 31, 2018
Musical illusions
Here's the English version of my column on music cognition for the current issue of the Italian science magazine Sapere.
_____________________________________________________________
“In studying [optical] illusions”, writes Kathryn Schulz in her book Being Wrong, “scientists aren’t learning how our visual system fails. They are learning how it works.” What Schulz means is that normal visual processing is typically a matter of integrating confusing information into a plausible story that lets us navigate the world. Colour constancy is a good example: the brain “corrects” for variations in brightness so that objects don’t appear to change hue as the lighting conditions alter. The famous “checkerboard shadow” illusion devised by vision scientist Edward Adelson fools this automatic recalibration of perception.
Adelson’s checkerboard illusion. The squares A and B are the same shade of grey.
In this regard as in many others, auditory perception mirrors the visual. The brain often rearranges what we hear to create something that “makes more sense” – with the same potential for creating illusions. Psychologist of music Diana Deutsch has delved deeply into the subject of musical illusions, some of which are presented on a CD released by Philomel in 1995. Several of these have to be heard through stereo headphones: they deliver different pitches to the left and right ears, which the brain reassigns to create coherence. For example, in the “scale illusion” the notes of two simultaneous scales – one ascending, one descending – are sent alternately to each ear. But what one hears is a much simpler pattern: an ascending scale in one ear, descending in the other. Here the brain is choosing to perceive the more likely pattern, even though it’s wrong.
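The channel assignment behind the scale illusion can be sketched in a few lines of code (the MIDI note numbers here are illustrative, not Deutsch's actual stimuli): alternate notes of the ascending and descending scales are swapped between the ears, so what each ear actually receives is a jagged zig-zag, yet listeners report one smooth scale per ear.

```python
# Sketch of the "scale illusion" stimulus: two simultaneous C-major scales,
# one ascending and one descending, with successive notes alternated
# between the left and right ears. (Illustrative MIDI note numbers,
# not Deutsch's exact recording.)

ascending  = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, going up
descending = [72, 71, 69, 67, 65, 64, 62, 60]  # C major, going down

left, right = [], []
for i, (up, down) in enumerate(zip(ascending, descending)):
    if i % 2 == 0:          # even beats: ascending note to the left ear
        left.append(up)
        right.append(down)
    else:                   # odd beats: the ears swap
        left.append(down)
        right.append(up)

print(left)   # what the left ear actually receives: a zig-zag
print(right)  # likewise for the right ear
# Listeners nonetheless report a smooth ascending scale in one ear
# and a smooth descending scale in the other.
```

Running this shows the left ear receiving [60, 71, 64, 67, 67, 64, 71, 60] – nothing like a scale – which is precisely what the brain declines to perceive.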
Another example reveals the limitations of pitch perception. A familiar tune (Yankee Doodle) is played with each note assigned a random octave. It sounds incomprehensible. The test shows that, in deciphering melody, we attend not so much to absolute pitch class (a C or D, say) as to relative pitch: how big pitch jumps are between successive notes. Arnold Schoenberg’s twelve-tone serialism ignored this, which is why the persistence of his “tone rows” is often inaudible.
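The octave-scrambling test is easy to sketch too. In this toy version (an approximate rendering of the tune's opening, not Deutsch's exact stimulus), each note keeps its pitch class but is shunted to a random octave: the pitch classes survive intact, while the intervals between successive notes – the cue we actually use to recognize melodies – are destroyed.

```python
import random

# Sketch of the octave-scrambling test: keep each note's pitch class
# (C, D, E...) but assign it a random octave. (Illustrative melody,
# not Deutsch's exact stimulus.)

random.seed(1)

melody = [60, 60, 62, 64, 60, 64, 62]  # opening of "Yankee Doodle" in MIDI

def scramble_octaves(notes):
    """Shift each note to a random octave, preserving its pitch class."""
    return [n % 12 + 12 * random.randint(3, 7) for n in notes]

def intervals(notes):
    """Jumps between successive notes: the cue melody recognition relies on."""
    return [b - a for a, b in zip(notes, notes[1:])]

scrambled = scramble_octaves(melody)

# Pitch classes are identical...
assert [n % 12 for n in melody] == [n % 12 for n in scrambled]

# ...but the interval sequence is (almost certainly) mangled,
# so the tune becomes unrecognizable.
print(intervals(melody))
print(intervals(scrambled))
```

The same point explains the Schoenberg remark: a tone row transposed across octaves preserves pitch classes but not contour, which is why its recurrence is so hard to hear.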
Perhaps the strangest thing about optical illusions is that we enjoy them, even if – indeed, because – we find them perplexing. Instead of being upset by the brain’s inability to “get it right”, we are apt to laugh – not a common response to wrongness, although it’s actually how a lot of comedy works. You might, then, expect to find musical illusions put to pleasurable use in music, especially by jokers like Mozart. But they are rather rare, maybe because we simply won’t notice them unless we see the score. Something like the scale illusion is used, however, in the second movement of Rachmaninov’s Second Suite for Two Pianos, where two sets of seesawing notes on each piano are heard as two sets of single repeated notes. It seems likely that Rachmaninov (not noted for jocularity) wasn’t just having fun – it’s merely easier to play these rapid quavers using pitch jumps rather than on the same note.
_____________________________________________________________
Monday, October 29, 2018
Why brief answers are sometimes not enough
I reviewed Stephen Hawking's last book Brief Answers to the Big Questions for New Scientist, but it needed shortening and, in the print version, didn't come out as I'd intended. Here's the original.
_____________________________________________________________________
Most people as famous as Stephen Hawking have their character interrogated with forensic intimacy. But Hawking’s personality was in its way as insulated as the Queen’s, impermeably fortified by the role allotted to him. There’s a hint in Brief Answers that he knew this: “I fit the stereotype of a disabled genius”, he writes. Unworldly intelligence, a wry sense of humour, and tremendous resilience against adversity: that seemed to suffice for the celebrity in the wheelchair with the computerized voice (itself another part of the armour, of course).
It made me uneasy though. The public Hawking was that stereotype, and while it was delightful to see how he demolished the does-he-take-sugar laziness that links physical with mental disability, he did so only by taking matters to the other extreme ("such a mind in such a body!"). It perhaps suited Hawking that the media were content with the cliché – he didn’t give much impression of caring for the touchy-feely. (Eddie Redmayne, who played Hawking in the 2014 biopic The Theory of Everything, reminds us in his foreword that the physicist would have preferred the film to have “more physics and fewer feelings”.) But his story suggests we still have some way to go in integrating people with disabilities into able-bodied society.
I approached this book, a collection of Hawking’s later essays on “big questions”, with some trepidation. You know you won’t go wrong with the cosmology, relativity and quantum mechanics, but in other areas, even within science, it’s touch and go. The scientific essays supply a series of now-familiar Greatest Hits: his work with Roger Penrose on gravitational singularities and their relation to the Big Bang; his realization that black holes will emit energy (Hawking radiation) from their event horizons; his speculations about the origin of the universe in a chance quantum fluctuation; the debate – still unresolved – about whether black holes destroy information. Hawking, as Kip Thorne reminds us in his introduction, helped to integrate several of the central concepts of physics: general relativity, quantum mechanics, thermodynamics and information theory. It’s a phenomenal body of work.
Sometimes there’s a plainness to his prose that can be touching even while it sounds like an anodyne self-help manual: “Be brave, be curious, be determined, overcome the odds. It can be done.” Who would argue with Hawking’s right to that sentiment? His plea for the importance of inspirational teaching, his concerns about climate change and environmental degradation, his contempt for Trump and the regressive aspects of Brexit, and (albeit not here) his championing of the NHS, sometimes made you glad to have Hawking on your side. People listened.
A common danger with collections of this kind is repetition, which the editors have been curiously unconcerned to avoid. But the recurring and familiar passages are in themselves quite telling, for they show Hawking curating his image: the boy who was always taking things apart but not always managing to put them back together again, the man who told us to “look up at the stars and not down at your feet.”
There’s no doubt that Hawking cared passionately about the future of humankind and the potential of science to improve it. His advocacy resembles the old-fashioned boosterism into which H. G. Wells often strayed in later life, tempered like Wells by an awareness of the destructive potential of technologies in malicious or plain foolish hands. But what are Hawking’s resources for developing that agenda? One of the most striking features of this book is the lack of extra-curricular references – to art, music, philosophy, literature, say. This would not matter so much (though it’s a bit odd) if it were not that the scope of some of the pieces exposes these gaps painfully.
Beginning an essay called “Is There a God?” by saying that “people will always cling to religion, because it gives comfort, and they do not trust or understand science” tells you pretty much what to expect from it, and you’d not be wrong. God, as no theologian said ever, is all about explaining the origin of the universe. And most people, Hawking tells us, define God as “a human-like being, with whom one can have a personal relationship.” I suspect “most people’s” views of what a molecule or light is would bear similarly scant resemblance to what well-informed folks say on the matter, but I doubt Hawking would give those views precedence.
As for history, try this: “People might well have argued that it was a waste of money to send Columbus on a wild goose chase. Yet the discovery of the New World made a profound difference to the Old. Just think, we wouldn’t have had the Big Mac or KFC.” The lame joke might have been just about tolerable if one didn’t sense it is there because Hawking could think of nothing to put in its place. This remark, as you might guess, is part of a defence of human space exploration, during which Hawking demonstrates no more inclination to probe the real reasons for the space race in the 1960s than he does to examine what Columbus was all about. He feels that the human race has no future if we don’t colonize space, although it isn’t clear why his generally dim view of our self-destructive idiocies becomes so rosy once we are on other worlds. Maybe the answer lies with the fact that here, as elsewhere, his main point of reference is Star Trek. But I suspect he knew he was preaching to the converted, so that mere assertion (“We have no other option”) was all he needed in lieu of argument.
There’s a glib insouciance to some of the other scientific speculations too. “If there is intelligent life elsewhere”, he writes, “it must be a very long way away otherwise it would have visited earth by now. And I think we would’ve known if we had been visited; it would be like the film Independence Day.” Assertion again replaces explanation in Hawking’s assumption apropos artificial intelligence that the human brain is just like a computer, as if this were not hotly disputed among neuroscientists. Here too, his vision seems mainly informed by the science fiction within easiest reach: his fears for the dangers of AI conjure up the Terminator series’ Skynet and tropes of supercomputers declaring themselves God and fusing the plug. Science fiction has plenty to tell us about our fears of the present, but probably rather less about the realities of the future.
It is best, too, not to rely on Hawking’s history of science, which for example parrots the myth of Max Planck postulating the quantum to avoid the ‘ultraviolet catastrophe’ of blackbody radiation. (Planck did not mention it.) Don’t expect more than the usual clichés: here comes Feynman, playing the bongos in a strip joint (what a guy!), there goes Einstein riding on a light wave.
This is all, in a sense, so very unfair. Hawking was a great scientist who had a remarkable life, but in another universe without motor neurone disease (well, he did like the Many Worlds interpretation of quantum mechanics) we’d have no reason to confer such authority on his thoughts about all and sundry, or to notice or care that he entered the peculiar time-warp that is Stringfellows “gentlemen’s club”. We would not deny him the right to his ordinariness, and we would see his occasional brash arrogance and egotism for no more or less than they are.
There’s every reason to believe that Hawking enjoyed his fame, and that’s a cheering thought. The Hawking phenomenon is our problem, not his. He liked to remind us that he was born on the same date that Galileo died, but it’s Brecht’s Galileo that comes to mind here: to paraphrase, unhappy is the land that needs a guru.
_____________________________________________________________________
Thursday, September 13, 2018
The "dark woman of DNA" goes missing again
A curious incident took place at the excellent "Schrödinger at 75: The Future of Life" meeting in Dublin last week, and I’ve been pondering it ever since.
One of the eminent attendees was James Watson, who was, naturally, present at the conference dinner. And one of the movers behind the meeting gave an impromptu (so it seemed) speech that acknowledged Watson’s work with Crick and its connection to Schrödinger’s “aperiodic crystal.” Fair enough.
Then he added that he wanted to recognize also the contribution of the “third man” of DNA, Maurice Wilkins – and who could cavil at that, given Wilkins’ Dublin roots? Wilkins, after all, was another physicist-turned-biologist who credited Schrödinger’s book What Is Life? as an important influence.
I imagined at this stage we might get a nod to the “fourth person” of DNA, Rosalind Franklin, whose role was also central but was of course for some years under-recognized. But no. Instead the speaker described how it was when Wilkins showed Watson his X-ray photo of DNA that Watson became convinced crystallography could crack the structure.
You could hear a ripple go around the dining hall. Wilkins’ photo?! Wasn’t it Franklin’s photo – Photo 51 – that provided Watson and Crick with the crucial part of the puzzle?
Well, yes and no. It doesn’t seem too clear who actually took Photo 51, and it seems more likely to have been Franklin’s student Ray Gosling. Neither is it completely clear that this photo was quite so pivotal to Watson and Crick’s success. Neither, indeed, is it really the case that Wilkins did something terribly unethical in showing Watson the photo (which was in any event from the Franklin-Gosling effort), given that it had already been publicly displayed previously. Matthew Cobb examines this part of the story carefully and thoroughly in his book Life’s Greatest Secret (see also here and here).
But nevertheless. Watson’s appalling treatment of Franklin, the controversy about Photo 51, and the sad fact that Franklin died before a Nobel became a possibility, are all so well known that it seemed bizarre, to the point of confrontational, to make no mention of Franklin at all in this context, and right in front of Watson himself to boot.
I figured that the attribution of “the photo” to Wilkins was so peculiar that it must have some explanation other than error or denial. I don’t know the details of the story well enough, but I told myself that the speaker must have been referring to some other, earlier occasion when Wilkins had shown Watson more preliminary crystallographic work of his own that persuaded Watson this was an avenue worth pursuing.
And perhaps that is true – I simply don’t know. But if so, to refer to it in this way, when everyone is going to think of the notorious Photo 51 incident, is at best perverse and at worst a deliberate provocation. Even Adam Rutherford, sitting next to me, who knows much more about the story of DNA than I or most other people do, was confused about what the speaker could possibly have meant.
Well, with Franklin’s name still conspicuous by its absence, Watson stood up to take a bow, which prompts me to make a request of scientific meeting and dinner organizers. Please do your attendees the favour of not forcing them to have to decide whether to reluctantly applaud Watson or join the embarrassed cohort of those who feel they can no longer do so in good conscience.
Friday, September 07, 2018
What Is Life? Schrödinger at 75
The conference “Schrödinger at 75: The Future of Life” in Dublin, from which I’m now returning, was a fabulous event, packed with good talks equally from eminent folks (including several Nobel laureates) and young rising stars. Ostensibly an exploration of the legacy of Erwin Schrödinger’s influential 1944 book What Is Life?, based on the lectures he gave 75 years ago as director of physical sciences at the Dublin Institute for Advanced Study (on which, more here), it was in fact largely a wonderful excuse to get a bunch of very smart people in the same hall to talk about many areas of the life (and chemical) sciences today and to speculate about what the future holds for them. I think I took away something interesting from every talk.
There was of course much dutiful nodding towards Schrödinger’s book, and also to some of his writing elsewhere, especially his essays in Mind and Matter (1958), where he offered some speculations about mind and consciousness (about half of the speakers worked on aspects of brain, mind and cognition). This didn’t seem merely tokenistic to me – I felt that all the speakers who mentioned Schrödinger had a genuine respect for his ideas. This is all the more interesting given that, as I say in my Nature piece, there wasn’t in some ways a great deal that was truly new and productive of further research in the book. Of course, what gets mentioned most is Schrödinger’s reference to a “code-script” that governs life and which is inherited, and his suggestion that this is encoded in the chromosomes as an “aperiodic crystal”. That image certainly resonated with Francis Crick, who wrote to Schrödinger in 1953 to tell him so.
But the idea of a “code”, as well as the notion that it could be replicated in a manner reminiscent of the ‘templating’ of structure in a crystal, were not really new. It seems rather to be something about the way Schrödinger expressed this idea that mattered, and indeed I can see why: his book is beautifully written, achieving persuasive force without seeming like the imposition of an arrogant physicist.
All of this I enjoyed. But what I missed was a historical presentation that could have put these tributes to What Is Life? in context. There was, for instance, a sense of unease about Schrödinger’s references to “order” and “organization”. What exactly was he getting at here? One suggestion was that “order” here was standing in for that crucial missing word: “information”. But this isn’t really true. Schrödinger’s “code-script” was presented as the means by which an organism’s “organization” is maintained, although quite how it does so he found wholly mysterious, even if the inter-generational transmission of the script by the “aperiodic crystal” was far less so.
What we need to know here is that “organization” had become a biological power-word, a symbol of what it was about living systems that distinguishes them from non-living. In the early nineteenth century this unique property of life was conferred by élan vital in the formulation of vitalism. As vitalism waned, it had to become something more tangible and physical. Some believed, like Thomas Henry Huxley, that the key was a special chemical composition, which made up the stuff of “protoplasm”, the primal living substance from which all life was descended. But as the chemical complexity and heterogeneity of living matter became apparent from the work of late nineteenth-century physiologists, and as the cell came to be seen as the fundamental unit of life, the idea arose that life was distinguished by some peculiar state of “organization” below the level that microscopes could resolve. There were a few tantalizing glimpses of this subcellular organization, for example in the stained chromosome fibres and organelles like the nucleus and mitochondria. These were, however, nothing but blurry blobs, offering no real clue about how their (presumably) molecular nature gave them the apparent agency that distinguished life.
And so, as Andrew Reynolds has shown, “order” and “organization” served a role that was barely more than metaphorical, patching over an ignorance about “what is life”. There’s nothing deplorable about that; it’s the kind of thing science must do all the time, giving a name to an absence of understanding so that it can be contained and built into contingent theories. But for Schrödinger to still be using it in the 1940s shows how his biological reading was rather archaic, for by that stage it had already become apparent that cell physiology relies on enzyme action, and crystallographers like J. Desmond Bernal and Bill Astbury were beginning to apply X-ray crystallography to these proteins to understand their structure. Sure, the origins and nature of the “organization” that cells seemed to exhibit were still pretty obscure, but it was getting less necessary to invoke that nebulous concept.
There were also suggestions at the Dublin meeting that Schrödinger’s “order” was what he meant with his talk of “negative entropy”. There’s some justification to think that, but Schrödinger wasn’t just thinking about how cells prevent their “organization” from falling into entropic disarray. He was puzzled by how this organization could exist in the first place. I don’t think one can really understand his discussion of order and entropy in What Is Life? unless one recognizes that many physical scientists in the early twentieth century considered the molecular world to be fundamentally random. It seems remarkable to me that no accounts of What Is Life? that I have seen refer to Schrödinger’s 1944 essay in Nature on “The Statistical Law in Nature”, where it is almost as if Schrödinger is telling us: ‘this is what I’m thinking about in my book’. The article is a paean to Ludwig Boltzmann, whose influence Schrödinger felt strongly in his early years in Vienna. Schrödinger seems to assert here that there are no laws in nature that do not rely on the statistical averaging over the behaviours of countless microscopic particles. It would have seemed all but meaningless then to suppose that one could speak about law-like, deterministic behaviour at the level of individual molecules, and quantum mechanics had seemed only to confirm this. That is what puzzled Schrödinger so much about the apparent persistence of phenotypic traits that seemed necessarily to arise from the specific details of genes at the molecular scale.
As a consequence, What Is Life? reads a little weirdly to chemists today, as indeed it did even to chemists of the 1950s, to whom the notion that a complex molecule can adopt and sustain a particular structure even in the face of thermal fluctuations seemed unproblematic. Schrödinger’s invocation of quantum mechanics to explain this phenomenon looks rather laboured now, and is quite possibly a part of what irritated Linus Pauling and Max Perutz about the book. It’s also why Schrödinger seems so keen to cement the structure of the gene in place as a “solid”, rather than simply regarding it as a large molecule carrying a linear code.
And what about that code itself? This wasn’t interrogated at the meeting, which was a shame. Indeed, speakers sometimes still granted it the almighty agency that Schrödinger himself gave it. It rather astonishes me to see how the claim that the genome contains “all the information you need to make the organism” raises no eyebrows. What surprises me is that scientists are typically a rather sceptical crowd, and demand evidence to support the claims they make. But there is, to my knowledge, no evidence whatsoever that one can make even the simplest organism, let alone a human, from the information contained in the genome. Oh, but surely you can? You can (in principle, and now in some cases in practice) just make the genome from scratch, put it in a cell, and off it goes… Wait. Put it in a cell? So you need a cell to actually enact the “code-script”? Well sure, but the cell goes without saying, right?
Metaphors in biology are always imperfect and often treacherous, but I think this one (a simile, really) has some mileage: saying that the genome is the complete blueprint for an organism is a bit like saying that the Oxford English Dictionary is a blueprint for King Lear. It’s all in there, right? Ok, there’s a lot in there that you don’t need for Lear, but then there’s a lot of junk in the genome too (perhaps!). Sure, to get Lear out of the OED you need to feed the words into William Shakespeare, but Shakespeare goes without saying, right?
For a human, it’s still more complicated. Human cells can of course replicate in a culture medium, but none has ever replicated into an embryo, let alone a person. What they can do – what some induced stem cells can do – is proliferate into an embryoid, an organoid with embryo-like structures. But that won’t make a human. For that, you need not only a cell but a uterus. It’s rather like saying, so the text of King Lear has “all the information” – and then giving it to, say, a Chinese factory worker in Lanzhou. Well OK, so to actually enact Lear in a meaningful way it has to be read by someone who reads English – or translated… But come on, the English goes without saying…
Once we start talking in terms of the information needed to make an organism, though, quite what’s in the genome becomes far less clear. Indeed, we know for sure that maternal factors supply some vital information for the early development of a fertilized egg. And the self-organizing abilities of cells can only create an organism in the right context: every cell needs the right signals from its environment for the whole to assemble properly. Genes somehow encode neurons, but neurons don’t develop properly if they don’t get stimuli from their environment during a critical period.
Are these environmental signals and context then a part of the information needed to make an organism “as nature [meaning evolution, I guess] intends”? Is an understanding of English a part of the information needed for King Lear to be anything more than marks on paper?
Evidently this is an issue of how “information” acquires meaning, which of course was notoriously what Shannon left out of his information theory. And that is why information in Shannon’s sense is greatest when the Shannon entropy is greatest. Periodic solids have rather low entropy. What is needed in biology, then, is a theory for where meaningful information comes from and how it gives rise to causal flows. There’s no doubt that lots of meaningful information is encoded in the genome that contributes to how organisms are built and how they function. But when we say that “the genome contains all the information needed to build an organism”, we are dealing with ill-defined terms. What I sorely missed at this meeting was a presentation about how a theory of biological information can be developed, and how to define and measure “meaning” within that theory. Daniel Dennett acknowledged this lacuna in his keynote address, saying that understanding “semantic information” as opposed to Shannon information is still “work in progress”.
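The Shannon point can be made concrete with a toy illustration of my own (not anything presented at the meeting). A crude symbol-frequency estimate of Shannon entropy shows the contrast between a perfectly ordered “crystal” of one repeated unit and an aperiodic four-letter script; note that this first-order estimate ignores ordering entirely, so capturing periodicity properly would need block entropies.

```python
from collections import Counter
from math import log2

def shannon_entropy(seq: str) -> float:
    """First-order Shannon entropy per symbol, in bits (ignores symbol order)."""
    n = len(seq)
    counts = Counter(seq)
    # H = -sum p log2 p over the observed symbol frequencies
    return sum(-(c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy("AAAAAAAAAAAA"))  # fully ordered "crystal": 0.0 bits/symbol
print(shannon_entropy("ACGGTACTGATC"))  # aperiodic four-letter script: 2.0 bits/symbol
```

The aperiodic sequence is the one that is rich in Shannon information, but nothing in this calculation says anything about what the sequence *means*, which is exactly the gap Dennett was pointing to.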
A close reading of Schrödinger starts us in that direction too, and is a part of his legacy.
Monday, August 27, 2018
Don't just count qubits
The rapid advances in quantum computing as a technology with real applications are reflected in the increases in the number of qubits these devices have available for computation. In 1998, laboratory prototypes could boast just two: enough for a proof of principle but little more. Today that figure has risen to 72 in the latest device reported by Google. Given that the number of states available in principle to systems of N qubits is 2^N, this is an enormous difference. The ability to hold this number of qubits in entangled states involves a herculean feat of quantum engineering.
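To get a feel for what that exponential means, here is a back-of-envelope sketch of my own (not a figure from any of the reports): the memory a classical simulator would need just to store the full state vector of N qubits, at 16 bytes per complex amplitude.

```python
def statevector_bytes(n_qubits: int) -> int:
    """Memory needed to hold all 2**n complex amplitudes at 16 bytes each."""
    return (2 ** n_qubits) * 16

print(statevector_bytes(2))            # 2 qubits (the 1998 prototypes): 64 bytes
print(f"{statevector_bytes(72):.3e}")  # 72 qubits: ~7.6e+22 bytes, tens of zettabytes
```

Simulating the 1998 devices is trivial; brute-force simulation of a 72-qubit state is beyond any conceivable classical memory, which is the whole point of the exponential.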
It’s not surprising, then, that media reports tend to focus on the number of qubits a quantum computer has at its disposal as the figure of merit. The qubit count is also commonly regarded as the determinant of the machine’s capabilities, most famously with the widely repeated claim that 50 qubits marks the threshold of “quantum supremacy”, when a quantum computer becomes capable of things to all intents and purposes impossible for classical devices.
The problem is that this is all misleading. What a quantum computer can and can’t accomplish depends on many things, of which the qubit count is just one. For one thing, the quality of the qubits is critical: how noisy they are, and how likely to incur errors. There is also the question of their heterogeneity. Qubits manufactured from superconducting circuits will generally differ in their precise characteristics and performance, whereas quantum computers that use trapped-ion qubits benefit from having them all identical. And because qubits can only be kept coherent for short times before quantum decoherence scrambles them, how fast they can be switched can determine how many logic operations you can perform in the time available. The power of the device then depends also on the number of gate operations your algorithm needs: its so-called depth.
There is also the question of connectivity: does every qubit couple with every other, or are they for instance coupled only to two neighbours in a linear array?
The performance of a quantum computer therefore needs a better figure of merit than a crude counting of qubits. Researchers at IBM have suggested one, which they call the “quantum volume” – an attempt to fold all of these features into a single number. And this isn’t, then, a way of evaluating which of two devices “performs better”, but quantifies the power of a particular computation. Device performance will depend on what you’re asking it to do. Particular architectures and hardware will work better for some tasks than for others (see here).
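As I understand the original IBM proposal, quantum volume trades off qubit number against effective error rate roughly as follows. This is my own simplified sketch under the assumption that achievable circuit depth scales as 1/(n·ε) for n qubits with effective two-qubit error rate ε; it is not the definitive formula.

```python
def quantum_volume(n_qubits: int, eps: float) -> int:
    """Simplified quantum-volume estimate: max over n of min(n, depth)**2."""
    best = 0.0
    for n in range(2, n_qubits + 1):
        depth = 1.0 / (n * eps)  # rough circuit depth before errors dominate
        best = max(best, min(n, depth) ** 2)
    return int(best)

# With a 1% error rate, qubits beyond ~10 buy essentially nothing:
print(quantum_volume(50, eps=0.01))   # -> 100
# A tenfold-better error rate raises the volume far more than extra qubits would:
print(quantum_volume(50, eps=0.001))
```

Whatever the exact definition used, the moral is the same: past a certain point, improving error rates matters more than adding qubits.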
As a result, a media tendency to present quantum computation as a competition between rivals – IBM vs Google, superconducting qubits vs trapped ions – does the field no favours. Of course one can’t deny that competitiveness exists, as well as a degree of commercial secrecy – this is a business with huge stakes, after all. But no one expects any overall “winner” to be anointed. It’s unfortunate, then, that this is how things look if we judge from the “qubit counter” created by MIT Tech Review. As a rough-and-ready timeline of how the applied tech of the field is evolving, this might be just about defensible. But some fear that this sort of presentation does more harm than good, and we should certainly not see it as a guide to who is currently “in the lead”.
It’s not surprising, then, that media reports tend to focus on the number of qubits a quantum computer has at its disposal as the figure of merit. The qubit count is also commonly regarded as the determinant of the machine’s capabilities, most famously with the widely repeated claim that 50 qubits marks the threshold of “quantum supremacy”, when a quantum computer becomes capable of things to all intents and purposes impossible for classical devices.
The problem is that this is all misleading. What a quantum computer can and can’t accomplish depends on many things, of which the qubit count is just one. For one thing, the quality of the qubits is critical: how noisy they are, and how likely to incur errors. There is also the question of their heterogeneity. Qubits manufactured from superconducting circuits will generally differ in their precise characteristics and performance, whereas quantum computers that use trapped-ion qubits benefit from having them all identical. And because qubits can only be kept coherent for short times before quantum decoherence scrambles them, how fast they can be switched can determine how many logic operations you can perform in the time available. The power of the device then depends also on the number of gate operations your algorithm needs: its so-called depth.
There is also the question of connectivity: does every qubit couple with every other, or are they for instance coupled only to two neighbours in a linear array?
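These trade-offs can be made concrete with a crude back-of-envelope model (the numbers here are my own illustrative choices, not any real device’s specs): if each gate fails independently with some probability, the chance of an entire circuit running cleanly falls off exponentially with qubits × depth, so a bigger but noisier machine can easily be the weaker one.

```python
# Rough sketch: a circuit "succeeds" only if every gate works, so the
# success probability is roughly (1 - error_rate) ** total_gates.
def circuit_success(n_qubits, depth, gate_error):
    total_gates = n_qubits * depth  # crude: one gate per qubit per layer
    return (1 - gate_error) ** total_gates

# 50 noisy qubits can be far weaker than 20 cleaner ones:
noisy = circuit_success(50, 40, 1e-2)   # ~1e-9: essentially useless
clean = circuit_success(20, 40, 1e-3)   # ~0.45: usable
print(noisy, clean)
```

Under this toy model, improving gate fidelity by a factor of ten buys vastly more computational reach than adding qubits does.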
The performance of a quantum computer therefore needs a better figure of merit than a crude counting of qubits. Researchers at IBM have suggested one, which they call the “quantum volume” – an attempt to fold all of these features into a single number. Even this isn’t a way of declaring which of two devices “performs better” across the board; rather, it quantifies the power of a device for a particular kind of computation. Device performance will depend on what you’re asking it to do: particular architectures and hardware will work better for some tasks than for others (see here).
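IBM’s actual quantum volume protocol involves running randomized “model circuits”, but its headline idea – that log₂(QV) is the size of the largest “square” circuit (width equal to depth) the machine runs reliably – can be caricatured in a few lines. The depth-budget function below is entirely hypothetical, standing in for whatever benchmarking would measure:

```python
def quantum_volume(n_qubits, achievable_depth):
    """Toy caricature of IBM's quantum volume: log2(QV) is the largest n
    for which the machine can run an n-qubit circuit of depth >= n.
    achievable_depth(n) returns the max reliable depth on n qubits."""
    best = 0
    for n in range(1, n_qubits + 1):
        best = max(best, min(n, achievable_depth(n)))
    return 2 ** best

# Hypothetical device: the depth budget shrinks as more qubits are used.
depth_budget = lambda n: 300 // (n * n)
print(quantum_volume(50, depth_budget))  # 64, i.e. a reliable 6x6 circuit
```

The point of the single number is that it penalizes a device whose qubit count outstrips its coherence: fifty qubits contribute nothing here beyond the six that can actually be run deep enough.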
As a result, a media tendency to present quantum computation as a competition between rivals – IBM vs Google, superconducting qubits vs trapped ions – does the field no favours. Of course one can’t deny that competitiveness exists, as well as a degree of commercial secrecy – this is a business with huge stakes, after all. But no one expects any overall “winner” to be anointed. It’s unfortunate, then, that this is how things look if we judge from the “qubit counter” created by MIT Tech Review. As a rough-and-ready timeline of how the applied tech of the field is evolving, this might be just about defensible. But some fear that this sort of presentation does more harm than good, and we should certainly not see it as a guide to who is currently “in the lead”.
Friday, June 08, 2018
Myths of Copenhagen
Discussing the Copenhagen interpretation of quantum mechanics with Adam Becker and Jim Baggott makes me think it would be worthwhile setting down how I see it. I don’t claim that this is necessarily the “right” way to look at Copenhagen (there probably isn’t a right way), and I’m conscious that what Bohr wrote and said is often hard to fathom – not, I think, because his thinking was vague, but because he struggled to express it through the limited medium of language. Many people have pored over Bohr’s words more closely than I have, and they might find different interpretations. So if anyone takes issue with what I say here, please do tell me.
Part of the problem too, as Adam said (and reiterates in his excellent new book What Is Real?), is that there isn’t really a “Copenhagen interpretation”. I think James Cushing makes a good case that it was largely a retrospective invention of Heisenberg’s, quite possibly as an attempt to rehabilitate himself into the physics community after the war. As I say in Beyond Weird, my feeling is that when we talk about “Copenhagen”, we ought really to stick as close as we can to Bohr – not just for consistency but also because he was the most careful of the Copenhagenist thinkers.
It’s perhaps for this reason too that I think there are misconceptions about the Copenhagen interpretation. The first is that it denies any reality beyond what we can measure: that it is anti-realist. I see no reason to think this. People might read that into Bohr’s famous words: “There is no quantum world. There is only an abstract quantum physical description.” But it seems to me that the meaning here is quite clear: quantum mechanics does not describe a physical reality. We cannot mine it to discover “bits of the world”, nor “histories of the world”. Quantum mechanics is the formal apparatus that allows us to make predictions about the world. There is nothing in that formulation, however, that denies the existence of some underlying stratum in which phenomena take place that produce the outcomes quantum mechanics enables us to predict.
Indeed, what Bohr goes on to say makes this perfectly clear: “It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.” (Here you can see the influence of Kant on Bohr, who read him.) Here Bohr explicitly acknowledges the existence of “nature” – an underlying reality – but doesn’t think we can get at it, beyond what we can observe.
This is what I like about Copenhagen. I don’t think that Bohr is necessarily right to abandon the quest to probe beneath the theory’s capacity to predict, but I think he is right to caution that nothing in quantum mechanics obviously permits us to make assumptions about what lies beneath. Once we accept the Born rule, which makes the squared amplitude of the wavefunction a probability density, we are forced to recognize that.
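For concreteness, the Born rule’s recipe – probabilities as squared magnitudes of complex amplitudes – is a one-liner. This is just the textbook rule, with nothing interpretive added:

```python
# The Born rule: measurement probabilities are the squared magnitudes of
# the (complex) amplitudes in the wavefunction. The formalism predicts
# statistics of outcomes; it says nothing about what "is" beforehand.
def born_probabilities(amplitudes):
    norms = [abs(a) ** 2 for a in amplitudes]
    total = sum(norms)  # normalize in case the state isn't unit length
    return [p / total for p in norms]

# Equal superposition of two outcomes, e.g. (|0> + |1>)/sqrt(2):
print(born_probabilities([1, 1]))   # [0.5, 0.5]
print(born_probabilities([1, 1j]))  # [0.5, 0.5]: a relative phase leaves
                                    # single-outcome probabilities unchanged
```

Notice that the rule consumes the wavefunction and emits only a probability table – which is exactly why nothing in it licenses claims about an underlying state of affairs.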
Here’s the next fallacy about the Copenhagen interpretation: that it insists classical physics, such as governs measuring apparatus, works according to fundamentally different rules from quantum physics, and we just have to accept that sharp division.
Again, I understand why it looks as though Bohr might be saying that. But what he’s really saying is that measurements exist only in the classical realm. Only there can we claim definitive knowledge of some quantum state of affairs – what the position of an electron “is”, say. This split, then, is epistemic: knowledge is classical (because we are).
Bohr didn’t see any prospect of that ever being otherwise. What’s often forgotten is how absolute the distinction seemed in Bohr’s day between the atomic/microscopic and the macroscopic. Schrödinger, who was of course no Copenhagenist, made that clear in What Is Life?, which expresses not the slightest notion that we could ever see individual molecules and follow their behaviour. To him, as to Bohr, we must describe the microscopic world in necessarily statistical terms, and it would have seemed absurd to imagine we would ever point to this or that molecule.
Bohr’s comments about the quantum/classical divide reflect this mindset. It’s a great shame he hasn’t been around to see it dissolve – to see us probe the mesoscale and even manipulate single atoms and photons. It would have been great to know what he would have made of it.
But I don’t believe there is any reason to suppose that, as is sometimes said, he felt that quantum mechanics just had to “stop working” at some particular scale, and classical physics take over. And of course today we have absolutely no reason to suppose that happens. On the contrary, the theory of decoherence (pioneered by the late Dieter Zeh) can go an awfully long way to deconstructing and demystifying measurement. It’s enabled us to chip away at Bohr’s overly pessimistic epistemological quantum-classical divide, both theoretically and experimentally, and understand a great deal about how classical rules emerge from quantum. Some think it has in fact pretty much solved the “measurement problem”, but I think that’s too optimistic, for the reasons below.
But I don’t see anything in those developments that conflicts with Copenhagen. After all, one of the pioneers of such developments, Anton Zeilinger, would describe himself (I’m reliably told) as basically a Copenhagenist. Some will object to this that Bohr was so vague that his ideas can be made to fit anything. But I believe that, in this much at least, apparent conflicts with work on decoherence come from not attending carefully enough to what Bohr said. (I think Henrik Zinkernagel’s discussions of “what Bohr said” are useful here and here.)
I think that in fact these recent developments have helped to refine Bohr’s picture until we can see more clearly what it really boils down to. Bohr saw measurement as an irreversible process, in the sense that once you had classical knowledge about an outcome, that outcome could not be undone. From the perspective of decoherence, this is now viewed in terms that sound a little like the Second Law: measurement entails the entanglement of quantum object and environment, which, as it proceeds and spreads, becomes for all practical purposes irreversible because you can’t hope to untangle it again. (We know that in some special cases where you can keep track, recoherence is possible, much as it is possible in principle to “undo” the Second Law if you keep track of all the interactions and collisions.)
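That picture can be caricatured with the standard textbook dephasing model (a sketch only, not Zeh’s formalism): as the environment entangles with a qubit, the off-diagonal “coherence” terms of its density matrix decay towards zero, while the diagonal entries – the classical probabilities – are untouched.

```python
import math

# Toy dephasing: coherences decay on a timescale T2, probabilities persist.
def dephase(rho, t, t2):
    decay = math.exp(-t / t2)
    return [[rho[0][0], rho[0][1] * decay],
            [rho[1][0] * decay, rho[1][1]]]

# Start in an equal superposition: rho = |+><+|
rho = [[0.5, 0.5], [0.5, 0.5]]
late = dephase(rho, t=10.0, t2=1.0)
print(late)  # diagonals unchanged, off-diagonals ~0:
             # statistically indistinguishable from a classical coin toss
```

For all practical purposes the decay is one-way, because undoing it requires keeping track of every environmental degree of freedom – which is the Second-Law-like flavour of the argument above.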
This decoherence remains a “fully quantum” process, even while we can see how it gives rise to classical-like behaviour (via Zurek’s quantum Darwinism, for example). But what the theory can’t then do, as Roland Omnès has pointed out, is explain uniqueness of outcomes: why only one particular outcome is (classically) observed. In my view, that is the right way to put into more specific and updated language what Bohr was driving at with his insistence on the classicality of measurement. Omnès is content to posit uniqueness of outcomes as an axiom: he thinks we have a complete theory of measurement that amounts to “decoherence + uniqueness”. The Everett interpretation, of course, ditches uniqueness, on the grounds of “why add an extra, arbitrary axiom?” To my mind, and for the reasons explained in my book, I think this leads to a “cognitive instability”, to purloin Sean Carroll’s useful phrase, in our ability to explain the world. So the incoherence that Adam sees in Copenhagen, I see in the Everett view (albeit for different reasons).
But this then is the value I see in Copenhagen: if we stick with it through the theory of decoherence, it takes us to the crux of the matter: the part it just can’t explain, which is uniqueness of outcomes. And by that I mean (irreversible) uniqueness of our knowledge – better known as facts. What the Copenhagenists called collapse or reduction of the wavefunction boils down to the emergence of facts about the world. And because I think they – at least, Bohr – always saw wavefunction collapse in epistemic terms, there is a consistency to this. So Copenhagen doesn’t solve the problem, but it leads us to the right question (indeed, the question that confronts the Everettian view too).
One might say that the Bohmian interpretation solves that issue, because it is a realist model: the facts are there all along, albeit hidden from us. I can see the attraction of that. My problem with it is that the solution comes by fiat – one puts in the hidden facts from the outset, and then explains all the potential problems with that by fiat too: by devising a form of nonlocality that does everything you need it to, without any real physical basis, and insisting that this type of nonlocality just – well, just is. It is ingenious, and sometimes useful, but it doesn’t seem to me that you satisfactorily solve a problem by building the solution into the axioms. I don’t understand the Bohmian model well enough to know how it deals with issues of contextuality and the apparent “non-universality of facts” (as this paper by Caslav Brukner points out), but on the face of it those seem to pose problems for a realist viewpoint too.
It seems to me that a currently very fruitful way to approach quantum mechanics is to think about the issue of why the answers the world gives us seem to depend on the questions we ask (à la John Wheeler’s “20 Questions” analogy). And I feel that Bohr helps point us in that direction, and without any need to suppose some mystical “effect of consciousness on physical reality”. He didn’t have all the answers – but we do him no favours by misrepresenting his questions. A tyrannical imposition of the Copenhagen position is bad for quantum mechanics, but Copenhagen itself is not the problem.
Monday, May 21, 2018
What is a superposition really like?
Here’s a longer version of the news story I just published in Scientific American, which includes more context and background. The interpretation of the outcomes of this thought experiment within the two-state vector formalism of quantum mechanics is by no means the only one possible. But what the experiment does show is that quantum mechanics suggests that superpositions are not always simply a case of a particle seeming to be in two places or states at once. A superposition, like anything else in quantum mechanics, tells you about the possible outcomes of a measurement. All the rest is contingent interpretation. I’m reminded yet again today that it is going to take an awful lot to get media folks to accept this. I’m starting to see now that it was a mistake for me to assume that they didn’t know any better; rather, I think there is an active, positive desire for the “two places at once” picture to be true.
I should say also that I consciously decided to turn a blind eye to the use of the word “spooky” in the title of this piece, because it does perfectly acceptable work as it is. It does not imply that “spooky action at a distance” is a thing. It is not a thing, unless it is a disproved thing. Quantum nonlocality is the alternative to that Einsteinian picture.
______________________________________________________________________
It’s the central question in quantum mechanics, and no one knows the answer: what goes on for a particle in a superposition? All of the head-scratching oddness that seems to pervade quantum theory comes from these peculiar circumstances in which particles seem to be in two places or states at once. What that really means has provoked endless debate and argument. Now a team of researchers in Israel and Japan has proposed an experiment (https://www.nature.com/articles/s41598-018-26018-y) that should let us say something for sure about the nature of that nebulous state [A. C. Elitzur, E. Cohen, R. Okamoto & S. Takeuchi, Sci. Rep. 8, 7730 (2018)].
Their experiment, which they say could be carried out within a few months using existing technologies, should let us sneak a glance at where a quantum object – in this case a particle of light, called a photon – actually is when it is placed in a superposition of positions. And what the researchers predict is even more shocking and strange than the usual picture of this counterintuitive quantum phenomenon.
The classic illustration of a superposition – indeed, the central experiment of quantum mechanics, according to legendary physicist Richard Feynman – involves firing particles like photons through two closely spaced slits in a wall. Because quantum particles can behave like waves, those passing through one slit can ‘interfere’ with those going through the other, their wavy ripples either boosting or cancelling one another. For photons the result is a pattern of light and dark interference bands when the particles are detected on a screen on the far side, corresponding to a high or low number of photons reaching the screen.
Once you accept the waviness of quantum particles, there’s nothing so odd about this interference pattern. You can see it for ordinary water waves passing through double slits too. What is odd, though, is that the interference remains even if the rate of firing particles at the slits is so low that only one passes through at a time. The only way to rationalize that is to say each particle somehow passes through both slits at once, and interferes with itself. That’s a superposition.
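What “interfering with itself” cashes out to formally is simple: the amplitudes from the two slits add, and the detection probability is the squared magnitude of the sum. A minimal sketch, assuming unit amplitude from each slit:

```python
import math

# Two-slit interference: each slit contributes a complex amplitude; the
# detection probability is the squared magnitude of their SUM.
def intensity(phase_difference):
    amp = 1 + complex(math.cos(phase_difference), math.sin(phase_difference))
    return abs(amp) ** 2

print(intensity(0.0))      # 4.0: amplitudes in phase -> bright fringe
print(intensity(math.pi))  # ~0.0: amplitudes cancel -> dark fringe
# A classical "one slit or the other" particle adds probabilities, not
# amplitudes, giving a flat 1 + 1 = 2 everywhere -- no fringes.
```

The whole quantum puzzle sits in that last comment: adding amplitudes before squaring only makes sense if, in some way, both paths are in play for a single particle.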
To put it another way: when we ask the seemingly reasonable question “Where is the particle in a superposition?”, we’re using a notion of “where” inherited from our classical world, to which the answer can simply be “there”. But quantum mechanics is known now to be ‘nonlocal’, which means we have to relinquish the whole notion of locality – of “whereness”, you might say.
But that’s a hard habit to give up, which is why the ‘two places at once’ picture is commonly invoked to talk about quantum superpositions. Yet quantum mechanics doesn’t say anything about what particles are like until we make measurements on them. For the Danish physicist Niels Bohr, asking where the particle was in the double-slit experiment before it was measured has no meaning within quantum theory itself.
Why don’t we just look? Well, we can. We could put a detector in or just behind one slit that could register the passing of a particle without absorbing it. And in that case, the detector will show that sometimes the particle goes through one slit, and sometimes it goes through the other. But here’s the catch: there’s then no longer an interference pattern, but just the result we’d expect for particles taking one route or the other. Observing which route the particle takes destroys its ‘quantumness’.
This isn’t about measurements disturbing the particle, since interference is absent even in instances where a detector at one slit doesn’t see the particle, so that it ‘must’ have gone through the other slit. Rather, the ‘collapse’ of a superposition seems to be caused by our mere knowledge of the path.
We can try to be smarter. What if we wait until the particle has definitely passed through the slits before we measure the path? How could that delayed measurement affect what happened earlier at the slits themselves? But it does. In the late 1970s the physicist John Wheeler proposed a way of doing this using an apparatus called a Mach-Zehnder interferometer, a modification of the double-slit experiment in which a partial mirror creates a superposition of photons that seems to send them along two different paths before they are brought back together to interfere (or not).
The result was that, just as Bohr had predicted, it makes no difference if we delay the detection: superposition and interference still vanish if we detect the path at any point before the photons reach the final detector. It is as if the particle ‘knows’ our intention to measure it later.
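The arithmetic behind the interferometer is compact. A 50/50 beamsplitter acts as a 2×2 unitary on the amplitudes of the two paths (I use the real Hadamard convention here; optics texts often use one with factors of i, but the conclusion is the same):

```python
import math

# A 50/50 beamsplitter as a 2x2 unitary on the two path amplitudes.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(mat, state):
    return [mat[0][0] * state[0] + mat[0][1] * state[1],
            mat[1][0] * state[0] + mat[1][1] * state[1]]

# Photon enters port 0 and passes two beamsplitters, path unmeasured:
state = apply(H, apply(H, [1.0, 0.0]))
print([round(abs(a) ** 2, 6) for a in state])  # [1.0, 0.0]: the amplitudes
# interfere so that every photon exits the same port.

# Detecting the path in between collapses the superposition: we must then
# average the two definite-path outcomes instead of adding amplitudes.
p_path = [abs(a) ** 2 for a in apply(H, [1.0, 0.0])]        # [0.5, 0.5]
out_if_path0 = [abs(a) ** 2 for a in apply(H, [1.0, 0.0])]  # known path 0
out_if_path1 = [abs(a) ** 2 for a in apply(H, [0.0, 1.0])]  # known path 1
probs = [p_path[0] * out_if_path0[i] + p_path[1] * out_if_path1[i]
         for i in range(2)]
print(probs)  # [0.5, 0.5]: interference gone
```

The delayed-choice twist is that whether the first or the second calculation applies is fixed by what we measure, even if we decide only after the photon is already inside the apparatus.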
Bohr’s argument that quantum mechanics is silent about ‘reality’ beyond what we can measure has long seemed deeply unsatisfactory to many researchers. “We know something fishy is going on in a superposition”, says physicist Avshalom Elitzur of the Israeli Institute for Advanced Research in Zichron Ya’akov. “But you’re not allowed to measure it”, he says – because then the superposition collapses. “This is what makes quantum mechanics so diabolical.”
There have been many attempts to develop alternative points of view to Bohr’s that restore an underlying reality in quantum mechanics – some description of the world before we look. But none seems able to restore the kind of picture we have in classical physics of objects that always have definite positions and paths.
One particular approach that aims to deduce something about quantum particles before their measurement is called the two-state-vector formalism (TSVF) of quantum mechanics, developed by Elitzur’s former mentor, the Israeli physicist Yakir Aharonov, and his collaborators. This postulates that quantum events are in some sense determined by quantum states not just in the past but also in the future: it makes the assumption that quantum mechanics works the same way both forwards and backwards in time. In this view, causes can seem to propagate backwards in time: there is retrocausality.
You don’t have to take that strange notion literally. Rather, in the TSVF you can gain retrospective knowledge of what happened in a quantum system by selecting the outcome: not, say, simply measuring where a particle ends up, but instead choosing a particular location in which to look for it. This is called post-selection, and it supplies more information than any unconditional peek at outcomes ever could, because it means that the particle’s situation at any instant is being evaluated retrospectively in the light of its entire history, up to and including measurement.
“Normal quantum mechanics is about statistics”, says Elitzur’s collaborator Eliahu Cohen: what you see are average values, or what is generally called an expectation value of some variable you are measuring. But by looking at when a system produces some particular, chosen value, you can take a slice through the probabilistic theory and start to talk with certainty about what went on to cause that outcome. The odd thing is that it then looks as if your very choice of outcome was part of the cause.
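The logic of post-selection – though not, of course, the genuinely quantum content of the TSVF – can be illustrated with a purely classical toy: condition on a chosen final outcome, and the retrodicted statistics of what happened earlier sharpen dramatically. The numbers below are invented for illustration.

```python
import random

# Post-selection sketch: instead of averaging over ALL runs, keep only the
# runs with a chosen final outcome, then look back at the earlier data.
random.seed(0)
runs = []
for _ in range(100_000):
    path = random.choice(["A", "B"])  # the "intermediate" variable
    # Toy correlation: the final click is strongly biased by the path.
    final = random.random() < (0.9 if path == "A" else 0.1)
    runs.append((path, final))

# Unconditional statistics: the path looks like a fair coin.
frac_A = sum(p == "A" for p, _ in runs) / len(runs)

# Post-selected on the final click: the path is almost certainly "A".
selected = [p for p, f in runs if f]
frac_A_post = sum(p == "A" for p in selected) / len(selected)
print(frac_A, frac_A_post)  # ~0.5 unconditionally vs ~0.9 post-selected
```

In the classical toy there is nothing retrocausal about this – it is just Bayesian conditioning. What makes the TSVF case contentious is that the sharpened retrodictions concern quantities that, on the Copenhagen view, had no definite value to retrodict.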
“It’s generally accepted that the TSVF is mathematically equivalent to standard quantum mechanics,” says David Wallace of the University of Southern California, a philosopher who specializes in interpretations of quantum mechanics. “But it does lead to seeing certain things one wouldn’t otherwise have seen.”
I should say also that I consciously decided to turn a blind eye to the use of the word “spooky” in the title of this piece, because it does perfectly acceptable work as it is. It does not imply that “spooky action at a distance” is a thing. It is not a thing, unless it is a disproved thing. Quantum nonlocality is the alternative to that Einsteinian picture.
______________________________________________________________________
It’s the central question in quantum mechanics, and no one knows the answer: what goes on for a particle in a superposition? All of the head-scratching oddness that seems to pervade quantum theory comes from these peculiar circumstances in which particles seem to be in two places or states at once. What that really means has provoked endless debate and argument. Now a team of researchers in Israel and Japan has proposed an experiment (https://www.nature.com/articles/s41598-018-26018-y) that should let us say something for sure about the nature of that nebulous state [A. C. Elitzur, E. Cohen, R. Okamoto & S. Takeuchi, Sci. Rep. 8, 7730 (2018)].
Their experiment, which they say could be carried out within a few months using existing technologies, should let us sneak a glance at where a quantum object – in this case a particle of light, called a photon – actually is when it is placed in a superposition of positions. And what the researchers predict is even more shocking and strange than the usual picture of this counterintuitive quantum phenomenon.
The classic illustration of a superposition – indeed, the central experiment of quantum mechanics, according to legendary physicist Richard Feynman – involves firing particles like photons through two closely spaced slits in a wall. Because quantum particles can behave like waves, those passing through one slit can ‘interfere’ with those going through the other, their wavy ripples either boosting or cancelling one another. For photons the result is a pattern of light and dark interference bands when the particles are detected on a screen on the far side, corresponding to a high or low number of photons reaching the screen.
Once you accept the waviness of quantum particles, there’s nothing so odd about this interference pattern. You can see it for ordinary water waves passing through double slits too. What is odd, though, is that the interference remains even if the rate of firing particles at the slits is so low that only one passes through at a time. The only way to rationalize that is to say each particle somehow passes through both slits at once, and interferes with itself. That’s a superposition.
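The difference between the quantum and classical pictures is easy to see numerically: quantum mechanics adds the amplitudes from the two slits before squaring, while a particle that "really" took one slit or the other would have its probabilities added instead. Here is a minimal sketch of that comparison – not a simulation of any real apparatus, and the wavelength, slit separation and screen distance are arbitrary illustrative choices:

```python
import numpy as np

# Toy double-slit model: each slit contributes a complex amplitude at
# screen position x; detection probability is |amp1 + amp2|^2.
# All parameters are arbitrary illustrative choices.
k = 2 * np.pi          # wavenumber
d = 2.0                # slit separation
L = 20.0               # distance from slits to screen
x = np.linspace(-10, 10, 2001)

r1 = np.sqrt(L**2 + (x - d / 2)**2)   # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2)**2)   # path length from slit 2

amp1 = np.exp(1j * k * r1)
amp2 = np.exp(1j * k * r2)

# Quantum: amplitudes add first, then square -> bright and dark fringes
quantum = np.abs(amp1 + amp2)**2

# Classical "one slit or the other": probabilities add -> no fringes
classical = np.abs(amp1)**2 + np.abs(amp2)**2

print(quantum.min(), quantum.max())      # oscillates between ~0 and ~4
print(classical.min(), classical.max())  # flat at 2
```

Even with particles sent one at a time, it is the fringed `quantum` pattern that builds up on the screen – which is exactly what forces the "passes through both slits at once" language.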
To put it another way: when we ask the seemingly reasonable question “Where is the particle in a superposition?”, we’re using a notion of “where” inherited from our classical world, to which the answer can simply be “there”. But quantum mechanics is known now to be ‘nonlocal’, which means we have to relinquish the whole notion of locality – of “whereness”, you might say.
But that’s a hard habit to give up, which is why the ‘two places at once’ picture is commonly invoked to talk about quantum superpositions. Yet quantum mechanics doesn’t say anything about what particles are like until we make measurements on them. For the Danish physicist Niels Bohr, asking where the particle was in the double-slit experiment before it was measured has no meaning within quantum theory itself.
Why don’t we just look? Well, we can. We could put a detector in or just behind one slit that could register the passing of a particle without absorbing it. And in that case, the detector will show that sometimes the particle goes through one slit, and sometimes it goes through the other. But here’s the catch: there’s then no longer an interference pattern, but just the result we’d expect for particles taking one route or the other. Observing which route the particle takes destroys its ‘quantumness’.
This isn’t about measurements disturbing the particle, since interference is absent even in instances where a detector at one slit doesn’t see the particle, so that it ‘must’ have gone through the other slit. Rather, the ‘collapse’ of a superposition seems to be caused by our mere knowledge of the path.
We can try to be smarter. What if we wait until the particle has definitely passed through the slits before we measure the path it took? How could that delayed measurement affect what happened earlier at the slits themselves? But it does. In 1978 the physicist John Wheeler proposed a way of doing this using an apparatus called a Mach-Zehnder interferometer, a modification of the double-slit experiment in which a partial mirror places a photon in a superposition that seems to send it along two different paths before they are brought back together to interfere (or not).
The result was that, just as Bohr had predicted, delaying the detection makes no difference. The superposition and interference still vanish if we detect the path, even though we only look after the photon has passed the slits. It is as if the particle ‘knows’ our intention to measure it later.
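The washing-out of interference by which-path knowledge can be captured in a few lines of linear algebra. In this sketch a Hadamard matrix stands in for a 50:50 beam splitter (a common simplification – a physical splitter has different phase conventions, but the logic is identical):

```python
import numpy as np

# Two modes: indices 0 and 1 are the two arms of the interferometer.
# A Hadamard matrix stands in for a 50:50 beam splitter (illustrative choice).
BS = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

psi_in = np.array([1.0, 0.0])              # photon enters in arm 0

# No which-path measurement: the superposition survives both splitters
p_coherent = np.abs(BS @ BS @ psi_in)**2   # [1, 0]: every photon exits one port

# Which-path detection after the first splitter replaces the
# superposition with a classical mixture of the two arms
p_arms = np.abs(BS @ psi_in)**2            # [0.5, 0.5]
p_mixed = sum(p * np.abs(BS @ e)**2
              for p, e in zip(p_arms, np.eye(2)))  # [0.5, 0.5]: no interference

print(p_coherent, p_mixed)
```

Note there is no "disturbance" term anywhere in the calculation: merely replacing the superposition with knowledge of the arm turns the perfect interference (all photons at one detector) into a featureless 50:50 split.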
Bohr’s argument that quantum mechanics is silent about ‘reality’ beyond what we can measure has long seemed deeply unsatisfactory to many researchers. “We know something fishy is going on in a superposition”, says physicist Avshalom Elitzur of the Israeli Institute for Advanced Research in Zichron Ya’akov. “But you’re not allowed to measure it”, he says – because then the superposition collapses. “This is what makes quantum mechanics so diabolical.”
There have been many attempts to develop alternative points of view to Bohr’s that restore an underlying reality in quantum mechanics – some description of the world before we look. But none seems able to restore the kind of picture we have in classical physics of objects that always have definite positions and paths.
One particular approach that aims to deduce something about quantum particles before their measurement is called the two-state-vector formalism (TSVF) of quantum mechanics, developed by Elitzur’s former mentor, the Israeli physicist Yakir Aharonov, and his collaborators. This postulates that quantum events are in some sense determined by quantum states not just in the past but also in the future: it makes the assumption that quantum mechanics works the same way both forwards and backwards in time. In this view, causes can seem to propagate backwards in time: there is retrocausality.
You don’t have to take that strange notion literally. Rather, in the TSVF you can gain retrospective knowledge of what happened in a quantum system by selecting the outcome: not, say, simply measuring where a particle ends up, but instead choosing a particular location in which to look for it. This is called post-selection, and it supplies more information than any unconditional peek at outcomes ever could, because it means that the particle’s situation at any instant is being evaluated retrospectively in the light of its entire history, up to and including measurement.
“Normal quantum mechanics is about statistics”, says Elitzur’s collaborator Eliahu Cohen: what you see are average values, or what is generally called an expectation value of some variable you are measuring. But by looking at when a system produces some particular, chosen value, you can take a slice through the probabilistic theory and start to talk with certainty about what went on to cause that outcome. The odd thing is that it then looks as if your very choice of outcome was part of the cause.
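In the TSVF this "slice" is made quantitative by the so-called weak value, A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩, evaluated between a pre-selected state |ψ⟩ and a post-selected state |φ⟩. A small numerical sketch – the particular spin states here are arbitrary illustrative choices:

```python
import numpy as np

# Weak value of Pauli-z between nearly orthogonal pre- and post-selected
# spin states. The angles are arbitrary illustrative choices.
sz = np.array([[1.0, 0.0], [0.0, -1.0]])    # eigenvalues are +1 and -1

state = lambda theta: np.array([np.cos(theta), np.sin(theta)])
pre, post = state(0.4), state(1.85)          # a nearly orthogonal pair

# A_w = <post| sz |pre> / <post|pre>
weak_value = (post @ sz @ pre) / (post @ pre)
print(weak_value)   # about -5.2: far outside the eigenvalue range [-1, +1]
```

Such "anomalous" weak values – averages that no single measurement outcome could ever take – are one sense in which post-selected subensembles reveal more than unconditional statistics do.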
“It’s generally accepted that the TSVF is mathematically equivalent to standard quantum mechanics,” says David Wallace of the University of Southern California, a philosopher who specializes in interpretations of quantum mechanics. “But it does lead to seeing certain things one wouldn’t otherwise have seen.”
Take, for instance, the version of the double-slit experiment devised using the TSVF by Aharonov and coworker Lev Vaidman in 2003. The pair described (but did not build) an optical system in which a single photon can act as a ‘shutter’ that closes a slit by perfectly reflecting another ‘probe’ photon that is doing the standard trick of interfering with itself as it passes through the slits. Aharonov and Vaidman showed that, by applying post-selection to the measurements of the probe photon, we should be able to see that a shutter photon in a superposition can close both (or indeed many) slits at once. So you could say with confidence that the shutter photon really was both ‘here’ and ‘there’ at once [Y. Aharonov & L. Vaidman, Phys. Rev. A 67, 1–3 (2003)] – a situation that seems paradoxical from our everyday experience but is one aspect of the so-called nonlocal properties of quantum particles, where the whole notion of a well-defined location in space dissolves.
In 2016, Ryo Okamoto and Shigeki Takeuchi of Kyoto University implemented Aharonov and Vaidman’s proposal experimentally using apparatus based on a Mach-Zehnder interferometer [R. Okamoto & S. Takeuchi, Sci. Rep. 6, 35161 (2016)]. The ability of a photon to act as a shutter was enabled by a photonic device called a quantum router, in which one photon can control the route taken by another. The crucial point is that this interaction is cleverly arranged to be completely one-sided: it affects only the probe photon. That way, the probe photon carries away no direct information about the shutter photon, and so doesn’t disturb its superposition – but nonetheless one can retrospectively deduce that the shutter photon was definitely in the position needed to reflect the probe.
The Japanese researchers found that the statistics of how the superposed shutter photon reflects the probe photon matched those that Aharonov and Vaidman predicted, and which could only be explained by some non-classical “two places at once” behaviour. “This was a pioneering experiment that allowed one to infer the simultaneous position of a particle in two places”, says Cohen.
Now Elitzur and Cohen have teamed up with Okamoto and Takeuchi to concoct an even more ingenious experiment, which allows one to say with certainty something about the position of a particle in a superposition at a series of different points in time before any measurement has been made. And it seems that this position is even more odd than the traditional “both here and there”.
Again the experiment involves a kind of Mach-Zehnder set-up in which a shutter photon interacts with some probe photon via quantum routers. This time, though, the probe photon’s route is split into three by partial mirrors. Along each of those paths it may interact with a shutter photon in a superposition. These interactions can be considered to take place within boxes labeled A, B and C along the probe photon’s route, and they provide an unambiguous indication that the shutter particle was definitely in a given box at a specific time.
Because nothing is inspected until the probe photon has completed the whole circuit and reached a detector, there should be no collapse of either its superposition or that of the shutter photon – so there’s still interference. But the experiment is carefully set up so that the probe photon can only show this interference pattern if it interacted with the shutter photon in a particular sequence of places and times: namely, if the shutter photon was in both boxes A and C at some time t1, then at a later time t2 only in C, and at a still later time t3 in both B and C. If you see interference in the probe photon, you can say for sure (retrospectively) that the shutter photon displayed this bizarre appearance and disappearance among the boxes at different times – an idea Elitzur, Cohen and Aharonov proposed as a possibility last year for a single particle superposed into three ‘boxes’ [Y. Aharonov, E. Cohen, A. Landau & A. C. Elitzur, Sci. Rep. 7, 531 (2017)].
Why those particular places and times, though? You could certainly look at other points on the route, says Elitzur, but those times and locations are ones where, in this configuration, the probability of finding the particle becomes 1 – in other words, a certainty.
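That probability-1 assignment echoes the original "three-box paradox" of Aharonov and Vaidman, a simpler relative of this experiment, which can be checked with the Aharonov-Bergmann-Lebowitz (ABL) rule for pre- and post-selected ensembles. A sketch, assuming the standard pre- and post-selected states of that paradox:

```python
import numpy as np

# Three-box paradox: a particle pre-selected in (|A>+|B>+|C>)/sqrt(3)
# and post-selected in (|A>+|B>-|C>)/sqrt(3).
pre  = np.array([1.0, 1.0,  1.0]) / np.sqrt(3)
post = np.array([1.0, 1.0, -1.0]) / np.sqrt(3)

def p_found(box):
    """ABL probability that an intermediate look in `box` finds the particle."""
    proj = np.zeros((3, 3))
    proj[box, box] = 1.0
    found  = abs(post @ proj @ pre)**2
    missed = abs(post @ (np.eye(3) - proj) @ pre)**2
    return found / (found + missed)

# Look in box A: certain to find the particle there.
# Look in box B instead: also certain to find it there!
print(p_found(0), p_found(1), p_found(2))
```

The same pre/post-selection logic is what lets the new experiment pin the shutter photon to particular boxes at particular times with certainty.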
So this thought experiment seems to lift part of the veil off a quantum superposition, and to let us say something definite beyond Bohr’s “Don’t ask” proscription. The TSVF opens up the story by considering both the initial and final states, which allows one to reconstruct what was not measured, namely what happens in between. “I like the way this paper frames questions about what is happening in terms of entire histories, rather than instantaneous states”, says physicist Ken Wharton of San Jose State University in California. “Talking about ‘states’ is an old pervasive bias, whereas full histories are generally far more rich and interesting.”
And the researchers’ interpretation of that intermediate history before measurement is extraordinary. The apparent vanishing of particles in one place at one time, and their reappearance in other times and places, suggests a new vision of what the underlying processes are that create quantum randomness and nonlocality. Within the TSVF, this flickering, ever-changing existence can be understood as a series of events in which a particle is somehow ‘cancelled’ by its own “counterparticle”, with negative energy and negative mass.
Elitzur compares this to the notion introduced by British physicist Paul Dirac in the 1920s that particles have antiparticles that can annihilate one another – a picture that seemed at first just a manner of speaking, but which soon led to the discovery that such antiparticles are real. The disappearance of quantum particles is not annihilation in this same sense, but it is somewhat analogous.
So while the traditional “two places at once” view of superpositions might seem odd enough, “it’s possible that a superposition is a collection of states that are even crazier”, says Elitzur. “Quantum mechanics just tells you about their average.” Post-selection then allows one to isolate and inspect just some of those states at greater resolution, he suggests. With just a hint of nervousness, he ventures to suggest that as a result, measurements on a quantum particle might be contingent on when you look even if the quantum state itself is unchanging in time. You might not find it here when you look – but had you looked a moment later, it might indeed have been there. Such an interpretation of quantum behaviour would be, Elitzur says, “revolutionary” – because it would entail a hitherto unguessed menagerie of real states underlying counter-intuitive quantum phenomena.
The researchers say that to do the actual experiment will require some refining of what quantum routers are capable of, but that they hope to have it ready to roll in three to five months. “The experiment is bound to work”, says Wharton – but he adds that it is also “bound to not convince anyone of anything, since the results are predicted by standard quantum mechanics.”
Elitzur agrees that this picture of a particle’s apparent appearance and disappearance at various points along the trajectory could have been noticed in quantum mechanics decades ago. But it never was. “Isn’t that a good indication of the soundness of the TSVF?” he asks. And if someone thinks they can formulate a different picture of “what is really going on” in this experiment using standard quantum mechanics, he says, “well, let them go ahead!”
Tuesday, April 24, 2018
More on the politics of genes and education
There was never any prospect that my article in New Statesman on genes, intelligence and education would wrap up everything so nicely that there was nothing left to be said. For one thing, aspects of the science are still controversial – I would have liked, among other things, to delve more deeply into the difficulties (impossibility, actually) of cleanly separating genetic from environmental influences on intelligence.
I was, I admit, somewhat hard on Toby Young, while wanting to absolve him from some of the kneejerk accusations that have come his way. He is not some swivel-eyed hard-right eugenicist, and indeed if I have given the impression that he is a crude social Darwinist, as Toby thinks I have, then I have given a wrong impression: his position is more nuanced than that. Toby has been rather gracious in his response in The Spectator.
OK, not entirely – but so it goes. I recognize the temptation to construct artificial narratives, and I fear Toby has done so in his discussion of my article in Prospect. I take his remark on my “bravery” in tackling this subject after writing that piece as a backhanded compliment that implies I was brave to return to a subject after I’d screwed up earlier. In fact, my Prospect piece was not primarily about genes and intelligence anyway. Yes, Stuart Ritchie had some criticisms about that particular aspect of it, but these centred on technical arguments about other studies in the field – in other words, on issues that the specialists themselves are arguing about. Other geneticists, including some who work on intelligence, saw and approved my article. To say that I had to “publish some ‘clarifications’” after “a lot of criticism” is misleading to a rather naughty degree. The reader is meant to infer that these are euphemistic ‘clarifications’, i.e. corrections made in response to errors pointed out. Actually I “published” nothing of the sort – what Toby is referring to are merely some comments I posted on my blog in response to the discussion.
As for the link to the criticisms made by Dominic Cummings: well, I recommend you read them. Not because they add anything of substance to the discussion, but because they are a reminder of what this man, who once wielded considerable behind-the-scenes political power and who has had an inordinate influence on the current predicament of the country, is really like. I still find it chilling.
What’s most striking about Toby’s piece, however, is how political it is. I don’t consider that a criticism, but rather, a vindication of one of the central points of my article in New Statesman: that while the science is fairly (if not entirely) clear, what one concludes from it is highly dependent on political leaning.
This includes a tendency to attribute ideas and views to your political opposites simply because of their persuasion. I must acknowledge the possibility that I did so with Toby. He returns the favour here:
“I suspect the popularity of the ‘personalised learning’ recommendation among the experts in this field – as well as Philip Ball – is partly because they don’t want to antagonise their left-wing colleagues.”
Actually I am sceptical about ‘personalized learning’ based on genetic intelligence measures, and said so in the article, since I see no evidence that they could be effective (although I’m open to the possibility that that might change). The aim of my article, Toby decided, was to reassure my fellow liberals that yes, genes do influence intelligence, but really it’ll be OK.
I find this bizarre – but not as bizarre as the view Toby attributes to Charles Murray, who seems to think that the “left” is either going to have a breakdown over genetic influences on traits or, worse, will decide to embrace genetic social engineering, using CRISPR no less, to eradicate innate differences in some sort of Brave New World scenario. If Murray really thinks that, his grasp of the science is as poor as some experts have said it is. And if in his alternative universe he finds a hard-left government trying to do such things anyway, he’ll find me alongside him opposing it.
You see, what we leftists are told we believe is that everyone is a blank slate, equal in all respects, until society kicks in with its prejudices and inequalities. And we denounce anything to the contrary as crypto-fascism. Steven Pinker, who has pushed the ‘blank slate’ as a myth of the left, weighed in on my article by commenting that even left-leaning magazines like New Statesman are now having to face up to the truth, as though my intention were to confess to past leftie sins of omission.
Now, I fully acknowledge that there have been hysterical reactions to ‘sociobiology’ and to suggestions that human traits may be partly genetically hardwired. And these have often come from the left – indeed, sometimes from the Marxian post-modern intellectuals who Pinker regards as the root of so many modern evils. But such denial is plain silly, and I’m not sure that many left-leaning moderates would disagree, or would be somehow too frightened to say so.
The caricatures Toby creates are grotesque. “It’s now just flat out wrong to think that varying levels of ability and success are solely determined by economic and historical forces”, he says. We agree – but does anyone seriously want to argue otherwise?
“That means it’s a dangerous fantasy”, he continues, “to think that, once you’ve eradicated socio-economic inequality, human nature will flatten out accordingly – that you can return to ‘year zero’, as the Khmer Rouge put it. On the contrary, biological differences between human beings will stubbornly refuse to wither away, which means that an egalitarian society can only be maintained by a brutally coercive state that is constantly intervening to ‘correct’ the inequities of nature.”
But most of us who would like to see an “egalitarian society” don’t mean by that a society in which absolute equality is imposed by the jackboot. We just want to see, for example, fewer people struggle against the inequalities they are born into, while others rise to power and influence on the back of their privileged background. We want to see less tolerance of, and even encouragement of, naked greed that exploits the powerless. We want to see more equality of opportunity. I think we accept that there can never be equality of outcome, at least without unjustified coercion. But we would also like to see reward more closely tied to contribution to society, not simply to what you can get away with. And in fact, while we will differ in degree and probably in methodology, I suspect that in these aspirations we liberal lefties are not so different from Toby Young.
In fact, evidently we do agree on this much:
“The findings of evolutionary psychologists, sociobiologists, cognitive neuroscientists, biosocial criminologists, and so on, [don’t] inevitably lead to Alan Ryan’s ‘apocalyptic conservatism’. On the contrary, I think they’re compatible with a wide range of political arrangements, including – at a pinch – Scandinavian social democracy.”
Which is why it’s baffling to me that Toby thinks we “progressive liberals” should be so disconcerted by the findings of genetics. Disconcerted by the discovery that traits, like height, are partly innate? Disconcerted that a society that tries to impose complete equality of ability on everyone will be a Stalinist dystopia? The implication here seems to be that science has disproved our leftwing delusions, and we’d better face up to that. But all it has ‘disproved’ is some wild, extreme fantasies and some straw men.
Such comments only reinforce my view that all this politicization of the debate gets in the way of actually moving it on. In my experience, the reason many educators and educationalists are not terribly enchanted with studies of the genetic basis of intelligence is not because they think it is some foul plot but because they don’t see it as terribly relevant. It doesn’t help them do their job any better. Now, if that leads them to actually deny the role of genes in intelligence, then they’re barking up the wrong tree. But I think many see it merely as a distraction from the business of trying to improve education. After all, so far genetics has offered next to no suggestions about how to do that – as I said in my article, pretty much all the sensible recommendations that Robert Plomin and Kathryn Asbury make in their book could have been made without the benefit of genetic studies.
Now, one way to read the implications of those studies is that there actually isn’t much that educationalists can do. Take the recent paper by Plomin and colleagues claiming that schools make virtually no additional contribution to outcomes beyond the innate cognitive abilities of their student intake. This is a very interesting finding, but there needs to be careful discussion about what it means. So we shouldn’t worry at all about Ofsted reports of “failing” schools? I doubt if anyone would conclude that, but then how is a school influencing outcomes? When a new head arrives and turns a school around, what has happened? Has the new head somehow just managed to alter the IQ distribution of the intake? I don’t know the answers to these things.
The authors of that paper are not so unwise as to conclude that (presumably beyond some minimal level of competence) “teaching makes no difference to outcomes”. But you can imagine others drawing that conclusion, and one can then understand why some teachers and educators express frustration with this sort of thing. For one thing, the differences teaching and teachers make are not always going to be registered in exam results. As things stood, I was always going to get A’s in my chemistry A levels – but it was the enthusiasm and advocacy of Dr McCarthy and Mr Heasman that inspired me to study the subject at university. I was probably always going to get an A in my English O level, but it was Ms Priske who encouraged me to read Mervyn Peake.
All too often, however, the position of right-leaning commentators on the matter can read like laissez-faire: tinker all you like but it’s not going to make much difference, because you well-meaning liberals are just going to have to accept that some pupils are smarter than others. (So why are Conservative education ministers so keen to keep buggering about with the curriculum?) And if you do manage to level the playing field, you’ll see that even more clearly. And then where will you be, eh, with all your Maoist visions?
I don’t think they really do think like this; at least I don’t think Toby does. I certainly hope not. But that’s why both sides have to stop any posturing about the facts, and get on with figuring out what to make of them. We already know not all kids will do equally well in exams, come what may. But how do we find those who could do better, given the right circumstances? How do we find ways of engaging those pupils with ability but not inclination? How do we find ways of helping those of lower academic ability feel fulfilled rather than discarded in the bottom set? How do we decide, for God’s sake, what is important in an education anyway? These are the kinds of hard questions that teachers and educators have to face every day, and it would be good to see if the knowledge we’re gaining about inherent cognitive abilities could be useful to them, rather than turning it into a political football.
I was, I admit, somewhat hard on Toby Young, while wanting to absolve him from some of the kneejerk accusations that have come his way. He is not some swivel-eyed hard-right eugenicist, and indeed if I have given the impression that he is a crude social Darwinist, as Toby thinks I have, then I have given a wrong impression: his position is more nuanced than that. Toby has been rather gracious in his response in The Spectator.
OK, not entirely – but so it goes. I recognize the temptation to construct artificial narratives, and I fear Toby has done so in his discussion of my article in Prospect. I take his remark on my “bravery” in tackling this subject after writing that piece as a backhanded compliment that implies I was brave to return to a subject after I’d screwed up earlier. In fact, my Prospect piece was not primarily about genes and intelligence anyway. Yes, Stuart Ritchie had some criticisms about that particular aspect of it, but these centred on technical arguments about other studies in the field – in other words, on issues that the specialists themselves are arguing about. Other geneticists, including some who work on intelligence, saw and approved my article. To say that I had to “publish some ‘clarifications’” after “a lot of criticism” is misleading to a rather naughty degree. The reader is meant to infer that these are euphemistic ‘clarifications’, i.e. corrections made in response to errors pointed out. Actually I “published” nothing of the sort – what Toby is referring to are merely some comments I posted on my blog in response to the discussion.
As for the link to the criticisms made by Dominic Cummings: well, I recommend you read them. Not because they add anything of substance to the discussion, but because they are a reminder of what this man, who once wielded considerable behind-the-scenes political power and who has had an inordinate influence on the current predicament of the country, is really like. I still find it chilling.
What’s most striking about Toby’s piece, however, is how political it is. I don’t consider that a criticism, but rather, a vindication of one of the central points of my article in New Statesman: that while the science is fairly (if not entirely) clear, what one concludes from it is highly dependent on political leaning.
This includes a tendency to attribute ideas and views to your political opposites simply because of their persuasion. I must acknowledge the possibility that I did so with Toby. He returns the favour here:
“I suspect the popularity of the ‘personalised learning’ recommendation among the experts in this field – as well as Philip Ball – is partly because they don’t want to antagonise their left-wing colleagues.”
Actually I am sceptical about ‘personalized learning’ based on genetic intelligence measures, and said so in the article, since I see no evidence that they could be effective (although I’m open to the possibility that that might change). The aim of my article, Toby decided, was to reassure my fellow liberals that yes, genes do influence intelligence, but really it’ll be OK.
I find this bizarre – but not as bizarre as the view Toby attributes to Charles Murray, who seems to think that the “left” is either going to have a breakdown over genetic influences on traits or, worse, will decide to embrace genetic social engineering, using CRISPR no less, to eradicate innate differences in some sort of Brave New World scenario. If Murray really thinks that, his grasp of the science is as poor as some experts have said it is. And if in his alternative universe he finds a hard-left government trying to do such things anyway, he’ll find me alongside him opposing it.
You see, what we leftists are told we believe is that everyone is a blank slate, equal in all respects, until society kicks in with its prejudices and inequalities. And we denounce anything to the contrary as crypto-fascism. Steven Pinker, who has pushed the ‘blank slate’ as a myth of the left, weighed in on my article by commenting that even left-leaning magazines like New Statesman are now having to face up to the truth, as though my intention were to confess to past leftie sins of omission.
Now, I fully acknowledge that there have been hysterical reactions to ‘sociobiology’ and to suggestions that human traits may be partly genetically hardwired. And these have often come from the left – indeed, sometimes from the Marxian post-modern intellectuals who Pinker regards as the root of so many modern evils. But such denial is plain silly, and I’m not sure that many left-leaning moderates would disagree, or would be somehow too frightened to say so.
The caricatures Toby creates are grotesque. “It’s now just flat out wrong to think that varying levels of ability and success are solely determined by economic and historical forces”, he says. We agree – but does anyone seriously want to argue otherwise?
“That means it’s a dangerous fantasy”, he continues, “to think that, once you’ve eradicated socio-economic inequality, human nature will flatten out accordingly – that you can return to ‘year zero’, as the Khmer Rouge put it. On the contrary, biological differences between human beings will stubbornly refuse to wither away, which means that an egalitarian society can only be maintained by a brutally coercive state that is constantly intervening to ‘correct’ the inequities of nature.”
But most of us who would like to see an “egalitarian society” don’t mean by that a society in which absolute equality is imposed by the jackboot. We just want to see, for example, fewer people struggle against the inequalities they are born into, while others rise to power and influence on the back of their privileged background. We want to see less tolerance, let alone encouragement, of naked greed that exploits the powerless. We want to see more equality of opportunity. I think we accept that there can never be equality of outcome, at least not without unjustified coercion. But we would also like to see reward more closely tied to contribution to society, not simply to what you can get away with. And in fact, while we will differ in degree and probably in methodology, I suspect that in these aspirations we liberal lefties are not so different from Toby Young.
In fact, evidently we do agree on this much:
“The findings of evolutionary psychologists, sociobiologists, cognitive neuroscientists, biosocial criminologists, and so on, [don’t] inevitably lead to Alan Ryan’s ‘apocalyptic conservatism’. On the contrary, I think they’re compatible with a wide range of political arrangements, including – at a pinch – Scandinavian social democracy.”
Which is why it’s baffling to me that Toby thinks we “progressive liberals” should be so disconcerted by the findings of genetics. Disconcerted by the discovery that traits, like height, are partly innate? Disconcerted that a society that tries to impose complete equality of ability on everyone will be a Stalinist dystopia? The implication here seems to be that science has disproved our leftwing delusions, and we’d better face up to that. But all it has ‘disproved’ is some wild, extreme fantasies and some straw men.
Such comments only reinforce my view that all this politicization of the debate gets in the way of actually moving it on. In my experience, the reason many educators and educationalists are not terribly enchanted with studies of the genetic basis of intelligence is not because they think it is some foul plot but because they don’t see it as terribly relevant. It doesn’t help them do their job any better. Now, if that leads them to actually deny the role of genes in intelligence, then they’re barking up the wrong tree. But I think many see it merely as a distraction from the business of trying to improve education. After all, so far genetics has offered next to no suggestions about how to do that – as I said in my article, pretty much all the sensible recommendations that Robert Plomin and Kathryn Asbury make in their book could have been made without the benefit of genetic studies.
Now, one way to read the implications of those studies is that there actually isn’t much that educationalists can do. Take the recent paper by Plomin and colleagues claiming that schools make virtually no additional contribution to outcomes beyond the innate cognitive abilities of their student intake. This is a very interesting finding, but there needs to be careful discussion about what it means. So we shouldn’t worry at all about Ofsted reports of “failing” schools? I doubt if anyone would conclude that, but then how is a school influencing outcomes? When a new head arrives and turns a school around, what has happened? Has the new head somehow just managed to alter the IQ distribution of the intake? I don’t know the answers to these things.
The authors of that paper are not so unwise as to conclude that (presumably beyond some minimal level of competence) “teaching makes no difference to outcomes”. But you can imagine others drawing that conclusion, and then should understand if some teachers and educators express frustration with this sort of thing. For one thing, the differences teaching and teachers make are not always going to be registered in exam results. As things stood, I was always going to get A’s in my chemistry A levels – but it was the enthusiasm and advocacy of Dr McCarthy and Mr Heasman that inspired me to study the subject at university. I was probably always going to get an A in my English O level, but it was Ms Priske who encouraged me to read Mervyn Peake.
All too often, however, the position of right-leaning commentators on the matter can read like laissez-faire: tinker all you like but it’s not going to make much difference, because you well-meaning liberals are just going to have to accept that some pupils are smarter than others. (So why are Conservative education ministers so keen to keep buggering about with the curriculum?) And if you do manage to level the playing field, you’ll see that even more clearly. And then where will you be, eh, with all your Maoist visions?
I don’t think they really do think like this; at least I don’t think Toby does. I certainly hope not. But that’s why both sides have to stop any posturing about the facts, and get on with figuring out what to make of them. We already know not all kids will do equally well in exams, come what may. But how do we find those who could do better, given the right circumstances? How do we find ways of engaging those pupils with ability but not inclination? How do we find ways of helping those of lower academic ability feel fulfilled rather than discarded in the bottom set? How do we decide, for God’s sake, what is important in an education anyway? These are the kinds of hard questions that teachers and educators have to face every day, and it would be good to see if the knowledge we’re gaining about inherent cognitive abilities could be useful to them, rather than turning it into a political football.