Tuesday, August 13, 2019

Still trying to kill the cat

Some discussion stemming from Erwin Schrödinger’s birthday prompts me to set out briefly why his cat is widely misunderstood and is actually of rather limited value in truly getting to grips with the conundrums of quantum mechanics.

Schrödinger formulated the thought experiment during correspondence with Einstein in which they articulated what they found objectionable in the view of QM formulated by Niels Bohr and his circle (the “Copenhagen interpretation”, which should probably always be given scare quotes since it never corresponded to a unique, clearly adduced position). In that view, one couldn’t speak about the properties of quantum objects until they were measured. Einstein and Schrödinger considered this absurd, and in 1935 Schrödinger enlisted his cat to explain why. Famously, he imagined a situation in which the property of some quantum object, placed in a superposition of states, determines the fate of a cat in a closed box, hidden from the observer until it is opened. In his original exposition he spoke of how, according to Bohr’s view, the wavefunction of the system would, before being observed, “express this by having in it the living and the dead cat (pardon the expression) mixed or smeared out in equal parts.”

This is (even back then) more careful wording than the thought experiment is usually afforded today, talking specifically about the wavefunction and not about the cat. Even so, a key problem with Schrödinger’s cat if taken literally as a thought experiment is that it refers to no well defined property. In principle, Schrödinger could have talked instead about a macroscopic instrument with a pointer that could indicate one of two states. But he wanted an example that was not simply hard to intuit – a pointer in a superposition of two states, say – but was semantically absurd. “Live” and “dead” are not simply two different states of being, but are mutually exclusive. Then the absurdity is all the more apparent.

But in doing so, Schrödinger undermined his scenario as an actual experiment. There is not even a single classical measurement, let alone a quantum state one can write down, that defines “live” or “dead”. Of course, it is not hard to find out if a cat is alive or dead – but it is very hard to identify a single variable whose measurement will allow you to fix a well defined instant where the cat goes from live to dead. Certainly, no one has the slightest idea how to write down a wavefunction for a live or dead cat, and it seems unlikely that we could even imagine what they might look like or what would distinguish them.

This is then not, at any rate, an experiment devised (as is often said) to probe the issue of the quantum-classical boundary. Schrödinger gives no indication that he was thinking about that, except for the fact that he wanted a macroscopic example in order to make the absurdity apparent. It’s now clear how hard it would be to think of a way of keeping a cat sufficiently isolated from the environment to avoid (near-instantaneous) decoherence – the process by which “quantumness” generally becomes “classical” – while being able to sustain it in principle in a living state.
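The effect of decoherence can at least be sketched for a toy two-level system – emphatically not anything like a wavefunction for a cat, and the damping rate `gamma` here is just an illustrative stand-in for environmental coupling:

```python
import numpy as np

# Toy two-state system with purely illustrative labels: |alive> = (1,0), |dead> = (0,1).
psi = np.array([1, 1]) / np.sqrt(2)    # equal superposition of the two states
rho = np.outer(psi, psi.conj())        # density matrix of this pure state

def decohere(rho, gamma, t):
    """Damp the off-diagonal (coherence) terms by exp(-gamma * t),
    a crude model of coupling to an environment."""
    out = rho.astype(complex).copy()
    decay = np.exp(-gamma * t)
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

# With a macroscopically large gamma, the "quantum" off-diagonal terms
# vanish almost instantly, leaving an ordinary 50/50 classical mixture:
rho_late = decohere(rho, gamma=1e9, t=1.0)
print(np.round(rho_late.real, 6))
```

The point of the sketch is only that a superposition differs from a classical mixture by those off-diagonal terms, and that for anything cat-sized the effective damping is so enormous that they disappear near-instantaneously.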

Ignoring all this, popular accounts typically take the thought experiment as a literal one rather than as a metaphor. As a rule, they then go on to (1) misunderstand the nature of superpositions as being “in two states at once”, and (2) misrepresent the Copenhagen interpretation as making ontological statements about a quantum system before measurement, and thereby tell us merrily that, if Bohr and colleagues are right, “the cat is both alive and dead at the same time!”

My suspicion is that, precisely because it is so evocative, Schrödinger’s thought experiment does not merely suffer from these misunderstandings but invites them. And that is why I would be very happy to see it retired.

Of course, there is more discussion of all these things in my book Beyond Weird.

Thursday, April 25, 2019

A Place That Exists Only In Moonlight: a Q&A with Katie Paterson

I have a Q&A with Katie Paterson in the 25 April issue of Nature. There was a lot in Katie’s comments that I didn’t have room for there, so here is the extended interview. The exhibition is wonderful, though sadly it only runs for a couple more weeks. This is science-inspired art at its finest.



___________________________________________________________________________

Scottish artist Katie Paterson is one of the most scientifically engaged of contemporary artists. Her work has been described as “combining a Romantic sensibility with a research-based approach, conceptual rigour and coolly minimalist presentation.” It makes use of meteorites, astronomical observations, fossils and experiments in sound and light to foster a human engagement with scales in time and space that far exceed our everyday experience.

Many of her works have astronomical themes. All the Dead Stars depicts, on a sheet of black etched steel, the location of around 27,000 stars that are no longer visible. For the Dying Star Letters (2011-) she wrote letters of condolence for every star newly recorded as having “died” – a task that got ever more challenging with advances in observing technologies. And History of Darkness (2010-) is an ongoing archive of slides of totally dark areas of the universe at different epochs and locations.

For Future Library (2014-2114), 100 writers including Margaret Atwood and David Mitchell will write stories (one has been commissioned each year since 2014) that will be kept in sealed storage until 2114, when they will be printed on paper made from 1,000 trees being planted in a forest in Norway. Paterson has said of the project that “it questions the present tendency to think in short bursts of time, making decisions only for us living now.”

Some of your works speak to concerns about degradation of the environment and the onset of the Anthropocene – Future Library, for example, and the Vatnajökull project (2007-8) that relays the live sound of meltwater flowing within an Icelandic glacier to listeners who dial in on mobile phones. Do you think that what can seem like an overwhelming problem of environmental change on scales that are hard to contemplate can be made tangible and intelligible through art?

Future Library has a circular ecology built into it: words become enmeshed in growing trees, which, fed by water and light, a century later will become books. It’s a gathering, and the trees spell out time. The artwork is made with simple materials, people, nature and words, and it’s connected to feelings and senses. The phone call I set up to the glacier was an intimate one-to-one experience; listening to a graveyard of ice. The crisis of global warming does not feel intimate when it’s screeching at us through screens and graphs – yet of course it is. Our planet is disappearing. Humans understand suffering, the cycle of birth and dying. We need a contemporary approach to what Stephen Hawking called ‘Cathedral thinking’: far-reaching vision that is humanly relatable.

David Mitchell sees an optimistic message in Future Library (as well as an exercise in trust): it is, he says, “a vote of confidence in the future. Its fruition is predicated upon the ongoing existence of Northern Europe, of libraries, of Norwegian spruces, of books and of readers.” How confident are you that the books will be made?

We have put many methods in place to ensure that the books will be made. Each tree is marked on a computerized system, and the foresters take great care. We are investigating the likely methods of making ink in 100 years’ time. The city of Oslo has taken this artwork to their heart, and even the king and queen of Norway are involved. We have a Trust whose mandate is to “compassionately sustain the artwork for its 100 year duration.” Yes, Future Library is an exercise in trust. This year’s author Han Kang described the project as having an undercurrent of love flowing through it. It concerns me, and certainly says something about our moment in time, that we even question whether it will be possible to make books in just 100 years. We have clearly reached a crisis.

You have said “Time runs through everything I make.” Your work deals with the scales of distance and time that astronomers and geologists have to consider routinely, but which far exceed human intuition. How can we cope with that?

I find professions that routinely deal with long timescales fascinating. For the foresters in Future Library, 100 years is normal. Geologists work across time periods where major extinctions become plots on a map. Astronomers work with spans of time that go beyond everything that has ever lived. However, this routineness may blur the immensity of the concepts at hand. All the same, we can unearth materials fallen from space and comprehend that they go back far beyond humanity’s time on earth. Our technologies are advanced enough to look to a time beyond the Earth’s existence, approaching the Big Bang. Humans have devised and created these images, yet they exceed our capacity to understand them.
For me the route to a different kind of understanding of time is through the imagination. That’s the space that provides the most freedom and openness. My art attempts to deal directly with concepts that I can’t get to otherwise. Perhaps mathematical languages enable something similar. My journey in astronomy has been a search for connection: understanding that we are not separate from the universe, but are intrinsically linked.

Your work Light Bulb to Simulate Moonlight (2008) does exactly what it says on the tin. The bulb was created in collaboration with engineers at OSRAM. Can you explain how it was made?

I approached Dieter Lang, innovation manager and lighting engineer at OSRAM, and asked him to adapt the methods they use to make ‘daylight bulbs’ to recreate moonlight. I wanted to create a whole lifetime of moonlight – a bulb that lasts the length of an average human life. Dieter took light measurements under a full moon in the countryside outside Munich. I’d always imagined the futility of trying to recreate something as ineffable as moonlight, yet I was happy with the result – the light bulbs burn very brightly, a yellowy-blue tinged light, which changes according to your distance to it, just like the moon.



Do you see projects like the “dead stars” works or History of Darkness as attempts to connect us to the vastness of deep space and time? Or might they in fact suggest the futility of trying to keep track of all that has happened in the observable cosmos?

It oscillates somewhere in between. History of Darkness has futility written into it, capturing infinite darkness from across space and time. Each slide could contain millions of worlds, and learning that these images refer to places beyond human life and even the Earth may expand our relationship to these phenomena, and enhance the sense of our fallibility. All the Dead Stars was made in 2009. I’d like to update it in years to come – it might become an expanse of white dots, as telescopes become even more powerful and abundant.
I’m always drawn to the idea of the universe as deep wilderness. No matter how extensive our research and advanced technologies become, we can never truly access the great beyond. I read that our ‘cosmic horizon’ is around 42 billion light years away. What lies beyond, whether finite or infinite, will forever remain outside our understanding. Creating artwork is as much my own way of grappling with the “divine incommensurability” of our position in the universe as an attempt to communicate it to others.

In Earth-Moon-Earth (Moonlight Sonata Reflected from the Surface of the Moon) (2007), you encoded Beethoven’s sonata in Morse code, broadcast it to the surface of the moon in radio waves, and reconstructed the partial score from the reflections. That evidently required some powerful technology. And in 2014 an ESA mission to the International Space Station enabled your project of returning a fragment of meteorite to earth orbit. How do these collaborations with scientific institutions come about?

Earth-Moon-Earth was created with “moon bouncer” radio enthusiasts: underground groups of people sending messages to each other via the moon. I simply wrote them letters. While studying at the Slade [art school in London] I wandered into the Rock & Ice Physics Laboratory next door [in University College London]. They allowed me to play my glacial ice records in their walk-in freezers. That was when I found out quite how easy it was to approach others in different fields. With the moonlight bulb I simply called round a number of lighting companies till I came across the right person. The map of the dead stars involved hundreds of researchers. Some scientists are far more involved than others, from sharing data (NASA gave me the recipe for the scent of Saturn’s moon) to developing the artworks very closely with myself and my studio. [Astronomers] Richard Ellis and Steve Fossey have played an enormous role. I tend to approach people who are experts in niche fields, such as type Ia supernovae, and I ask to draw on their specialization. It’s their passion, so they are generally receptive. This can be a chance to share their knowledge in a way that they haven’t been asked to before, that will become manifest in an artwork engaging with totally different audiences. Of course there can be bafflement, but so far it’s been overwhelmingly positive.
Recently, for the first time, researchers came to me. I received a message from a group of scientists working on a mission proposal to NASA, inviting me to join their team as a ‘space-artist/co-investigator’ inquiring into cosmic dust. I’m extremely happy about this, not only for the creative potential but because the scientists have shown genuine concern that an artist might have something of value to contribute to their research. The group understands that art can be a way to share their knowledge through a different, more experiential, channel.

Your concepts clearly draw on – and indeed derive from – new scientific discoveries and techniques. For example, The Cosmic Spectrum (2019) is a large rotating colour wheel on which segments show the “average colour” of the Universe (as perceived by the human eye) from the Big Bang until the present, partly using data from the 2dF Galaxy Redshift Survey. How do you stay abreast of the latest scientific developments, and what do you tend to look for in them?

I discovered [astronomer] Ivan Baldry’s work on the cosmic spectrum several years ago. Many of my ideas sit on the back burner for years and manifest themselves at later stages. I don’t feel on top of scientific developments, but sometimes just one experience has enough potency to carry projects through years later.
I’m drawn to current investigations into the sunsets on Mars caught by NASA’s Mars Curiosity rover – but equally by botanical records from bygone eras, or the ray of light, contrived centuries ago, that marks the solstice in a Florentine cathedral. Sometimes just looking at titles on the shelves of science libraries can be enough to evoke compelling images. My inspirations have been wide and varied: from looking through telescopes to extremely distant galaxies, to tending a moss garden in a Zen monastery (a universe in itself). I’ve always drawn inspiration from artists, writers, musicians and thinkers whose work has a cosmic dimension: for example, raku ceramicists molding ‘the cosmos in a tea bowl’.

Some of your works exist only as the ongoing collection of ideas in the book A Place That Exists Only in Moonlight (2019). Occasionally they find a striking resonance with concepts that, for a cosmologist or physicist say, might almost seem like a thought experiment or research proposal: “A reset button for the universe pressed only once”, say, or “The speed of light slowed to absolute stillness”. Do you ever find that the scientists you collaborate with or encounter are inspired by your ideas into asking new questions or conducting new investigations themselves?

A Place that Exists Only in Moonlight arose out of a period of heavy production. I wanted to find a ‘lighter’ approach, which is the creative core of everything for me: just the ideas themselves. The book contains artworks to exist in the mind, many of which refer to suns, stars, moons, planets, earthly and cosmic matter. The cover is printed with cosmic dust: a mixture of moondust, dust from Mars, shooting stars, ancient meteorites and asteroids. I wanted the reader to be able to hold and touch the material the words describe while taking them in. The ideas are like thought experiments, Zen koans, Gedankenexperiment. In a way that’s true of all my artworks. What time is it on Venus? What texts will be read by unborn people? Is it possible to plant a forest using saplings from the oldest tree on earth? Can we make ink to be read only under moonlight? I’m always curious. I will post copies of the book to everyone I have worked with, and I would be very happy indeed if they chose to conduct new investigations themselves.

A Place That Exists Only in Moonlight, an exhibition that pairs Paterson’s works with studies of light, sky and landscapes by J. M. W. Turner, is at Turner Contemporary in Margate, UK, until 6 May.

Monday, April 15, 2019

Out of the ashes of Notre Dame



There is no positive spin to put on the fire that has gutted Notre Dame Cathedral, and it could sound idiotic to think otherwise. This was one of the masterpieces of the Gothic era, a place where – as Napoleon allegedly said of Chartres – an atheist would feel uneasy (although this atheist instead felt moved and inspired). I don’t yet know the extent of the damage, but it is hard to imagine that the thirteenth-century northern rose window will have survived the inferno, or that the west front of the building, which has been called “one of the supreme architectural achievements of all time”, will emerge intact. Even if the building is eventually restored – and I am sure it will be – one might wonder what will be the point of a twenty-first-century facsimile, bereft of the spirit and philosophy that motivated the original construction.

And yet… The Gothic cathedrals already undermine notions of “authenticity”. In past ages, they weren’t seen as buildings that had to be maintained in some “pristine” state at all costs. Ever since they were erected, they were modified and redesigned, sometimes with very little care for their integrity. This happened at Notre Dame in the seventeenth century, when the flame of Gothic had long gone out. There was a fashion for plonking grotesque, kitsch marble sculptures in place of medieval statuary, which was indeed the fate of Notre Dame’s high altar. The vandalism went on through the eighteenth century – and that was even before the Revolutionaries did their worst, melting down metal bells, grilles and reliquaries and then using the cathedral as a kind of warehouse. The Gothic revival of Viollet-le-Duc in the nineteenth century had better intentions but not always better taste.

This was ever the way, even in the Middle Ages: bishops would decide that their cathedral had become old-fashioned, and would commission some new extension or renovation that as often as not ended up as a jarring clash of styles. The notion of conservation and a “respect for the old” simply didn’t exist.

And that’s even before we consider the ravages of unintentional damage. Many of the wonders of Gothic architecture only came about as a result of fire in the first place. That is how we got Chartres: thanks to a fire in 1194 that destroyed the building commissioned in the 1020s (after the cathedral before that was burnt down). The conflagration was devastating to the morale of the local people: according to a document written in 1210, they “considered as the totality of their misfortune the fact that they, unhappy wretches, in justice for their own sins, had lost the palace of the Blessed Virgin, the special glory of the city, the showpiece of the entire region, the incomparable house of prayer”. Yet look what they got in its place.

And they had no hesitation in putting a positive spin on it. Another early thirteenth-century account asserted that this was God’s will – or the Virgin’s – all along: “She therefore permitted the old and inadequate church to become the victim of the flames, thus making room for the present basilica, which has no equal throughout the entire world.”

And so it went on throughout the Middle Ages and beyond: the astonishing edifices of the Gothic masters fell or burnt down, got neglected or half-dismembered, were subjected to undignified “improvement”, were ransacked or, later, bombed. Chartres has had catastrophic fires too: no one seems now too bothered that the original roof and allegedly wonderful timberwork beneath it were consumed by flames in 1836, or that the replacement we see today was originally intended only to be temporary.

What happened today at Notre Dame is truly a tragedy. But we shouldn’t forget that these magnificent buildings have always been works in progress, always in flux. Perhaps, in mourning what was lost, we can see it as an opportunity to marvel again at the worldview that produced it: at the ambition, the imagination, the profound union of technical skill and philosophical and spiritual conviction. And we can consider it a worthy challenge to see if we can find some way of matching and honouring that vision.

Wednesday, December 12, 2018

How to write a science best-seller

Everyone knows how science writing works. Academic scientists labour with great diligence to tease nuanced truths from theory and experiment, only for journalists and popularizers to reduce them to simplistic sound bites for the sake of a good story.

I’ve been moved to ponder that narrative by the widespread appearance on Christmas science/non-fiction book lists of two books by leading science academics: Steven Pinker’s Enlightenment Now and Robert Plomin’s Blueprint. I reviewed both books at length in Prospect, and my feelings about them were surprisingly similar: they have some important and valuable things to say, but are both infuriating too in terms of what they fudge, leave out or misrepresent.

I won’t recapitulate those views here. Plomin has taken some flak for the genetic determinism that his book seems to encourage – most recently from Angela Saini in the latest Prospect, whose conclusion I fully endorse: “Scientists… should concentrate on engaging with historians and social scientists to better understand humans not as simple biological machines but as complex, social beings.” Pinker has been excoriated in one or two places (most vigorously, and some would say predictably, by John Gray) for using the “Enlightenment” ahistorically as a concept to be moulded at will to fit his agenda (not to mention his simplistic and obsolete characterization of Nietzsche).

What both books do is precisely what the caricature of science journalism above is said to do, albeit with more style and more graphs: to eschew nuance and caveats in order to tell a story that is only partly true.

And here’s the moral: it works! By delivering a controversial message in this manner, both books have received massive media attention. If they had been more careful, less confrontational, more ready to tell a complex story, I very much doubt that they would have been awarded anything like as much coverage.

Now, my impression here – having spoken to both Pinker and Plomin – is that they both genuinely believe what they wrote. Yes, Pinker did acknowledge that he was using a simplified picture of the Enlightenment for rhetorical ends, and in conversation Plomin and I were broadly in agreement most of the time about what genetic analyses do and don’t show about human behaviour. But I don’t think either of them was setting out cynically to present a distorted message in order to boost book sales. What seems to be happening here is more in the line of a tacit collusion between academics keen to push a particular point of view (nothing wrong with that in itself) and publishers keen to see an eye-catching and controversial message. And we have, of course, been here before (The God Delusion, anyone?).

Stephen Hawking’s book Brief Answers to the Big Questions was also a popular book choice for 2018 that, in a different way, often veered towards the reductively simplistic – though it seemed to fall only to me and my esteemed colleague Michael Brooks to point that out, so far as we were able, in our reviews.

It seems, then, increasingly to be the job of science writers and critics, like Angela and Michael, to hold the “specialists” to account – and not vice versa.

I could nobly declare that I decline to adopt such a tactic to sell my own books. But the truth is that I couldn’t do it even if I wanted to. My instincts are too set against it. For one thing, it would cause me too much discomfort, even pain, to knowingly ignore or cherry-pick historical or scientific facts (which isn’t to say that I won’t sometimes get them wrong), or to decline to enter areas of enquiry that might dilute a catchy thesis. But perhaps even more importantly, I would find simplistic narratives and theses to be just a bit too boring to sustain me through a book project. What interests me is not winning some constructed argument but exploring ideas – including the fascinating ideas in Enlightenment Now and Blueprint.

Wednesday, October 31, 2018

Musical illusions

Here's the English version of my column on music cognition for the current issue of the Italian science magazine Sapere.

_____________________________________________________________

“In studying [optical] illusions”, writes Kathryn Schulz in her book Being Wrong, “scientists aren’t learning how our visual system fails. They are learning how it works.” What Schulz means is that normal visual processing is typically a matter of integrating confusing information into a plausible story that lets us navigate the world. Colour constancy is a good example: the brain “corrects” for variations in brightness so that objects don’t appear to change hue as the lighting conditions alter. The famous “checkerboard shadow” illusion devised by vision scientist Edward Adelson fools this automatic recalibration of perception.


Adelson’s checkerboard illusion. The squares A and B are the same shade of grey.

In this regard as in many others, auditory perception mirrors the visual. The brain often rearranges what we hear to create something that “makes more sense” – with the same potential for creating illusions. Psychologist of music Diana Deutsch has delved deeply into the subject of musical illusions, some of which are presented on a CD released by Philomel in 1995. Several of these have to be heard through stereo headphones: they deliver different pitches to the left and right ears, which the brain reassigns to create coherence. For example, in the “scale illusion” the notes of two simultaneous scales – one ascending, one descending – are sent alternately to each ear. But what one hears is a much simpler pattern: an ascending scale in one ear, descending in the other. Here the brain is choosing to perceive the more likely pattern, even though it’s wrong.
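The construction of the stimulus can be sketched in code – a schematic in MIDI note numbers, not Deutsch’s exact materials:

```python
# C major scale as MIDI note numbers, C4 (60) up to C5 (72) -- illustrative only.
ascending = [60, 62, 64, 65, 67, 69, 71, 72]
descending = ascending[::-1]

# Alternate the two simultaneous scales between the ears: on even beats the
# ascending note goes to the left ear, on odd beats to the right, and vice versa.
left_ear, right_ear = [], []
for i, (up, down) in enumerate(zip(ascending, descending)):
    if i % 2 == 0:
        left_ear.append(up); right_ear.append(down)
    else:
        left_ear.append(down); right_ear.append(up)

print(left_ear)    # [60, 71, 64, 67, 67, 64, 71, 60] -- a jagged zig-zag of leaps
print(right_ear)   # [72, 62, 69, 65, 65, 69, 62, 72] -- likewise
```

Each ear actually receives a sequence of wide jumps, but the brain regroups the notes by pitch proximity into two smooth streams, each assigned wholesale to one ear.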

Another example reveals the limitations of pitch perception. A familiar tune (Yankee Doodle) is played with each note assigned a random octave. It sounds incomprehensible. The test shows that, in deciphering melody, we attend not so much to absolute pitch class (a C or D, say) as to relative pitch: how big pitch jumps are between successive notes. Arnold Schoenberg’s twelve-tone serialism ignored this, which is why the persistence of his “tone rows” is often inaudible.
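The octave-scrambling manipulation can be made concrete (a sketch; the melody fragment and octave range are illustrative): each note keeps its pitch class but loses its interval relationships with its neighbours, which is what melody recognition actually tracks.

```python
import random

# Opening notes of "Yankee Doodle" as MIDI note numbers -- an illustrative fragment.
melody = [60, 60, 62, 64, 60, 64, 62]

def intervals(notes):
    """Relative pitch: the semitone jump between each pair of successive notes."""
    return [b - a for a, b in zip(notes, notes[1:])]

def scramble_octaves(notes, rng):
    """Keep each note's pitch class (n % 12) but move it to a random octave."""
    return [(n % 12) + 12 * rng.randint(3, 7) for n in notes]

rng = random.Random(0)
scrambled = scramble_octaves(melody, rng)

# The pitch classes survive the scrambling...
assert [n % 12 for n in melody] == [n % 12 for n in scrambled]
# ...but the interval pattern -- the thing we hear as "the tune" -- is wrecked.
print(intervals(melody))    # [0, 2, 2, -4, 4, -2]
print(intervals(scrambled))
```

The scrambled intervals differ from the originals only by multiples of an octave, yet that is enough to make the tune unrecognizable – which is the experiment’s point about relative pitch.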

Perhaps the strangest thing about optical illusions is that we enjoy them, even if – indeed, because – we find them perplexing. Instead of being upset by the brain’s inability to “get it right”, we are apt to laugh – not a common response to wrongness, although it’s actually how a lot of comedy works. You might, then, expect to find musical illusions put to pleasurable use in music, especially by jokers like Mozart. But they are rather rare, maybe because we simply won’t notice them unless we see the score. Something like the scale illusion is used, however, in the second movement of Rachmaninov’s Second Suite for Two Pianos, where two sets of seesawing notes on each piano are heard as two sets of single repeated notes. It seems likely that Rachmaninov (not noted for jocularity) wasn’t just having fun: it’s simply easier to play these rapid quavers using pitch jumps than by repeating the same note.

Monday, October 29, 2018

Why brief answers are sometimes not enough

I reviewed Stephen Hawking's last book Brief Answers to the Big Questions for New Scientist, but it needed shortening and, in the print version, didn't come out as I'd intended. Here's the original.

_____________________________________________________________________

Most people as famous as Stephen Hawking have their character interrogated with forensic intimacy. But Hawking’s personality was in its way as insulated as the Queen’s, impermeably fortified by the role allotted to him. There’s a hint in Brief Answers that he knew this: “I fit the stereotype of a disabled genius”, he writes. Unworldly intelligence, a wry sense of humour, and tremendous resilience against adversity: that seemed to suffice for the celebrity in the wheelchair with the computerized voice (itself another part of the armour, of course).

It made me uneasy though. The public Hawking was that stereotype, and while it was delightful to see how he demolished the does-he-take-sugar laziness that links physical with mental disability, he did so only by taking matters to the other extreme ("such a mind in such a body!"). It perhaps suited Hawking that the media were content with the cliché – he didn’t give much impression of caring for the touchy-feely. (Eddie Redmayne, who played Hawking in the 2014 biopic The Theory of Everything, reminds us in his foreword that the physicist would have preferred the film to have “more physics and fewer feelings”.) But his story suggests we still have some way to go in integrating people with disabilities into able-bodied society.

I approached this book, a collection of Hawking’s later essays on “big questions”, with some trepidation. You know you won’t go wrong with the cosmology, relativity and quantum mechanics, but in other areas, even within science, it’s touch and go. The scientific essays supply a series of now-familiar Greatest Hits: his work with Roger Penrose on gravitational singularities and their relation to the Big Bang; his realization that black holes will emit energy (Hawking radiation) from their event horizons; his speculations about the origin of the universe in a chance quantum fluctuation; the debate – still unresolved – about whether black holes destroy information. Hawking, as Kip Thorne reminds us in his introduction, helped to integrate several of the central concepts of physics: general relativity, quantum mechanics, thermodynamics and information theory. It’s a phenomenal body of work.

Sometimes there’s a plainness to his prose that can be touching even while it sounds like an anodyne self-help manual: “Be brave, be curious, be determined, overcome the odds. It can be done.” Who would argue with Hawking’s right to that sentiment? His plea for the importance of inspirational teaching, his concerns about climate change and environmental degradation, his contempt for Trump and the regressive aspects of Brexit, and (albeit not here) his championing of the NHS, sometimes made you glad to have Hawking on your side. People listened.

A common danger with collections of this kind is repetition, which the editors have been curiously unconcerned to avoid. But the recurring and familiar passages are in themselves quite telling, for they show Hawking curating his image: the boy who was always taking things apart but not always managing to put them back together again, the man who told us to “look up at the stars and not down at your feet.”

There’s no doubt that Hawking cared passionately about the future of humankind and the potential of science to improve it. His advocacy resembles the old-fashioned boosterism into which H. G. Wells often strayed in later life, tempered like Wells by an awareness of the destructive potential of technologies in malicious or plain foolish hands. But what are Hawking’s resources for developing that agenda? One of the most striking features of this book is the lack of extra-curricular references – to art, music, philosophy, literature, say. This would not matter so much (though it’s a bit odd) if it were not that the scope of some of the pieces exposes these gaps painfully.

Beginning an essay called “Is There a God?” by saying that “people will always cling to religion, because it gives comfort, and they do not trust or understand science” tells you pretty much what to expect from it, and you’d not be wrong. God, as no theologian ever said, is all about explaining the origin of the universe. And most people, Hawking tells us, define God as “a human-like being, with whom one can have a personal relationship.” I suspect “most people’s” views of what a molecule or light is would bear similarly scant resemblance to what well-informed folks say on the matter, but I doubt Hawking would give those views precedence.

As for history, try this: “People might well have argued that it was a waste of money to send Columbus on a wild goose chase. Yet the discovery of the New World made a profound difference to the Old. Just think, we wouldn’t have had the Big Mac or KFC.” The lame joke might have been just about tolerable if one didn’t sense it is there because Hawking could think of nothing to put in its place. This remark, as you might guess, is part of a defense of human space exploration, during which Hawking demonstrates no more inclination to probe the real reasons for the space race in the 1960s than he does to examine what Columbus was all about. He feels that the human race has no future if we don’t colonize space, although it isn’t clear why his generally dim view of our self-destructive idiocies becomes so rosy once we are on other worlds. Maybe the answer lies with the fact that here, as elsewhere, his main point of reference is Star Trek. But I suspect he knew he was preaching to the converted, so that mere assertion (“We have no other option”) was all he needed in lieu of argument.

There’s a glib insouciance to some of the other scientific speculations too. “If there is intelligent life elsewhere”, he writes, “it must be a very long way away otherwise it would have visited earth by now. And I think we would’ve known if we had been visited; it would be like the film Independence Day.” Assertion again replaces explanation in Hawking’s assumption, apropos artificial intelligence, that the human brain is just like a computer, as if this were not hotly disputed among neuroscientists. Here too, his vision seems mainly informed by the science fiction within easiest reach: his fears for the dangers of AI conjure up the Terminator series’ Skynet and tropes of supercomputers declaring themselves God and fusing the plug. Science fiction has plenty to tell us about our fears of the present, but probably rather less about the realities of the future.

It is best, too, not to rely on Hawking’s history of science, which for example parrots the myth of Max Planck postulating the quantum to avoid the ‘ultraviolet catastrophe’ of blackbody radiation. (Planck did not mention it.) Don’t expect more than the usual clichés: here comes Feynman, playing the bongos in a strip joint (what a guy!), there goes Einstein riding on a light wave.

This is all, in a sense, so very unfair. Hawking was a great scientist who had a remarkable life, but in another universe without motor neurone disease (well, he did like the Many Worlds interpretation of quantum mechanics) we’d have no reason to confer such authority on his thoughts about all and sundry, or to notice or care that he entered the peculiar time-warp that is Stringfellows “gentlemen’s club”. We would not deny him the right to his ordinariness, and we would see his occasional brash arrogance and egotism for no more or less than it is.

There’s every reason to believe that Hawking enjoyed his fame, and that’s a cheering thought. The Hawking phenomenon is our problem, not his. He liked to remind us that he was born on the same date that Galileo died, but it’s Brecht’s Galileo that comes to mind here: to paraphrase, unhappy is the land that needs a guru.

Thursday, September 13, 2018

The "dark woman of DNA" goes missing again

A curious incident took place at the excellent "Schrödinger at 75: The Future of Life" meeting in Dublin last week, and I’ve been pondering it ever since.

One of the eminent attendees was James Watson, who was, naturally, present at the conference dinner. And one of the movers behind the meeting gave an impromptu (so it seemed) speech that acknowledged Watson’s work with Crick and its connection to Schrödinger’s “aperiodic crystal.” Fair enough.

Then he added that he wanted to recognize also the contribution of the “third man” of DNA, Maurice Wilkins – and who could cavil at that, given Wilkins’ Dublin roots? Wilkins, after all, was another physicist-turned-biologist who credited Schrödinger’s book What Is Life? as an important influence.

I imagined at this stage we might get a nod to the “fourth person” of DNA, Rosalind Franklin, whose role was also central but was of course for some years under-recognized. But no. Instead the speaker spoke of how it was when Wilkins showed Watson his X-ray photo of DNA that Watson became convinced crystallography could crack the structure.

You could hear a ripple go around the dining hall. Wilkins’ photo?! Wasn’t it Franklin’s photo – Photo 51 – that provided Watson and Crick with the crucial part of the puzzle?

Well, yes and no. It isn’t entirely clear who actually took Photo 51; it seems more likely to have been Franklin’s student Ray Gosling. Neither is it completely clear that this photo was quite so pivotal to Watson and Crick’s success. Neither, indeed, is it really the case that Wilkins did something terribly unethical in showing Watson the photo (which was in any event from the Franklin-Gosling effort), given that it had already been publicly displayed. Matthew Cobb examines this part of the story carefully and thoroughly in his book Life’s Greatest Secret (see also here and here).

But nevertheless. Watson’s appalling treatment of Franklin, the controversy about Photo 51, and the sad fact that Franklin died before a Nobel became a possibility, are all so well known that it seemed bizarre, to the point of being confrontational, to make no mention of Franklin at all in this context, and right in front of Watson himself to boot.

I figured that the attribution of “the photo” to Wilkins was so peculiar that it must have some explanation other than error or denial. I don’t know the details of the story well enough, but I told myself that the speaker must be referring to some other, earlier occasion when Wilkins had shown Watson more preliminary crystallographic work of his own that persuaded Watson this was an avenue worth pursuing.

And perhaps that is true – I simply don’t know. But if so, to refer to it in this way, when everyone is going to think of the notorious Photo 51 incident, is at best perverse and at worst a deliberate provocation. Even Adam Rutherford, sitting next to me, who knows much more about the story of DNA than I or most other people do, was puzzled about what the speaker could possibly have meant.

Well, with Franklin’s name still conspicuous by its absence, Watson stood up to take a bow, which prompts me to make a request of scientific meeting and dinner organizers. Please do your attendees the favour of not forcing them to decide whether to reluctantly applaud Watson or join the embarrassed cohort of those who feel they can no longer do so in good conscience.