Thursday, April 25, 2019

A Place That Exists Only In Moonlight: a Q&A with Katie Paterson

I have a Q&A with Katie Paterson in the 25 April issue of Nature. There was a lot in Katie’s comments that I didn’t have room for there, so here is the extended interview. The exhibition is wonderful, though sadly it only runs for a couple more weeks. This is science-inspired art at its finest.


Scottish artist Katie Paterson is one of the most scientifically engaged of contemporary artists. Her work has been described as “combining a Romantic sensibility with a research-based approach, conceptual rigour and coolly minimalist presentation.” It makes use of meteorites, astronomical observations, fossils and experiments in sound and light to foster a human engagement with scales in time and space that far exceed our everyday experience.

Many of her works have astronomical themes. All the Dead Stars depicts, on a sheet of black etched steel, the location of around 27,000 stars that are no longer visible. For the Dying Star Letters (2011-) she wrote letters of condolence for every star newly recorded as having “died” – a task that got ever more challenging with advances in observing technologies. And History of Darkness (2010-) is an ongoing archive of slides of totally dark areas of the universe at different epochs and locations.

For Future Library (2014-2114), 100 writers including Margaret Atwood and David Mitchell will write stories (one has been commissioned each year since 2014) that will be kept in sealed storage until 2114, when they will be printed on paper made from 1,000 trees planted in a forest in Norway. Paterson has said of the project that “it questions the present tendency to think in short bursts of time, making decisions only for us living now.”

Some of your works speak to concerns about degradation of the environment and the onset of the Anthropocene – Future Library, for example, and the Vatnajökull project (2007-8) that relays the live sound of meltwater flowing within an Icelandic glacier to listeners who dial in on mobile phones. Do you think that what can seem like an overwhelming problem of environmental change on scales that are hard to contemplate can be made tangible and intelligible through art?

Future Library has a circular ecology built into it: words become enmeshed in growing trees, which, fed by water and light, a century later will become books. It’s a gathering, and the trees spell out time. The artwork is made with simple materials, people, nature and words, and it’s connected to feelings and senses. The phone call I set up to the glacier was an intimate one-to-one experience; listening to a graveyard of ice. The crisis of global warming does not feel intimate when it’s screeching at us through screens and graphs – yet of course it is. Our planet is disappearing. Humans understand suffering, the cycle of birth and dying. We need a contemporary approach to what Stephen Hawking called ‘Cathedral thinking’: far-reaching vision that is humanly relatable.

David Mitchell sees an optimistic message in Future Library (as well as an exercise in trust): it is, he says, “a vote of confidence in the future. Its fruition is predicated upon the ongoing existence of Northern Europe, of libraries, of Norwegian spruces, of books and of readers.” How confident are you that the books will be made?

We have put many methods in place to ensure that the books will be made. Each tree is marked on a computerized system, and the foresters take great care. We are investigating the likely methods of making ink in 100 years’ time. The city of Oslo has taken this artwork to their heart, and even the king and queen of Norway are involved. We have a Trust whose mandate is to “compassionately sustain the artwork for its 100 year duration.” Yes, Future Library is an exercise in trust. This year’s author Han Kang described the project as having an undercurrent of love flowing through it. It concerns me, and certainly says something about our moment in time, that we even question whether it will be possible to make books in just 100 years. We have clearly reached a crisis.

You have said “Time runs through everything I make.” Your work deals with the scales of distance and time that astronomers and geologists have to consider routinely, but which far exceed human intuition. How can we cope with that?

I find professions that routinely deal with long timescales fascinating. For the foresters in Future Library, 100 years is normal. Geologists work across time periods where major extinctions become plots on a map. Astronomers work with spans of time that go beyond everything that has ever lived. However, this routineness may blur the immensity of the concepts at hand. All the same, we can unearth materials fallen from space and comprehend that they go back far beyond humanity’s time on earth. Our technologies are advanced enough to look to a time beyond the Earth’s existence, approaching the Big Bang. Humans have devised and created these images, yet they exceed our capacity to understand them.
For me the route to a different kind of understanding of time is through the imagination. That’s the space that provides the most freedom and openness. My art attempts to deal directly with concepts that I can’t get to otherwise. Perhaps mathematical languages enable something similar. My journey in astronomy has been a search for connection: understanding that we are not separate from the universe, but are intrinsically linked.

Your work Light Bulb to Simulate Moonlight (2008) does exactly what it says on the tin. The bulb was created in collaboration with engineers at OSRAM. Can you explain how it was made?

I approached Dieter Lang, innovation manager and lighting engineer at OSRAM, and asked him to adapt the methods they use to make ‘daylight bulbs’ to recreate moonlight. I wanted to create a whole lifetime of moonlight – a bulb that lasts the length of an average human life. Dieter took light measurements under a full moon in the countryside outside Munich. I’d always imagined the futility of trying to recreate something as ineffable as moonlight, yet I was happy with the result – the light bulbs burn very brightly, a yellowy-blue tinged light, which changes according to your distance from it, just like the moon.

Do you see projects like the “dead stars” works or History of Darkness as attempts to connect us to the vastness of deep space and time? Or might they in fact suggest the futility of trying to keep track of all that has happened in the observable cosmos?

It oscillates somewhere in between. History of Darkness has futility written into it, capturing infinite darkness from across space and time. Each slide could contain millions of worlds, and learning that these images refer to places beyond human life and even the Earth may expand our relationship to these phenomena, and enhance the sense of our fallibility. All the Dead Stars was made in 2009. I’d like to update it in years to come – it might become an expanse of white dots, as telescopes become even more powerful and abundant.
I’m always drawn to the idea of the universe as deep wilderness. No matter how extensive our research and advanced technologies become, we can never ever truly access the great beyond. I read that our ‘cosmic horizon’ is around 42 billion light years away. What lies beyond, whether finite or infinite, will forever remain outside our understanding. Creating artwork is as much my own way of grappling with the “divine incommensurability” of our position in the universe as an attempt to communicate it to others.

In Earth-Moon-Earth (Moonlight Sonata Reflected from the Surface of the Moon) (2007), you encoded Beethoven’s sonata in Morse code, broadcast it to the surface of the moon in radio waves, and reconstructed the partial score from the reflections. That evidently required some powerful technology. And in 2014 an ESA mission to the International Space Station enabled your project of returning a fragment of meteorite to Earth orbit. How do these collaborations with scientific institutions come about?

Earth-Moon-Earth was created with “moon bouncer” radio enthusiasts: underground groups of people sending messages to each other via the moon. I simply wrote them letters. While studying at the Slade [art school in London] I wandered into the Rock & Ice Physics Laboratory next door [in University College London]. They allowed me to play my glacial ice records in their walk-in freezers. That was when I found out quite how easy it was to approach others in different fields. With the moonlight bulb I simply called round a number of lighting companies till I came across the right person. The map of the dead stars involved hundreds of researchers. Some scientists are far more involved than others, from sharing data (NASA gave me the recipe for the scent of Saturn’s moon) to developing the artworks very closely with me and my studio. [Astronomers] Richard Ellis and Steve Fossey have played an enormous role. I tend to approach people who are experts in niche fields, such as type Ia supernovae, and I ask to draw on their specialization. It’s their passion, so they are generally receptive. This can be a chance to share their knowledge in a way that they haven’t been asked to before, that will become manifest in an artwork engaging with totally different audiences. Of course there can be bafflement, but so far it’s been overwhelmingly positive.
Recently, for the first time, researchers came to me. I received a message from a group of scientists working on a mission proposal to NASA, inviting me to join their team as a ‘space-artist/co-investigator’ inquiring into cosmic dust. I’m extremely happy about this, not only for the creative potential but because the scientists have shown genuine concern that an artist might have something of value to contribute to their research. The group understands that art can be a way to share their knowledge through a different, more experiential, channel.

Your concepts clearly draw on – and indeed derive from – new scientific discoveries and techniques. For example, The Cosmic Spectrum (2019) is a large rotating colour wheel on which segments show the “average colour” of the Universe (as perceived by the human eye) from the Big Bang until the present, partly using data from the 2dF Galaxy Redshift Survey. How do you stay abreast of the latest scientific developments, and what do you tend to look for in them?

I discovered [astronomer] Ivan Baldry’s work on the cosmic spectrum several years ago. Many of my ideas sit on the back burner for years and manifest themselves at later stages. I don’t feel on top of scientific developments, but sometimes just one experience has enough potency to carry projects through years later.
I’m drawn to current investigations into the sunsets on Mars caught by NASA’s Curiosity rover – but equally by botanical records from bygone eras, or the ray of light that marks the solstice in a Florentine cathedral built centuries ago. Sometimes just looking at titles on the shelves of science libraries can be enough to evoke compelling images. My inspirations have been wide and varied: from looking through telescopes to extremely distant galaxies, to tending a moss garden in a Zen monastery (a universe in itself). I’ve always drawn inspiration from artists, writers, musicians and thinkers whose work has a cosmic dimension: for example, raku ceramicists molding ‘the cosmos in a tea bowl’.

Some of your works exist only as the ongoing collection of ideas in the book A Place That Exists Only in Moonlight (2019). Occasionally they find a striking resonance with concepts that, for a cosmologist or physicist, say, might almost seem like a thought experiment or research proposal: “A reset button for the universe pressed only once”, say, or “The speed of light slowed to absolute stillness”. Do you ever find that the scientists you collaborate with or encounter are inspired by your ideas into asking new questions or conducting new investigations themselves?

A Place that Exists Only in Moonlight arose out of a period of heavy production. I wanted to find a ‘lighter’ approach, which is the creative core of everything for me; just the ideas themselves. The book contains artworks to exist in the mind, many of which refer to suns, stars, moons, planets, earthly and cosmic matter. The cover is printed with cosmic dust: a mixture of moondust, dust from Mars, shooting stars, ancient meteorites and asteroids. I wanted the reader to be able to hold and touch the material the words describe, while taking them in. The ideas are like thought experiments, Zen koans, Gedankenexperimente. In a way that’s true of all my artworks. What time is it on Venus? What texts will be read by unborn people? Is it possible to plant a forest using saplings from the oldest tree on earth? Can we make ink to be read only under moonlight? I’m always curious. I will post copies of the book to everyone I have worked with, and I would be very happy indeed if they chose to conduct new investigations themselves.

A Place That Exists Only in Moonlight, an exhibition that pairs Paterson’s works with studies of light, sky and landscapes by J. M. W. Turner, is at Turner Contemporary in Margate, UK, until 6 May.

Monday, April 15, 2019

Out of the ashes of Notre Dame

There is no positive spin to put on the fire that has gutted Notre Dame Cathedral, and it would sound idiotic to suggest otherwise. This was one of the masterpieces of the Gothic era, a place where – as Napoleon allegedly said of Chartres – an atheist would feel uneasy (although this atheist instead felt moved and inspired). I don’t yet know the extent of the damage, but it is hard to imagine that the thirteenth-century northern rose window will have survived the inferno, or that the west front of the building, which has been called “one of the supreme architectural achievements of all time”, will emerge intact. Even if the building is eventually restored – and I am sure it will be – one might wonder what will be the point of a twenty-first-century facsimile, bereft of the spirit and philosophy that motivated the original construction.

And yet… The Gothic cathedrals already undermine notions of “authenticity”. In past ages, they weren’t seen as buildings that had to be maintained in some “pristine” state at all costs. Ever since they were erected, they were modified and redesigned, sometimes with very little care for their integrity. This happened at Notre Dame in the seventeenth century, when the flame of Gothic had long gone out. There was a fashion for plonking grotesque, kitsch marble sculptures in place of medieval statuary, which was indeed the fate of Notre Dame’s high altar. The vandalism went on through the eighteenth century – and that was even before the Revolutionaries did their worst, melting down metal bells, grilles and reliquaries and then using the cathedral as a kind of warehouse. The Gothic revival of Viollet-le-Duc in the nineteenth century had better intentions but not always better taste.

This was ever the way, even in the Middle Ages: bishops would decide that their cathedral had become old-fashioned, and would commission some new extension or renovation that as often as not ended up as a jarring clash of styles. The notion of conservation and a “respect for the old” simply didn’t exist.

And that’s even before we consider the ravages of unintentional damage. Many of the wonders of Gothic architecture only came about as a result of fire in the first place. That is how we got Chartres: thanks to a fire in 1194 that destroyed the building commissioned in the 1020s (after the cathedral before that was burnt down). The conflagration was devastating to the morale of the local people: according to a document written in 1210, they “considered as the totality of their misfortune the fact that they, unhappy wretches, in justice for their own sins, had lost the palace of the Blessed Virgin, the special glory of the city, the showpiece of the entire region, the incomparable house of prayer”. Yet look what they got in its place.

And they had no hesitation in putting a positive spin on it. Another early thirteenth-century account asserted that this was God’s will – or the Virgin’s – all along: “She therefore permitted the old and inadequate church to become the victim of the flames, thus making room for the present basilica, which has no equal throughout the entire world.”

And so it went on throughout the Middle Ages and beyond: the astonishing edifices of the Gothic masters fell or burnt down, got neglected or half-dismembered, were subjected to undignified “improvement”, were ransacked or, later, bombed. Chartres has had catastrophic fires too: no one seems now too bothered that the original roof and allegedly wonderful timberwork beneath it were consumed by flames in 1836, or that the replacement we see today was originally intended only to be temporary.

What happened today at Notre Dame is truly a tragedy. But we shouldn’t forget that these magnificent buildings have always been works in progress, always in flux. Perhaps, in mourning what was lost, we can see it as an opportunity to marvel again at the worldview that produced it: at the ambition, the imagination, the profound union of technical skill and philosophical and spiritual conviction. And we can consider it a worthy challenge to see if we can find some way of matching and honouring that vision.

Wednesday, December 12, 2018

How to write a science best-seller

Everyone knows how science writing works. Academic scientists labour with great diligence to tease nuanced truths from theory and experiment, only for journalists and popularizers to reduce them to simplistic sound bites for the sake of a good story.

I’ve been moved to ponder that narrative by the widespread appearance on Christmas science/non-fiction book lists of two books by leading science academics: Steven Pinker’s Enlightenment Now and Robert Plomin’s Blueprint. I reviewed both books at length in Prospect, and my feelings about both of them were surprisingly similar: they have some important and valuable things to say, but are both infuriating too in terms of what they fudge, leave out or misrepresent.

I won’t recapitulate those views here. Plomin has taken some flak for the genetic determinism that his book seems to encourage – most recently from Angela Saini in the latest Prospect, whose conclusion I fully endorse: “Scientists… should concentrate on engaging with historians and social scientists to better understand humans not as simple biological machines but as complex, social beings.” Pinker has been excoriated in one or two places (most vigorously, and some would say predictably, by John Gray) for using the “Enlightenment” ahistorically as a concept to be moulded at will to fit his agenda (not to mention his simplistic and obsolete characterization of Nietzsche).

What both books do is precisely what the caricature of science journalism above is said to do, albeit with more style and more graphs: to eschew nuance and caveats in order to tell a story that is only partly true.

And here’s the moral: it works! By delivering a controversial message in this manner, both books have received massive media attention. If they had been more careful, less confrontational, more ready to tell a complex story, I very much doubt that they would have been awarded anything like as much coverage.

Now, my impression here – having spoken to both Pinker and Plomin – is that they both genuinely believe what they wrote. Yes, Pinker did acknowledge that he was using a simplified picture of the Enlightenment for rhetorical ends, and in conversation Plomin and I were broadly in agreement most of the time about what genetic analyses do and don’t show about human behaviour. But I don’t think either of them was setting out cynically to present a distorted message in order to boost book sales. What seems to be happening here is more in the line of a tacit collusion between academics keen to push a particular point of view (nothing wrong with that in itself) and publishers keen to see an eye-catching and controversial message. And we have, of course, been here before (The God Delusion, anyone?).

Stephen Hawking’s book Brief Answers to the Big Questions was also a popular book choice for 2018 that, in a different way, often veered towards the reductively simplistic, though it seemed to fall only to me (so far as I was able) and my esteemed colleague Michael Brooks to point that out in our reviews.

It seems, then, increasingly to be the job of science writers and critics, like Angela and Michael, to hold the “specialists” to account – and not vice versa.

I could nobly declare that I decline to adopt such a tactic to sell my own books. But the truth is that I couldn’t do it even if I wanted to. My instincts are too set against it. For one thing, it would cause me too much discomfort, even pain, to knowingly ignore or cherry-pick historical or scientific facts (which isn’t to say that I won’t sometimes get them wrong), or to decline to enter areas of enquiry that might dilute a catchy thesis. But perhaps even more importantly, I would find simplistic narratives and theses to be just a bit too boring to sustain me through a book project. What interests me is not winning some constructed argument but exploring ideas – including the fascinating ideas in Enlightenment Now and Blueprint.

Wednesday, October 31, 2018

Musical illusions

Here's the English version of my column on music cognition for the current issue of the Italian science magazine Sapere.


“In studying [optical] illusions”, writes Kathryn Schulz in her book Being Wrong, “scientists aren’t learning how our visual system fails. They are learning how it works.” What Schulz means is that normal visual processing is typically a matter of integrating confusing information into a plausible story that lets us navigate the world. Colour constancy is a good example: the brain “corrects” for variations in brightness so that objects don’t appear to change hue as the lighting conditions alter. The famous “checkerboard shadow” illusion devised by vision scientist Edward Adelson fools this automatic recalibration of perception.

Adelson’s checkerboard illusion. The squares A and B are the same shade of grey.

In this regard as in many others, auditory perception mirrors the visual. The brain often rearranges what we hear to create something that “makes more sense” – with the same potential for creating illusions. Psychologist of music Diana Deutsch has delved deeply into the subject of musical illusions, some of which are presented on a CD released by Philomel in 1995. Several of these have to be heard through stereo headphones: they deliver different pitches to the left and right ears, which the brain reassigns to create coherence. For example, in the “scale illusion” the notes of two simultaneous scales – one ascending, one descending – are sent alternately to each ear. But what one hears is a much simpler pattern: an ascending scale in one ear, descending in the other. Here the brain is choosing to perceive the more likely pattern, even though it’s wrong.
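The channel assignment behind the scale illusion can be sketched in a few lines. This is a hypothetical illustration rather than Deutsch’s actual stimulus: two simultaneous C major scales, one ascending and one descending, with successive notes alternated between the left and right channels, so that each ear actually receives a zigzag pattern rather than the smooth scale it reports hearing.

```python
# A toy sketch of the "scale illusion" note assignment: successive notes
# of an ascending and a descending scale are sent to alternating ears.
ascending = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
descending = list(reversed(ascending))

left, right = [], []
for i, (up, down) in enumerate(zip(ascending, descending)):
    if i % 2 == 0:
        # even steps: ascending note to the left ear, descending to the right
        left.append(up)
        right.append(down)
    else:
        # odd steps: the channels swap
        left.append(down)
        right.append(up)

print(left)   # what the left ear actually receives: a zigzag, not a scale
print(right)
```

Played over headphones, neither channel contains a complete scale; the smooth ascending and descending lines exist only in the listener’s reconstruction.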

Another example reveals the limitations of pitch perception. A familiar tune (Yankee Doodle) is played with each note assigned a random octave. It sounds incomprehensible. The test shows that, in deciphering melody, we attend not so much to absolute pitch class (a C or D, say) as to relative pitch: how big pitch jumps are between successive notes. Arnold Schoenberg’s twelve-tone serialism ignored this, which is why the persistence of his “tone rows” is often inaudible.
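The octave-scrambling procedure is easy to sketch. The snippet below is a hedged toy version (the note names and the rough opening of Yankee Doodle are my own approximations, not Deutsch’s stimulus): each note keeps its pitch class but is assigned a random octave, which destroys the pattern of interval sizes that melody recognition relies on.

```python
import random

# Toy version of the octave-scrambled melody test: preserve each note's
# pitch class but randomize its octave, wrecking the relative-pitch contour.
yankee_doodle = ["C4", "C4", "D4", "E4", "C4", "E4", "D4"]  # opening, roughly

def scramble_octaves(notes, octaves=(3, 4, 5), rng=random):
    scrambled = []
    for note in notes:
        pitch_class = note[:-1]  # e.g. "C4" -> "C"
        scrambled.append(pitch_class + str(rng.choice(octaves)))
    return scrambled

print(scramble_octaves(yankee_doodle))
```

Every pitch class survives the scrambling, yet the tune becomes unrecognizable by ear, which is the point of the demonstration.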

Perhaps the strangest thing about optical illusions is that we enjoy them, even if – indeed, because – we find them perplexing. Instead of being upset by the brain’s inability to “get it right”, we are apt to laugh – not a common response to wrongness, although it’s actually how a lot of comedy works. You might, then, expect to find musical illusions put to pleasurable use in music, especially by jokers like Mozart. But they are rather rare, maybe because we simply won’t notice them unless we see the score. Something like the scale illusion is used, however, in the second movement of Rachmaninov’s Second Suite for Two Pianos, where two sets of seesawing notes on each piano are heard as two sets of single repeated notes. It seems likely that Rachmaninov (not noted for jocularity) wasn’t just having fun – it’s merely easier to play these rapid quavers using pitch jumps rather than on the same note.

Monday, October 29, 2018

Why brief answers are sometimes not enough

I reviewed Stephen Hawking's last book Brief Answers to the Big Questions for New Scientist, but it needed shortening and, in the print version, didn't come out as I'd intended. Here's the original.


Most people as famous as Stephen Hawking have their character interrogated with forensic intimacy. But Hawking’s personality was in its way as insulated as the Queen’s, impermeably fortified by the role allotted to him. There’s a hint in Brief Answers that he knew this: “I fit the stereotype of a disabled genius”, he writes. Unworldly intelligence, a wry sense of humour, and tremendous resilience against adversity: that seemed to suffice for the celebrity in the wheelchair with the computerized voice (itself another part of the armour, of course).

It made me uneasy though. The public Hawking was that stereotype, and while it was delightful to see how he demolished the does-he-take-sugar laziness that links physical with mental disability, he did so only by taking matters to the other extreme ("such a mind in such a body!"). It perhaps suited Hawking that the media were content with the cliché – he didn’t give much impression of caring for the touchy-feely. (Eddie Redmayne, who played Hawking in the 2014 biopic The Theory of Everything, reminds us in his foreword that the physicist would have preferred the film to have “more physics and fewer feelings”.) But his story suggests we still have some way to go in integrating people with disabilities into able-bodied society.

I approached this book, a collection of Hawking’s later essays on “big questions”, with some trepidation. You know you won’t go wrong with the cosmology, relativity and quantum mechanics, but in other areas, even within science, it’s touch and go. The scientific essays supply a series of now-familiar Greatest Hits: his work with Roger Penrose on gravitational singularities and their relation to the Big Bang; his realization that black holes will emit energy (Hawking radiation) from their event horizons; his speculations about the origin of the universe in a chance quantum fluctuation; the debate – still unresolved – about whether black holes destroy information. Hawking, as Kip Thorne reminds us in his introduction, helped to integrate several of the central concepts of physics: general relativity, quantum mechanics, thermodynamics and information theory. It’s a phenomenal body of work.

Sometimes there’s a plainness to his prose that can be touching even while it sounds like an anodyne self-help manual: “Be brave, be curious, be determined, overcome the odds. It can be done.” Who would argue with Hawking’s right to that sentiment? His plea for the importance of inspirational teaching, his concerns about climate change and environmental degradation, his contempt for Trump and the regressive aspects of Brexit, and (albeit not here) his championing of the NHS, sometimes made you glad to have Hawking on your side. People listened.

A common danger with collections of this kind is repetition, which the editors have been curiously unconcerned to avoid. But the recurring and familiar passages are in themselves quite telling, for they show Hawking curating his image: the boy who was always taking things apart but not always managing to put them back together again, the man who told us to “look up at the stars and not down at your feet.”

There’s no doubt that Hawking cared passionately about the future of humankind and the potential of science to improve it. His advocacy resembles the old-fashioned boosterism into which H. G. Wells often strayed in later life, tempered like Wells by an awareness of the destructive potential of technologies in malicious or plain foolish hands. But what are Hawking’s resources for developing that agenda? One of the most striking features of this book is the lack of extra-curricular references – to art, music, philosophy, literature, say. This would not matter so much (though it’s a bit odd) if it were not that the scope of some of the pieces exposes these gaps painfully.

Beginning an essay called “Is there a God?” by saying that “people will always cling to religion, because it gives comfort, and they do not trust or understand science” tells you pretty much what to expect from it, and you’d not be wrong. God, as no theologian said ever, is all about explaining the origin of the universe. And most people, Hawking tells us, define God as “a human-like being, with whom one can have a personal relationship.” I suspect “most people’s” views of what a molecule or light is would bear similarly scant resemblance to what well-informed folks say on the matter, but I doubt Hawking would give those views precedence.

As for history, try this: “People might well have argued that it was a waste of money to send Columbus on a wild goose chase. Yet the discovery of the New World made a profound difference to the Old. Just think, we wouldn’t have had the Big Mac or KFC.” The lame joke might have been just about tolerable if one didn’t sense it is there because Hawking could think of nothing to put in its place. This remark, as you might guess, is part of a defence of human space exploration, during which Hawking demonstrates no more inclination to probe the real reasons for the space race in the 1960s than he does to examine what Columbus was all about. He feels that the human race has no future if we don’t colonize space, although it isn’t clear why his generally dim view of our self-destructive idiocies becomes so rosy once we are on other worlds. Maybe the answer lies with the fact that here, as elsewhere, his main point of reference is Star Trek. But I suspect he knew he was preaching to the converted, so that mere assertion (“We have no other option”) was all he needed in lieu of argument.

There’s a glib insouciance to some of the other scientific speculations too. “If there is intelligent life elsewhere”, he writes, “it must be a very long way away otherwise it would have visited earth by now. And I think we would’ve known if we had been visited; it would be like the film Independence Day.” Assertion again replaces explanation in Hawking’s assumption apropos artificial intelligence that the human brain is just like a computer, as if this were not hotly disputed among neuroscientists. Here too, his vision seems mainly informed by the science fiction within easiest reach: his fears for the dangers of AI conjure up the Terminator series’ Skynet and tropes of supercomputers declaring themselves God and fusing the plug. Science fiction has plenty to tell us about our fears of the present, but probably rather less about the realities of the future.

It is best, too, not to rely on Hawking’s history of science, which for example parrots the myth of Max Planck postulating the quantum to avoid the ‘ultraviolet catastrophe’ of blackbody radiation. (Planck did not mention it.) Don’t expect more than the usual clichés: here comes Feynman, playing the bongos in a strip joint (what a guy!), there goes Einstein riding on a light wave.

This is all, in a sense, so very unfair. Hawking was a great scientist who had a remarkable life, but in another universe without motor neurone disease (well, he did like the Many Worlds interpretation of quantum mechanics) we’d have no reason to confer such authority on his thoughts about all and sundry, or to notice or care that he entered the peculiar time-warp that is Stringfellows “gentlemen’s club”. We would not deny him the right to his ordinariness, and we would see his occasional brash arrogance and egotism for what they are, no more and no less.

There’s every reason to believe that Hawking enjoyed his fame, and that’s a cheering thought. The Hawking phenomenon is our problem, not his. He liked to remind us that he was born on the same date that Galileo died, but it’s Brecht’s Galileo that comes to mind here: to paraphrase, unhappy is the land that needs a guru.

Thursday, September 13, 2018

The "dark woman of DNA" goes missing again

A curious incident took place at the excellent "Schrödinger at 75: The Future of Life" meeting in Dublin last week, and I’ve been pondering it ever since.

One of the eminent attendees was James Watson, who was, naturally, present at the conference dinner. And one of the movers behind the meeting gave an impromptu (so it seemed) speech that acknowledged Watson’s work with Crick and its connection to Schrödinger’s “aperiodic crystal.” Fair enough.

Then he added that he wanted to recognize also the contribution of the “third man” of DNA, Maurice Wilkins – and who could cavil at that, given Wilkins’ Dublin roots? Wilkins, after all, was another physicist-turned-biologist who credited Schrödinger’s book What Is Life? as an important influence.

I imagined at this stage we might get a nod to the “fourth person” of DNA, Rosalind Franklin, whose role was also central but was of course for some years under-recognized. But no. Instead, the speaker described how it was when Wilkins showed Watson his X-ray photo of DNA that Watson became convinced crystallography could crack the structure.

You could hear a ripple go around the dining hall. Wilkins’ photo?! Wasn’t it Franklin’s photo – Photo 51 – that provided Watson and Crick with the crucial part of the puzzle?

Well, yes and no. It isn’t entirely clear who actually took Photo 51; it was more likely Franklin’s student Ray Gosling. Neither is it completely clear that this photo was quite so pivotal to Watson and Crick’s success. Nor, indeed, is it really the case that Wilkins did something terribly unethical in showing Watson the photo (which was in any event from the Franklin–Gosling effort), given that it had already been displayed publicly. Matthew Cobb examines this part of the story carefully and thoroughly in his book Life’s Greatest Secret (see also here and here).

But nevertheless. Watson’s appalling treatment of Franklin, the controversy about Photo 51, and the sad fact that Franklin died before a Nobel became a possibility are all so well known that it seemed bizarre, to the point of being confrontational, to make no mention of Franklin at all in this context – and right in front of Watson himself, to boot.

The attribution of “the photo” to Wilkins seemed so peculiar that I figured it must have some explanation other than error or denial. I don’t know the details of the story well enough, but I told myself that the speaker must have been referring to some other, earlier occasion, when Wilkins had shown Watson more preliminary crystallographic work of his own that persuaded Watson this was an avenue worth pursuing.

And perhaps that is true – I simply don’t know. But if so, to refer to it in this way, when everyone is going to think of the notorious Photo 51 incident, is at best perverse and at worst a deliberate provocation. Even Adam Rutherford, sitting next to me – and he knows much more about the story of DNA than I or most other people do – was confused about what the speaker could possibly have meant.

Well, with Franklin’s name still conspicuous by its absence, Watson stood up to take a bow, which prompts me to make a request of scientific meeting and dinner organizers. Please do your attendees the favour of not forcing them to decide whether to reluctantly applaud Watson or to join the embarrassed cohort of those who feel they can no longer do so in good conscience.

Friday, September 07, 2018

What Is Life? Schrödinger at 75

The conference “Schrödinger at 75: The Future of Life” in Dublin, from which I’m now returning, was a fabulous event, packed with good talks equally from eminent folks (including several Nobel laureates) and young rising stars. Ostensibly an exploration of the legacy of Erwin Schrödinger’s influential 1944 book What Is Life?, based on the lectures he gave 75 years ago as director of the School of Theoretical Physics at the Dublin Institute for Advanced Studies (on which, more here), it was in fact largely a wonderful excuse to get a bunch of very smart people in the same hall to talk about many areas of the life (and chemical) sciences today and to speculate about what the future holds for them. I think I took away something interesting from every talk.

There was of course much dutiful nodding towards Schrödinger’s book, and also to some of his writing elsewhere, especially his essays in Mind and Matter (1958), where he offered some speculations about mind and consciousness (about half of the speakers worked on aspects of brain, mind and cognition). This didn’t seem merely tokenistic to me – I felt that all the speakers who mentioned Schrödinger had a genuine respect for his ideas. This is all the more interesting given that, as I say in my Nature piece, there wasn’t in some ways a great deal that was truly new and productive of further research in the book. Of course, what gets mentioned most is Schrödinger’s reference to a “code-script” that governs life and which is inherited, and his suggestion that this is encoded in the chromosomes as an “aperiodic crystal”. That image certainly resonated with Francis Crick, who wrote to Schrödinger in 1953 to tell him so.

But the idea of a “code”, as well as the notion that it could be replicated in a manner reminiscent of the ‘templating’ of structure in a crystal, were not really new. It seems rather to be something about the way Schrödinger expressed this idea that mattered, and indeed I can see why: his book is beautifully written, achieving persuasive force without seeming like the imposition of an arrogant physicist.

All of this I enjoyed. But what I missed was a historical presentation that could have put these tributes to What Is Life? in context. There was, for instance, a sense of unease about Schrödinger’s references to “order” and “organization”. What exactly was he getting at here? One suggestion was that “order” here was standing in for that crucial missing word: “information”. But this isn’t really true. Schrödinger’s “code-script” was presented as the means by which an organism’s “organization” is maintained, although quite how it does so he found wholly mysterious, even if the inter-generational transmission of the script by the “aperiodic crystal” was far less so.

What we need to know here is that “organization” had become a biological power-word, a symbol of what it was about living systems that distinguishes them from non-living. In the early nineteenth century this unique property of life was conferred by élan vital in the formulation of vitalism. As vitalism waned, it had to become something more tangible and physical. Some, like Thomas Henry Huxley, believed that the key was a special chemical composition, which made up the stuff of “protoplasm”, the primal living substance from which all life was descended. But as the chemical complexity and heterogeneity of living matter became apparent from the work of late nineteenth-century physiologists, and as the cell came to be seen as the fundamental unit of life, the idea arose that life was distinguished by some peculiar state of “organization” below the level that microscopes could resolve. There were a few tantalizing glimpses of this subcellular organization, for example in the stained chromosome fibres and organelles like the nucleus and mitochondria. These were, however, nothing but blurry blobs, offering no real clue about how their (presumably) molecular nature gave them the apparent agency that distinguished life.

And so, as Andrew Reynolds has shown, “order” and “organization” served a role that was barely more than metaphorical, patching over an ignorance about “what is life”. There’s nothing deplorable about that; it’s the kind of thing science must do all the time, giving a name to an absence of understanding so that it can be contained and built into contingent theories. But for Schrödinger to still be using it in the 1940s shows how his biological reading was rather archaic, for by that stage it had already become apparent that cell physiology relies on enzyme action, and crystallographers like J. Desmond Bernal and Bill Astbury were beginning to apply X-ray crystallography to these proteins to understand their structure. Sure, the origins and nature of the “organization” that cells seemed to exhibit were still pretty obscure, but it was getting less necessary to invoke that nebulous concept.

There were also suggestions at the Dublin meeting that Schrödinger’s “order” was what he meant with his talk of “negative entropy”. There’s some justification to think that, but Schrödinger wasn’t just thinking about how cells prevent their “organization” from falling into entropic disarray. He was puzzled by how this organization could exist in the first place. I don’t think one can really understand his discussion of order and entropy in What Is Life? unless one recognizes that many physical scientists in the early twentieth century considered the molecular world to be fundamentally random. It seems remarkable to me that no accounts of What Is Life? that I have seen refer to Schrödinger’s 1944 essay in Nature on “The Statistical Law in Nature”, where it is almost as if Schrödinger is telling us: ‘this is what I’m thinking about in my book’. The article is a paean to Ludwig Boltzmann, whose influence Schrödinger felt strongly in his early years in Vienna. Schrödinger seems to assert here that there are no laws in nature that do not rely on the statistical averaging over the behaviours of countless microscopic particles. It would have seemed all but meaningless then to suppose that one could speak about law-like, deterministic behaviour at the level of individual molecules, and quantum mechanics had seemed only to confirm this. That is what puzzled Schrödinger so much about the apparent persistence of phenotypic traits that seemed necessarily to arise from the specific details of genes at the molecular scale.

As a consequence, What Is Life? reads a little weirdly to chemists today, as indeed it would have even by the 1950s, when the notion that a complex molecule can adopt and sustain a particular structure even in the face of thermal fluctuations seemed unproblematic. Schrödinger’s invocation of quantum mechanics to explain this phenomenon looks rather laboured now, and is quite possibly a part of what irritated Linus Pauling and Max Perutz about the book. It’s also why Schrödinger seems so keen to cement the structure of the gene in place as a “solid”, rather than simply regarding it as a large molecule carrying a linear code.

And what about that code itself? This wasn’t interrogated at the meeting, which was a shame. Indeed, speakers sometimes still attributed to it the almighty agency that Schrödinger himself gave it. It rather astonishes me to see how the claim that the genome contains “all the information you need to make the organism” raises no eyebrows. What surprises me is that scientists are typically a rather sceptical crowd, who demand evidence to support the claims they make. But there is, to my knowledge, no evidence whatsoever that one can make even the simplest organism, let alone a human, from the information contained in the genome. Oh, but surely you can? You can (in principle, and now in some cases in practice) just make the genome from scratch, put it in a cell, and off it goes… Wait. Put it in a cell? So you need a cell to actually enact the “code-script”? Well sure, but the cell goes without saying, right?

Metaphors in biology are always imperfect and often treacherous, but I think this one (a simile, really) has some mileage: saying that the genome is the complete blueprint for an organism is a bit like saying that the Oxford English Dictionary is a blueprint for King Lear. It’s all in there, right? Ok, there’s a lot in there that you don’t need for Lear, but then there’s a lot of junk in the genome too (perhaps!). Sure, to get Lear out of the OED you need to feed the words into William Shakespeare, but Shakespeare goes without saying, right?

For a human, it’s still more complicated. Human cells can of course replicate in a culture medium, but none has ever developed into an embryo, let alone a person. What they can do – what some induced stem cells can do – is proliferate into an embryoid, an organoid with embryo-like structures. But that won’t make a human. For that, you need not only a cell but a uterus. It’s rather like saying that the text of King Lear has “all the information” – and then giving it to, say, a factory worker in Lanzhou who reads no English. Well OK, so to actually enact Lear in a meaningful way it has to be read by someone who reads English – or translated… But come on, the English goes without saying…

Once we start talking in terms of the information needed to make an organism, though, quite what’s in the genome becomes far less clear. Indeed, we know for sure that maternal factors supply some vital information for the early development of a fertilized egg. And the self-organizing abilities of cells can only create an organism in the right context: every cell needs the right signals from its environment for the whole to assemble properly. Genes somehow encode neurons, but neurons don’t develop properly if they don’t get stimuli from their environment during a critical period.

Are these environmental signals and context then a part of the information needed to make an organism “as nature [meaning evolution, I guess] intends”? Is an understanding of English a part of the information needed for King Lear to be anything more than marks on paper?

Evidently this is an issue of how “information” acquires meaning, which of course was notoriously what Shannon left out of his information theory. And that is why information in Shannon’s sense is greatest when the Shannon entropy is greatest. Periodic solids have rather low entropy. What is needed in biology, then, is a theory for where meaningful information comes from and how it gives rise to causal flows. There’s no doubt that lots of meaningful information is encoded in the genome that contributes to how organisms are built and how they function. But when we say that “the genome contains all the information needed to build an organism”, we are dealing with ill-defined terms. What I sorely missed at this meeting was a presentation about how a theory of biological information can be developed, and how to define and measure “meaning” within that theory. Daniel Dennett acknowledged this lacuna in his keynote address, saying that understanding “semantic information” as opposed to Shannon information is still “work in progress”.
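The point about entropy and aperiodicity is easy to make concrete. Here is a minimal sketch (the sequences below are invented for illustration, not drawn from Schrödinger or the meeting): a periodic, crystal-like string has low Shannon entropy per symbol, while an aperiodic, DNA-like one scores higher – and in neither case does the number tell you anything about what the sequence *means*.

```python
from collections import Counter
from math import log2

def shannon_entropy(seq):
    """Symbol-frequency entropy in bits per symbol: H = -sum p_i * log2(p_i)."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

periodic = "ABAB" * 16              # crystal-like repetition: two symbols, evenly used
aperiodic = "GATTACAGGCATCGTA" * 4  # a more varied, DNA-like string

print(shannon_entropy(periodic))   # 1.0 bit/symbol
print(shannon_entropy(aperiodic))  # ~1.98 bits/symbol

# Caveat: this naive per-symbol measure ignores correlations, so the
# perfectly predictable "ABAB..." still scores 1 bit; a full treatment
# would look at block entropies. Either way, the measure is blind to
# semantics - which is exactly Shannon's famous omission.
```

The caveat in the comments is the crux: even a refined entropy measure quantifies surprise, not meaning, which is why a “theory of meaningful biological information” has to add something Shannon deliberately left out.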

A close reading of Schrödinger starts us in that direction too, and is a part of his legacy.