Wednesday, October 08, 2014

The moment of uncertainty

As part of a feature section in the October issue of La Recherche on uncertainty, I interviewed Robert Crease, historian and philosopher of science at Stony Brook University, New York, on the cultural impact of Heisenberg’s principle. It turned out that Robert had just written a book looking at this very issue – in fact, at the cultural reception of quantum theory in general. It’s called The Quantum Moment, is coauthored by Alfred Scharff Goldhaber, and is a great read – I have written a mini-review for the next (November) issue of Prospect. Here’s the interview, which otherwise appears only in French in La Recherche. Since Robert has such a great way with words, it was one of the easiest I’ve ever done.

________________________________________________________

What led Heisenberg to formulate the uncertainty principle? Was it something that fell out of the formalism in mathematical terms?

That’s a rather dramatic story. The uncertainty principle emerged in an exchange of letters between Heisenberg and Pauli, and fell out of the work that Heisenberg had done on quantum theory the previous year, called matrix mechanics. In autumn 1926, he and Pauli were corresponding about how to understand its implications. Heisenberg insisted that the only way to understand it involved junking classical concepts such as position and momentum in the quantum world. In February 1927 he visited Niels Bohr in Copenhagen. Bohr usually helped Heisenberg to think, but this time the visit didn’t have the usual effect. They grew frustrated, and Bohr abandoned Heisenberg to go skiing. One night, walking by himself in the park behind Bohr’s institute, Heisenberg had an insight. He wrote to Pauli: “One will always find that all thought experiments have this property: when a quantity p is pinned down to within an accuracy characterized by the average error p1, then... q can only be given at the same time to within an accuracy characterized by the average error q1 ≈ h/p1.” That’s the uncertainty principle. But like many equations, including E = mc² and Maxwell’s equations, its first appearance is not in its now-famous form. Anyway, Heisenberg sent off a paper on his idea that was published in May.

How did Heisenberg interpret it in physical terms?

He didn’t, really; at the time he kept claiming that the uncertainty principle couldn’t be interpreted in physical terms, and simply reflected the fact that the subatomic world could not be visualized. Newtonian mechanics is visualizable: each thing in it occupies a particular place at a particular time. Heisenberg thought the attempt to construct a visualizable solution for quantum mechanics might lead to trouble, and so he advised paying attention only to the mathematics. Michael Frayn captures this side of Heisenberg well in his play Copenhagen. When the Bohr character charges that Heisenberg doesn't pay attention to the sense of what he’s doing so long as the mathematics works out, the Heisenberg character indignantly responds, "Mathematics is sense. That's what sense is".

Was Heisenberg disturbed by the implications of what he was doing?

No. Both he and Bohr were excited about what they had discovered. From the very beginning they realized that it had profound philosophical implications, and were thrilled to be able to explore them. Almost immediately both began thinking and writing about the epistemological implications of the uncertainty principle.

Was anyone besides Heisenberg and Bohr troubled?

The reaction was mixed. Arthur Eddington, an astronomer and science communicator, was thrilled, saying that the epistemological implications of the uncertainty principle heralded a new unification of science, religion, and the arts. The Harvard physicist Percy Bridgman was deeply disturbed, writing that “the bottom has dropped clean out” of the world. He was terrified about its impact on the public. Once the implications sink in, he wrote, it would “let loose a veritable intellectual spree of licentious and debauched thinking.”

Did physicists all share the same view of the epistemological implications of quantum mechanics?

No, they came up with several different ways to interpret it. As the science historian Don Howard has shown, the notion that the physics community of the day shared a common view, one they called the “Copenhagen interpretation,” is a myth promoted in the 1950s by Heisenberg for his own selfish reasons.

How much did the public pay attention to quantum theory before the uncertainty principle?

Not much. Newspapers and magazines treated it as something of interest because it excited physicists, but as far too complicated to explain to the public. Even philosophers didn’t see quantum physics as posing particularly interesting or significant philosophical problems. The uncertainty principle’s appearance in 1927 changed that. Suddenly, quantum mechanics was not just another scientific theory – it showed that the quantum world works very differently from the everyday world.

How did the uncertainty principle get communicated to a broader public?

It took about a year. In August 1927, Heisenberg, who was not yet a celebrity, gave a talk at a meeting of the British Association for the Advancement of Science, but it sailed way over the heads of journalists. The New York Times’s science reporter said trying to explain it to the public was like “trying to tell an Eskimo what the French language is like without talking French.” Then came a piece of luck. Eddington devoted a section to the uncertainty principle in his book The Nature of the Physical World, published in 1928. He was a terrific explainer, and his imagery and language were very influential.

How did the public react?

Immediately and enthusiastically. A few days after October 29, 1929, the New York Times, tongue-in-cheek, invoked the uncertainty principle as the explanation for the stock market crash.

And today?

Heisenberg and his principle still feature in popular culture. In fact, thanks to the uncertainty principle, I think I’d argue that Heisenberg has made an even greater impact on popular culture than Einstein. In the American television drama series Breaking Bad, 'Heisenberg' is the pseudonym of the protagonist, a high school chemistry teacher who manufactures and sells the illegal drug crystal methamphetamine. The religious poet Christian Wiman, in his recent book about facing cancer, writes that "to feel enduring love like a stroke of pure luck" amid "the havoc of chance" makes God "the ultimate Uncertainty Principle." In The Ascent of Man, the Polish-British scientist Jacob Bronowski calls the uncertainty principle the Principle of Tolerance. There’s even an entire genre of uncertainty principle jokes. A police officer pulls Heisenberg over and says, "Did you know that you were going 90 miles an hour?" Heisenberg says, "Thanks. Now I'm lost."

Has the uncertainty principle been used for serious philosophical purposes?

Yes. Already in 1929, John Dewey wrote about it to promote his ideas about pragmatism, and in particular his thoughts about the untenability of what he called the “spectator theory of knowledge.” The literary critic George Steiner has used the uncertainty principle to describe the process of literary criticism – how it involves transforming the “object” – that is, the text being interpreted – and delivering it differently to the generation that follows. More recently, the Slovene philosopher Slavoj Žižek has devoted attention to the philosophical implications of the uncertainty principle.

Some popular culture uses of the uncertainty principle are off the wall. How do you tell meaningful uses from the bogus ones?

It’s not easy. Popular culture often uses scientific terms in ways that are pretentious, erroneous, wacky, or unverifiable. It’s nonsense to apply the uncertainty principle to medicines or self-help issues, for instance. But how is that different from Steiner using it to describe the process of literary criticism?

Outside of physics, has our knowledge that uncertainty is a feature of the subatomic world, and the uses to which it has been put by writers and philosophers, helped to change our worldview in any way?

I think so. The contemporary world does not always feel smooth, continuous, and law-governed, like the Newtonian World. Our world instead often feels jittery, discontinuous, and irrational. That has sometimes prompted writers to appeal to quantum imagery and language to describe it. John Updike’s characters, for instance, sometimes appeal to the uncertainty principle, while Updike himself did so in speaking of the contemporary world as full of “gaps, inconsistencies, warps, and bubbles in the surface of circumstance.” Updike and other writers and poets have found this imagery metaphorically apt.

The historians Betty Dobbs and Margaret Jacob have remarked that the Newtonian Moment provided “the material and mental universe – industrial and scientific – in which most Westerners and some non-Westerners now live, one aptly described as modernity.” But that universe is changing. Quantum theory showed that at a more fundamental level the world is not Newtonian at all, but governed by notions such as chance, probability, and uncertainty.

Robert Crease’s book (with Alfred S. Goldhaber) The Quantum Moment: How Planck, Bohr, Einstein, and Heisenberg Taught Us to Love Uncertainty will be published by Norton in October 2014.

Uncertain about uncertainty

This is the English version of the cover article (in French) of the latest issue of La Recherche (October). It’s accompanied by an interview that I conducted with Robert Crease about the cultural impact of the uncertainty principle, which I’ll post next.

______________________________________________________________________

If there’s one thing most people know about quantum physics, it’s that it is uncertain. There’s a fuzziness about the quantum world that prevents us from knowing everything about it with absolute detail and clarity. Almost 90 years ago, the German physicist Werner Heisenberg pointed this out in his famous Uncertainty Principle. Yet over the past few years there has been heated debate among physicists about just what Heisenberg meant, and whether he was correct. The latest experiments seem to indicate that one version of the Uncertainty Principle presented by Heisenberg might be quite wrong, and that we can get a sharper picture of quantum reality than he thought.

In 1927 Heisenberg argued that we can’t measure all the attributes of a quantum particle at the same time and as accurately as we like [1]. In particular, the more we try to pin down a particle’s exact location, the less accurately we can measure its speed, and vice versa. There’s a precise limit to this trade-off, Heisenberg said. If the uncertainty in position is denoted Δx, and the uncertainty in momentum (mass times velocity) is Δp, then their product ΔxΔp can be no smaller than ½ħ, where ħ (read this as “h bar”) is the reduced form of the fundamental constant called Planck’s constant, which sets the scale of the ‘granularity’ of the quantum world – the size of the ‘chunks’ into which energy is divided.
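
That bound is easy to check numerically for the textbook case that saturates it, a Gaussian wave packet. The sketch below (Python with NumPy, working in units where ħ = 1; it is an illustration added here, not part of the original article) builds a Gaussian in position space, Fourier-transforms it to momentum space, and confirms that the product of the two spreads comes out at ½ħ.

```python
import numpy as np

hbar = 1.0    # work in natural units where hbar = 1
sigma = 0.7   # width of the Gaussian wave packet (arbitrary choice)

# Position grid and a normalized Gaussian wave packet
x = np.linspace(-40, 40, 2**14)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Uncertainty in position: standard deviation of the probability density
prob_x = np.abs(psi)**2 * dx
mean_x = np.sum(x * prob_x)
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x))

# Momentum-space wavefunction via FFT; p = hbar * k
p = 2 * np.pi * np.fft.fftfreq(len(x), d=dx) * hbar
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= prob_p.sum()
mean_p = np.sum(p * prob_p)
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p))

print(delta_x * delta_p)  # ≈ 0.5, i.e. exactly hbar/2 for a Gaussian
```

Narrower packets (smaller sigma) give a smaller Δx and a proportionally larger Δp, but the product never dips below ½ħ.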

Where does this uncertainty come from? Heisenberg’s reasoning was mathematical, but he felt he needed to give some intuitive explanation too. For something as small and delicate as a quantum particle, he suggested, it is virtually impossible to make a measurement without disturbing and altering what we’re trying to measure. If we “look” at an electron by bouncing a photon of light off it in a microscope, that collision will change the path of the electron. The more we try to reduce the intrinsic inaccuracy or “error” of the measurement, say by using a brighter beam of photons, the more we create a disturbance. According to Heisenberg, error (Δe) and disturbance (Δd) are also related by an uncertainty principle in which ΔeΔd can’t be smaller than ½ħ.

The American physicist Earle Hesse Kennard showed very soon after Heisenberg’s original publication that in fact his thought experiment is superfluous to the issue of uncertainty in quantum theory. The restriction on precise knowledge of both speed and position is an intrinsic property of quantum particles, not a consequence of the limitations of experiments. All the same, might Heisenberg’s “experimental” version of the Uncertainty Principle – his relationship between error and disturbance – still be true?

“When we explain the Uncertainty Principle, especially to non-physicists,” says physicist Aephraim Steinberg of the University of Toronto in Canada, “we tend to describe the Heisenberg microscope thought experiment.” But he says that, while everyone agrees that measurements disturb systems, many physicists no longer think that Heisenberg’s equation relating Δe and Δd describes that process adequately.

Japanese physicist Masanao Ozawa of Nagoya University was one of the first to question Heisenberg. In 2003 he argued that it should be possible to defeat the apparent limit on error and disturbance [2]. Ozawa was motivated by a debate that began in the 1980s on the accuracy of measurements of gravitational waves, the ripples in spacetime predicted by Einstein’s theory of general relativity and expected to be produced by violent astrophysical events such as those involving black holes. No one has yet detected a gravitational wave, but the techniques proposed to do so entail measuring the very small distortions in space that will occur when such a wave passes by. These disturbances are so tiny – fractions of the size of atoms – that at first glance the Uncertainty Principle would seem to determine if they are feasible at all. In other words, the accuracy demanded in some modern experiments like this means that this question of how measurement disturbs the system has real, practical ramifications.

In 1983 Horace Yuen of Northwestern University in Illinois suggested that, if a gravitational-wave measurement were done in a way that barely disturbed the detection system at all, the apparently fundamental limit on accuracy dictated by Heisenberg’s error-disturbance relation could be beaten. Others disputed that idea, but Ozawa defended it. This led him to reconsider the general question of how experimental error is related to the degree of disturbance it involves, and in his 2003 paper he proposed a new relationship between these two quantities in which two other terms were added to the equation. In other words, ΔeΔd + A + B ≥ ½ħ, so that ΔeΔd itself could be smaller than ½ħ without violating the limit.
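
In Ozawa’s relation the two extra terms couple the measurement to the intrinsic quantum spreads σ(q) and σ(p) of the state being measured: ΔeΔd + Δe·σ(p) + σ(q)·Δd ≥ ½ħ. A toy check (Python; the numbers are invented purely for illustration, not drawn from any experiment) shows how a low-error measurement can beat the naive Heisenberg-style product while still respecting Ozawa’s bound:

```python
hbar = 1.0

# Intrinsic spreads of the measured state -- they obey sigma_q*sigma_p >= hbar/2
sigma_q, sigma_p = 1.0, 0.5
# Hypothetical error and disturbance of a low-error measurement
err, dist = 0.1, 1.0

heisenberg_lhs = err * dist                              # naive product: 0.1
ozawa_lhs = err * dist + err * sigma_p + sigma_q * dist  # extended sum: 1.15

print(heisenberg_lhs < hbar / 2)  # True: the 'Heisenberg' bound is beaten...
print(ozawa_lhs >= hbar / 2)      # True: ...but Ozawa's relation still holds
```

The extra terms mean that driving the error Δe towards zero no longer forces the disturbance Δd to blow up, provided the state’s own spreads pick up the slack.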

Last year, Cyril Branciard of the University of Queensland in Australia (now at the CNRS Institut Néel at Grenoble) tightened up Ozawa’s new uncertainty equation [3]. “I asked whether all values of Δe and Δd that satisfy his relation are allowed, or whether there could be some values that are nevertheless still forbidden by quantum theory”, Branciard explains. “I showed that there are actually more values that are forbidden. In other words, Ozawa's relation is ‘too weak’.”

But Ozawa’s relationship had by then already been shown to give an adequate account of uncertainty for most purposes, since in 2012 it was put to the test experimentally by two teams [4,5]. Steinberg and his coworkers in Toronto figured out how to measure the quantities in Ozawa’s equation for photons of infrared laser light travelling along optical fibres and being sensed by detectors. They used a way of detecting the photons that perturbed their state as little as possible, and found that indeed they could exceed the relationship between error and disturbance proposed by Heisenberg but not that of Ozawa. Meanwhile, Ozawa himself joined forces with a group at the Vienna University of Technology led by Yuji Hasegawa, who made measurements on the quantum properties of a beam of neutrons passing through a series of detectors. They too found that the measurements could violate the Heisenberg limit but not Ozawa’s.

Very recent experiments have confirmed that conclusion with still greater accuracy, verifying Branciard’s relationships too [6,7]. Branciard himself was a collaborator on one of those studies, and he says that “experimentally we could get very close indeed to the bounds imposed by my relations.”

Doesn’t this prove that Heisenberg was wrong about how error is connected to disturbance in experimental measurements? Not necessarily. Last year, a team of European researchers claimed to have a theoretical proof that in fact this version of Heisenberg’s Uncertainty Principle is correct after all [8]. They argued that Ozawa’s theory, and the experiments testing it, were using the wrong definitions of error. So they might be correct in their own terms, but weren’t really saying anything about Heisenberg’s error-disturbance principle. As team member Paul Busch of the University of York in England puts it, “Ozawa effectively proposed a wrong relationship between his own definitions of error and disturbance, wrongly ascribed it to Heisenberg, then showed how to fix it.”

So Heisenberg was correct after all in the limits he set on the tradeoff, argues Busch: “if the error is kept small, the disturbance must be large.”

Who is right? It seems to depend on exactly how you pose the question. What, after all, does measurement error mean? If you make a single measurement, there will be some random error that reflects the limits on the accuracy of your technique. But that’s why experimentalists typically make many measurements on the same system, so that you average out some of the randomness. Yet surely, some argue, the whole spirit of Heisenberg’s original argument was about making measurements of different properties on a particular, single quantum object, not averages for a whole bunch of such objects?

It now seems that Heisenberg’s limit on how small the combined uncertainty can be for error and disturbance holds true if you think about averages of many measurements, but that Ozawa’s smaller limit applies if you think about particular quantum states. In the first case you’re effectively measuring something like the “disturbing power” of a specific instrument; in the second case you’re quantifying how much we can know about an individual state. So whether Heisenberg was right or not depends on what you think he meant (and perhaps on whether you think he even recognized the difference).

As Steinberg explains, Busch and colleagues “are really asking how much a particular measuring apparatus is capable of disturbing a system, and they show that they get an equation that looks like the familiar Heisenberg form. We think it is also interesting to ask, as Ozawa did, how much the measuring apparatus disturbs one particular system. Then the less restrictive Ozawa-Branciard relations apply.”

Branciard agrees with Steinberg that this isn’t a question of who’s right and who’s wrong, but just a matter of how you make your definitions. “The two approaches simply address different questions. They each argue that the problem they address was probably the one Heisenberg had in mind. But Heisenberg was simply not clear enough on what he had in mind, and it is always dangerous to put words in someone else's mouth. I believe both questions are interesting and worth studying.”

There’s a broader moral to be drawn, for the debate has highlighted how quantum theory is no longer perceived to reveal an intrinsic fuzziness in the microscopic world. Rather, what the theory can tell you depends on what exactly you want to know and how you intend to find out about it. It suggests that “quantum uncertainty” isn’t some kind of resolution limit, like the point at which objects in a microscope look blurry, but is to some degree chosen by the experimenter. This fits well with the emerging view of quantum theory as, at root, a theory about information and how to access it. In fact, recent theoretical work by Ozawa and his collaborators turns the error-disturbance relationship into a question about what gaining information on one property of a quantum system costs us in knowledge of its other properties [9]. It’s a little like saying that you begin with a box that you know is red and think weighs one kilogram – but if you want to check that weight exactly, you weaken the link to redness, so that you can’t any longer be sure that the box you’re weighing is a red one. The weight and the colour start to become independent pieces of information about the box.

If this seems hard to intuit, that’s just a reflection of how interpretations of quantum theory are starting to change. It appears to be telling us that what we can know about the world depends on how we ask. To that extent, then, we choose what kind of a world we observe.

The issue isn’t just academic, since an approach to quantum theory in which quantum states are considered to encode information is now starting to produce useful technologies, such as quantum cryptography and the first prototype quantum computers. “Deriving uncertainty relations for error-disturbance or for joint measurement scenarios using information-theoretical definitions of errors and disturbance has a great potential to be useful for proving the security of cryptographic protocols, or other information-processing applications”, says Branciard. “This is a very interesting and timely line of research.”

References
1. W. Heisenberg, Z. Phys. 43, 172 (1927).
2. M. Ozawa, Phys. Rev. A 67, 042105 (2003).
3. C. Branciard, Proc. Natl. Acad. Sci. U.S.A. 110, 6742 (2013).
4. J. Erhart, S. Sponar, G. Sulyok, G. Badurek, M. Ozawa & Y. Hasegawa, Nat. Phys. 8, 185 (2012).
5. L. A. Rozema, A. Darabi, D. H. Mahler, A. Hayat, Y. Soudagar & A. M. Steinberg, Phys. Rev. Lett. 109, 100404 (2012).
6. F. Kaneda, S.-Y. Baek, M. Ozawa & K. Edamatsu, Phys. Rev. Lett. 112, 020402 (2014).
7. M. Ringbauer, D. N. Biggerstaff, M. A. Broome, A. Fedrizzi, C. Branciard & A. G. White, Phys. Rev. Lett. 112, 020401 (2014).
8. P. Busch, P. Lahti & R. F. Werner, Phys. Rev. Lett. 111, 160405 (2013).
9. F. Buscemi, M. J. W. Hall, M. Ozawa & M. M. Wilde, Phys. Rev. Lett. 112, 050401 (2014).

Tuesday, October 07, 2014

Waiting for the green (and blue) light


This was intended as a "first response" to the Nobel announcement this morning, destined for the Prospect blog. But as it can take a little while for things to appear there, here it is anyway while the news is still ringing in the air. I'm delighted by the choice.

____________________________________________________

Did you notice when traffic lights began to change colour? The green “go” light was once a yellowish pea green, but today it has a turquoise hue. And whereas the lights once switched with a brief moment of fading up and down, now they blink on and off in an instant.

I will be consigning myself to the farthest reaches of geekdom by admitting this, but I used to feel a surge of excitement whenever, a decade or so ago, I noticed these new-style traffic lights. That’s because I knew I was witnessing the birth of a new age of light technology. Even if traffic lights didn’t press your buttons, the chances are that you felt the impact of the same innovations in other ways, most notably when the definition of your DVD player got a boost from the introduction of Blu-Ray technology, which happened about a decade ago. What made the difference was the development of a material that could be electrically stimulated into emitting bright blue light: the key component of blue light-emitting diodes (LEDs), used in traffic lights and other full-colour signage displays, and of lasers, which read the information on Blu-Ray DVDs.

It’s for such reasons that this year’s Nobel laureates in physics have genuinely changed the world. Japanese scientists Isamu Akasaki, Hiroshi Amano and Shuji Nakamura perfected the art of making blue-light-emitting semiconductor devices only in the 1990s, and as someone who watched that happen I still feel astonished at how quickly this research progressed from basic lab work to a huge commercial technology. By adding blue (and greenish-blue) to the spectrum of available colours, these Japanese researchers have transformed LED displays from little glowing dots that simply told you if the power was on or off to full-colour screens in which the old red-green-blue system of colour televisions, previously produced by firing electron beams at phosphor materials on the screen, can now be achieved instead with compact, low-power and ultra-bright electronics.

It’s because LEDs need much less power than conventional incandescent light bulbs that the invention of blue LEDs is ultimately so important. Sure, they also switch faster, last longer and break less easily than old-style bulbs – you’ll see fewer out-of-service traffic lights these days – but the low power requirements (partly because far less energy is wasted as heat) mean that LED light sources are also good for the environment. Now that they can produce blue light too, it’s possible to make white-light sources from a red-green-blue combination that can act as regular lighting sources for domestic and office use. What’s more, that spectral mixture can be tuned to simulate all kinds of lighting conditions, mimicking daylight, moonlight, candle-light or an ideal spectrum for plant growth in greenhouses. The recent Making Colour exhibition at the National Gallery in London featured a state-of-the-art LED lighting system to show how different the hues of a painting can seem under different lighting conditions.

As with so many technological innovations, the key was finding the right material. Light-emitting diodes are made from semiconductors that convert electrical current into light. Silicon is no good at doing this, which is why it has been necessary to search out other semiconductors that are relatively inexpensive and compatible with the silicon circuitry on which all microelectronics is based. For red and yellow-green light that didn’t prove so hard: semiconductors such as gallium arsenide and gallium aluminium arsenide have been used since the 1960s for making LEDs and semiconductor lasers for optical telecommunications. But getting blue light from a semiconductor proved much more elusive. From the available candidates around the early 1990s, both Akasaki and Amano at Nagoya University and Nakamura at the chemicals company Nichia put their faith in a material called gallium nitride. It seemed clear that this stuff could be made to emit light at blue wavelengths, but the challenge was to grow crystals of sufficient quality to do that efficiently – if there were impurities or flaws in the crystal, it wouldn’t work well enough. Challenges of this kind are typically an incremental business rather than a question of some sudden breakthrough: you have to keep plugging away and refining your techniques, improving the performance of your system little by little.

Nakamura’s case is particularly appealing because Nichia was a small, family-run company on the island of Shikoku, generally considered a rural backwater – not the kind of place you would expect to beat the giants of Silicon Valley in a race for such a lucrative goal. It was his conviction that gallium nitride really was the best material for the job that kept him going.

The Nobel committee has come up trumps here – it’s a choice that rewards genuinely innovative and important work, which no one will grumble about, and which in retrospect seems obvious. And it’s a reminder that physics is everywhere, not just in CERN and deep space.

Friday, September 26, 2014

Science needn't hide its mistakes

An ex-Nature editor says that peer review is dead? I hope that isn’t what you’d be left thinking from my Comment in the Guardian (below). We need peer review. It is flawed in all kinds of ways – I think, with the “replication crisis” in the scientific literature, we’re starting to appreciate just how flawed – but it remains a valuable way of determining what is publishable. My point is that there are also other valid ways of presenting science these days, arXiv certainly being one of them. Some folks worry about how bad science might get an airing if peer review isn’t seen as an obligatory gatekeeper – but my God, have you ever looked at what gets published already? Peer review is mostly a good way of making mediocre science less error-prone, not of preventing the dissemination of grandiose dross. I’d prefer to think that people can be taught to understand that, if something hasn’t been peer-reviewed, it should be approached with a pinch of salt and with the knowledge that one needs to hear the assessments of other experts too. It’s possible that a blanket insistence on only announcing peer-reviewed work would make people less likely to get taken in, not more so, because they would come to realise that science is always contingent and liable to be wrong, whether it is peer-reviewed or not, and that peer review isn’t a guarantee of veracity. The filtering process in science is a many-staged one, which includes the post-publication assessment of peers and the longer-term sieve of history – it’s not something that happens, or ought to happen, all at once. With blogs, preprint servers, social media and so forth, this is much more true now than it was 20 years ago, and we need to recognize that.

As for the BICEP2 results themselves, it does seem that the team was rather hasty and sloppy in not waiting for the Planck data but apparently basing their assessment of the dust issue on preliminary findings presented at a conference. But this is no great sin. I’m pleased that people have been able to see that scientists like this are grappling with these huge and difficult questions, and that there are ways we can look for answers, and that sometimes we’ll get wrong ones. Our best protection against oversold and misleading claims is to admit that scientists can make blunders, because they are just people doing their best to figure out these difficult and amazing questions, not priests handing down answers written in stone. So anyway: here it is.

_______________________________________________________________________

It was announced in headlines worldwide as one of the biggest scientific discoveries for decades, sure to garner Nobel prizes. But now it looks very likely that the alleged evidence of both gravitational waves and the ultra-fast expansion called inflation in the Big Bang has literally turned to dust. Last March a team using a telescope called BICEP2 at the South Pole claimed to have read the signatures of these two elusive phenomena in the twisting patterns of the cosmic microwave background radiation, the afterglow of the Big Bang. The latest results from an international consortium using a space telescope called Planck show that BICEP2’s data is very likely to have come not from the microwave background at all, but from warm dust scattered through our own galaxy.

Some will regard this as a huge embarrassment, not only for the BICEP2 team but for science itself. As the evidence against the earlier claims has been mounting over the past months, already some researchers have criticized the team for making a premature announcement to the press before their work had been properly peer-reviewed.

But there’s no shame here. On the contrary, this episode is good for science. This sequence of excitement followed by deflation, debate and controversy is perfectly normal – it’s just that in the past it would have happened out of the public gaze. Only when the dust had settled would a sober and sanitized version of events have been reported, if indeed there was anything left to report.

That has been the Standard Model of science ever since the media first acknowledged it. A hundred years ago, headlines in the New York Times had all the gravitas of a papal edict: “Men of Science Convene” and so forth. They were authoritative, decorous, and totally contrived. That image started to unravel after James Watson published The Double Helix, his scurrilous behind-the-scenes account of the pursuit of the structure of DNA. But even now, some scientists would prefer the mask to remain, insisting that results are only announced after they have passed “peer review”: that is, been checked by experts and published in a reputable journal.

There are many reasons why this will no longer wash. Those days of deference to patrician authority are over, probably for the better. We no longer take on trust what we are told by politicians and leaders, experts and authorities. There are hazards to such skepticism, but good motivations too. Few regret that the old “public understanding of science” model – spoon-feeding facts to the ignorant masses – has been replaced with attempts to engage and include the public.

But science itself has changed too. Information and communications technologies mean that, not only is it all but impossible to keep hot findings under wraps, but few even try. In physics in particular, researchers put their papers on publicly accessible preprint servers before formal publication so that they can be seen and discussed, while specialist bloggers give new claims an informal but often penetrating analysis. This enriches the scientific process, and means that problems can be spotted and debated that “peer reviewers” for journals might not notice. Peer review is highly imperfect anyway – a valuable check, but far from infallible and notoriously conservative.

It is because of these new models of dissemination that we were all able to enjoy the debate in 2011 about particles called neutrinos that were alleged to travel faster than light, in defiance of the theory of special relativity. Those findings were announced, disputed, and finally rejected, all without any papers being formally published. The arguments were heated but never bitter, and the public got a glimpse of science at its most vibrant: astonishing claims mixed with careful deliberation, leading ultimately to a clear consensus. How much more informative it was than the tidy fictions that published papers often become.

Of course, there will always be dangers in “publication by press conference”, especially if the findings relate to, say, human health. All the more reason for us to become more realistic, informed and grown-up in assessing science: to listen to what other experts say, to grasp the basic arguments, and not just to be seduced by headlines. Researchers who abuse the process will very quickly feel the heat.

Aren’t some premature announcements just perfidious attempts to grab priority, and thereby fame and prizes? Probably – and this exposes how distorted the reward systems of science can be. It’s time we stopped awarding special status to people who, having more resources or leverage with editors or just plain luck, are first past a post that everyone else is stampeding towards. Who cares? Rewards in science should be for sustained creative thinking, insight, experimental ingenuity, not for being in the right place at the right time. A bottle of bubbly will suffice for that.

What, then, of gravitational waves? If, as it seems, BICEP2 never saw them bouncing from the repercussions of the Big Bang, then we’re back to looking for them the hard way, by trying to detect the incredibly tiny distortions they should introduce in spacetime as they ripple past. Now the BICEP2 and Planck teams are pooling their data to see if anything can be salvaged. Good on them. Debate, discussion, deliberation: science happening just as it should.

Thursday, September 25, 2014

Whatever happened to the heroes?

Here for the record is my article published yesterday on the Guardian History of Science blog (The H Word). Seems you get a better class of commenter there than on Comment is Free, which is nice – some thoughtful responses. This piece is a kind of trailer for the paperback publication of Serving the Reich by Vintage.

__________________________________________________________________________

Scientists’ historical opposition to ideological manipulation has mostly been feeble at best. The failings are not individual but institutional.

“Unhappy is the land that needs a hero”, Galileo tells his disillusioned former student Andrea in Bertolt Brecht’s Life of Galileo, after he has recanted his heliocentric theory of the cosmos. Andrea thought that Galileo would martyr himself, but faced with the rack and thumbscrews the astronomer didn’t hesitate to sign a recantation. “I was afraid of physical pain”, he admits.

Galileo’s reputation hasn’t suffered for that weakness. Heedless of Brecht’s admonition, science makes Galileo a hero and martyr persecuted by the cruel and ignorant Church. What’s more, it’s often implied that his fate might have been shared by anyone who, from the Middle Ages to the early Enlightenment, dared to advocate Copernicus’s theory that the Sun, not the Earth, lay at the centre of the heavens. It’s still widely believed that the Italian friar Giordano Bruno was burnt at the stake for holding that view, 33 years before Galileo recanted.

Historians of science oscillate between exasperation and resignation at the fact that nothing they say seems able to dislodge these convictions. They can point out that Copernicus’ book, published in 1543, elicited little more than mild disapproval from the Church for almost a century before Galileo’s trial. They can explain that Bruno’s cosmological ideas constituted a rather minor part of the heretical charges made against him. They can show that it was Galileo’s provocative style and personality – his readiness to lampoon the Pope, say – that landed him in trouble, and that he was wrong anyway in some of his astronomical theories and disputes with clerics (on tides and comets, say). They can reveal that the conventional narrative of science versus the Church was largely the polemical invention of John William Draper and Andrew Dickson White in the late nineteenth century. It makes no difference. In the “battle for reason”, science must have its heroic martyrs.

Is this perhaps because they are so hard to find? For over the course of history science’s resistance to ideological intervention and manipulation has been largely rather feeble. One of my most disillusioning realizations while researching my book Serving the Reich was of how little scientists in Germany did to oppose the Nazis.

They were of course in an extreme and hazardous situation, yet several German artists, writers (even journalists!), industrialists and, yes, religious leaders voiced criticisms that were nowhere to be found among scientists. The Austrian scientific editor Paul Rosbaud, who himself showed extraordinary bravery working as a spy for the Allies, noted how scientists at Göttingen University vowed to “rise like one man to protest” if the Nazis dared to dismiss their “non-Aryan” colleagues – and yet when it happened, they all seemed to forget this intention, and some even condemned those who resisted their dismissal.

If few scientists in Germany found the fortitude to show active resistance to the Nazis, that partly reflects how rare physical courage is – and who are we to judge them for that? But this isn’t really the issue. Those German scientists who had no sympathy for the National Socialists didn’t just stay silent to save their own skins and careers; they considered it their duty to do so. You could grumble in private, but as a professional academic one was expected to remain “apolitical”, a loyal and patriotic servant of the state. When Einstein denounced the Nazi laws publicly, he was vilified as a traitor to his country, an “atrocity-monger” who deserved to be expelled from scientific institutions.

This attitude explains much of the post-war silence of the German scientists. It’s not just that they lacked the honesty and self-awareness to confess, as the Dutch physicist Hendrik Casimir did, that they were held back by fear; most of them didn’t even feel there was a case to be answered. Their aim, they insisted, had been simply to “stay true to science”: an aspiration that became a shield against any recognition of broader civic responsibilities.

This is where danger still lies. Individual scientists are, in my experience, at least as principled, politically engaged and passionate as any other members of society. The passivity that historian Joseph Haberer deplored in 1969 – in which scientists merely offer their technical advice to the prevailing political system – seems instead to stem from science as an institution.

It isn’t just that science has in general lacked structures for mounting effective resistance to political and ideological interference. Until recently, many scientists still saw it as a virtue to avoid “political” positions. The Observer’s “Rational Heroes” column asks scientists why so few of them go into politics; physicist Steven Weinberg’s triumphant answer was that in science “you can sometimes be sure that what you say is true”. The implication is that science occupies a higher plane, unsullied by the compromised dissembling of politics.

This was Werner Heisenberg’s view too, and it enabled him to turn a blind eye to the depravities of the Nazis and to advance his career in Germany without exactly supporting them. “We should conscientiously fulfil the duties and tasks that life presents to us without asking much about the why or the wherefore”, he wrote.

At the top of many scientists’ political agenda are not political questions as such but demands for more funding. They should beware the example of German physicists like Heisenberg, who triumphantly proclaimed their cleverness at getting money out of the Nazis, whereas in fact Himmler and Goering were perfectly happy to fund tame academics. “Whether they support the regime or not”, a group of leading science historians has written recently, “most scientists, or perhaps better put [indeed!], scientific communities, will do what they have to in order to be able to do science.”

When inspiring opponents of political repression, such as Andrei Sakharov and Fang Lizhi, have arisen from the ranks of scientists, it has been their personal courage, not their beliefs about the role of science in society, that has sustained them, and they have been afforded no official backing from the scientific bodies of their countries.

Because science works best when it is approached without prejudices (as far as that is humanly possible), it is tempting to equate this operational prerequisite with freedom of thought more generally. Yet not only does science have no monopoly on that, but it risks deluding itself if it elevates prickly, brilliant iconoclasts to the status of champions of free speech. History gives no support to that equation.

Wednesday, September 24, 2014

Sympathy for the devil


I have two half-Italian friends who have independently decided to flee that country, partly in despair at the state it’s in. The science magazine Sapere is trying to restore a little intellectual culture, and I'm glad to contribute a regular column on music cognition. Here is the latest installment.

_____________________________________________________________________

Many people who dislike the atonal music of composers such as Arnold Schoenberg say that it’s because their works are full of harsh dissonances: notes that sound horrible together. Schoenberg argued that dissonance is just a matter of convention: there’s nothing intrinsically wrong with it, it’s just that we’re not used to it.

The truth is a bit of both. Some dissonance really is convention: in the Middle Ages, a major third (C and E, say) was considered dissonant, but by Mozart’s time it was perfectly harmonious. But there’s also a “sensory dissonance” that stems from the basic physics of sound. If two pure tones very close in acoustic frequency are played together, the sound waves interfere to create a rattle-like sensation called roughness, which is genuinely grating. This seems to imply that any notes should sound okay as long as they’re not close in pitch. But because instruments and voices produce overtones with a whole range of frequencies, you have to add up all the possible combinations to figure out how “rough” two notes will sound together. The nineteenth-century German scientist Hermann von Helmholtz was the first to do this, and modern calculations confirm his findings: perfect fifths (C-G) and octaves have very little sensory dissonance, but all other two-note combinations have much the same roughness except for the minor second (C-C#), which has a lot.
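Helmholtz’s bookkeeping is easy to reproduce. Here is a minimal Python sketch of that kind of sensory-dissonance sum, using William Sethares’ modern parameterization of the Plomp–Levelt roughness curves – the specific constants, the six-harmonic cutoff and the 1/n amplitude weighting are my illustrative assumptions, not anything Helmholtz specified:

```python
import math

def pl_roughness(f1, f2):
    """Roughness of two pure tones (0 = perfectly smooth), following
    Sethares' parameterization of the Plomp-Levelt curve: zero at
    unison, peaking for small frequency gaps, decaying for large ones."""
    fmin = min(f1, f2)
    s = 0.24 / (0.021 * fmin + 19)   # critical-bandwidth scaling
    x = s * abs(f2 - f1)
    return math.exp(-3.5 * x) - math.exp(-5.75 * x)

def dissonance(f0, semitones, n_harmonics=6):
    """Total sensory dissonance of two complex tones: one on fundamental
    f0, one `semitones` higher, summing roughness over every pair of
    overtones with amplitudes crudely decaying as 1/n."""
    f1s = [f0 * n for n in range(1, n_harmonics + 1)]
    f2s = [f0 * 2 ** (semitones / 12) * n for n in range(1, n_harmonics + 1)]
    total = 0.0
    for i, fa in enumerate(f1s):
        for j, fb in enumerate(f2s):
            total += (1 / (i + 1)) * (1 / (j + 1)) * pl_roughness(fa, fb)
    return total

# Helmholtz's finding, qualitatively: the fifth and octave come out
# smooth, the minor second rough.
for name, st in [("octave", 12), ("fifth", 7), ("fourth", 5),
                 ("tritone", 6), ("minor 2nd", 1)]:
    print(f"{name:10s} {dissonance(261.6, st):.3f}")
```

The octave scores lowest because its overtones coincide exactly with those of the lower note, while the minor second puts every pair of overtones inside the roughness band.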

So maybe Schoenberg was right! As long as we don’t play notes that are directly adjacent on the keyboard, shouldn’t any chord sound fine? Not so fast. Some researchers claim that we have an innate preference for the chords that are conventionally labelled consonant – that we like a fourth (C-F), say, more than a tritone (C-F#, often called the ‘devil’s interval’ and used to represent the demonic). These claims come from studies of very young infants, whose preferences about sounds can be judged from their attention or agitation. The idea is that, if the children are young enough, their preferences haven’t been conditioned by hearing lots of consonant nursery rhymes.

But is that so? Babies can hear sounds in the womb, and they learn voraciously. So it’s extremely hard to know whether any preferences are truly innate even in newborns. One study claimed to find a slight preference for consonance in two-day-old babies of deaf parents, who wouldn’t have heard their parents sing in the womb. But the evidence either way is marginal at best.

In any case, culture seems to override any innate tastes in harmony. The ganga folk songs of Croatia use harmonized minor seconds that are usually deemed intrinsically dissonant, while Indonesian gamelan music uses tunings that sound jarring to Western ears. In comparison, Schoenberg doesn’t seem to be asking so much.

Saturday, September 20, 2014

The real drug dealers

What sort of company, then, is GSK? I ask because they seem to be trying very hard to convince us that Ben Goldacre actually gave Big Pharma an easy ride. I can’t help feeling that GlaxoSmithKline are wanting to see how close they can get to the boundaries of downright evil before we begin to really care. The head of GSK China, Mark Reilly, has just pleaded guilty to charges of bribery and been given a three-year prison sentence. GSK has ensured that he gets deported so that he can serve the sentence here, because, you know, you can screw the Chinese in their own country but you don’t want to suffer their justice.

What’s he done? According to the Guardian, “The bribery case involved allegations that GSK sales executives paid up to 3bn yuan to doctors to encourage them to use its drugs… GSK was alleged to have used a network of more than 700 middlemen and travel agencies to bribe doctors and lawyers with cash and even sexual favours.” GSK is now saddled with a fine of comparable magnitude – close to £300m, approaching the GDP of a small nation.

In other words, it’s the old stuff – this is much the same as what GSK was fined $3bn for in the US back in 2012. After I wrote about that case, I was fairly stunned at the response of the former GSK CEO Richard Sykes, who, when approached for a comment by journalists – remember that this stuff happened on his watch – said that he couldn’t comment until he’d read more about the case in the papers. In other words, he knew no more about it than you or I did. I wasn’t sure which was worse: that he expected us to believe this, or that it might be true.

But that, it seems, is the way GSK management views these scandals. For a start, there’s a whiff here of old-style orientalism: this is the way they do business in those Eastern countries, so we might as well join in. But the response of Andrew Witty, GSK’s current CEO, is just as astonishing. “Reaching a conclusion in the investigation of our Chinese business is important, but this has been a deeply disappointing matter for GSK”, he said. Uh, give me that again? “Reaching a conclusion is important” – that is, it’s “important” that this case has ended? I’m still struggling to find any objective meaning in these words. “Deeply disappointing” – meaning what, exactly? Disappointing that you got caught? Yes, I can see that. Disappointing that the ruling went against you? You mean, after Reilly admitted he’d done it? Disappointing that you are being run in such a sociopathic way? Missing a friend’s birthday party is disappointing. Pushing drugs on people by bribing doctors is many things, but disappointing isn’t the word that springs to mind.

Witty goes on: “We have and will continue to learn from this.” A shred of comfort here: I can stop worrying about whether my daughter is being taught English well enough to prepare her for a successful career. That aside: you will learn from this? What will you learn? That you shouldn’t bribe doctors? That you should hide malpractice better? That you seem to be rather bad at selecting your senior management? No, there is no lesson to be learnt here. There is just stuff to be deeply ashamed of – more ashamed, even, than is evidenced by taking a quarter of a million cut in your two million quid annual bonus.

Ah, but GSK has learnt. “The company said it had fundamentally changed the incentive programme for its sales force.” In other words, whereas before the incentive programme made it all too tempting to commit crimes, now it doesn’t. Oh, the lessons life teaches us.

Wednesday, September 03, 2014

Upside down and inside out

Tomorrow a new exhibition by Peter Randall-Page opens at Pangolin London, called Upside Down & Inside Out. Peter has a long-standing interest in natural processes responsible for the appearance of pattern and form, inspired by the ideas of D’Arcy Thompson. It has been my privilege to write an essay for the catalogue of this exhibition, which is freely available online. Here’s the piece anyway.

___________________________________________________________________________

There are, in the crudest of terms, two approaches to understanding the world. Some seek to uncover general, universal principles behind the bewildering accumulation of particulars; others find more enlightenment in life’s variety than in the simplifying approximations demanded in a quest for unity. The former are Platonists, and in science they tend to be found in greater numbers among physicists. The latter are Aristotelians, and they are best represented in biology. The Platonists follow the tree to its trunk, the Aristotelians work in the other direction, towards branch and leaf.

The work of artist and sculptor Peter Randall-Page explores these opposing – or perhaps one should say complementary – tendencies. He sees them in terms of the musical notion of theme and variation: a single Platonic theme can give rise to countless Aristotelian variations. The theme alone risks being static, even monotonous; a little disorder, a dash of unpredictability, generates enriching diversity, but that random noise must be kept under control if the result is not to become incomprehensible chaos. It is perhaps precisely because this tension obtains in evolution, in music and language, in much of our experience of life and world, that its expression in art has the potential to elicit emotion and identification from abstract forms. This balance of order and chaos is one that we recognize instinctively.

This is why Peter’s works commonly come as a series: they are multiple expressions of a single underlying idea, and only when viewed together do they give us a sense both of the fundamental generating principle and its fecund creative potential. The diversity depends on chance, on happy accidents or unplanned contingencies that allow the generative laws to unfold across rock or paper in ways quite unforeseen and unforeseeable. Like Paul Klee, Peter takes lines for a walk – but they are never random walks; there are rules that they must respect. And as with Klee, this apparent constraint is ultimately liberating to the imagination: given the safety net of the basic principles, the artist’s mind is free to play.

It might seem odd to talk about creativity in what is essentially an algorithmic process, an unfolding of laws. But it is hard to think of a better or more appropriate term to describe the “endless forms most beautiful” that we find in nature, and not just in animate nature. We could hardly fail to marvel at the inventiveness of a mind that could conceive of the countless variations on a theme that we observe in snowflakes, and it seems unfair to deny nature here inventiveness merely because we can see no need to attribute to her a mind, just as Alan Turing insisted that we have no grounds for denying a machine “intelligence” if we cannot distinguish its responses from those of a human.

This emergence of variety from simplicity is an old notion. “Nature”, wrote Ralph Waldo Emerson, “is an endless combination and repetition of a very few laws. She hums the old well-known air through innumerable variations.” When Emerson attested that such “sublime laws play indifferently through atoms and galaxies”, it is surely the word “play” that speaks loudest: there is a gaiety and spontaneity here that seems far removed from the mechanical determinism of which physics is sometimes accused. For Charles Darwin, one can’t help but feel that the Aristotelian diversity of nature – in barnacles, earthworms and orchids – held at least as much attraction as the Platonic principle of natural selection.

But one of Peter’s most inspirational figures was skeptical of an all-embracing Darwinism as the weaver of nature’s threads. The Scottish zoologist D’Arcy Thompson felt that natural selection was all too readily advanced as the agency of every wrinkle and rhythm of organic nature. The biologists of his time tended to claim that all shape, form and regularity was the way it was because of adaptation. If biology has a more nuanced view today, Thompson must take some of the credit. He argued that it was often physical and mechanical principles that governed nature’s forms and patterns, not some infinitely malleable Darwinian force. Yet at root, Thompson’s picture – presented in his encyclopaedic 1917 book On Growth and Form – was not so different from Darwin’s insofar as it posited some quite general principles that could give rise to a vast gallery of variations. Thompson simply said that those principles need not be Darwinian or selective, but could apply both to the living and the inorganic worlds. In this view, it should be no coincidence that the branching shapes of river networks resemble those of blood vessels or lung passages, or that a potato resembles a pebble, or that the filigree skeletal shell of a radiolarian echoes the junctions of soap films in a foam. Thompson was a pioneer of the field loosely termed morphogenesis: the formation of shape. In particular, he established the idea that the appearance of pattern and regularity in nature may be a spontaneous affair, arising from the interplay of conflicting tendencies. No genes specify where a zebra’s stripes are to go: if anything is genetically encoded, it is merely the biochemical machinery for covering an arbitrary form with stripes.
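Thompson’s intuition that pattern can arise spontaneously from the interplay of conflicting tendencies was later made mathematically precise in Alan Turing’s theory of morphogenesis, the kind of biochemical mechanism now invoked for the zebra’s stripes. A toy linear-stability calculation – the numbers below are purely illustrative, not a model of any real pigment system – shows the essential point: when a slowly diffusing activator competes with a fast-diffusing inhibitor, the uniform state decays but a band of finite wavelengths grows, so stripes of a definite spacing emerge of their own accord:

```python
import math

# Jacobian of the reaction kinetics at the uniform steady state
# (illustrative numbers only): the activator amplifies itself, the
# inhibitor damps both.
a, b, c, d = 0.5, -1.0, 1.0, -1.0
Du, Dv = 1.0, 20.0   # the inhibitor diffuses twenty times faster

def growth_rate(k):
    """Largest eigenvalue of the linearized reaction-diffusion system
    for a spatial mode cos(kx): roots of lambda^2 - T*lambda + H = 0,
    with T = (a + d) - (Du + Dv) k^2 and
    H = (a - Du k^2)(d - Dv k^2) - b*c."""
    k2 = k * k
    T = (a + d) - (Du + Dv) * k2
    H = (a - Du * k2) * (d - Dv * k2) - b * c
    disc = T * T - 4 * H
    if disc >= 0:
        return (T + math.sqrt(disc)) / 2
    return T / 2   # complex pair: real part

# The uniform mode (k = 0) decays, but an intermediate band of
# wavelengths grows -- the hallmark of a Turing instability.
ks = [i * 0.01 for i in range(101)]
print(f"growth at k=0: {growth_rate(0):.3f}, "
      f"max over band: {max(growth_rate(k) for k in ks):.3f}")
```

Nothing in the kinetics “specifies” where the stripes go; their wavelength falls out of the competition between reaction and diffusion, which is just Thompson’s point.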


The exoskeleton of a radiolarian

It is a fascination with these ideas that gives nearly all of Peter’s works their characteristic and compelling feature: you can’t quite decide whether the impetus for these complex but curiously geometric forms came from biology or from elsewhere, from cracks and crystals and splashes. That ambiguity fixes the imagination, inviting us to decode the riddle. This dance between geometry and organism is immediately apparent in the monumental sculpture Seed commissioned by the Eden Project in Cornwall: an egg-shaped block of granite 13 feet high and weighing 70 tonnes, the surface of which is covered in bumps that you quickly discern to be as apparently orderly as atoms packed together in a crystal. But are they? These bumps adapt their size to the curvature of the surface, and you soon notice that they progress around the ovoid in spirals, recalling the arrangements of leaflets on a pine-cone or florets on a sunflower head. Can living nature really be so geometric? Certainly it can, for both of those plant structures, like the compartments on a pineapple, obey mathematical laws that have puzzled botanists (including Darwin) for centuries. These plant patterns are called phyllotaxis, and the reason for them is still being debated. Some argue that they are ordered by the constraints on the buckling and wrinkling of new stem tissue, others that there is a biochemical process – not unlike that responsible for the zebra’s stripes and the leopard’s spots – that generates order among the successively sprouting buds.
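The geometry of those spirals can itself be sketched in a few lines. In Vogel’s classic model of phyllotaxis – my choice of illustration; the developmental mechanisms debated above are a separate question – floret k sits at a radius proportional to √k and at an azimuth of k times a fixed divergence angle. The golden angle (≈137.5°) yields the evenly packed sunflower arrangement, while a “rational” angle such as a quarter turn piles the florets into rays:

```python
import math

GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))   # ~137.5 degrees

def florets(n_points, angle, c=1.0):
    """Vogel's model: floret k at radius c*sqrt(k), azimuth k*angle."""
    pts = []
    for k in range(1, n_points + 1):
        r = c * math.sqrt(k)
        pts.append((r * math.cos(k * angle), r * math.sin(k * angle)))
    return pts

def min_spacing(pts):
    """Smallest distance between any two florets: a crude measure of
    how evenly the head is packed."""
    best = float("inf")
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            best = min(best, math.dist(pts[i], pts[j]))
    return best

golden = min_spacing(florets(200, GOLDEN_ANGLE))
quarter = min_spacing(florets(200, math.pi / 2))   # florets pile up in 4 rays
print(f"golden angle: {golden:.3f}, quarter turn: {quarter:.3f}")
```

The golden angle wins because it is, in a precise sense, the angle least well approximated by any rational fraction of a turn, so no two florets ever line up.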



Seed, by Peter Randall-Page, at the Eden Project, Cornwall, and the inspiration provided by pine cones.

The bulbous, raspberry-like surface of Seed was carved out of the pristine rock. But in nature such structures are typically grown from the inside outwards, the cells and compartments budding and swelling under the expansive pressures of biological proliferation. “Everything is what it is”, D’Arcy Thompson wrote, “because it got that way” – a seemingly obvious statement, but one that brings the focus to how it got that way: to the process of growth that created it. With this in mind, the bronze casts that Peter has created for this exhibition are also made “from the inside”. They are cast from natural boulders shaped by erosion, but Peter has worked the inner surfaces of the moulds using a special tool to scoop out hemispherical impressions packed like the cells of a honeycomb, so that the shapes cast from them follow the basic contours of the boulders while acquiring these new frogspawn-like cellular patterns on their surface. By subtracting material from the mould, the cast object is itself “grown”, emerging transformed and hitherto unseen from its chrysalis.


A new work by Peter Randall-Page (on the right) being cast at the foundry.

The organic and unfolding character of Peter’s work is nowhere more evident than in his “drawings” of branching, tree-like networks: Blood Tree, Sap River and Source Seed. These are made by allowing ink or wet pigment to flow under gravity across the paper in a quasi-controlled manner, so that not only does the flow generate repeated bifurcations but the branches acquire perfect mirror symmetry by folding the absorbent paper, just like the bilateral symmetry of the human body. The results are ordered, but punctuated and decorated with unique accidents. The final images are inverted so that the rivulets seem to stream upwards in increasingly fine filaments, defying gravity: a process of division without end, arbitrarily truncated and all emanating from a single seed. The inversion suggests growth and vitality, a reaching towards the infinite, although of course in real plants we know that these branches are echoed downwards in the traceries of the roots. There is irony too in the fact that, while sap does indeed rise from trunk to tip, driven by the evaporation of water from the leaf, water in a river network flows the other way, being gathered into the tributaries and converging into the central channel. Nature indeed makes varied use of these branching networks – and often for the same reason, that they are particularly efficient at distributing fluid and dissipating the energy of flow. But we must be vigilant in making distinctions as well as analogies in how they are used.



Peter Randall-Page, Blood Tree and Sap River V.

Were real trees ever quite so regular, however? Some of these look more like genealogies, a mathematically precise doubling of branch density by bifurcation in each generation – until, perhaps, the individual branches blur into a continuum. We could almost be looking at a circuit diagram or technical chart – and yet the splodgy irregularities of the channels warn us that there is still something unpredictable here, as though these are computer networks grown from bacteria (as indeed some researchers are attempting to do). If there can be said to be beauty in the images, it depends on this uncertainty: as Ernst Gombrich put it, the aesthetic sense is awakened by “a struggle between two opponents of equal power, the formless chaos, on which we impose our ideas, and the all-too-formed monotony, which we brighten up by new accents”.

The vision of the world offered by Peter Randall-Page is therefore neither Platonic nor Aristotelian. We might better describe it as Neoplatonic: as asserting analogies and correspondences between apparently unrelated things. This tendency, which thrived in the Renaissance and can be discerned in the parallels that Leonardo da Vinci drew between the circulation of blood and of natural waters in rivers, later came to seem disreputable: like so much of the occult philosophy, it attempted to connect the unconnected, relying on mere visual puns and resemblances without regard to causative mechanisms (or perhaps, mistaking those analogies for a kind of mechanism itself). But thanks to the work of D’Arcy Thompson, and now modern scientific theories of complexity and pattern formation, a contemporary Neoplatonism has re-emerged as a valid way to understand the natural world. There are indeed real, quantifiable and verifiable reasons why zebra stripes look like the ripples of windblown sand, or why both the Giant’s Causeway and the tortoise shell are divided into polygonal networks. When we experience these objects and structures, we experience what art historian Martin Kemp has called “structural intuitions”, which are surely what the Neoplatonists were responding to. And these intuitions are what Peter’s work, with all its intricate balance of order and randomness, awakens in us.

To find out more: see Peter Randall-Page, “On theme and variation”, Interdisciplinary Science Reviews 38, 52-62 (2013) [here].

Saturday, August 30, 2014

When and why does biology go quantum?

Here is my latest Crucible column for Chemistry World. Do look out for Jim and Johnjoe’s book Life on the Edge, which very nicely rounds up where quantum biology stands right now – and Jim has just started filming a two-parter on this (for BBC4, I believe).

_________________________________________________________________________

“Quantum biology” was always going to be a winning formula. What could be more irresistible than the idea that two of the most mysterious subjects in science – quantum physics and the existence of life – are connected? Indeed, you get the third big mystery – consciousness – thrown in for good measure, if you accept the highly controversial suggestion by Roger Penrose and Stuart Hameroff that the quantum behaviour of protein filaments called microtubules is responsible for the computational capability of the human mind [1].

Chemists might sigh that once again those two attention-grabbers, physics and biology, are appropriating what essentially belongs to chemistry. For the fact is that all of the facets of quantum biology that are so far reasonably established, or at least well grounded in experiment and theory, are chemical ones. Arguably the most mundane, but at the same time the least disputable, area in which quantum effects make their presence felt in a biological context is enzyme catalysis, where quantum tunnelling processes operate during reactions involving proton and electron transfer [2]. It also appears beyond dispute that photosynthesis involves transfer of energy from the excited chromophore to the reaction centre in an excitonic wavefunction that maintains a state of quantum coherence [3,4]. It still seems rather staggering to find in the warm, messy environment of the cell a quantum phenomenon that physicists and engineers are still struggling to harness under cryogenic conditions for quantum computing. The riskier reaches of quantum biology also address chemical problems: the mechanism of olfaction (proposed to happen by sensing of odorant vibrational spectra using electron tunnelling [5]) and of magnetic direction-sensing in birds (which might involve quantum entanglement of electron spins on free radicals [6]).
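The scale of that tunnelling effect is easy to illustrate with a back-of-envelope WKB estimate. The square barrier below is a cartoon – its height and width are chosen purely for illustration, not taken from any real enzyme – but it shows why tunnelling is so exquisitely sensitive to mass, which is why anomalously large hydrogen/deuterium kinetic isotope effects are read as its experimental fingerprint:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
MP = 1.672621924e-27     # proton mass, kg

def tunnel_prob(mass, barrier_ev, width_m):
    """WKB estimate of the probability of tunnelling through a square
    barrier of the given height (eV) and width (m), for a particle
    with negligible incident energy: exp(-2*kappa*L) with
    kappa = sqrt(2*m*V)/hbar. A cartoon, not an enzyme model."""
    v_joules = barrier_ev * 1.602176634e-19
    kappa = math.sqrt(2 * mass * v_joules) / HBAR
    return math.exp(-2 * kappa * width_m)

# A 0.5 eV barrier, 0.5 angstroms wide: doubling the mass (deuterium
# in place of hydrogen) suppresses the tunnelling probability by
# orders of magnitude.
p_h = tunnel_prob(MP, 0.5, 0.5e-10)
p_d = tunnel_prob(2 * MP, 0.5, 0.5e-10)
print(f"H: {p_h:.2e}  D: {p_d:.2e}  ratio: {p_h / p_d:.0f}")
```

A classical over-the-barrier rate would change only modestly on doubling the mass; the exponential mass dependence here is what makes tunnelling detectable in enzyme kinetics.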

Yet it is no quirk of fate that these phenomena are sold as a union of physics and biology, bypassing chemistry. For as Jim Al-Khalili and Johnjoe McFadden explain in a forthcoming comprehensive overview of the field, Life on the Edge (Doubleday), the first quantum biologists were pioneers of quantum theory: Pascual Jordan, Niels Bohr and Erwin Schrödinger. Bohr was never shy of pushing his view of quantum theory – the Copenhagen interpretation – into fields beyond physics, and his 1932 lecture “Light and Life” seems to have been influential in persuading Max Delbrück to turn from physics to genetics, on which his work later won him a Nobel Prize.

But it is Schrödinger’s contribution that is probably best known, for the notes from his lectures at Trinity College Dublin that he collected into his little 1944 book What Is Life? remain remarkable for their prescience and influence. Most famously, Schrödinger here formulated the idea that life somehow opposes the entropic tendency towards dissolution – it feeds on negative entropy, as he put it – and he also argued that genetic information might be transmitted by an arrangement of atoms that he called an “aperiodic crystal” – a description of DNA, whose structure was decoded nine years later (partly by another former physicist, Francis Crick), that still looks entirely apt.

One of the most puzzling of biological facts for Schrödinger was that genetic mutations, which were fundamentally probabilistic quantum events on a single-atom scale, could become fixed into the genome and effect macroscopic changes of phenotype. By the same token, replication of genes (which was understood before Crick and Watson revealed the mechanism) happened with far greater fidelity than one should expect from the statistical nature of molecular interactions. Schrödinger reconciled these facts by arguing that it was the very discreteness of quantum events that gave them an accuracy and stability not available to classical continuum states.

But this doesn’t sound right today. For the fact is that Schrödinger was underestimating biology. Far from being at the mercy of replication errors incurred by thermal fluctuations, cells have proof-reading mechanisms to check for and correct these mistakes.

There is an equal danger that quantum biologists may overestimate biology. For it’s all too tempting, when a quantum effect such as tunnelling is discovered in a biological process, to assume that evolution has put it there, or at least found a way to capitalize on it. Tunnelling is nigh inevitable in proton transfer; but if we want to argue that biology exploits quantum physics here, we need to ask whether its occurrence is enhanced by adaptation. Nobel laureate biochemist Arieh Warshel has rejected that idea, calling it a “red herring” [7].

Similarly in photosynthesis, it’s not yet clear if quantum coherence is adaptive. It does seem to help the efficiency of energy transfer, but that might be a happy accident – Graham Fleming, one of the pioneers in this area, says that it may be simply “a byproduct of the dense packing of chromophores required to optimize solar absorption” [8].

These are the kind of questions that may determine what becomes of quantum biology. For its appeal lies largely with the implication that biology and quantum physics collaborate, rather than being mere fellow travellers. We have yet to see how far that is true.

1. R. Penrose, Shadows of the Mind (Oxford University Press, 1994).
2. A. Kohen & J. P. Klinman, Acc. Chem. Res. 31, 397 (1998).
3. G. S. Engel et al., Nature 446, 782 (2007).
4. H. Lee, Y.-C. Cheng & G. R. Fleming, Science 316, 1462 (2007).
5. L. Turin, Chem. Senses 21, 773 (1996).
6. E. M. Gauger, E. Rieper, J. J. L. Morton, S. C. Benjamin & V. Vedral, Phys. Rev. Lett. 106, 040503 (2011).
7. P. Ball, Nature 431, 396 (2004).
8. P. Ball, Nature 474, 272 (2011).

Thursday, August 07, 2014

Calvino's culturomics

Italo Calvino’s If On a Winter’s Night a Traveller is one of the finest and funniest meditations on writing that I’ve ever read. It also contains a glorious pre-emptive critique of what began as Zipf’s law and is now called culturomics: the statistical mining of vast bodies of text for word frequencies, trends and stylistic features. What is so nice about it (apart from the wit) is that Calvino seems to recognize that this approach is not without validity (and I certainly think it has some), while at the same time commenting on the gulf that separates this clinical enumeration from the true craft of writing – and for that matter, of reading. I am going to quote the passage in full – I don’t know what copyright law might have to say about that, but I am trusting to the fact that anyone familiar with Calvino’s book would be deterred from trying to enforce ownership of the text by the baroque level of irony that would entail.
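Lotaria’s “electronic reading”, which Calvino imagined in 1979, is these days a few lines of code. Here is a minimal sketch of the idea – the one-line sample text is my own invention, standing in for a novel:

```python
from collections import Counter
import re

# Any novel-length text would do; a short sample stands in here.
text = "The spider saw the sentry and the sentry saw the spider"

# Lower-case the text and split it into words.
words = re.findall(r"[a-z']+", text.lower())
freq = Counter(words)

# Lotaria's "already completed reading": words listed in order of frequency.
for word, count in freq.most_common():
    print(count, word)
```

As Lotaria notes, the top of any such list is dominated by articles and pronouns; culturomics proper gets interesting only once those are filtered out.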

__________________________________________________________

[From Vintage edition 1998, translated by William Weaver]

I asked Lotaria if she has already read some books of mine that I lent her. She said no, because here she doesn’t have a computer at her disposal.

She explained to me that a suitably programmed computer can read a novel in a few minutes and record the list of all the words contained in the text, in order of frequency. “That way I can have an already completed reading at hand,” Lotaria says, “with an incalculable saving of time. What is the reading of a text, in fact, except the recording of certain thematic recurrences, certain insistences of forms and meanings? An electronic reading supplies me with a list of the frequencies, which I have only to glance at to form an idea of the problems the book suggests to my critical study. Naturally, at the highest frequencies the list records countless articles, pronouns, particles, but I don’t pay them any attention. I head straight for the words richest in meaning; they can give me a fairly precise notion of the book.”

Lotaria brought me some novels electronically transcribed, in the form of words listed in the order of their frequency. “In a novel of fifty to a hundred thousand words,” she said to me, “I advise you to observe immediately the words that are repeated about twenty times. Look here. Words that appear nineteen times:
“blood, cartridge belt, commander, do, have, immediately, it, life, seen, sentry, shots, spider, teeth, together, your…”
“Words that appear eighteen times:
“boys, cap, come, dead, eat, enough, evening, French, go, handsome, new, passes, period, potatoes, those, until…”

“Don’t you already have a clear idea what it’s about?” Lotaria says. “There’s no question: it’s a war novel, all actions, brisk writing, with a certain underlying violence. The narration is entirely on the surface, I would say; but to make sure, it’s always a good idea to take a look at the list of words used only once, though no less important for that. Take this sequence, for example:
“underarm, underbrush, undercover, underdog, underfed, underfoot, undergo, undergraduate, underground, undergrowth, underhand, underprivileged, undershirt, underwear, underweight…”

“No, the book isn’t completely superficial, as it seemed. There must be something hidden; I can direct my research along these lines.”

Lotaria shows me another series of lists. “This is an entirely different novel. It’s immediately obvious. Look at the words that recur about fifty times:
“had, his, husband, little, Riccardo (51) answered, been, before, has, station, what (48) all, barely, bedroom, Mario, some, Times (47) morning, seemed, went, whom (46) should (45) hand, listen, until, were (43) Cecilia, Delaia, evening, girl, hands, six, who, years (42) almost, alone, could, man returned, window (41) me, wanted (40) life (39)"

“What do you think of that? An intimatist narration, subtle feelings, understated, a humble setting, everyday life in the provinces … As a confirmation, we’ll take a sample of words used a single time:
“chilled, deceived, downward, engineer, enlargement, fattening, ingenious, ingenious, injustice, jealous, kneeling, swallow, swallowed, swallowing…"

“So we already have an idea of the atmosphere, the moods, the social background… We can go on to a third book:
“according, account, body, especially, God, hair, money, times, went (29) evening, flour, food, rain, reason, somebody, stay, Vincenzo, wine (38) death, eggs, green, hers, legs, sweet, therefore (36) black, bosom, children, day, even, ha, head, machine, make, remained, stays, stuffs, white, would (35)"

“Here I would say we’re dealing with a full-blooded story, violent, everything concrete, a bit brusque, with a direct sensuality, no refinement, popular eroticism. But here again, let’s go on to the list of words with a frequency of one. Look, for example:
“ashamed, shame, shamed, shameful, shameless, shames, shaming, vegetables, verify, vermouth, virgins…"

“You see? A guilt complex, pure and simple! A valuable indication: the critical inquiry can start with that, establish some working hypothesis…What did I tell you? Isn’t this a quick, effective system?”

The idea that Lotaria reads my books in this way creates some problems for me. Now, every time I write a word, I see it spun around by the electronic brain, ranked according to its frequency, next to other words whose identity I cannot know, and so I wonder how many times I have used it, I feel the whole responsibility of writing weigh on those isolated syllables, I try to imagine what conclusions can be drawn from the fact that I have used this word once or fifty times. Maybe it would be better for me to erase it…But whatever other word I try to use seems unable to withstand the test…Perhaps instead of a book I could write lists of words, in alphabetical order, an avalanche of isolated words which expresses that truth I still do not know, and from which the computer, reversing its program, could construct the book, my book.

On the side of the angels


Here’s my take on Dürer’s Melencolia I on its 500th anniversary, published in Nature this week.

________________________________________________________

Albrecht Dürer’s engraving Melencolia I, produced 500 years ago, seems an open invitation to the cryptologist. Packed with occult symbolism from alchemy, astrology, mathematics and medicine, it promises hidden messages and recondite meanings. What it really tells us, however, is that Dürer was a philosopher-artist of the same stamp as Leonardo da Vinci, immersed in the intellectual currents of his time. In the words of art historian John Gage, Melencolia I is “almost an anthology of alchemical ideas about the structure of matter and the role of time” [1].

Dürer’s brooding angel is surrounded by the instruments of the proto-scientist: a balance, an hourglass, measuring calipers, a crucible on a blazing fire. Here too is numerological symbolism in the “magic square” of the integers 1-16, the rows, columns and main diagonals of which all add up to 34: a common emblem of both folk and philosophical magic. Here is the astrological portent of a comet, streaming across a sky in which an improbable rainbow arches, a symbol of the colour-changing processes of the alchemical route to the philosopher’s stone. And here is the title itself: melancholy, associated in ancient medicine with black bile, whose colour was also that of the material with which the alchemist’s Great Work to make gold was supposed to begin.
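The square Dürer used, with the date of the engraving – 1514 – famously worked into its bottom row, really does check out; for the curious, a few lines of Python confirm that every row, column and main diagonal sums to 34:

```python
# Dürer's 4x4 magic square as it appears in Melencolia I
# (the central pair of the bottom row encodes the date 1514).
square = [
    [16,  3,  2, 13],
    [ 5, 10, 11,  8],
    [ 9,  6,  7, 12],
    [ 4, 15, 14,  1],
]

rows = [sum(row) for row in square]
cols = [sum(col) for col in zip(*square)]
diagonals = [
    sum(square[i][i] for i in range(4)),      # main diagonal
    sum(square[i][3 - i] for i in range(4)),  # anti-diagonal
]

print(rows, cols, diagonals)  # every sum is 34
```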

But why the tools of the craftsman – the woodworking implements in the foreground, the polygonal block of stone awaiting the sculptor’s hammer and chisel? Why the tormented, introspective eyes of the androgynous angel?

Melencolia I is part of a trio of complex engravings on copper plate that Dürer made in 1513-14. Known as the Master Engravings, they are considered collectively to raise this art to an unprecedented standard of technical skill and psychological depth. This cluttered, virtuosic image is often said to represent a portrait of Dürer’s own artistic spirit. Melancholy, often considered the least desirable of the four classical humours then believed to govern health and disease, was traditionally associated with insanity. But during the Renaissance it was ‘reinvented’ as the humour of the artistic temperament, originating the link popularly asserted between madness and creative genius. The German physician and writer Cornelius Agrippa, whose influential Occult Philosophy (widely circulated in manuscript form from 1510) Dürer is almost certain to have read, claimed that “celestial spirits” were apt to possess the melancholy man and imbue him with the imagination required of an “excellent painter”. For it took imagination to be an image-maker – but also to be a magician.

The connection to Agrippa was first made by the art historian Erwin Panofsky, a doyen of symbolism in art, in 1943. He argued that what leaves Dürer’s art-angel so vexed is the artist’s constant sense of failure: an inability to fly, to exceed the bounds of the human imagination and create the truly wondrous. Her tools, in consequence, lie abandoned. Why astronomy, geometry, meteorology and chemistry should have any relation to the artistic temperament is not obvious today, but in the early sixteenth century the connection would have been taken for granted by anyone familiar with the Neoplatonic idea of correspondences in nature. This notion, which pervades Agrippa’s writing, held that all natural phenomena, including the predispositions of humankind, are joined into a web of hidden forces and symbols. Melancholy, for instance, is the humour governed by the planet Saturn, whence “saturnine.” That blend of ideas was still present in Robert Burton’s The Anatomy of Melancholy, published a century later, which called melancholics “dull, sad, sour, lumpish, ill-disposed, solitary, any way moved, or displeased.” A harsh description perhaps, but Burton reminds us that “from these melancholy dispositions no man living is free” – for melancholy is in the end “the character of Mortality.” But some are more prone than others: Agrippa reminded his readers of Aristotle’s opinion “that all men that were excellent in any Science, were for the most part melancholy.”

So there would have been nothing obscure about this picture for its intended audience of intellectual connoisseurs. It was precisely because Dürer mastered and exploited the new technologies of printmaking that he could distribute these works widely, and he indicated in his diaries that he sold many on his travels, as well as giving others as gifts to friends and humanist scholars such as Erasmus of Rotterdam. Unlike paintings, prints could be afforded on a moderate income. Ferdinand Columbus, son of Christopher, collected over 3,000, 390 of which were by Dürer and his workshop [2].

But even if the alchemical imagery of Melencolia I was part of the ‘occult parcel’ that this engraving presents, it would be wrong to imagine that alchemy was, to Dürer and his contemporaries, purely an esoteric art associated with gold-making. As Lawrence Principe has recently argued (The Secrets of Alchemy, University of Chicago Press, 2013), this precursor to chemistry was not just or even primarily about furtive and futile experimentation to make gold from base metals. It was also a practical craft, not least in providing artists with their pigments. For this reason, artists commonly knew something of its techniques; Dürer’s friend, the German artist Lucas Cranach the Elder, was a pharmacist on the side, which may explain why he was almost unique in Northern Europe in using the rare and poisonous yellow pigment orpiment, an arsenic sulphide. The extent of Dürer’s chemical knowledge is not known, but he was one of the first artists to use acids for etching metal, a technique developed only at the start of the sixteenth century. The process required specialist knowledge: it typically used nitric acid, made from saltpetre, alum and ferrous sulphate, mixed with dilute hydrochloric acid and potassium chlorate (“Dutch mordant”).

Humility should perhaps compel us to concur with art historian Keith Moxey that “the significance of Melencolia I is ultimately and necessarily beyond our capacity to define” [3] – we are too removed from it now for its themes to resonate. But what surely endures in this image is a reminder that for the Renaissance artist there was continuity between theories about the world, matter and human nature, the practical skills of the artisan, and the business of making art.

References
1. Gage, J. Colour and Culture, p.149. Thames & Hudson, London, 1993.
2. McDonald, M. in Albrecht Dürer and his Legacy, ed. G. Bartrum. British Museum, London, 2003.
3. Moxey, K. The Practice of Theory, p.93. Cornell University Press, Ithaca, 1994.

Wednesday, August 06, 2014

All hail the man who makes the bangs


The nerd with the safety specs who is always cropping up on TV doing crazy experiments for Jim Al-Khalili or Mark Miodownik or Michael Mosley, while threatening to upstage them with his patter? That’s Andrea Sella of UCL, who has just been awarded the Michael Faraday Prize by the Royal Society. And this is a very splendid thing. With previous recipients including Peter Medawar, Richard Dawkins, David Attenborough, Robert Winston and Brian Cox, it is clear what a prestigious award this is. But whereas those folks have on the whole found themselves celebrated and supported for their science-communication work, Andrea has sometimes been under a lot of pressure to justify doing this stuff instead of concentrating on his research (on lanthanides). I hope very much that this recognition will help to underline the value of what we now call “outreach activities” when conducted by people in regular research positions, rather than just by those who have managed to establish science communication as a central component of their work. Being able to talk about science (and in Andrea’s case, show it in spectacular fashion) is a rare skill, the challenge of which is sometimes under-estimated and under-valued, and so it is very heartening to see it recognized here.

Monday, August 04, 2014

Dreams of invisibility

Here’s my Point of View piece from the Guardian Review a week ago. It’s fair to say that my new book Invisible is now out, and I’m delighted that John Carey seemed to like it (although I’m afraid you can’t fully see why without a subscription).

___________________________________________________________________

H. G. Wells claimed in his autobiography that he and Joseph Conrad had “never really ‘got on’ together”, but you’d never suspect that from the gushing fan letter Conrad sent to Wells, 8 years his junior but far more established as a writer, in 1897. Before their friendship soured Conrad was a great admirer of Wells, and in that letter he rhapsodized the author of scientific romances as the “Realist of the Fantastic”. It’s a perceptive formulation of the way Wells blended speculative invention with social realism: tea and cakes and time machines. That aspect is nowhere more evident than in the book that stimulated Conrad to write to his idol: The Invisible Man.

To judge from Wells’ own account of his aims, Conrad had divined them perfectly. “For the writer of fantastic stories to help the reader to play the game properly”, he wrote in 1934, “he must help him in every possible unobtrusive way to domesticate the impossible hypothesis… instead of the usual interview with the devil or a magician, an ingenious use of scientific patter might with advantage be substituted. I simply brought the fetish stuff up to date, and made it as near actual theory as possible.”

In other words, Wells wanted to turn myth into science, or at least something that would pass for it. This is why The Invisible Man is a touchstone for interpreting the claims of modern physicists and engineers to be making what they call “invisibility cloaks”: physical structures that try to hide from sight what lies beneath. The temptation is to suggest that, as with atomic bombs, Wells’ fertile imagination was anticipating what science would later realise. But the light that his invisible man sheds on today’s technological magic is much more revealing.

It’s likely Wells was explicitly updating myth. One of the earliest stories about invisibility appears near the start of Plato’s Republic, a book that had impressed Wells in his youth. Plato’s narrator Glaucon tells of a Lydian shepherd named Gyges who discovered a ring of invisibility in the bowels of the earth. Without further ado, Gyges used the power to seduce the queen, kill the king and establish a new dynasty of Lydian rulers. In a single sentence Plato tells us what many subsequent stories of invisibility would reiterate about the desires that the dream of invisibility feeds: they are about sex, power and death.

Evidently this power corrupts – which is one reason why Tolkien made much more mythologically valid use of invisibility magic than did J. K. Rowling. But Glaucon’s point has nothing to do with invisibility itself; it is about moral responsibility. Given this power to pass unseen, he says, no one “would be so incorruptible that he would stay on the path of justice, when he could with impunity take whatever he wanted from the market, go into houses and have sexual relations with anyone he wanted, kill anyone, and do the other things which would make him like a god among men.” The challenge is how to keep rulers just if they can keep their injustices hidden.

The point about Gyges’ ring is that it doesn’t need to be explained, because it is metaphorical. The same is true of this and other magic effects in fairy tales: they just happen, because they are not about the doing but the consequences. Fairy-tale invisibility often functions as an agent of seduction and voyeurism (see the Grimms’ “The Twelve Dancing Princesses”), or a gateway to Faerie and other liminal realms. It’s precisely because children don’t ask “how is that possible?” that we shouldn’t fret about filling them with false beliefs.

But it seems to be a peculiarity of our age that we focus on the means of making magic and not the motive. The value of The Invisible Man is precisely that it highlights the messy outcome of this collision between science and myth. True, Wells makes some attempt to convince us that his anti-hero Griffin is corrupted by discovering the “secret of invisibility” – but it is one of the central weaknesses of the tale that Griffin scarcely has any distance to fall, since he is thoroughly obnoxious from the outset, driving his poor father to suicide by swindling him out of money he doesn’t possess in order to fund his lone research. If we are meant to laugh at the superstitions of the bucolic villagers of Iping as the invisible Griffin rains blows on them, I for one root for the bumpkins.

No, where the book both impresses and exposes is in its description of how Griffin becomes invisible. A plausible account of that trick had been attempted before, for example in Edward Page Mitchell’s 1881 short story “The Crystal Man”, but Wells had enough scientific nous to make it convincing. While Mitchell’s scientist simply makes his body transparent, Wells knew that it was necessary not just to eliminate pigmentation (which Griffin achieves chemically) but to eliminate refraction too: the bending of light that we see through glass or water. There was no known way of doing that, and Wells was forced to resort to the kind of “jiggery-pokery magic” he had criticized in Mary Shelley’s Frankenstein. He exploited the very recent discovery of X-rays by saying that Griffin had discovered another related form of “ethereal vibration” that gives materials the same refractive index as air.

Despite this, Griffin finds that invisibility is more a burden than a liberation. He dreams of world domination but, forgetting to vanish his clothes too, has to wander naked in the winter streets of London, bruised by unseeing crowds and frightened that he will be betrayed by the snow that threatens to settle on his body and record his footsteps. His eventual demise has no real tragedy in it but is like the lynching of a common criminal, betrayed by sneezes, sore feet and his digestive tract (in which food visibly lingers for a time). In all this, Wells shows us what it means to domesticate the impossible, and what we should expect when science tries to do magic.

That same gap between principle and practice hangs over today’s “invisibility cloaks”. They work in a different, and technologically marvellous, way: not by transparency, but by guiding light around the object they hide. But when the first of them was unveiled in 2006, it was perplexing: for there it sat, several concentric rings of printed circuits, as visible as you or me. It was, the scientists explained, invisible to microwaves, not to visible light. What had this to do with Gyges, or even with Griffin?

Some scientists argue that, for all their technical brilliance (which is considerable, and improving steadily), these constructs should be regarded as clever optical devices, not as invisibility cloaks. It’s hard to imagine how they could ever conceal a person walking around in daylight. This “magic” is cumbersome and compromised: it is not the way to seduce the queen, kill the king and become a tyrant.

This isn’t to disparage the invention and imagination that today’s “invisibility cloaks” embody. But it’s a reminder that myth is not a technical challenge, not a blueprint for the engineer. It’s about us, with all our desires, flaws, and dreams.

Cutting-edge metallurgy


This is my Materials Witness column for the August issue of Nature Materials. I normally figure these columns are a bit too specialized to put up here, but this subject is just lovely: there is evidently so much more to the "sword culture" of the so-called Dark Ages, the Viking era and the early medieval period than a bunch of blokes running amok with big blades. As Snorri Sturluson surely said, you can't beat a good sword.

__________________________________________________________________

There can be few more mythologized ancient materials technologies than sword-making. The common view – that ancient metalsmiths had an extraordinary empirical grasp of how to manipulate alloy microstructure to make the finest-quality blades – contains a fair amount of truth. Perhaps the most remarkable example of this was discovered several years ago: the near-legendary Damascus blades used by Islamic warriors, which were flexible yet strong and hard enough to cleave the armour of Crusaders, contained carbon nanotubes [1]. Formation of the nanotubes was apparently catalysed by impurities such as vanadium in the steel, and these nanostructures assisted the growth of cementite (Fe3C) fibres that thread through the unusually high-carbon steel known as wootz, making it hard without paying the price of brittleness.

Yet it seems that the skill of the swordsmith wasn’t directed purely at making swords mechanically superior. Thiele et al. report that the practice called pattern-welding, well established in swords from the second century AD to the early medieval period, was primarily used for decorative rather than mechanical purposes and, unless used with care, could even have compromised the quality of the blades [2].

Pattern-welding involved the lamination and folding of two materials – high-phosphorus iron and low-phosphorus mild steel or iron – to produce a surface that could be polished and etched to striking decorative effect. After twisting and grinding, the metal surface could acquire striped, chevron and sinuous patterns that were highly prized. A letter to a Germanic tribe in the sixth century AD, complimenting them for the swords they gave to the Ostrogothic king Theodoric, conqueror of Italy, praised the interplay of shadows and colours in the blades, comparing the pattern to tiny snakes.


This and the image above are modern pattern-welded swords made by Patrick Barta using traditional methods.

But was it all about appearance? Surely what mattered most to a warrior was that his sword could be relied on to slice, stab and maim without breaking? It seems not. Thiele et al. commissioned internationally renowned swordsmith Patrick Barta to make pattern-welded rods for them using traditional techniques and re-smelted medieval iron. In these samples the high-phosphorus component was iron and not, as some earlier studies have mistakenly assumed, steel.

They subjected the samples to mechanical tests that probed the stresses typically experienced by a sword: impact, bending and buckling. In no case did the pattern-welded samples perform any better than hardened and tempered steel. This is not so surprising, given that phosphoric iron itself has rather poor toughness, no matter how it is laminated with other materials.

The prettiness of pattern welding didn’t, however, have to compromise the sword’s strength, since – at least in later examples – the patterned section was confined to panels in the central “fuller” of the blade, while the cutting edge was steel. All the same, here’s an example of how materials use may be determined as much by social as by technical and mechanical considerations. From the Early to the High Middle Ages, swords weren’t just or even primarily for killing people with. For the Frankish warrior, the spear and axe were the main weapons; swords were largely symbols of power and status, carried by chieftains, jarls and princes but used only rarely. Judging by the modern reproductions, they looked almost too gorgeous to stain with blood.

References
1. Reibold, M. et al., Nature 444, 286 (2006).
2. Thiele, A., Hosek, J., Kucypera, P. & Dévényi, L. Archaeometry online publication doi:10.1111/arcm.12114 (2014).