Thursday, December 03, 2015

Can science be made to work better?

Here is a longer version of the leader that I wrote for Nature this week.

_______________________________________________________________________

Suppose you’re seeking to develop a technique for transferring proteins from a gel to a plastic substrate for easier analysis. Useful, maybe – but will you gain much kudos for it? Will it enhance the reputation of your department? One of the sobering findings of last year’s survey of the 100 most cited papers on the Web of Science (Nature 514, 550; 2014) was how many of them reported such apparently mundane methodological research (this one was number six).

Not all prosaic work reaches such bibliometric heights, but that doesn’t deny its value. Overcoming the hurdles of nanoparticle drug delivery, for example, requires the painstaking characterization of pathways and rates of breakdown and loss in the body: work that is unglamorous, and probably unpublishable to boot. One can cite comparable demands of detail for getting just about any bright idea to work in practice – but it’s the initial idea, not the hard grind, that garners the praise and citations.

An aversion to routine yet essential legwork seems at face value to be quite the opposite of the conclusions of a new study on how scientists pick their research topics. This analysis of discovery and innovation in biochemistry (A. Rzhetsky et al., Proc. Natl Acad. Sci. USA 112, 14569; 2015) finds that, in this field at least, choices of research problems are becoming more conservative and risk-averse. The results suggest that this trend over the past 30 years is quite the reverse of what is needed to make scientific discovery efficient.

But these problems – avoidance of both risk and drudge – are just opposite sides of the same coin. They reflect the fact that scientific norms, institutions and reward structures increasingly force researchers to aim at a “sweet spot” that will maximize their career prospects: work that is novel enough to be publishable but orthodox enough not to alarm or offend referees. That situation is surely driven in large degree by the importance attached to citation indices, as well as by the insistence of grant agencies that the short-term impact of the work can be defined in advance.

One might quibble with the necessarily crude measures of research strategy and knowledge generation employed in the PNAS study. But its general conclusion – that current norms discourage risk and therefore slow down scientific advance, and that the problem is worsening – rings true. It’s equally concerning that the incentives for boring but essential collection of fine-grained data to solve a specific problem are vanishing in a publish-or-perish culture.

A fashionably despairing cry of “Science is broken!” is not the way forward. The wider virtue of Rzhetsky et al.’s study is that it floats the notion of tuning practices and institutions to accelerate the process of scientific discovery. The researchers conclude, for example, that publication of experimental failures would assist this goal by avoiding wasteful repetition. Journals chasing impact factors might not welcome that, but they are no longer the sole repositories of scientific findings. Rzhetsky et al. also suggest some shifts in institutional structures that might help promote riskier but potentially more groundbreaking research – for example, spreading both risk and credit among teams or organizations, as used to be common at Bell Labs.

The danger is that efforts to streamline discovery simply become codified into another set of guidelines and procedures, creating yet more hoops that grant applicants have to jump through. If there’s one thing science needs less of, it is top-down management. A first step would be to recognize the message that research on complex systems has emphasized over the past decade or so: efficiencies are far more likely to come from the bottom up. The aim is to design systems with basic rules of engagement for participating agents that best enable an optimal state to emerge. Such principles typically confer adaptability, diversity, and robustness. There could be a wider mix of grant sources and sizes, say, less rigid disciplinary boundaries, and an acceptance that citation records are not the only measure of worth.

But perhaps more than anything, the current narrowing of objectives, opportunities and strategies in science reflects an erosion of trust. Obsessive focus on “impact” and regular scrutiny of young (and not so young) researchers’ bibliometric data betray a lack of trust that would have sunk many discoveries and discoverers of the past. Bibliometrics might sometimes be hard to avoid as a first-pass filter for appointments (Nature 527, 279; 2015), but a steady stream of publications is not the only or even the best measure of potential.

Attempts to tackle these widely acknowledged problems are typically little more than a timid rearranging of deckchairs. Partly that’s because they are seen as someone else’s problem: the culprits are never the complainants, but the referees, grant agencies and tenure committees who oppress them. Yet oddly enough, these obstructive folk are, almost without exception, scientists too (or at least, once were).

It’s everyone’s problem. Given the global challenges that science now faces, inefficiencies can exact a huge price. It is time to get serious about oiling the gears.

Friday, October 16, 2015

The ethics of freelance reporting

There’s a very interesting post (if you’re a science writer) on journalistic ethics from Erik Vance here. I confess that I’ve been blissfully ignorant of this PR sideline that many science writers apparently have. It makes for a fairly clear division – either you’re writing PR or you’re not – but it doesn’t speak to my situation, and I can’t be alone in that. Erik worries about stories that come out of “institutionally sponsored trips”. I’m not entirely clear what he means by that, but I’m often in a situation like this:

A lab or department has asked if I might come and give a talk or take part in a seminar or some such. They’ll pay my expenses, including accommodation if necessary. And if I think it’ll be interesting, I’ll try to do it.

Is this then a junket? You see, what often happens is that the institute in question might line up a little programme of visits to researchers there, because I might find their work interesting or perhaps just because they would like to talk to me. And indeed I might well find their work interesting and want to write about it, or perhaps about the broader issues of the field they bring to my attention.

Now the question is: am I compromised by having the trip paid for me? Even more so on those rare occasions that I’m paid an honorarium? It’s for such reasons that Nature would always insist that the journal, not the visited institution, pays the way for its writers. This seems fair enough for a journal, but shouldn’t the same apply to a freelancer then?

I could say that life as a freelancer is already hard enough, given for example the more or less permanent freeze in pay rates, without our having to pay ourselves for any travelling that might produce a story (not least because you don’t always know that in advance). When a journal writer goes to give a talk or makes a lab visit, they are being paid by their employer to do it. As a freelancer, you are sacrificing working time to do that, and so are essentially already losing money by making the trip even if your travel and accommodation are covered.

But that doesn’t really answer the question, does it? It doesn’t mean that the piece you write is uncompromised just because you couldn’t afford to have gone if your expenses weren’t paid.

I don’t know what the answer is here. I do know that as a freelancer you’ll only get to write a piece if you pitch it to an editor who likes it, i.e. if it is a genuinely good story in the first place. In fact, you’ll probably only want to write it anyway if you sense it’s a good story yourself, and you do yourself no favours by pitching weak stories. But will your coverage be influenced by having been put up at a nice (if you’re lucky!) hotel by the institution? Erik is right to warn about unconscious biases, but I can’t easily see why the story would come out any different than if you’d come across the same work in a journal paper – you’d still be getting outside comment on it from objective specialists and so on. Still, I might be missing some important considerations here, and would be glad to have any pointed out to me.

It seems to me that a big part of this comes down to the attitude of the writer. If you start off from the position that you’re a cheerleader for science, you’re likely to be uncritical however you discover the story. If you consider yourself a critic in the proper sense, like a music or theatre critic, you’ll tend to look at the work accordingly. The same, it seems to me, has always applied to the issue of showing the authors a draft of the piece you’ve written about their work. Some journalists consider this an absolute no-no. I’ve never really understood why. If the scientist comes back pointing out technical errors in the piece, as they often do (and almost invariably in the nicest possible way), you get to give your readers a more accurate account. If they start demanding changes that seem unnecessary, interfering or pedantic, for example insisting that Professor Plum’s comments on their work are way off key, you just say sorry guys, this is the way it stays. That’s surely the job of a journalist. I can’t remember a time when feedback from authors on a rough draft was ever less than helpful and improving. So I guess I just don’t see what the problem is here.

But I am very conscious that I’ve never had any real training, as far as I can recall, in ethics in journalism. So I might be out of touch with what the issues are.

Multiverse of Stone

This summer I went to one of the most extraordinary scientific gatherings I’ve ever attended. Where else would you find Martin Rees, Rolf Heuer, Carlos Frenk, Alex Vilenkin and Bernard Carr assembled to talk about the multiverse idea? The meeting was convened by architect and designer Charles Jencks to mark the opening of his remarkable new landscape, the Crawick Multiverse, in Dumfries and Galloway. And the setting was no less striking: it took place in Drumlanrig Castle, a splendid baronial edifice that is the ancestral home of the Duke of Buccleuch, whose generosity and hospitality made it probably the most congenial meeting I’ve ever been to. Representing the humanities were Mary-Jane Rubenstein, whose excellent book Worlds Without End (2014) places the multiverse in historical and theological perspective; Martin Kemp, who talked about spirals in nature (look out for Martin’s forthcoming book Structural Intuitions); and Michael Benson, whose Cosmigraphics (2014) shows how we have depicted and conceptualized the universe over time. I talked about pattern formation in nature.

Despite all this, the piece that I wrote about the event has not found a home, having fallen between too many stools in various potential forums. So I’ll put it here. You will also be able to download a pdf of this article from my website here, once some site reworking has been completed.

______________________________________________________________________

With the Crawick Multiverse, landscape architect and designer Charles Jencks has set the archaeologists of the future a delightful puzzle. They will spin theories of various degrees of fancifulness to explain why this earthwork was built in the rather beautiful but undeniably stark wilds of Dumfries and Galloway. Is there a cosmic significance in the alignment of the stone-flanked avenue? What do these twinned spiralling tumuli denote, these little crescent lagoons, these radial splashes of stone paving? Whence these cryptic inscriptions “Balanced Universe” and “PIC” on slabs and monoliths?


The Crawick Multiverse

If any futurist historian is on hand to explain, there are two ways in which her story might go. Either she will say that the monument marks the moment when ancient science awoke to the realization that, as every child now knows, ours is not the only universe but is merely one among the multiverse of worlds, all springing perpetually into existence in an expanding matrix of “false vacuum”, each with its unique laws of physics. Or she will explain (with a warning that we should not Whiggishly mock the seemingly odd and absurd ideas of the past) that the Crawick site was built at a time when scientists seriously entertained so peculiar and now obviously misguided a notion.

If only we could tell which way it will go! But right now, that’s anyone’s guess. Whatever the outcome, Jencks, the former theorist of postmodernism who today takes risks simultaneously intellectual, aesthetic, critical and financial in his efforts to represent scientific ideas about the cosmos at a herculean scale, has created an extraordinarily ambitious landscape that manages to blend Goldsworthy-style nature art with cutting-edge cosmology and more than a touch of what might be interpreted as New Age paganism. At the grand opening of the Crawick (pronounced “Croyck”) Multiverse in late June, no one seemed too worried about whether the science would stand up to scrutiny. Instead there were pipe bands, singing schoolchildren, performance art and generous blasts of Caledonian weather.

Jencks is no stranger to this kind of grand statement. His house at Portrack, near Dumfries and a 30-minute drive from Crawick, sits amidst the Garden of Cosmic Speculation, a landscape of undulating turf terraces, stones, water pools and ornate metal sculptures that represents all manner of scientific ideas, from the spacetime-bending antics of black holes and the helical forms of DNA to mathematical fractals and the “symmetry-breaking” shifts that produced structure and order in the early universe. Jencks opens the garden to the public for one day each year to raise funds for Maggie’s Centres, the drop-in centres for cancer patients that Jencks established after the death of his wife Maggie Keswick Jencks from cancer in 1995.


A panorama of Charles Jencks’ Garden of Cosmic Speculation at Portrack House, Dumfries. (Photo: Michael Benson.)

Jencks also designed the lawn that fronts the Scottish National Gallery of Modern Art in Edinburgh, a series of crescent-shaped stepped mounds and pools inspired by chaos theory and “the way nature organizes itself”, in Jencks’ words. By drawing on cutting-edge scientific ideas, Jencks has cultivated strong ties with scientists themselves, and a plan for a landscape at the European particle-physics centre of CERN, near Geneva, sits on the shelf, awaiting funding.



Charles Jencks’ science-inspired land art in the Garden of Cosmic Speculation (top) and the garden of the Scottish National Gallery of Modern Art in Edinburgh (bottom).

The Multiverse project began when the Duke of Buccleuch and Queensberry, whose ancestral home at Drumlanrig Castle stands near to Crawick, asked Jencks to reclaim the site, dramatically surrounded by rolling hills but disfigured by the slag heaps from open-cast coal mining. When work began in 2012, the excavations unearthed thousands of boulders half-buried in the ground, which Jencks has used to create a panorama of standing stones and sculpted tumuli.

“As we discovered more and more rocks, we laid out the four cardinal points, made the north-south axis the primary one, and thereby framed both the far horizons and the daily and monthly movements of the sun”, Jencks says. “One theory of pre-history is that stone circles frame the far hills and key points, and while I wanted to capture today’s cosmology not yesterday’s, I was aware of this long landscape tradition.”

Visitors to the site should recognize the spiral form of our own Milky Way Galaxy, Jencks says – but the layout invites them to delve deeper into cosmogenic origins. The Milky Way, he says, “emerged from our Local Group of galaxies, but where did they come from? From the supercluster of galaxies, and where did they come from? From the largest structures in the universe, the web of filaments? And so on and on.” Ultimately this leads to the questions confronted by theories of the Big Bang in which our own universe is thought to have formed – and to questions about whether this cosmic outburst, or others, might also have spawned other universes, or a multiverse.

How many universes do you need?

A decade or two ago, allusions to the notion that there are many – perhaps infinitely many – universes would have been regarded as dabbling on the fringes of respectable science. Now the multiverse idea is embraced by many leading cosmologists and other physicists. That’s not because we have any evidence for it, but because it seems to offer a simultaneous resolution to several outstanding problems on the wild frontier where fundamental physics – the science of the immeasurably small – blends with cosmology, which attempts to explain the origin and evolution of all the vastness of space.

“In the last twenty years the multiverse has developed from an exotic speculation into a full-blown theory”, says Jencks. “From avant-garde conjecture held by the few to serious hypothesis entertained by the many, leading thinkers now believe the multiverse is a plausible description of an ensemble of universes.”

To explore how the multiverse came in from the cold, Jencks convened a gathering of cosmologists and particle physicists whose eminence would rival the finest of international science conventions. While the opening celebrations braved the elements at Crawick, the scientists were hosted by the duke at Drumlanrig Castle – perhaps the most stunning example of French-inflected Scottish baronial architecture, fashioned from the gorgeous red stone of Dumfries. In one long afternoon while the sun conveyed its rare blessing on the jaw-dropping gardens outside, these luminaries explained to an invited audience why they have come to suppose a multiplicity of universes beyond all reasonable measure: why an understanding of the deepest physical laws is compelling us to make the position of humanity in the cosmos about as insignificant as it could possibly be.


Drumlanrig Castle near Dumfries, where scientists convened to discuss the multiverse.

It was a decidedly surreal gathering, with PowerPoint presentations on complex physics amidst Louis XIV furniture, while massive portraits of the duke’s illustrious ancestors (including Charles II’s unruly illegitimate son the 1st Duke of Monmouth) looked on. When art historian Martin Kemp, opening the proceedings with a survey of spiral patterns, discussed the nature art of Andy Goldsworthy, only to have the artist himself pop up to explain his intentions, one had to wonder if we had already strayed into some parallel universe.

Martin Rees, Astronomer Royal and past President of the Royal Society, suggested that the multiverse theory represents a “fourth Copernican revolution”: the fourth time since Copernicus shoved the earth away from the centre of creation that we have been forced to downgrade our status in the heavens. Yet curiously, this latest perspective also gives our very existence a central role in any explanation of why the basic laws of nature are the way they are.

Here’s the problem. A quest for the much-vaunted Theory of Everything – a set of “simple” laws, or perhaps just a single equation, from which all the other principles of physics can be derived, and which will achieve the much-sought reconciliation of gravity and quantum theory – has landed us in the perplexing situation of having more alternatives to choose from than there are fundamental particles in the known universe. To be precise, the latest version of string theory, which many physicists who delve into these waters insist is the best candidate for a “final theory”, offers 10**500 (1 followed by 500 zeros) distinct solutions: that many possible variations on the laws of physics, with no obvious reason to prefer one over any other. Some are tempted to conclude that this is the fault of string theory, not of the universe, and so prefer to ditch the whole edifice, which without doubt is built on some debatable assumptions and remains far beyond our means to test directly for the foreseeable future.

If that were all there was to it, you might well wonder if indeed we should be wiping the board clean and starting again. But cosmology now suggests that this crazy proliferation of physical laws can be put to good use. The standard picture of the Big Bang – albeit not the one that all physicists embrace – posits that, a fraction of a second after the universe began to expand from its mysterious origin, it underwent a fleeting instant of expansion at an enormous rate, far faster than the speed of light, called inflation. This idea explains, in what might seem like but is not a paradox, both why the universe is so uniform everywhere we look and why it is not perfectly so. Inflation blew up the “fireball” to a cosmic scale before it had a chance to get too clumpy.

That primordial state would, however, have been unavoidably ruffled by the tiny chance variations that quantum physics creates. These fluctuations are now preserved at astronomical scales in slight differences in temperature of the cosmic microwave background radiation, the faint afterglow of the Big Bang itself that several satellite-based telescopes have now mapped out in fine detail. As astrophysicist Carlos Frenk explained at Drumlanrig, the match between the spectrum of temperature variations – their size at different distance scales – predicted by inflationary theory and that measured is so good that, were it not so well attested in so huge an international effort, it would probably arouse suspicions of data-rigging.


The temperature variations of the cosmic microwave background, as mapped by the European Space Agency’s Planck space telescope in 2013. The tiny variations correspond to regions of slightly different density in the very early universe that seeded the formation of clumps of matter – galaxies and stars – today.

What has this got to do with multiverses? Well, to put it one way: if you have a theory for how the Big Bang happened as a natural phenomenon, almost by definition you no longer have reason to regard it as a one-off event. The current view is that the Big Bang itself was a kind of condensation of energy-filled empty space – the “true vacuum” – out of an unstable medium called the “false vacuum”, much as mist condenses from the moist air of the Scottish hills. But this false vacuum, for reasons I won’t attempt to explain, should also be subject to a kind of inflation in which it expands at fantastic speed. Then our universe appears as a sort of growing “bubble” in the false vacuum. But others do too: not just 13.8 billion years ago (the age of our universe) but constantly. It’s a scenario called “eternal inflation”, as one of its pioneers, cosmologist Alex Vilenkin, explained at the meeting. In this view, there are many, perhaps infinitely many, universes appearing and growing all the time.

The reason this helps with string theory is that it relieves us of the need to select any one of the 10**500 solutions it yields. There are enough homes for all versions. That’s not just a matter of accommodating homeless solutions to an equation. One of the most puzzling questions of modern cosmology is why the vacuum is not stuffed full of unimaginable amounts of energy. Quantum theory predicts that empty space should be so full of particles popping in and out of existence all the time, just because they can, that it should hold far more energy than the interior of a star. Evidently it doesn’t, and for a long time it was simply assumed that some unknown effect must totally purge the vacuum of all this energy. But the discovery of dark energy in the late 1990s – which manifests itself as an acceleration of the expansion of our universe – forced cosmologists to accept that a tiny amount of that vacuum energy does in fact remain. In this view, that’s precisely what dark energy is. Yet it is so tiny an amount – 10**-122 of what is predicted – that it seems almost a cosmic joke that the cancellation should be so nearly complete but not quite.

But if there is a multiverse, this puzzle of “fine-tuning” goes away. We just happen to be living in one of the universes in which the laws of nature are, out of all the versions permitted by string theory, set up this way. Doesn’t that seem too much of an extraordinary good fortune? Well no, because without this near cancellation of the vacuum energy, atoms could not exist, and so neither could ordinary matter, stars – or us. In any universe in which these conditions pertain, intelligent beings might be scratching their heads over this piece of apparent providence. In those – far more numerous – where that’s not the case, there is no one to lament it.

The pieces of the puzzle, bringing together the latest ideas in cosmology and fundamental physics, seem suddenly to dovetail rather neatly. Too neatly for some, who say that such arguments are metaphysical sleight of hand – a kind of cheating in which we rescue ourselves from theoretical problems not by solving them but by dressing them up as their own solution. How can we test these assertions, they ask? And isn’t it defeatist to accept that there’s ultimately no fundamental reason why the fundamental constants of nature have the values they do, because in other universes they don’t?

But there’s no denying that, without the multiverse, the “fine-tuning” problem of dark energy alone looks tailor-made for a theologian’s “argument by design”. If you don’t want a God, astrophysicist Bernard Carr has quipped (only half-jokingly), you’d better have a multiverse. It’s not the first time a “plurality of worlds” has sparked theological debate, as philosopher of religion Mary-Jane Rubenstein reminded the Drumlanrig gathering – Giordano Bruno’s interpretation (albeit not simply his assertion) of such a multiplicity was partly what got the Dominican friar burnt at the stake in 1600.

Do these questions drift beyond science into metaphysics? Perhaps – but, Carr asked the meeting, why should we worry about that? At the very least, if true science must be testable, who is to say on what timescale it must happen? (The current realistic possibilities at CERN are certainly more modest, as its Director General Rolf Heuer explained – but even they don’t exclude an exploration of other types of multiverse ideas, such as a search for the mini-black holes predicted by some theories that invoke extra, “invisible” dimensions of space beyond our familiar three.)

Reclaiming the multiverse

How much of all this finds its way into Jencks’ Crawick Multiverse is another matter. In line with his thinking about the hierarchy of “cosmic patterns” through which we trace our place in the cosmos, many of the structures depict our immediate environment. Two corkscrew hillocks represent the Milky Way galaxy and its neighbour Andromeda, while the local “supercluster” of galaxies becomes a gaggle of rock-paved artificial drumlins. The Sun Amphitheatre, which can house 5,000 people (though it’s a brave soul who organizes outdoor performances on a Scottish hillside at any time of year), is designed to depict the crescent shapes of a solar eclipse. The Multiverse itself is a mound up which mudstone slabs trace a spiral path, some of them carved to symbolize the different kinds of universe the theory predicts.


The local universe represented in the Crawick Multiverse.

But why create a Multiverse on a Scottish hillside anyway? Because, Jencks says, “it is our metaphysics, or at least is fast becoming so. And all art aspires to the condition of its present metaphysics. That’s so true today, in the golden age of cosmology, when the boundaries of truth, nature, and culture are being rewritten and people are again wondering in creative ways about the big issues.” “I wanted to confront the basic question which so many cosmologists raise: why is our universe so well-balanced, and in so many ways? What does the apparent fine-tuning mean, how can we express it, make it comprehensible, palpable?”

“Apart from all this”, he adds, “if you have a 55-acre site, and almost half the available money has to go into decontamination alone, then you’d better have a big idea for 2000 free boulders.”


Charles Jencks introduces his multiverse. (Photo: Michael Benson.)

The sculptures and forms of the Crawick Multiverse reflect Jencks’ own unique and sometimes impressionistic take on the theories. For example, he prefers to replace “anthropic” reasoning that uses our own observation of the observable universe as an explanation of apparent contingencies with the notion that this universe (at least) has a tendency to spawn ever more complexity: his Principle of Increasing Complexity (PIC). He is critical of some of science’s “Pentagon metaphors” – wimps and machos (candidates for the mysterious dark matter that exceeds the amount of ordinary visible matter by a factor of around five), selfish genes and so on. “The universe did not start in a big bang”, Jencks says. “It was smaller than a quark, and noise wasn’t its most significant quality.” He prefers the term “Hot Stretch”.

But his intention isn’t really pedagogical – it’s about giving some meaning to this former site of mining-induced desolation. “I hope to achieve, first, something for the economically depressed coal-mining towns in the area”, Jencks says. “Richard [Buccleuch] had an obligation to make good the desolation, and he feels this responsibility strongly. I wanted to create something that related to this local culture. Like Arte Povera it makes use of what is to hand: virtually everything comes from the site, or three miles away. Second, I was keen on getting an annual festival based on local culture – the pipers in the area, the Riding of the Marches, the performing artists, the schools.”

Visitors to the site seem likely to be offered only the briefest of introductions to the underlying cosmic themes. That’s probably as it should be, not only because the theories are so provisional (they’ll surely look quite different in 20 years’ time, when the earthworks have had a chance to bed themselves into the landscape) but because, just like the medieval cosmos encoded in the Gothic cathedrals, this sort of architecture is primarily symbolic. It will speak to us not like a lecture, but through what Martin Kemp has called “structural intuitions”, an innate familiarity with the patterns of the natural world. Some scientists might look askance at any suggestion that the Crawick Multiverse can be seen as a sacred place. But it’s hard to imagine how even the most secular of them, if they really take the inflationary multiverse seriously, could fail to find within it some of the awe that a peasant from the wheatfields of the Beauce must have experienced on entering the nave of Chartres Cathedral – a representation in stone of the medieval concept of an orderly Platonic universe – and stepping into its cosmic labyrinth.

Friday, October 02, 2015

When bioethics goes bad

I have just received a copy of the Australian science magazine Cosmos in the post, as I have an article in it on invisibility. And it reaffirms the impression I had when I reacquainted myself with the magazine during a visit to Melbourne earlier this year: it is a thoroughly splendid publication which deserves to have a wider global reach. The production values are high, the writing is smart, and it’s altogether an accessible but non-sensationalist digest of what’s up in science. In the latest issue you get Dan Falk on 100 years of general relativity, Robin McKie on dark matter… what more could you ask?

So now I’m going to gripe. Not about the magazine per se, but I want to take issue with something that is said in the latest issue. One of the columns is written by Laurie Zoloth, a professor of medical ethics and humanities at Northwestern University. I came across Zoloth before when I wrote my book Unnatural. She is one of the opponents of “advanced reproductive technologies”, including genetic therapies applied to embryos, and she represents a perfect example of how vague scaremongering and woolly moralizing can be used to damn promising new technologies in this field. For example, she complains that we have a tendency to treat infertility as a disease which must be cured. This is a fair complaint insofar as it refers to the way assisted reproduction is often marketed by private clinics (especially in the US, where regulation seems disturbingly lax). But as I wrote in Unnatural,
It is characteristic of critics like Zoloth that they duck the unpalatable corollaries of their criticisms. Should we, then, ban IVF and tell those who suffer from infertility that they must simply learn to live with it? Or might we merely want to constrain and monitor how it is done? At the same time Zoloth herself pathologizes infertility to a grotesque degree, saying that ‘the hunger of the infertile is ravenous, desperate’ – with the implication that it is also dangerous and lacking all moral restraint.

That last comment of hers is unforgivable – but, as I point out in my book, entirely in accord with traditional views of infertility as something morally suspect.

Zoloth doesn’t see these technologies as attempts to alleviate serious medical conditions, but rather, as narcissistic quests for perfection: for the ideal “designer baby”. Her criticisms of cloning reveal as much; as I said,
Bioethicist Laurie Zoloth bases her objections to cloning on the idea that it will generate a ‘close-as-can-be’ replica, and that this would indeed be a clone’s raison d’être. In a car-crash of metaphors, she asserts that in child-rearing we must ‘learn to have the stranger, not the copy, live by our side as though out of our side’. (Didn’t Eve come, asexually, out of Adam’s side?) Even in literal terms, it seems odd to imply that the cloned child of a couple would be, to the non-cloned parent, less of a ‘stranger’ than a child sharing half his or her genes. But Zoloth’s real fear seems to be that, for reasons unspecified, the parents of a cloned child will, like Victor Frankenstein, fail to parent it as they would any other child. [As she wrote]:
“The whole point of ‘making babies’ is not the production, it is the careful rearing of persons, the promise to have bonds of love that extend far beyond the initial ask and answer of the marketplace.”


Again, that last comment is both meretricious and true to the mythical roots of such discomfort: a cloned child will, for some reason, not be “normal”, nor will it have a “normal” upbringing. I’m not arguing in favour of human reproductive cloning, which undoubtedly raises important ethical questions quite beyond any considerations of safety. Rather, I just want us to consider those questions with open eyes, and not to cast lazy aspersions based on ancient prejudices.

Well, Zoloth is at it again in her Cosmos column, which considers the prospects of CRISPR/Cas9 editing of human genomes. She says that germline genetic modification “is a code for engineering embryos. It has been rejected by every political, religious and ethical body that has considered it.” So there’s your argument: it is wrong because folks like me have decided it’s wrong. No mention of whether such work might not actually be used to “make babies” anyway, but, as in the case of the recent application in the UK, to solve medical problems related to conception. The use of embryos that are never intended, or legally allowed, to be implanted for full-term gestation, is in Zoloth’s view just a trick for “deflecting criticism”. The scientists in the US and China are, she warns, “continuing to refine the technique” despite the Chinese work discovering that there could be “disastrous consequences for the embryo”. (Never mind the fact that the Chinese work was conducted precisely to find out if that would be the case, i.e. to assess the risks.)

And so to Zoloth’s conclusion: “Our knowledge of unforeseen consequences is too poor; our capacity for greed and narcissism too strong; our society already too unjust to begin to design babies to a spec sheet.” Oh, she has a way with outrageous phrases all right. What does that “designing babies to a spec sheet” actually mean for all serious researchers thinking about the possibilities of using CRISPR/Cas9 (if and only if it looks safe enough) for humans? It means “curing babies of debilitating genetic diseases” (or avoiding the termination of embryos screened and found to contain them). There is no room in Zoloth’s litany of our evils for any recognition that we also have compassion. For these folk, compassion is merely what paves the road to the Brave New World.

Zoloth sits on the US Recombinant DNA Advisory Committee, which, she says, “reviews every proposed clinical trial of gene therapy”. I am very alarmed by that.

Thursday, September 03, 2015

Nature: the biography

Here is a review of Melinda Baldwin’s basically sound and thoughtful “biography” of Nature. It was destined for the Observer, but scheduling complications left it orphaned. So it comes to rest here instead.

______________________________________________________________

Making Nature: The History of a Scientific Journal
Melinda Baldwin

University of Chicago Press, 2015
ISBN 978-0-226-26145-4
299 pages

When James Watson and Francis Crick figured out the structure of the DNA molecule in late 1952 – as they put it with characteristic exaggeration, “discovered the secret of life” – there was no argument about where they would publish their epochal work. Of course it should be sent to Nature.

The science journal is still the most prestigious in the world, a British institution comparable to Punch or the Spectator. The scientists who have published there include Darwin, Einstein, Hawking, Niels Bohr and Enrico Fermi. As Melinda Baldwin puts it in this biography of the journal, “Nature has not only shaped scientific research… [it] has been a site where practitioners defined the very idea of modern science.” A natural history of Nature was long overdue.

If Nature shaped the business of science, the converse is also true. The hope of its founder, astronomer Norman Lockyer, in 1869, was that the journal would speak to “men of science” (a deliberately gendered label) and lay readers alike. But few leading scientists showed much inclination or aptitude to write for the non-specialist (plus ça change), and within a decade most contributions to Nature were beyond the ken of the educated public. As physicist Oliver Lodge professed in 1893, “Perhaps few are able to say that they read Nature all the way through as Mr. Darwin did.” By extension, few were equipped to contribute either: parsons reporting the first robin of spring were no longer welcomed, and as Baldwin says, “Nature was a key site where the qualifications for membership in British science were proposed, debated, and established.”

Nature’s purpose and status depended on who chose to contribute. It was lucky to attract the patronage of New Zealand-born physicist Ernest Rutherford, whose ambitions to establish priority for his discoveries in nuclear physics and radioactivity were well served by its rapid publication times. Sir John Maddox, the (non-contiguous) editor from 1966 to 1995, attested that one of its greatest early assets was the speed of the Royal Mail.

What ultimately distinguished it, however, was character. As Maddox, who revitalized Nature’s flagging reputation as a place for scientific news, gossip and controversy, put it, “A journal really has to have an opinion.” That, more than the quality or significance of published papers, is what has set it apart from its American rival Science, established in conscious imitation in 1880 but, as the official organ of the American Association for the Advancement of Science, less free to ruffle feathers.

Baldwin convincingly demonstrates that the story of Nature is the story of how science has been reported and to whom, of science’s authority, conduct and sense of internationalism. In short, it is about the position of science in society. The journal’s editors, of whom there have been just seven, have been key to this role. Both Maddox and Sir Richard Gregory, editor from 1919 to 1938, had an acute sense of how to situate the journal at the centre of current scientific debates. All the same, Baldwin risks making the same mistake as many of the journal’s would-be contributors in imagining that the editorial position was monolithic and determined by the editor; in reality, the modern Nature has also been shaped by a strong-willed staff, sometimes through stormy internal conflict.

Happily, though, she gives due credit to Maddox’s erstwhile assistant Mary Sheehan, who often seemed the only person capable of holding the maverick editor in check. By the late 1980s his office had become a black hole for submissions, stacked high with loose papers. Somewhere in there a promised special issue on the Chernobyl accident vanished, to my knowledge forever.

That John Maddox was so unpredictable and stubborn did nothing to deter the loyalty and affection he induced in his editors. It was often frustrating and infuriating to work for him, but it was never dull. His journalistic instincts might sometimes have got the better of him, but usually they were sharper than those of his younger staff, who he doubtless often felt were too conservative and timid.

The modern Nature is covered only sketchily here. Its current editor Philip Campbell has been in the post for two decades, yet is denied the analysis awarded to all his predecessors. The expansion of the journal into the Nature Publishing Group, with almost 40 spin-off publications now bearing the "Nature" brand, is as important a development as anything that happened in the journal’s earlier history, but is awarded only a paragraph.

This neglect of the near-present is odd, since there is no shortage of stories, nor of witnesses to them. The battle with the Sunday Times over the AIDS denialism shamefully indulged by its editor Andrew Neil; the de-recognition of the editors’ trade union in the Thatcherite 1990s; the takeover of Macmillan by the Holtzbrinck group – all are overlooked. Perhaps the biggest lacuna is the absence of debate about the dominant role today of “prestige” journals like Nature and Science in scientific tenure and grant decisions. Nobel laureate biologist Randy Schekman recently announced a self-imposed boycott, somehow forgetting that this inflated influence has been awarded by no one but his own colleagues and community. For better or worse, they are still making Nature.

Tuesday, September 01, 2015

Not so spooky

The impressive experiments described in a preprint by Ronald Hanson at Delft and colleagues have been widely reported (for example, here and here) as if to imply that they confirm quantum “spooky action at a distance” (in other words, entanglement). With all due respect to my excellent colleagues (who of course don’t write their own headlines), this is not true.

Einstein’s phrase is of course too nice to resist. But there’s a clue here. Einstein? You know, the guy who wasn’t convinced by the Copenhagen Interpretation of quantum mechanics that reality is just what we can measure, and that nothing deeper lies “beneath”? Einstein, who suspected that there might be “hidden variables” that restore local properties to the quantum world?

Einstein’s “spooky action at a distance” was predicated on that view. It was action at a distance if, via this thing we call (that is, which Schrödinger called) entanglement, an influence at one location (via a measurement) is transmitted instantaneously to another. Only in some kind of local hidden-variables view do you need to invoke that picture.

Quantum nonlocality – which is what is supported by a violation of Bell’s inequality, and what the new experiments now confirm by closing another of the loopholes that could have permitted a violation in other circumstances – is not spooky action at a distance, but the alternative to it. It says that we can’t always characterise the properties of a particle in ways local to that particle: its state is a smeared-out thing (to put it crudely) that may be correlated with the state of another distant particle. And so it appears to be. In this view, there is no action at a distance when we make a measurement on one particle – rather, there are nonlocal quantum correlations with the state of another. It is hard to find words for this. But they are not “spooky action at a distance.”
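Since “violation of Bell’s inequality” is doing a lot of work in that paragraph, here is a minimal numerical sketch of what it means – my own illustration, nothing to do with the Delft experiment itself. It computes the quantum correlations of measurements on a singlet (entangled) pair and checks that their CHSH combination exceeds 2, the bound that any local hidden-variables account must satisfy:

```python
import numpy as np

# Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def spin(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state of two spin-1/2 particles: (|01> - |10>) / sqrt(2)
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

def E(a, b):
    """Correlation of joint measurements at angles a and b; equals -cos(a - b)."""
    op = np.kron(spin(a), spin(b))
    return float(np.real(psi.conj() @ op @ psi))

# CHSH combination with the angles that maximize the quantum violation
a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.83, beyond the local-realist bound of 2
```

The value 2√2 emerges from the nonlocal correlations alone: nothing is “transmitted” between the particles anywhere in this calculation.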

I don’t expect these words to make a blind bit of difference, but here they are anyway.

Friday, August 28, 2015

Songwriting by numbers

Can a crowd write a song? That’s what an online experiment by computer programmer Brendon Ferris in the Dominican Republic is hoping to determine. Users are invited to vote on the notes of a melody, one note at a time, and the most popular choice secures the next note. The melody is selected to fit an anodyne chord sequence, and as far as I can make out the choices of notes are restricted to those in the major scale of C, the key signature of the composition. I’m not sure if the notes are allowed to stray out of the single octave range beginning on middle C (the New Scientist article provides very few details), but so far they haven’t. In other words, the rules are set up to ensure that this will be a pretty crappy song come what may, with all the melodic invention and vocal range of Morrissey (oh ouch, that’s not going to be popular!).
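As far as the mechanics go, the scheme as described is simple to caricature. Here is a hedged sketch – the voter numbers, the single-octave range and the random voting model are all my own assumptions, since the article gives few details:

```python
import random
from collections import Counter

random.seed(0)

# Assumed constraint: one octave of C major starting at middle C
C_MAJOR = ["C4", "D4", "E4", "F4", "G4", "A4", "B4"]

def crowd_vote(n_voters):
    """Each voter nominates a note; the most popular choice wins.
    (Here voters pick at random - real voters presumably don't.)"""
    votes = [random.choice(C_MAJOR) for _ in range(n_voters)]
    return Counter(votes).most_common(1)[0][0]

# Build the melody one note at a time, as in the experiment
melody = [crowd_vote(n_voters=100) for _ in range(8)]
print(melody)
```

Even this toy version makes the structural point below obvious: each note is chosen in isolation, with no representation of the phrase it belongs to.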

Even putting that aside, the experiment bears no relation to how music is composed. No one decides on a melody note by note, or at least not outside of the extremes of, say, total serialism, where everything is determined algorithmically. Neither do we hear a melody that way. We group or “chunk” the notes into phrases, and one of the most salient aspects of a melodic line that we attend to – it’s what infants first discern, irrespective of the exact relationships between successive pitches – is the overall contour. Does it go up or down? Is the pitch jump a big or small one? The melodic phrase is, in general, a single meaningful unit, and its musicality disappears once it is atomized into notes. The very basis of our musicality lies in our propensity to arrange sound stimuli into groups: to bind notes together.

But this doesn’t mean that the experiment is worthless (even if it’s worthless as music). It potentially raises some interesting questions (though as I say below, the answers in this case are highly compromised by the constraints). Will this democratic approach to making melody result in a tune that shares the characteristics of other conventional tonal melodies? In other words, can the crowd as a whole intuit the “rules” that seem empirically to guide melodic composition? It seems that to a certain extent they can. For example:

- the crowdsourced melody (to the extent that can be judged so far) exhibits the same kind of arch contour as many common tunes (think of “Ode to Joy” or “The Grand Old Duke of York”, say), rising at the start and then falling at the end of the phrase.

- the contours tend to be fairly smooth: an ascent, once started, persists for several notes in the same direction, before eventually reversing.

- the statistics of pitch jumps between one note and the next exhibit the same general pattern, within the limited statistics so far, as is seen for music pretty universally: that’s to say, there are more small pitch steps than large ones, with most being just zero, one or two semitones (especially two, since this corresponds to the distance between most successive note pairs in the diatonic scale). Here’s the comparison: the statistics for a sample of Western classical music are shown in grey, the thick black line is for this song:


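The comparison in that figure is easy to reproduce for any melody: take the differences between successive pitches in semitones and histogram their absolute sizes. A quick sketch, using the opening phrase of “Ode to Joy” (encoded from memory as MIDI note numbers) as the melody:

```python
from collections import Counter

# Opening phrase of "Ode to Joy" as MIDI note numbers (60 = middle C)
ode_to_joy = [64, 64, 65, 67, 67, 65, 64, 62, 60, 60, 62, 64, 64, 62, 62]

def interval_histogram(pitches):
    """Histogram of absolute pitch steps (in semitones) between successive notes."""
    steps = [abs(b - a) for a, b in zip(pitches, pitches[1:])]
    return Counter(steps)

hist = interval_histogram(ode_to_joy)
print(sorted(hist.items()))  # [(0, 5), (1, 2), (2, 7)]
```

On this phrase every step is two semitones or smaller, with repeated notes and whole-tone steps dominating – exactly the small-step bias that the crowdsourced tune appears to share.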
But there are some anomalies, like those weird downward jumps of a seventh, which I suspect are a consequence of a silly restriction on the span of the allowed note to exclude the upper note of the tonic octave: you have to go back down to C because you can’t go up. So perhaps all we really learn in this case is totally unsurprising: people have assimilated enough from nursery rhymes not to be picking notes at random or putting rests in weird places, they have intuited some basic principles of harmony (so that we’re not getting B naturals against an F chord), and that if you permit only the blandest of note choices against the blandest of chord sequences, you’ll get a tune that is of no real interest to anyone.

That’s the opposite of what Ferris was hoping for. “My way of thinking was, if the crowd decides what the next note is, then there must be something there that appeals to the most people,” he has said. “The song should sound good to everybody.” But even if the rules weren’t so badly chosen, this totally misunderstands what music is about. What snags our attention is not the obvious, the consensual, the average, but the unusual, the unexpected. But that can’t be arbitrary: there are also rules of a sort that help to make the unexpected work and prevent it from seeming unmotivated. Whether the crowd could, if given the right options, find its way to that sort of inventiveness remains to be seen; I’d be astonished if it could do so note by note.

Something of this same nature was tried before, with more insight, by the avant-garde jazz musician David Soldier, who is the pseudonym of the neuroscientist David Sulzer at Columbia University. Sulzer wrote a song based on surveys of hundreds of people to discover what elements – such as instrumentation, tempo and lyrics – they liked best. He called the result "Most Wanted Song". I haven’t heard it myself, but some people have described it as a sickly abomination, while others have said that it sounds a bit like Celine Dion. Which I suppose is the same thing.

Sulzer’s whole point is that trying to define the perfect song according to some kind of measure of popularity is liable to end badly. I think Ferris is discovering that too.

Thursday, August 20, 2015

The cost of faking it

Here, a little belatedly, is my July column for Nature Materials, which considers the issues around bioprinting of fake rhino horn.

________________________________________________________________

Debates about distinctions between “natural” and “synthetic” materials date back to antiquity, when Plato and Aristotle wondered if human “art” can rival that of nature. Scepticism about alchemists’ claims to make gold in the Middle Ages wasn’t so much about whether their gold was “real” but whether it could compare in quality to natural gold. Such questions persisted into the modern age, for example in painters’ initial suspicions of synthetic ultramarine and in current consumer confusion over the integrity of synthesized natural products such as vitamin C.

It is all too easy for materials technologists to overlook the fact that what to them seems like a question of chemical identity is for users often as much a matter of symbolism. Luxury materials become such because of their cost, not their composition, while attitudes to the synthetic/natural distinction are hostage to changing fashions and values. The market for fake fur expanded in the 1970s as a result of a greater awareness of animal conservation and cruelty, but providing a synthetic alternative was not without complications and controversy. Some animal-rights groups argue that even fakes perpetuate an aesthetic that feeds the real-fur market, while recently there has been a rise in real fur being passed off as faux – a striking inversion of values – to capture the market of “ethical” fur fans. The moral – familiar to marketeers and economists if less so to materials scientists – is that market forces are dictated by much more than chemical composition.

These considerations resonate strongly in the current debate over plans by Seattle-based bioengineering company Pembient to use 3D printing for making fake rhinoceros horn from keratin. The company hopes to reduce rhino poaching by providing a synthetic alternative that, by some accounts, is virtually indistinguishable in composition, appearance and smell from the real thing. It claims that 45% of rhino horn traders have said they would buy the substitute. How to interpret that figure, even taken at face value, is unclear: will it help save the rhino, or does it show that over half of the buyers value something more than material identity? In the black-market Chinese and Vietnamese medicines that use the horn, it is supposed to imbue the drugs with an essence of the wild animal’s vitality: it is not just an ingredient in the same sense as egg is a part of cake mix, but imparts potency and status.

The same is true of the tiger bone traded illegally for medicines and wine. Even providing the real thing in a way that purports to curb the threat to wildlife, as for example when tigers are farmed in China to supposedly relieve the pressure on wild populations, can backfire in the marketplace: some experts say that tiger farming has revitalized what was a waning demand.

Critics of Pembient’s plans – the company intends to print tiger bone too – make similar complaints, saying that the objective should be to change the culture that creates a demand for these products rather than pandering to it. There’s surely a risk here of unintended outcomes in manipulating markets, but also a need to remember that materials, when they enter culture, become more than what they’re made of.

Thursday, July 30, 2015

Liquid-state particle physics

Here’s my latest column for Nature Materials.

_______________________________________________________________________

The ability of condensed-matter physics to offer models for fundamental and particle physics has a distinguished history. Arguably it commenced with the liquid-droplet model of the atomic nucleus formulated in 1936 by Niels Bohr, which provided a simple approximation for thinking about nuclear stability and fission in terms of familiar concepts such as surface tension and heat of vaporization. Since then, real materials systems have offered all manner of laboratory analogues for exploring fundamental physical phenomena that lie outside the range of direct experimentation: for example, the use of liquid crystals to mimic the topological defects of cosmic strings and monopoles [1], the representation of graphene’s electronic structure in terms of massless relativistic Dirac fermions [2], or the way topological insulators made from oxide materials might manifest the same properties as Majorana fermions, putative spin-½ particles that are their own antiparticles [3].

These cases and others supply an elegant demonstration that physics is unified not so much by reduction to a small set of underlying equations describing its most fundamental entities, but by universal principles operating at many scales, of which symmetry breaking, phase transitions and collective phenomena are the most obvious. It’s perhaps curious, then, that particle physics has traditionally focused on individual rather than collective states – as Ollitrault has recently put it, “on rare events and the discovery of new elementary particles, rather than the ‘bulk’ of particles” [4]. One indication that bulk properties are as important for high-energy physics as for materials science, he suggests, is the new discovery by the CMS Collaboration at CERN in Geneva that the plasma of quarks and gluons created by a proton collision with a lead nucleus has emergent features characteristic of a liquid [5].

It was initially expected that the quark-gluon plasma (QGP) – a soup of the fundamental constituents of nucleons – produced in collisions of heavy nuclei would resemble a gas. In this case, as in an ideal gas, the “bulk” properties of the plasma can be derived rather straightforwardly from those of its individual components. But instead the QGP turns out to be more like a liquid, in which many-body effects can’t be neglected.

Shades of Bohr, indeed. But how many many-body terms are relevant? Earlier studies of the tiny blob of QGP formed in lead-proton collisions, containing just 1,000 or so fundamental particles, showed significant two-particle correlations [6]. But in an ordinary liquid, hydrodynamic flow produces coherent structures in which the motions of many molecules are correlated. The new CMS results show that the QGP also has small but measurable six- and eight-body correlations – suggestive of collective flow effects – that are evident in the variations in particle numbers with the azimuthal angle relative to the line of collision. The azimuthal variations indicate that this flow is anisotropic, and the CMS team proposes that the anisotropy comes from a hydrodynamic amplification of random quantum fluctuations of the colliding particles.
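For readers curious what such a flow measurement looks like in practice, here is a toy sketch – entirely my own illustration, not the CMS analysis. It samples particles from an azimuthal distribution with an elliptic anisotropy v2 put in by hand, then recovers v2 from the two-particle correlation, using the standard relation ⟨cos 2(φi − φj)⟩ = v2²:

```python
import numpy as np

rng = np.random.default_rng(1)
v2_true = 0.1              # elliptic-flow coefficient put in by hand
n_events, mult = 200, 500  # toy event count and per-event multiplicity

def sample_phis(n):
    """Rejection-sample azimuthal angles from dN/dphi ~ 1 + 2*v2*cos(2*phi)."""
    phis = []
    while len(phis) < n:
        phi = rng.uniform(0, 2 * np.pi, n)
        keep = rng.uniform(0, 1 + 2 * v2_true, n) < 1 + 2 * v2_true * np.cos(2 * phi)
        phis.extend(phi[keep])
    return np.array(phis[:n])

# Two-particle correlation via the Q-vector:
# <cos 2(phi_i - phi_j)> over pairs = (|Q2|^2 - M) / (M*(M-1))
corr = []
for _ in range(n_events):
    phi = sample_phis(mult)
    q2 = np.sum(np.exp(2j * phi))
    corr.append((abs(q2) ** 2 - mult) / (mult * (mult - 1)))

v2_est = float(np.sqrt(np.mean(corr)))
print(v2_est)  # close to 0.1
```

The real analysis is far more involved (six- and eight-particle cumulants, detector corrections), but the principle is the same: collective anisotropy shows up as a correlation among particles that a purely independent-emission “gas” picture cannot produce.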

So exactly what kind of liquid is this? Since the strong force between quarks and gluons doesn’t diminish with distance, the QGP seems likely to be quite unlike any we know so far. But might it be within the wit of colloid scientists to tune inter-particle forces so as to create a simple laboratory analogue?

References
1. Davis, A.-C. & Brandenberger, R. Formation and Interactions of Topological Defects (Springer, New York, 2012).
2. Novoselov, K. S. et al., Nature 438, 197-200 (2005).
3. Fu, L. & Kane, C. L., Phys. Rev. Lett. 100, 096407 (2008).
4. Ollitrault, J.-Y., http://physics.aps.org/articles/v8/61 (2015) [here].
5. Khachatryan, V. et al. (CMS Collaboration), Phys. Rev. Lett. 115, 012301 (2015) [here].
6. CMS Collaboration, Phys. Lett. B 718, 795-814 (2013).

Added note: Jean-Yves Ollitrault reminds me that perhaps the best example of particle physics borrowing from condensed-matter physics is the Higgs mechanism, which was inspired by the model of conventional superconductivity.

Friday, July 24, 2015

Silence of the gerontologists

I was perhaps a bit cryptic in tweeting about my New Statesman piece on “the immortality business” (which I’m afraid I can’t put up here, but it should be online soon – and NS is always worth its modest cover price anyway). This is what I meant.

When I pester researchers for comments on a topic I’m writing about, I recognize of course that none is under the slightest obligation to respond. That they almost always do (even if it’s to apologize for being unable to help) is a testament to the extraordinary generosity of the research community, and is one of the abiding joys and privileges of writing about science – my impression is that some other disciplines don’t fully share this willingness to explain and discuss their work. Occasionally I do simply get no response at all from a researcher, although it is rare for a gentle follow-up enquiry not to elicit at least an explanation that the person concerned is too busy or otherwise indisposed to comment.

That’s why my experience in writing this piece was so clearly anomalous. I contacted a large number of gerontologists and others working on ageing, explaining what I was trying to do with this piece. With the very few honourable exceptions named in my article, none responded at all. (One other did at least have the grace to pretend that this was “not really my field”, despite that being self-evidently untrue.) I am almost certain that this is because these folks have decided that any “journalist” contacting them while mentioning names like Aubrey de Grey wants to write another uncritical piece about how he and others like him are going to conquer ageing.

I can understand this fear, especially in the light of what I said in the article: some researchers feel that even allowing the immortalists the oxygen of publicity is counter-productive. But truly, chaps, burying your head in the sand is the worst way to deal with this. A blanket distrust of the press, while to some degree understandable, just takes us back to the bad old days of adversarial science communication, the kind of “us versus them” mentality that, several years ago, I saw John Sulston so dismayingly portray at a gathering of scientists and science writers. What researchers need to do instead is to be selective and discerning: to decide that all writers are going to recycle the same old rubbish is not only silly but damaging to the public communication of science. I would even venture to say that, in figuring out how to deal with the distortions and misrepresentations that science sometimes undoubtedly suffers from, scientists need help. While it is understandable that, say, IVF pioneer Robert Edwards should have bemoaned the way “Frankenstein or Faust or Jekyll… [loom] over every biological debate”, I see little indication that biologists and medics really know how to grapple with that fact rather than just complain about it. You really need to talk to us, guys – we will (some of us) do our very best to help.

Wednesday, July 22, 2015

Understanding the understanding of science

That the computer scientist Charles Simonyi has endowed a professorial chair at Oxford for the Public Understanding of Science seems a rather splendid thing, acknowledging as it does the cultural importance of science communication (which was for a long time disdained by some academics, as Carl Sagan knew only too well). Richard Dawkins was the natural choice for the first occupant of the position, and indeed it seems to have been created partly with him in mind.

When his incumbency ended and applications were invited for his successor, a few well-meaning folks told me “you should have a go!” I quickly assured them that I am simply not in that league. Little did I know, however, that had I been overcome with mad delusions of grandeur, I’d not only have stood less than a cat’s chance in hell but would have been specifically excluded from consideration in the first place. The full text of Simonyi’s manifesto in creating the position is reproduced in the second volume of Dawkins’ autobiography, Brief Candle in the Dark. It doesn’t simply say, as it might quite reasonably have done, that the post is for academics and not professional science communicators. No, it goes out of its way to insult the latter. Get this, fellow science hacks:

The university chair is intended for accomplished scholars who have made original contributions to their field, and who are able to grasp the subject, when necessary, at the highest levels of abstraction. A populariser, on the other hand, focuses mainly on the size of the audience and frequently gets separated from the world of scholarship. Popularisers often write on immediate concerns or even fads. In some cases they seduced less educated audiences by offering a patronizingly oversimplified or exaggerated view of the state of the art or the scientific process itself. This is best seen in hindsight, as we remember the ‘giant brains’ computer books of yesteryear but I suspect many current science books will in time be recognized as having fallen into this category. While the role of populariser may [may, note] still be valuable, nevertheless it is not one supported by this chair.

OK, I won’t even get started in on this. Richard doesn’t reproduce this without comment, however. He says he wants to “call attention especially” to “the distinction between popularizers of science and scientists (with original scientific contributions to their credit) who also popularize.” It’s not clear why he does this, especially as the distinction is spurious for many reasons.

I might add that Simonyi also stipulates that “preference should be given to specialities which express or achieve their results mainly by symbolic manipulation, such as Particle physics, Molecular biology, Cosmology, Genetics, Computer Science, Linguistics, Brain research, and of course, Mathematics.” So stuff you, chemists and earth scientists. Actually, stuff you too, cell biologists, immunologists and many others.

It doesn’t much matter to the world that I find this characterization offensive. I think it does matter that it displays such ignorance of what science communication is about. I would be much more troubled, however, if the chair were not currently occupied by such a profoundly apt, capable and broad-minded individual as Marcus du Sautoy. If it continues to attract incumbents of such quality, I guess we needn’t trouble ourselves too much about the attitudes of its founder and patron.

Friday, July 17, 2015

Dawkins and the Spotted Dick mystery

I have agreed, with some trepidation, to review volume 2 of Richard Dawkins’ autobiography, this one called Brief Candle in the Dark. I guess I figured it might be refreshing to return to the pre-God-bashing, pre-Twitter Dawkins, when he was rightly known primarily as our pre-eminent science communicator (who called out the idiocies of creationism). And on the whole it is: rather than appearing to be the polarizing caricature that Dawkins is often presented as today, he comes across so far in the book as simply a chap with appealing features as well as foibles – not least among the former being his touching generosity to students. Sure, there are Pooterish touches (note to editors: if I ever write anything autobiographical that includes the line “I think my speech went down quite well”, then I’m counting on you guys), but also a sense of the humane individual (not to mention the splendid writer) whom these days it can be hard to discern behind all the controversy that surrounds him. I should add that I’m still only on page 50.

But there are also occasional glimpses of the Twitter-era Dawkins, springing out Hyde-like from the good Jekyllish doctor. I was particularly struck by a passage in which, apropos of nothing in particular, Dawkins tells us about a “care home for old people in England” at which a “local government inspector” banned the traditional pudding Spotted Dick from the menu on the grounds that its name was “sexist”. This looked to me for all the world like one of those apocryphal “PC gone mad” stories that the Daily Mail loves to run (and then occasionally retract a few weeks later in small print). Could it really be true?

The only item that comes up after a quick Google is one reported – well, what did you expect? – by the Daily Mail. There, the change in naming was not occasioned by a prudish, PC government inspector. The story says that staff in a council canteen were totally fed up with a few customers (one in particular) who kept on making lewd and childish remarks whenever Spotted Dick was on the menu, and so they decided to take matters into their own hands – with the extremely ill-advised idea of calling it instead Spotted Richard. A council official then rather shamefacedly decided to intervene and reverse this policy because it looked so silly (and because it was being reported as an example of political correctness). There was no mention of anyone finding the name sexist, nor of officialdom actually trying to be politically correct.

Some Twitter comments challenged Dawkins about this, and his response was that this was not the same story at all. Rather, the Spotted Dickgate that he heard was from “a personal acquaintance, personally vouched for,” and not the infamous Flintshire Spotted Dickgate. And that, it seems, is all we are going to get from him (though you might think he’d be curious about the parallels).

So you must make up your own minds, people. Was Dawkins’ acquaintance recounting what shows every sign of becoming an urban myth, or was this really a case of Spotted Dick strikes again? Can anyone, in any event, figure out how Spotted Dick could be construed as “sexist” – or even, to paraphrase Spinal Tap, as “sexy”? The anecdote doesn’t really make sense.

Alleged political correctness has of course become one of Dawkins’ bête noirs (bêtes noir?) – after all, it did for his good friend James Watson after Watson betrayed his racist views once too often, and it also came close to doing for his friend Tim Hunt (a much nicer man than Watson) after Tim said something stupidly sexist. Could it possibly be that it suited Dawkins to believe what he was told without feeling the need to inquire further?

If that’s so, it’s simply another example of the kind of confirmation bias that often leads scientists astray, as I discussed here. What is ironic is that this passage comes so soon after Dawkins has given us a rather nice account of the critical thinking that interview questions at Oxford aim to probe. But it’s one thing to be led to false conclusions in research by seeking out the answer you are already predisposed to find; it’s quite another to recycle an anecdote in a way that makes you sound like a ranter in the comments section of the Daily Mail website.

So pending a full disclosure of data and references, preferably in a major peer-reviewed journal, I propose we should avoid propagating the “Spotted Dick” meme, even if the inventor of memes himself repeats it. This has been a public service announcement.

Monday, July 13, 2015

Beckett's epic fail (again)

One of my esteemed colleagues recently finished a nice piece on careers in science by quoting Samuel Beckett: “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.” The sentiment is entirely laudable: you’ll get things wrong, but don’t be deterred – every time you attempt something and fail, you get a little better. Or something like that.

Yet whenever I see Beckett put to use this way, I can’t help thinking “Hmphrgh”. This is Beckett you’re quoting. Yes, Samuel Beckett. Does anyone believe that he was ever going to write a soundbite of fist-punching, keep-on-goin’ self-motivation?

The line comes, of course, from Beckett’s late work Worstward Ho. I say of course because that’s commonly acknowledged, but I wonder how many have seen or read Worstward Ho. It is, shall we say, opaque even by the standards of a master of opacity. Dense, you might say. Difficult. Now, I love Beckett and find him an intensely funny writer, but funny because of a wry bleakness that makes Will Self seem like a bouncing-bunny optimist. It’s a braver soul than me who will pronounce with certainty on what Beckett was driving at with “Fail better”, but I will bet a pint of Guinness that he did not intend this to be a boiled-down version of that pious little primary-school mantra “If at first you don’t succeed, try, try again.”

It’s wise not to get too po-faced and spluttery about this misappropriation, not least because Beckett would doubtless have appreciated the joke. We get the memes we seem to need, like the martyrdom of Giordano Bruno or the misuse of “deconstruct”, and I’d be a sad fool indeed to think that a blog comment is going to make the slightest difference in squelching them.

But it’s sad that the irony here is so seldom recognized. Indeed, what seems particularly sad is that the opportunity to take a more nuanced view of failure is bypassed by this bit of repurposed wisdom.

Mark O’Connell has a great piece on Slate, called “How Samuel Beckett became Silicon Valley’s life coach.” He says “What has happened here, I suppose, is that a small shard of a fragmentary and difficult work of literature has been salvaged from the darkness of its setting, sanded and smoothed of the jagged remnants of that context”. The result, O’Connell says, is that Beckett is pressed “into service as a kind of highbrow motivational thought-leader.” But in truth “his attitude toward success and failure was more complex and perverse than this interpretation suggests.” That’s surely true.

What, then, was that attitude? Maggi Dawn has a nice interpretation on her blog: “there is a sense in which claiming always to fail is comedy not tragedy. It releases us from the lie of success, frees us from the obligation to adopt its thin veneer, and allows us to do whatever it is we do for its own sake.”

My own suspicion is that Beckett was hinting at the glorious tragedy of our own self-delusion, in which we tell ourselves that we will eventually transform failure into success, and that the world really cares whether we do or not. We are not Steve Jobs but Harold Steptoe (and if you’re too young to get that allusion, you can thank me later for broadening your horizons), doomed forever to be making pathetic plans for betterment in a kind of frenzied desperation, forever glimpsing our cherished goal only to have it snatched from our grasp by the realities of our sad and miserable existence. And perhaps to realise that our only real hope of solace lies in accepting that Albert will always thwart our efforts, so that we might as well celebrate failure and get drunk with the surly old sod.

But imagine trying to sell that in Silicon Valley.

Wednesday, July 08, 2015

Does anyone have any questions?

That I can be fairly relied upon to put my foot in it was confirmed after a talk I gave at the Royal Society last week. The Q&A seemed to be going well enough, but then the RS staff said “Well, we’ll have to bring it to an end there.”

“Oh, there’s just one more”, I quickly interjected, pointing out the chap at the end of the row with his hand up. What I didn’t know was that this fellow is a regular at RS events, where he apparently makes a habit of getting bolshy. The attempt to end the proceedings before handing him the mike was not an oversight but a tactful intervention – which I’d now undermined.

As the question began, I thought I could see a way to create a valid question out of what seemed like his skepticism about the way science is used (“delinquent science” was the term). But as he went on (and on), it became clear that this wasn’t a question at all but a rant about how science wastes taxpayers’ money making things that no one wants or needs, regardless of the consequences, and how the person who switched on the LHC didn’t give a damn whether it would make a black hole that would swallow us all, and – OK, you get the point. One of the organizers had to step in to halt the bitter diatribe.

I try to make a point of turning any question into a reason to say something that I hope will be of interest, even if the connection with the question itself is slender. That’s to say, I will try to answer questions as directly as I can, but when they aren’t really questions at all, or when they are questions about fairies or telepathy, I’ll try to move the discussion in what I hope is a useful direction. I have no problem with disagreeing with a questioner (and if, say, I was confronting a climate sceptic then I’d feel obliged to do so). But I would feel uncomfortable making my answer a put-down. Speakers are in a position of relative power in these situations, and so it seems only fair to try to engage with the issues raised rather than to dismiss, far less ridicule, them. The number of times I have been approached after a talk by someone saying “I have a stupid question, so I didn’t want to ask it in public…” makes me realize how many people, probably because of experiences at school, are extremely nervous about putting up their hand, thinking everyone will laugh at them. (I’m not sure that the questions which follow such a disclaimer have ever been stupid in any event – in my experience, people whose questions are genuinely of dubious value, for example when they serve only to showcase the erudition of the questioner, are rarely averse to asking them.)

So what did I do on this occasion? I waffled something about how modest the aims of most science are, and about the common contrast between the way a piece of work is presented to the public and what its real goals are. I don’t know, it was something to say, but it wasn’t terribly insightful. But I came away troubled. Not because I’d been attacked by the questioner, but because I felt I hadn’t dealt with it in the best way. So I asked my wife later – she being far more generous, perceptive and sensitive than I am – if she thought that on this occasion I should have answered more firmly – not by getting into the vague and paranoid issues that the questioner was, after a fashion, raising, but by saying explicitly that they were not relevant here. I realized that what he had said was in fact rather rude – not to me (or rather, that aspect doesn’t greatly bother me), but to the audience, who hadn’t come to hear some aimless angry diatribe against science in general. Was it really right to be so tolerant and irenic in this situation? No, she said, it wasn’t. I had every right to deal with such a “question” with firm curtness – to say, perhaps, that I had trouble discerning any kind of question at all in his comments, and that I wasn’t going to launch into a general defence of what science is all about and why it is done. That’s all it would have taken.

I think she is right. Speakers have a responsibility to treat an audience with respect, but the reverse applies too – at least, in a situation like this. I see no reason why questions should not be challenging, even angry, when controversial subjects are being aired (mine certainly wasn't one such), but even then they need to be brief and to the point.

I wonder how others deal with situations like this? The likelihood of getting flaky or strange or irrelevant questions after a public scientific talk (“Do you think drugs allow us to see other dimensions?”) is of course fairly high. One can perhaps try, as I heard Adam Rutherford do recently, to anticipate that by asking at the outset “Please try not to be mad”. (It didn’t work though, did it Adam? – the question above is one of those that followed.) But mad questions aren’t so much the issue (though of course one has to try to be sensitive to genuine mental-health issues here, and I’m not being facetious). Rather, what’s the best way of dealing with folks whose determination to mount a hobby horse, or push a particular point of view, or show off, leads them into confrontational or boring rudeness? Should one treat them the way stand-ups treat hecklers, with an acerbic put-down? Or by politely declining to answer the question? (“You know, I don’t think I can say anything very intelligent about that.”) Or with a brusque and magisterial Dawkinsesque dismissal? With attempted humour? (“What have you been drinking?”) When do you hold back, and when do you let rip?

[Postscript: Incidentally, I was awed by how, at a talk last week at the Royal Institution, Frank Wilczek was able instantly to cut to the physics core of left-field questions. Like this:
Questioner: [apropos supersymmetry] Does this have anything to do with wave-particle duality?
Me: [thinks] Um yeah, does it? Or are you just mouthing a buzzword you’ve heard?
Wilczek: Wave-particle duality is what makes this possible, because [I paraphrase] it's the bosonic picture of quantum fields that we’re hoping to unify with the fermionic nature of matter.
Me: [thinks] Well yeah, I knew that.]

Wednesday, July 01, 2015

Perkin's purple: a journey around London

I have just presented one of BBC Radio 4’s Science Stories, a new series looking at episodes in the history of science. This one tells the tale of William Perkin’s purple coal-tar dye and how it changed the course of chemistry. That, of course, is the kind of grand and often contentious claim these programmes inevitably end up making, but I do feel that there is a case to be made for it here.

The initial plan was for me to take a journey across London, visiting the key locations en route: from Shadwell in the East End to the Royal College of Chemistry in the West End and then the site of the Perkins’ factory in Greenford Green on the outskirts of west London. In the end it didn’t quite happen that way, but I got a few pictures of some of the relevant locations as we recorded, and so wanted to include these here with the original draft of the script – it changed considerably, and I’m sure very much for the better, but this at least tells and illustrates the story. For more details, see Simon Garfield's excellent book Mauve, Tony Travis's authoritative The Rainbow Makers, and my own Bright Earth.

_________________________________________________________________________

“A reservoir of dirt, drunkenness and drabs” – that’s what Dickens called Shadwell, and I’m not sure that he wasn’t being affectionate. There’s not a lot of Dickens’ Shadwell left: whatever the bombs didn’t destroy during the war disappeared soon after in the slum clearances. But I can’t say that what took its place has added much to its appeal: all these ugly flats and traffic bollards.

But here’s the place I want. King David Lane. Just down here in the mid-nineteenth century there was a big old house at 1 King David Fort, but now it’s just a council block.


Visiting the site of William Perkin’s family home in Shadwell – on a very blustery spring day!

This was the home of the Perkin family, who were wealthy by the standards of Shadwell. George Perkin was a successful carpenter who could afford to indulge his son William’s passion for chemistry. William had a little home laboratory on the top floor of the house – just a simple place, with a table and bottles of chemicals, no running water, no gas. But when he was 18 years old and still a student, he discovered something here that for once justifies that awful cliché: it changed the world.

There’s a blue plaque here to back me up. “Sir William Henry Perkin, FRS, discovered the first aniline dyestuff, March 1856, while working in his home laboratory on this site, and went on to found science-based industry.”


The blue plaque marking the spot where Perkin discovered mauveine.

Listen to that again: “went on to found science-based industry”. In other words, what Perkin discovered led to the whole idea that industry might be based on science.

That’s an astonishing claim. What could this young lad have found that was so important?

Let’s start with a gin and tonic.

For the British army in India in the nineteenth century, this drink really was medicinal. The troops were issued with their bitter tonic water at daybreak, but the officers started taking this medicine on the verandah as the sun set, not just with a spoonful of sugar but with a splash of lime and a generous shot of gin.

You see, the bitter taste was due to quinine, the only effective anti-malarial drug then known. This stuff was extracted at great labour and expense from the bark of a Peruvian tree called the cinchona. The bark had been known since the seventeenth century to help treat and prevent malaria. No one really knew what was in it until two French chemists separated and purified quinine in 1820. With quinine to protect them, the Europeans were able to begin the colonization of Africa, the consequences of which are still reverberating today.

You really didn’t want to get malaria. Chills, convulsions, fever, vomiting, delirium, and quite possibly at the end of it all – death. But quinine cost a fortune. Peru was then just about the only place where the tree was found and the bark contained only tiny amounts of it. And the Peruvians kept a monopoly by outlawing the export of cinchona seeds or saplings. In the nineteenth century, the East India Company was spending about £100,000 every year to keep the officers and officials in the colonies healthy.

But what if, instead of extracting this stuff drip by drip from tree bark, you could make it from scratch?

What does that mean? Well, over the previous centuries, chemists had found how to take simple chemical ingredients and get them to combine to make entirely different chemicals: useful substances like soap, soda, bleach. Might they be able to make a complicated natural drug like quinine?

One man in particular had this dream of using chemistry to reproduce and even rival nature. He was a German chemist called August Wilhelm Hofmann, and many people, including Prince Albert, hoped that he’d be the saviour of British chemistry. In 1845 Hofmann was appointed director of the Royal College of Chemistry in London, which had been set up at Albert’s request.


August Wilhelm Hofmann

So what do we know about Hofmann? Well, according to the sign that now marks the spot in Oxford Street where the Royal College of Chemistry used to stand [it’s next to Moss Bros, opposite John Lewis’s], he “inspired the young to do great things in chemistry, and relate them to both academic and everyday life.”


The plaque erected by the Royal Society of Chemistry to mark the former site of the Royal College of Chemistry in Oxford Street, London.

There were two aspects of everyday life that Perkin, walking down these streets in the mid-nineteenth century, couldn’t fail to have noticed. In the lanes and docks of Shadwell, Dickens said, everyone seemed to be wearing rough blue sailors’ jackets, oilskin hats and big canvas trousers. But up here in the fashionable West End, it wasn’t so different to the style emporiums of today: ladies wore the latest colours – yellow silks from France and fabrics printed in patterns of rich madder red and indigo. Those last two colours were plant extracts, and they faded after lots of washing and being out in the sun. But the yellow silk, which had graced the Great Exhibition in 1851, was coloured with a new dye that was made artificially – by chemistry.

And the stuff it was made of was a by-product of the other thing that distinguished the splendour of Oxford Street from the gloomy alleys of Shadwell: the street lights. They had brightened up the evenings since the start of the century, burning gas that was extracted from coal.

Left over from that process was a thick, smelly tar called, naturally enough, coal tar. At first it seemed to be just noxious waste, and was often just dumped into streams. But then folks figured out that coal tar might be useful. Charles Macintosh used it to make waterproof raincoats. And if you distilled it, then you could extract a whole range of chemicals that, like coal itself, were primarily composed of carbon. They often had an acrid smell – aromatics, the chemists called them. One was carbolic acid, also known as phenol. You remember that stinky old coal-tar soap? That’s phenol you were smelling, and it was in there to act as a disinfectant, one of its main uses since the 1850s.

But phenol was also the starting ingredient for the yellow silk dye that rich ladies bought from Lyon. Yes, this coal tar had some valuable stuff within it.

No one knew that better than August Hofmann, who had become pretty much the world expert on coal-tar compounds. So when William Perkin enrolled at the Royal College of Chemistry in 1853, pretty soon he found himself working on coal tar.

And when Hofmann set Perkin the challenging task of trying to make synthetic quinine in 1856, the coal-tar compounds seemed like good materials to start from.

We need to do some chemistry now. But don’t worry. I’ve got a Scrabble set to help me. You see, molecules are like poems: you have to get the words in the right order. Each word is a cluster of letters, and we can think of each letter as an atom. Making molecules is like stringing together these letters in an order that has some meaning. Now, some molecules, like polythene or DNA, really are a lot like strings of atoms. But others have other shapes. Benzene, for example, which is at the heart of all the coal-tar aromatic compounds, is a ring of six carbon atoms, each with a hydrogen atom attached. I take all six C’s for carbon – and yes, this isn’t exactly a regular Scrabble set – and put them in a ring.

But the problem was that in Perkin’s day no one knew that molecules have shapes like this, with atoms in particular arrangements. All they knew was the relative amounts of each kind of element, like carbon and hydrogen, that a substance contained. Benzene was equal parts of carbon and hydrogen, rather like a G&T is one part gin to three parts tonic water.

So then, what Hofmann and Perkin knew about the element cocktail that is quinine was that it is twenty parts carbon to twenty-four of hydrogen, two of nitrogen and two of oxygen.

What gives quinine its meaning – what lets it cure malaria – is its particular arrangement of these atoms. But Perkin knew nothing about that. His strategy – so crude that in retrospect it was obviously hopeless – was, roughly speaking, to take a compound that had half of these amounts – ten parts carbon and so on – and try and stick them together, as if mixing up these two piles of letters is going to miraculously give them the same meaning as quinine.
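Perkin’s arithmetic can be sketched in a few lines of Python – a toy illustration only, and note that the identity of his “half-sized” starting compound is my assumption here (historically it was allyltoluidine, C10H13N; the text above says only “ten parts carbon and so on”):

```python
from collections import Counter

def formula(**atoms):
    """Element tally for a molecule, e.g. formula(C=6, H=6) for benzene."""
    return Counter(atoms)

# Quinine's recipe, as Perkin knew it: C20 H24 N2 O2
quinine = formula(C=20, H=24, N=2, O=2)

# The "half-sized" coal-tar compound he started from
# (assumed here to be allyltoluidine, C10 H13 N)
half = formula(C=10, H=13, N=1)

# The hoped-for arithmetic: two halves plus oxygen
# give quinine plus water
left = half + half + formula(O=3)
right = quinine + formula(H=2, O=1)  # quinine + H2O

print(left == right)  # → True: the element counts balance
```

The tallies balance on paper, which is exactly why the scheme looked plausible – but they say nothing about the *arrangement* of the atoms, which is why the experiment yielded sludge rather than quinine.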

It’s not surprising he didn’t succeed. When he did the experiment at home one night, instead of colourless quinine he got a red sludge.

He could have been forgiven for just flushing it down the drain. But he was too good a student for that, which is why Hofmann had made Perkin his personal assistant.

Instead, he thinks, well, what seems to be going on here? Let’s try the same reaction with another two identical piles of letters, rather like the ones before but a bit simpler. And so he goes through the same procedure with a different coal-tar extract, one of Hofmann’s own favourites: a compound called aniline.

Well, this time the result is even worse. Now the gunk is black. Even so, Perkin keeps going. He dries the stuff and swills it around in methylated spirits.

And now at last, something nice. It dissolves to turn the liquid a beautiful purple.

Here Perkin thinks of those fine ladies of Oxford Street in their bright silks. He knows that the textile industry is hungry for new dyes. And so he takes a piece of white silk and dips it into the liquid, and when he pulls it out the colour has stuck fast to the fabric.

So what now? Perkin manages to get hold of the name of a dye works in Scotland and he sends them a piece of his purple-dyed silk. When the reply comes a few months later, it must make his heart beat faster:
“If your discovery does not make the goods too expensive it is decidedly one of the most valuable that has come out for a very long time. This colour is one which has been very much wanted in all classes of goods and could not be had fast on silk and only at great expense on cotton yarns… the best lilac we have… is done by only one house in the United Kingdom… and they get any price they wish for it, but… it does not stand the tests that yours does and fades by exposure to air.”

So there it was: Perkin had a potential new dye on his hands.

But remember what the man had said: “If your discovery does not make the goods too expensive”. Well, aniline was expensive. If this dye was going to succeed, Perkin had to find a way of making it cheaply – which meant, on an industrial scale.

He realized that he wasn’t going to be able to do that while he was still a chemistry student. So he told Hofmann that he was quitting. But Hofmann had made the young man his protégé, and as Perkin recalled many years later, “he appeared much annoyed”. What was his best student thinking of, abandoning a promising career in pure research to go into industry? As Perkin recalled,
“Hofmann perhaps anticipated that the undertaking would be a failure, and was very sorry to think that I should be so foolish as to leave my scientific work for such an object, especially as I was then but a lad of eighteen years of age.”

The funny thing is that purple was already fashionable even before Perkin discovered his aniline dye. From the 1830s a purple dye called murexide became popular, though probably its fans had little idea that it was made from Peruvian bird droppings. Another purple dye was made from an extract of lichen. In the year that Perkin made his discovery, the Pre-Raphaelite Arthur Hughes painted his picture April Love, showing a young woman in the kind of long flowing purple dress then in style. The French, who even at that time called the shots in fashion, had a word for these rather pale purples. It was what they called the purple-flowered mallow: mauve.


April Love (1856), by Arthur Hughes.

But he did leave, and when he couldn’t find a backer for the factory he proposed to build, his father George put up his life savings, even though he’d never wanted William to become a chemist in the first place. William’s older brother Thomas chipped in to help too.

Now they had to give aniline purple a catchy trade name. Perkin thought of the famous royal purple of Rome, originally made in the Phoenician city of Tyre from a substance extracted a drop at a time from shellfish. Why not call it Tyrian purple?

But it didn’t catch on. Soon enough the aniline dye he’d intended to call Tyrian purple had become synonymous instead with the colour mauve.

There was nowhere suitable in the East End for the coal-tar dyeworks of Perkin & Sons, and in the end they found a meadow right over in Greenford Green, near Harrow, northwest of London, conveniently close to the Grand Junction Canal. In less than six months, a factory there was turning coal tar into purple for the dyers of Great Britain.

Well, I can’t say that the industrial estate in Greenford Green is much of an improvement on the faceless modern development in Shadwell. But I guess it wasn’t any better in Perkin’s day. His dyeworks grew quickly, and it looks pretty grim in old engravings and photos, with its tall chimneys belching smoke and toxic nitrous fumes. He found a way to make aniline cheaply on the site from benzene, sulphuric and nitric acid, so goodness knows what the factory’s chemical vats spewed into the canal. The chemical process was dangerously explosive, and none of the Perkins had any experience with industrial-scale chemistry. It’s a wonder the whole place didn’t go up in smoke.


A photograph of the Perkins’ dyeworks in Greenford Green.

The last traces of the old factory were destroyed in 1976, but there’s a blue plaque here to mark its place… and here it is. “William Henry Perkin established on this site in 1857 the first synthetic dye factory in the world.”


The blue plaque at Greenford Green where the original coal-tar dye factory of Perkin and Sons once stood.

Mauve became so much the rage in London that it even drew comment from Dickens in 1859:
“As I look out of my window, the apotheosis of Perkin’s purple seems at hand – purple hands wave from open carriages – purple hands shake each other at street doors – purple hands threaten each other from opposite sides of the street; purple-striped gowns cram barouches, jam up cabs, throng steamers, fill railway stations; all flying countryward, like so many migrating birds of purple Paradise.”

Perkin’s Greenford Green factory marks the end of the beginning – for aniline dyes and for the entire synthetic chemicals industry.

Perkin & Sons couldn’t get the French patent rights for their mauve, and within a year French and German companies started to make it too. Soon the coal-tar dyes were everywhere – not just purple but green, red, blue, black. The liberation of colour had arrived, and fashion became positively gaudy.

Bright colour – once the preserve of the rich – could be worn in all walks of life. Gone was the colour-coding of social hierarchies that had existed since the Middle Ages. Colour became a matter of individual expression.

What began as a stroke of serendipity in Shadwell was now becoming an exact science. Chemists came to understand that the particular arrangement of atoms in a molecule determines what it does – what, as I said earlier, the molecule means. And what it does might include which colours it absorbs and which it reflects, when light shines onto it.

So on the one hand, it became possible to make new colours to order. By carefully studying aniline dyes, chemists in the late nineteenth century could predict from the architecture of these compounds what colour they were likely to have. This is now the entire business of synthetic chemistry: constructing molecules with particular atomic arrangements and therefore particular properties.

On the other hand, if there was a substance found in nature that had useful properties – like quinine, say – then if you could figure out the shape of its atomic framework you had a chance of working out how to make it synthetically, perhaps more cheaply than harvesting it from plants.

But what became of the natural dyes, such as indigo and madder? They didn’t go out of fashion; instead, synthetic chemistry re-invented them. Getting these substances pure and in large amounts was costly and labour-intensive, and indigo plantations in India were the British Empire’s most lucrative business in all of Asia.

But as chemists came to understand that molecules were made of atoms linked together into particular architectures, they turned themselves into molecular architects who could even aspire to construct the molecules of nature. They figured out how, from simple ingredients like coal-tar substances, they could string together atoms to make the very molecules that gave indigo and madder their colours.



The molecular structures of indigo (top) and alizarin (bottom), which gives madder red its colour.

When two German chemists worked out how to make synthetic madder red in 1868 from the coal-tar compound anthracene, William Perkin quickly figured out how to do it more cheaply and on an industrial scale. By 1873 he’d got rich enough from this and other dyes to sell his company and return to pure research.


The blue plaque in Victory Place, near Elephant and Castle in southeast London, showing where the dyeworks of Simpson, Maule and Nicholson was situated. The company was established here in 1853, and in 1860 it began to manufacture aniline red dye, known also as magenta. Three years later they marketed an aniline violet, discovered by August Hofmann, that offered Perkin’s mauve some stiff competition. In 1873 William Perkin sold his dye company to the firm that Simpson, Maule and Nicholson had become, called Brooke, Simpson and Spiller. I was terribly excited when I discovered this plaque on my usual cycling route into London; I suspect I was the only person who could say that for a good many years.


Portrait of William Henry Perkin, painted in 1906 by Arthur S. Cope.

Perkin’s main competitor for synthetic madder was the German chemicals company BASF. If you’re like me, the name BASF will put you in mind of cassette tapes. But that’s just an example of how the dye companies diversified into other areas, because BASF stands for Badische Anilin- und Soda-Fabrik: the aniline and soda makers of Baden.

In 1877 one of their academic consultants, the German chemist Adolf Baeyer, worked out how to make indigo from the coal-tar extract toluene. BASF was soon producing it by the hundreds of tons. Within just a few years the price of indigo plummeted and the colonial plantations were put out of business, which the British government declared a national calamity.

Doesn’t this then make the chemist a kind of modern Prometheus? If you can control the shapes of molecules, what can you not create?

These colour manufacturers now pervade our language, our material world, our history. ICI, Hoechst, Agfa, Novartis – all began with dyes. In 1925 some of the major German dye companies merged to form the notorious cartel IG Farben, a force powerful enough to dictate its terms to Hitler. The diversification into pesticides left IG Farben with the patent for the poison gas Zyklon B, which it licensed for use in the concentration camps.

The diversification of the great dye companies into areas like pharmaceuticals had begun by the late nineteenth century. The coal-tar dyes themselves showed the way. In the 1870s the German physician Paul Ehrlich began to use the dyes for staining cells, which made them easier to see and distinguish under the microscope. He found that some dyes actually killed the microorganisms they stuck to.

That sounded useful. In 1909 Ehrlich discovered an arsenic-containing dye that would destroy the microorganism responsible for one of the most feared and deadly afflictions of the day: the disease that dare not speak its name, syphilis. Other coal-tar dyes worked as antibiotics.

Before this time, most drugs were, like quinine, extracts from natural sources, mostly from plants – like the extract of willow bark called salicylic acid that had long been used as a painkiller. In 1897 a chemist at the German dye company Bayer turned phenol into a compound related to salicylic acid that worked even better. The company started selling it under a trade name: aspirin.

To make sense of the science behind all this, chemicals companies could no longer simply rely on hiring the services of academics. They started to employ their own chemists, who could design products like drugs based on a rational understanding of how the molecules needed to be shaped, and what they would do.

This, then, is what science-based industry is all about. It’s what the pharmaceuticals industry looks like today.

All the same, the revolution that Perkin began is in some ways still just getting started. We now know that there’s more to the way a drug works than just a good fit with the biological molecule that it aims to latch onto, like a lock and key. But we still can’t always fully understand or predict how a given drug will behave: you can’t be sure of designing it at the drawing board. Instead, most drug discovery still relies on trial and error, on shuffling molecular fragments into many different shapes and then seeing which ones work best.

What’s more, synthetic chemistry still has plenty of problems to solve: scientists struggle to put together some of the complicated molecules that nature produces. And even if they succeed, the route is often too long and too expensive to be useful in industry. This is why chemical synthesis is still as much an art as a science.

But Perkin is now regarded as one of its finest early stylists: a man who first gave us a glimpse of what might be possible if we can get clever enough at molecular architecture. And for that we have to thank the colour purple.